September 18, 2018 | Investment Themes

“What is the society you want to achieve?”

Written by Rachel Quon

Mehran Sahami and Hilary Cohen from Stanford University discussing how leaders in tech are confronting the ethical challenges of AI & ML.

Since its inception, Silicon Valley has flourished by relentlessly innovating to build transformative technologies that would change the world. If it broke things along the way, well, that was the price of revolutionary progress. But as the 2016 election showed, algorithms have an enormous impact on what we see and on the decisions both we and our technology make. How swiftly algorithms are shifting decision-making for society at large, and the moral responsibility that falls on those who write the code, present enormous challenges, according to Mehran Sahami, a Computer Science Professor at Stanford University.

Sahami and Hilary Cohen, a Pre-Doctoral Research Fellow at Stanford, spoke last week at our second tech meetup, offering an insider’s view of some of the ethical and social challenges presented by the development and use of artificial intelligence and machine learning. They examined the responsibilities of students, technologists, and policymakers and provided a framework for thinking through the challenges of autonomous vehicles, data privacy, algorithmic decision making, bias, and “fairness.” Here are some of the highlights:

Technologists today are where physicists were in 1945.

Physicists created a new technology, with great hope, that helped end a war. People appreciated the role they played in ending World War II, but then we all lived under the threat of a nuclear nightmare for the next 44 years. That is how modern technology is starting to be viewed by some people both inside and outside Silicon Valley. Once universally applauded for creating revolutionary technologies that opened up all sorts of new possibilities, the Valley is now being criticized for building tech without thinking through its social impact. Which is why many now ask: if the social impact is this big, must regulation accompany it?

Technology is advancing far faster than even experts thought possible.

In 2011, Sebastian Thrun, often considered the father of autonomous driving, shocked people when he predicted that cars would drive themselves within 50 to 60 years. People thought even that was too aggressive. A little less than a year later, Nevada became the first state to issue a license for an autonomous vehicle.

The challenge lies in programming for mutually incompatible notions of fairness and inherent bias.

Apple’s Face ID was trained against Caucasian norms and was less accurate at distinguishing Asian faces, which meant it was less secure and less useful for a large portion of the world. When programming algorithms, the attributes used to make predictions, and what counts as equitable predictive accuracy, shift depending on the assumptions baked in.
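The incompatibility is easy to see with toy numbers. The sketch below (hypothetical data, not from the talk) compares two common fairness criteria, demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups), and shows that when base rates differ between groups, even a perfectly accurate classifier satisfies one criterion while violating the other:

```python
# Illustrative sketch with made-up data: two common fairness criteria can
# conflict when the underlying base rates differ between groups.

def positive_rate(preds):
    """Fraction of people the classifier labels positive (demographic parity)."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of truly positive people labeled positive (equal opportunity)."""
    positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(positives) / len(positives)

# Group A has a base rate of 0.75 (3 of 4 truly positive);
# Group B has a base rate of 0.25 (1 of 4 truly positive).
labels_a = [1, 1, 1, 0]
labels_b = [1, 0, 0, 0]

# A perfectly accurate classifier predicts each person's true label.
preds_a = labels_a[:]
preds_b = labels_b[:]

# Equal opportunity holds: the true-positive rate is 1.0 for both groups...
print(true_positive_rate(preds_a, labels_a))  # 1.0
print(true_positive_rate(preds_b, labels_b))  # 1.0

# ...but demographic parity fails: positive-prediction rates are 0.75 vs 0.25.
print(positive_rate(preds_a))  # 0.75
print(positive_rate(preds_b))  # 0.25
```

Forcing the prediction rates to match would require either denying qualified people in group A or approving unqualified people in group B, which is the sense in which the two notions of fairness are mutually incompatible.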

The data shapes what the algorithm does.

Algorithms make decisions and learn from the data they process, so the data set can matter more than the algorithm itself in shaping outcomes. When it comes to ethics, we tend to focus on the algorithms, but the quality of the data and where it comes from matter just as much.

“We are creating stuff without thinking about the long-term societal impact.”

Silicon Valley does a great job of innovating and creating new things, but its biggest leadership failure lies in not thinking through the social implications of technology. How do we build technologies for good? For the public interest? For social impact, rather than just for generating revenue? “Unless we pay some serious attention to what we are doing, the regulation that will invariably come will be potentially much worse than anything we can do to police ourselves,” Sahami said.

There’s a role that technologists can play.

Sahami and Cohen urged technologists to reach out to policymakers as they navigate these demonstrably complex technical questions, and to proactively curb potentially “bad societal outcomes.” Sahami concluded his talk with a call to action for increased engagement: “We do understand the technology. Let us play a role.”

Sahami and Cohen noted that there are no simple answers. The best starting point, they said, is to look at societal outcomes and to keep learning from one another, developing better ways to think through the implications of the technology being created.