Across industries and functions, artificial intelligence (AI) and machine learning (ML) are the technologies that inspire the most excitement about the future. But what’s possible right now? How are data scientists and engineers actually applying AI and ML in their industries today? And what does the future hold?
Along with McKinsey, Bloomberg Beta, and True Ventures, we hosted the AI Innovators Forum, bringing more than 50 senior AI practitioners from top start-ups, U.S. high techs, and Fortune 500 companies to answer these questions, share insights, compare challenges and opportunities, and learn from each other in both vertical- and horizontal-focused sessions.
We used the Chatham House Rule, meaning all participants are free to share what they learned without attribution. In that spirit, here are four takeaways, with examples, that the group found illuminating.
Enabling AI sometimes means humans are still required
It’s still early days for both machine learning and AI, which means humans remain essential for training models on nuances that only people truly know. Often the challenge is deciding where to insert a human in the loop.
One product, for example, reviewed complex contracts to extract key dates and terms. But the nuances often lived in dense legal language rather than in the dates themselves, so experienced attorneys were needed to help properly model contract terms and the implications of termination clauses.
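A minimal sketch of what such a human-in-the-loop pipeline might look like: straightforward fields (dates) are extracted automatically, while clauses containing nuanced legal language are routed to attorneys. The patterns, term list, and function names here are hypothetical illustrations, not the actual product's implementation.

```python
import re

# Hypothetical sketch: extract what the machine can, route nuance to humans.
DATE_PATTERN = re.compile(r"\b(\d{1,2}/\d{1,2}/\d{4})\b")
LEGAL_TERMS = ("terminate", "termination", "notice period")  # illustrative only

def review_clause(clause: str) -> dict:
    """Extract dates automatically; flag nuanced legal language for review."""
    dates = DATE_PATTERN.findall(clause)
    needs_attorney = any(term in clause.lower() for term in LEGAL_TERMS)
    return {
        "dates": dates,
        "route_to": "attorney" if needs_attorney else "automated",
    }

clause = "Either party may terminate this agreement on 01/15/2022 with 30 days' notice."
result = review_clause(clause)
```

The design point is the routing decision: the model does not need to understand termination clauses, it only needs to recognize when one is present and hand off.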
COVID has massively accelerated digital transformation and what companies use to train models
COVID rapidly accelerated digital transformation, shifting how whole industries and workforces collect data and train models. One example discussed was the shift to synthetic data as a reliable alternative for training and testing algorithms for autonomous cars. Instead of having humans drive hundreds of miles in the real world to gather training and test data, companies are increasingly using synthetic data as a more efficient and safer way to develop autonomous systems.
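To make the idea concrete, here is a toy sketch of synthetic scenario generation. Every parameter name and value range below is hypothetical; real simulation pipelines are far richer, but the core appeal is the same: scenarios can be generated at will and deliberately skewed toward rare, risky conditions that are expensive or dangerous to capture on real roads.

```python
import random

# Illustrative sketch (not any vendor's actual pipeline): generate randomized
# driving scenarios instead of collecting real-world miles.
def generate_scenario(rng: random.Random) -> dict:
    return {
        "weather": rng.choice(["clear", "rain", "fog", "snow"]),
        "time_of_day": rng.choice(["day", "dusk", "night"]),
        "pedestrian_count": rng.randint(0, 10),
        "ego_speed_mph": round(rng.uniform(0.0, 70.0), 1),
    }

rng = random.Random(42)  # seeded for reproducible test sets
scenarios = [generate_scenario(rng) for _ in range(1000)]

# Unlike road data, a synthetic set can oversample rare, risky conditions.
risky = [s for s in scenarios
         if s["weather"] != "clear" and s["pedestrian_count"] > 5]
```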
Good data is better than the best models
The reality is that AI and ML can’t be applied to every scenario. For example, one company built a sophisticated ML approach to marketing analytics and performance so AI could predict the most effective marketing channels. But it required an enormous number of data integrations, and it wasn’t reliably better at suggesting channels than the existing human experts.
Likewise, it is not yet possible to train on enough scenarios of kids running into the street or a car swerving into your lane for self-driving cars to learn and act accordingly. What is the safe default? Should the car stop when it doesn’t know what to do? Should it revert to a manual mode? These are just some of the ongoing challenges in training the ‘last 10%’ of AI, even though the vast majority of automated driving decisions are already more efficient and accurate than those of a human driver on the road today.
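One common framing of the safe-default question is a confidence-gated fallback: when no action clears a confidence threshold, fall back to a conservative default rather than guessing. The sketch below is a toy illustration of that framing only; the threshold, action names, and fallback choice are entirely hypothetical, and real autonomy stacks are vastly more complex.

```python
# Toy illustration of a "safe default" policy: act only when confident,
# otherwise take a conservative fallback action. All values are hypothetical.
SAFE_DEFAULT = "controlled_stop"
CONFIDENCE_THRESHOLD = 0.90

def choose_action(action_scores: dict) -> str:
    """Pick the highest-scoring action, or the safe default if uncertain."""
    action, confidence = max(action_scores.items(), key=lambda kv: kv[1])
    return action if confidence >= CONFIDENCE_THRESHOLD else SAFE_DEFAULT

confident = choose_action({"continue": 0.97, "brake": 0.02})   # clear winner
ambiguous = choose_action({"swerve": 0.55, "brake": 0.40})     # no clear winner
```

The open policy question the forum raised is exactly what `SAFE_DEFAULT` should be: stopping is not always the safest option, and reverting to manual control assumes an attentive human.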
Beyond the more obvious self-driving car scenarios, many AI and ML use cases bring ethical considerations that should be taken seriously. Like any other technology at scale, there must be frameworks and guardrails to help understand potential impact, mitigation paths, and when to forgo the use of these technologies altogether. One participant noted they recently rejected a use case on ethical grounds: although the automation would have been effective, it also would have put hundreds of people out of work, an extraordinary human cost and a terrible impact on the local economy. A balance is required.
There is no one right organizational structure to facilitate AI and ML
The takeaways around how best to structure teams working on AI and ML were reassuring: many models can work. The trick, of course, is identifying the right one for your team and organization by knowing what you’re trying to do, where, and why.
A great ‘starter’ model when expertise is scarce and held by only a few people is the horizontal “expert desk” approach, which consolidates rare expertise in one place for others to consult. Over time, this can transition into a vertical or integrated model that embeds expertise directly in teams; such a fully distributed structure works best when planned and implemented gradually. Alternatively, some are embracing a best-of-both-worlds approach: one tech company centralized its AI teams while also placing small data teams in each cloud business unit.
The AI Innovators Forum made it possible for like-minded AI practitioners to come together virtually and dive deep into topics ranging from the automation of enterprise processes to machine learning on the edge to operationalizing AI for industry use cases. All our thanks to the subject matter experts and moderators for leading the breakouts and facilitating learning. We’re looking forward to hosting the next one in 2021.