Benevolent Bots is Lemonade’s new podcast series about the overlap of three issues that are always on our minds: artificial intelligence, insurance, and ethics.

Co-hosted by Lemonade CEO Daniel Schreiber and our in-house AI Ethics and Fairness Advisor Tulsee Doshi, Benevolent Bots takes a deep dive into big questions about technology and responsibility.

Our fifth episode features a conversation with Navrina Singh, a former Microsoft and Qualcomm leader who now serves as a member of the US Department of Commerce’s National Artificial Intelligence Advisory Committee.

Singh is also the founder and CEO of Credo AI, which she describes as “the industry’s first responsible AI platform, providing comprehensive oversight and accountability across the entire AI life cycle.”

“Today’s episode is all about fairness and governance. What does it mean to build a fair model? Why do we need accountability?”

Tulsee Doshi, Lemonade’s AI Ethics and Fairness Advisor

AI ethics is a complex field, sizzling with debate, and not one that lends itself to easy soundbites. Topics covered in this episode include informed consent; global data trusts; and how the ethical use of data can become a differentiator that sets a business apart from the competition.

While you should definitely listen to the episode in its entirety, we’ve gathered a few teasers below, edited for length and clarity.

A necessary slowdown

“Credo AI really started with my work over the past 18 years in the tech industry, building AI products,” Singh says. “I saw this massively growing oversight deficit that led us to put AI applications out into the world that weren’t aligned with the ethical values we as developers were trying to bring to the market. My teams in the past were focused on data science, product management, and machine learning, and very rarely would we bring in the perspective of compliance, policy, and risk.

“Every time we brought in that perspective, things would slow down. A lot of checks and balances were put in place. Ten years ago that was seen as a hindrance, but more and more, as we started to build out large-scale AI applications, it became very clear that these are living systems that we are putting out in the world. And it is really essential for us to start thinking about [whether they’re] performing in alignment with values, regulations, and standards, and the intention that we’ve created these applications for.”

A “beautiful opportunity” to build trust (and from that, sales)

“We actually don’t see regulations as the only tipping point to getting on a responsible AI journey,” Singh continues, countering any idea that AI ethics is something companies are forced into. “There are multiple forces that these enterprises are trying to get ahead of. First and foremost, as you can imagine, consumers are becoming more aware of where algorithms are touching their lives, and are becoming more vocal and educated about the need for transparency.

“When you think about the social movements that we’ve seen in the past couple of years, enterprises now have to hold themselves to higher standards to meet consumer demand. So that’s just one pressure. What is fascinating is that now investors are saying, ‘Hey, I am willing to invest in an organization. But, you know, in addition to your financial reporting, we would also like to see reporting on the ethical use of data: not only whether you’re compliant with GDPR, but also how you’re creating these responsible applications.’

“High-tech companies, Google, Microsoft, and Amazon included, are recognizing that one of the ways their peers are differentiating in this market is not only through technological innovation. It is through this trust-building exercise: ‘Wow, I am actually transparent with my consumer base about where I am using any AI application and where I am sourcing my data from. What kind of assessments have I done internally, and why did I do those assessments? What does governance really look like?’ And they are finding very early on the benefits of this transparency that they’re bringing out in the market.”

“There’s this beautiful opportunity in using responsibility as a differentiator, by engendering trust. It’s going to unlock more sales, it’s going to engage your stakeholders longer, and it’s going to keep your employees, your board, and your investors much more excited about the technologies that your enterprise is building.”

Who’s actually at the table?

“I’m a big believer in human potential and the power of humans,” Singh says. “You’ve heard terms like ‘human in the loop’ and ‘human over the loop.’ I really think human is the loop and human is in command.

“I think we should take that agency back. One of the things we [at Credo AI] encourage our customers to do is to bring in multi-stakeholder views, especially from impacted populations, and to ask: Are those impacted populations represented at the right phases of the data and AI life cycle pipelines? I believe that’s a critical aspect of building responsible machine-learning applications right now.

“It starts with something as simple as, when you’re first deciding to use machine learning in your application, really looking across your team and seeing who the people at the table are. And it should certainly not just be the machine-learning engineers and data scientists. That’s not to say there is a lack of moral compass there; it’s just a lack of the higher-order understanding of the risk the enterprise might face. It’s really critical, at the stage of designing and thinking about these applications, to engage with risk, compliance, and policy stakeholders.”

Room for feedback & improvement

“Have mechanisms for the end users who are impacted to provide feedback to the company,” Singh adds. “They can say, in a live production environment, how these applications are working and impacting [them]. I think this is going to be a really important mechanism for redress, as well as for making sure that machine-learning applications are designed intentionally with those impacted populations in mind.”


Listen and subscribe to Benevolent Bots on Spotify, Apple Podcasts, or wherever you get your podcasts. Stay tuned for new episodes in the coming weeks. And check out our wide-ranging Q&A on AI ethics with Tulsee Doshi here.

