Benevolent Bots is Lemonade’s new podcast series about the overlap of a few issues that are always on our minds: artificial intelligence, insurance, and ethics.

Co-hosted by Lemonade CEO Daniel Schreiber and our in-house AI Ethics and Fairness Advisor Tulsee Doshi, Benevolent Bots takes a deep dive into big questions about technology and responsibility. Our second episode features a conversation with Ivana Bartoletti, Global Chief Privacy Officer at Wipro.

“Our goal is to talk about privacy and the challenges in the insurance space specifically—where we want to balance pricing risk accurately with the privacy implications of collecting data and leveraging it via machine learning.”

Tulsee Doshi, Lemonade’s AI Ethics and Fairness Advisor

Now, AI ethics is a nuanced field—and one that’s very much not suited to quick soundbites. Topics covered in this episode include storytelling and Privacy by Design; the wave of privacy-focused legislation sweeping over the globe; telematics; and much more.

But if you’re looking to quickly glean some insights before fully committing to the episode, we’ve summarized a few key points below (edited and condensed for length and clarity).

Dual demands, dual concerns

“The issue that we’re going to have for years to come is going to be privacy and innovation, privacy and personalization, privacy and efficiency,” explains Ivana Bartoletti. “I say privacy and—I don’t say privacy or.

“A company that wants to innovate—and upholds the rights and dignity of people—is a company that’s going to last for a long time.

“As citizens and consumers, we want good products; we like having a product that makes our lives easier, one that’s tailored to our needs. But we want these two things [a tailored product, and privacy] to go together.”

Stop talking about necessary trade-offs

“In my view, a company that avoids the language of ‘trade-off’ is a company that does things right,” Bartoletti says. “Too often, I hear this sort of language of trade-off. ‘We have to choose between privacy and innovation.’ We’ve seen it during the pandemic: ‘We have to choose between privacy and the ability to trace individuals. To monitor the pandemic, we have to choose between privacy and security.’ It doesn’t have to be like that.

“We can do both…. Of course, there are different cultures, different thresholds of what people feel comfortable with, but there are also values that hold us together—values around fairness in the way that data is treated, values about transparency. Control, to an extent, over one’s data.”

Maybe not a trade-off… but at least a balancing act

“In broad strokes, more data leads to greater precision,” says Daniel Schreiber. “The more I know about you, the more I am able to discern whether you are similar or dissimilar to the person standing next to you, in terms of the risks that you represent for whatever insurance I’m offering you. And again, there’s a measure of fairness here, and of ethics, because I avoid using proxies.

“I don’t just judge by gender or by credit score. And I start looking really at the content of your character…. The more I know about you, the more precisely I can price…. That’s really the balance that we need to think through.” 
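
Schreiber’s point lends itself to a small worked example. The sketch below is ours, not anything from the episode or from Lemonade’s actual pricing models: on synthetic data, a claims-frequency model that sees only a crude group proxy is compared with one that also sees an individual-level behavioral feature, and the richer model lands closer to each person’s true risk. Every variable name and number here is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.metrics import mean_poisson_deviance

# Toy illustration only -- not an actual insurance pricing model.
# Synthetic portfolio: each person's true claim rate is driven by an
# individual behavioral feature, while "group" is a crude proxy that
# only loosely tracks that behavior.
rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                               # proxy feature
behavior = rng.normal(loc=group * 0.3, scale=1.0, size=n)   # what actually drives risk
true_rate = np.exp(-1.5 + 0.8 * behavior)
claims = rng.poisson(true_rate)

# Model A sees only the proxy; Model B also sees individual behavior.
X_proxy = group.reshape(-1, 1)
X_rich = np.column_stack([group, behavior])
model_a = PoissonRegressor().fit(X_proxy, claims)
model_b = PoissonRegressor().fit(X_rich, claims)

# Lower deviance = predicted claim rates closer to each person's real risk.
print("proxy only:", mean_poisson_deviance(claims, model_a.predict(X_proxy)))
print("rich data :", mean_poisson_deviance(claims, model_b.predict(X_rich)))
```

The point is the comparison, not the numbers: with richer individual data, the model relies less on the group proxy—which is exactly the precision-versus-privacy balance Schreiber describes.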

Fairness, like ethics, doesn’t have a single definition

“You have to navigate what your concept of fairness is,” says Bartoletti, harkening back to Meg Mitchell’s discussion of ethics in the debut episode of Benevolent Bots. “What do you think is fair for you as an organization?… The definition of fairness is different mathematically, philosophically, legally. Teams speak different languages. If I say fairness in statistical terms, it’s very different from what a lawyer would understand around fairness…. The issue is also ethical; it goes above and beyond ‘the law.’”

Moving beyond the trade-off phase

“If we really want to harness all this data—which I understand is important and actually benefits the consumer—that benefit for the consumer needs to go alongside an increased effort to protect privacy,” Bartoletti says simply. “Let’s experiment with things like federated learning or something that allows us to decentralize the way that we harness this information.”
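
Bartoletti doesn’t spell out the mechanics, but a minimal federated-learning sketch might look like the following (a generic federated-averaging toy we’ve written for illustration; all data, client counts, and hyperparameters are invented). Each client trains on data that stays local, and only model weights travel to the server to be averaged.

```python
import numpy as np

# Minimal federated-averaging sketch (illustrative only).
# Raw data never leaves a client; only model weights are shared.
rng = np.random.default_rng(1)

def make_client_data(n=500):
    """A local dataset that stays on the client."""
    X = rng.normal(size=(n, 3))
    y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(weights, X, y, lr=0.05, steps=20):
    """A few steps of gradient descent on the client's own data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

clients = [make_client_data() for _ in range(5)]
global_w = np.zeros(3)

for _ in range(10):
    # Clients train locally; only their updated weights travel back.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)   # federated averaging

print("learned weights:", np.round(global_w, 2))  # ~ [0.5, -1.0, 2.0]
```

Production federated systems layer secure aggregation and differential privacy on top of this basic loop, but the core idea is the same: the weights travel, the data doesn’t.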

Explain, don’t nudge

“How do you communicate to the customer what’s happening, what kind of data you’re collecting about them?” Bartoletti ponders. “Without the ‘marketing tone,’ it means really being able to explain to them what it means to collect data, at which point it’s collected, where it’s going, how long it’s going to be held…. Having new ways to present this information is really important—like using legal design, where you express complicated concepts in an eclectic way….

“I would never use the language that says, ‘if you don’t allow us to do that, then we won’t be able to price this.’ That’s the trade-off language. Then you are, to an extent, using dark pattern methods, nudging people into making a choice rather than saying to them, ‘look, this is what we are, this is what we do, this is how we collect data’—and then letting the consumer make the best choice.”

“…We have a duty to allow consumers to make that choice. We can’t nudge individuals [and] say, ‘look, if you don’t give us the data, you’re going to get crap service. But if you give us all your data, you’re going to get fantastic service.’ That’s not the point…. [The goal is to have] users make an informed choice, not feel that they’ve been tricked into having to accept the terms and conditions.”


Listen and subscribe to Benevolent Bots on Spotify, Apple Podcasts, or wherever you get your podcasts. Stay tuned for new episodes in the coming weeks.

categories: #Lemonade101 #transparency
