Benevolent Bots is Lemonade’s new podcast series about the overlap of issues that are always on our minds: Artificial intelligence, insurance, and ethics.
Co-hosted by Lemonade CEO Daniel Schreiber and our in-house AI Ethics and Fairness Advisor Tulsee Doshi, Benevolent Bots takes a deep dive into big questions about technology and responsibility. Our fourth episode features a conversation with Genevieve Bell: director of the School of Cybernetics at the Australian National University; director of the 3A Institute; and senior fellow at Intel Corporation.
“Today we’re exploring transparency and history and stories. How do we learn from the past to inform the way that we build for the future?” –Tulsee Doshi, Lemonade’s AI Ethics and Fairness Advisor
AI ethics is a complicated field, alive with debate. As such, it’s not well suited to quick soundbites. Topics covered in this episode include terms and conditions that are as long as Hamlet; how bias was built into the history of photography; CEOs as storytellers; and what consumers actually expect out of their insurance companies.
While you should definitely listen to the episode in full—it’s a fascinating and wide-ranging conversation—we’ve gathered a few teasers below.
Technology doesn’t arise from thin air
“Sometimes we tell stories about technology that say, ‘Look, we’ve just solved this marvelous new problem. Here’s this great thing.’ There’s a seduction in imagining you’re starting with a clean slate, that you are starting with no history, no baggage,” Genevieve Bell explains. “Here’s this wonderful, shiny new thing that’s going to solve this wonderful problem….
“We do a good job telling stories about technology, but those stories tend to be very presentist. They’re very grounded in the now, and it means that we aren’t good at unfolding all the perils and pitfalls that have already come before us, that we could learn from.
“We tend to unfold stories about, ‘How cool is this? It will do all this amazing stuff,’ and we don’t go, ‘Yeah, also, here’s all the other things that happened along the way.’ Telling the story of a camera isn’t just telling a story of compression and speed and pixels. It’s also telling a story about Mathew Brady and Kodachrome and the guys at Samsung.
“For me, those stories are really powerful ways of saying that we have to actually understand where [the technology] came from, what all the other decisions were that were being made.”
Imagining an ideal operator
“There are lines of inequity along which most technologies cleave,” Bell says. “Those tend to be about things that are not surprising: gender, race, class, national status, sometimes even religion, sometimes country of origin, frequently able-bodiedness, increasingly other kinds of what we would think of as intersectionalities.
“Most technologies have in them an imagined ideal scenario, or state, or body, or person that is their operator or the subject of their operation. It is often the case that this body is not an abstract body. It’s imagined as something, but we’re really bad at being able to articulate what that something is.”
The pitfalls of recommendation engines
“The history of recommendation engines [like those used by Netflix, Amazon, etc.] is very much about imagining that humans are stable through time, rather than adaptive and changeable. One of the unexpected perils of those recommendation engines is that they want us to be who we have been, not who we are going to become. There’s something about stabilizing our histories as the predictor of our future behavior that is actually really troubling….
“One of the challenges about the way big data functions is that data is by its very nature in the past, and as a result, is also conservative. It’s a fixed, rigid thing.
“One of the impulses that humans have is toward relationships with each other, and so one of the things that’s much harder to work out in data about any individual is their relationships to the broader whole—all of which is a complicated insurance problem. Whilst you can reasonably [make statistical assumptions about how safe male drivers are], they’re also on a road full of other people who will help monitor and regulate them.
“Hyper-specificity doesn’t always do a good job of triangulating our social context. For a lot of things in both the medical and psychological space, your broader social context is as important as your own individual state. There’s an interesting tension in how you manage all that stuff.”
Editing the world…
“Why is it that we pay for Netflix or Amazon or the New York Times?” Bell asks. “What those services do is curate and edit the world for us. They manage down some of the noise and just give us the signal, and we know in purchasing them what the signal looks like, so we’re making some decisions about what we do and don’t want to hear…. There is someone else managing the complexity.”
…and giving up some control
“Growing up, I was part of a community of people who built their own PCs, that whole hobbyist world that we moved in,” Bell recalls. “Part of what happens in the ’80s and ’90s, and certainly into the ’00s, is that people have been willing to pay for someone else to do that for them. They were willing to say, ‘I no longer need to solder all those bits together. I can trust someone else to do it for me because I get some other benefit in them putting the plastic wrapper around it.’
“One of the tensions that we [have] in this moment is that we don’t yet know how to make sense of the people that are doing the ‘wrapping,’ as it were, or the editing. And we aren’t as clear about their motives, the ways they are regulated or self-regulated. There’ve been stories that suggest their curatorial editing practices are suspect, enough to make us wonder.
“There’s this really interesting challenge, particularly for tech companies—and I say that as someone who sits inside one. We went from being makers of a technology that was then bundled by others to do things—to also acting on the world in very different kinds of ways.”
Do Androids Dream of Electric…Cats?
“It’s not actually about AI [singular], it’s about AIs [plural],” Bell says. “I can guarantee you have multiple algorithms running inside Lemonade. There are multiple, different kinds of workloads. [At] the point at which you have something that is genuinely autonomous, self-learning and functioning—you won’t just have one, you’ll probably have several.
“The AI of which we are speaking is not the one that science fiction gave us. It’s not singular, monolithic, and wanting to take over the world. It’s small and fragmented and wanting to vacuum your floors and possibly change your traffic lights and occasionally send content to your phone predictively.
“[These AIs] don’t all speak to each other. At least, I’m hoping the traffic lights and the robots are not in a constant dialogue. Not yet. That would cause me some concern. And if they were, what would they be talking about? Cats, probably.”
Show. Me. The. Algorithm.
“When I say, ‘Could you be more transparent or more explicable?’ what I’m actually saying is, ‘Tell me what your motive is here,’” Bell muses. “Sometimes a request for transparency or explicability is really less about ‘tell me the tool’ [and more about] ‘tell me the intent.’ I want to know what you’re doing with it….
“Most people, when they say, ‘I want to understand what the algorithm is doing,’ don’t want to be told how the microprocessor works. They don’t want to understand Claude Shannon’s information theory or notions about throughput and lithography or silicon photonics. That’s not what they’re asking about. I don’t actually think they’re asking to have the math explained to them, either….
“The ask is more about, ‘Explain to me the moral equivalent of your curatorial editorial process. Explain to me your motive.’ As in, ‘Why are you doing those things? And be willing to answer it in a way that makes sense to me.’
“I don’t think the ask is, ‘Print your code,’ because that doesn’t help anyone. One of the challenges that we absolutely have in this space is that we also need to bring not just our citizens along, but our regulators.
“Having sat in conversations in multiple countries and jurisdictions with regulators who themselves didn’t understand the objects we were talking about makes it really quite difficult. There’s a project on the part of technology companies and universities and governments to work out how to bring each other up to speed in these conversations, and be willing to unpack the pieces of the puzzle.”