Most people reading this will have a pretty good, albeit vague, idea of what “artificial intelligence” means. And most also have an intuitive understanding of ethics, in the broad sense of making individual choices in line with however we define “being good.” 

But what happens when we train an ethical lens on AI? And why is doing so imperative, rather than just a fun exercise in sci-fi daydreaming?

We sat down with Tulsee Doshi, a true expert in this niche field. She’s Head of Product at Google’s Responsible AI & Machine Learning Fairness team, as well as an AI Ethics and Fairness Advisor for Lemonade—where she co-hosts our podcast series on this topic, Benevolent Bots.  

Tulsee Doshi, Lemonade’s AI Ethics and Fairness Advisor.

Let’s start by defining a few basic terms. What do we even mean when we talk about ‘artificial intelligence’ (AI)? How is it different from what’s known as Artificial General Intelligence (AGI)?

Artificial Intelligence really encompasses any application or use case that does something smart. When we talk about Artificial Intelligence, it’s often easy to imagine what we see in movies: sentient beings like Vision in the Marvel universe, or Ava in Ex Machina. This type of Artificial Intelligence is called Artificial General Intelligence (AGI)—the idea that one day machines will possess intelligence equal to or surpassing that of humans.

If we ever achieve that reality, it’s still very, very far away. Today, it lives only in our novels and films. The way AI manifests itself in our world now is much more commonplace. In fact, if you’ve used Instagram today, or YouTube, or Netflix, or even unlocked your phone with your face—you’ve used AI.

We interact with AI to filter out spammy emails, add filters to our reels & stories, and get recommendations for news articles. AI is also increasingly being used in medicine, law, finance, insurance, employment, and education: for example, helping doctors by providing initial diagnoses from symptoms, or, on a scarier note, determining whether someone should get bail.

Often, the way AI actually manifests is through machine learning (ML), the practice of training machines to learn from a set of data points in order to predict an outcome. With machine learning, we can train a model to learn whether a picture is of a cat or a dog by showing it a bunch of pictures of cats and dogs. We can also train models to complete more complex tasks, such as answering a question you ask or recommending a movie to watch.
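
To make “learning from a set of data points” concrete, here’s a minimal, illustrative sketch in Python. It uses scikit-learn’s built-in handwritten-digits dataset as a stand-in for cat-and-dog photos (a real image classifier would typically train a neural network on actual pictures), so treat it as a toy example rather than a production recipe:

```python
# Minimal sketch of supervised machine learning: show a model labeled
# examples, let it learn the pattern, then ask it to predict labels for
# examples it has never seen. Illustrative only -- not a production system.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small built-in dataset of labeled images, standing in for "pictures of cats and dogs".
X, y = load_digits(return_X_y=True)

# Hold out some examples the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" = fitting the model to the labeled examples.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# "Prediction" = asking the model to label new, unseen examples.
print("Accuracy on unseen examples:", model.score(X_test, y_test))
```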

Why is it so important that there is an ethical component to AI?

Interestingly, the field of Tech Ethics is not new, nor is it specific to AI. 

Historically, when humans have developed new technologies, we have had a tendency to build with our own conscious and unconscious biases, and to discover the consequences later. One of the most significant examples of this is, in fact, the seat belt. Until 2011, women were 47% more likely to be seriously injured in a car accident, because seat belts had only been tested on male crash test dummies. These harms were not the result of malicious intent, but rather of new technology being designed for those who built it.

As the seatbelt example shows, we reflect our own biases in our thoughts and actions, and beyond that, the systemic biases that exist around us affect the way that we design and build products…even when we don’t intend for them to. What makes machine learning even more susceptible to this is the fact that it learns from past patterns and behaviors, and is therefore likely to predict outcomes that can be biased or harmful if not intentionally understood and addressed. And AI is enabling use cases at an unprecedentedly large scale.

The field of AI Ethics is about understanding how AI-based systems interact with the world around them, and how we can build in ways that are inclusive, safe, privacy preserving, and accountable—especially for communities that are often historically marginalized. 

What are some of the optimistic and inspiring uses you can see AI being put to in the next 10 years, within the insurance sphere specifically?

Lemonade’s Crypto Climate Coalition is the kind of initiative I’m most excited about for the usage of AI. If we can leverage machine learning to better predict weather patterns and understand crop conditions, and then use these insights to insure farming communities, we’re not only using AI to provide insurance, but to in fact increase equity in the way insurance is provided. 

One of the most potentially frightening aspects of AI might be that this technology will generate conclusions or offer advice, but without being able to “explain” its decision-making process. In other words: We might just have to “trust the AI” in some senses, which is quite a big leap of faith.

Trust is central to every product experience. I have to be able to trust that I’m going to be able to complete the task I set out to do, and a large part of that trust comes from understanding.

Explaining complex AI-based models is still an area of early research. Techniques exist, but many of them are nascent and built for developers to debug what they’re building, so they’re not yet easily understandable by an everyday consumer of technology.

The field is continuing to invest in AI explainability, but in the meantime, there are ways we can continue to build understanding and, by extension, trust. One way, of course, is to use simpler models that are, in fact, explainable. And we see a lot of industries, insurance included, continue to invest in this approach.
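
As a purely hypothetical illustration of that “simpler model” approach (the feature names and data below are invented, not drawn from any real insurance model), a linear model’s learned weights can be read off directly, so you can see which inputs push a prediction up or down:

```python
# Hypothetical sketch of an interpretable, "explainable by construction" model.
# The features and data are made up for illustration -- not a real claims model.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["prior_claims", "years_as_customer", "home_age_years"]

# Tiny invented dataset: each row is a policyholder, label 1 = filed a claim.
X = np.array([
    [2, 1, 40],
    [0, 5, 10],
    [3, 2, 55],
    [0, 8,  5],
    [1, 3, 30],
    [0, 6, 12],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Because the model is linear, each coefficient directly shows whether a
# feature pushes the prediction toward "claim" (positive) or away (negative).
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

This kind of direct inspection is exactly what becomes difficult with very large, complex models, which is why explainability research matters.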

Another lies in the design of the product. It’s important that all AI-based experiences provide users with information and control. For example, if the model makes a mistake, can the user provide that feedback and/or affect the experience in some way? 

AI research and development is very expensive. How can the technology remain somewhat equitable, in that it isn’t just used to enact the whims and desires of the billionaire class? And what about on the consumer end—won’t access to AI products simply widen the opportunity gap between social classes?

It’s critical that the ability to train and deploy models be democratized to enable more access, and that education about computer science and machine learning be open to more schools and communities. We’re already seeing moves in this direction with nonprofits like AI4ALL and efforts like AutoML, which enables any developer to train and deploy ML models. 

But given that the bulk of AI R&D is going to continue to live in larger organizations that have more resources, it is critical that these organizations both improve their own diversity and actively engage with the communities affected by the technology. Only with voices that represent different lived experiences will we be able to reflect on and build for those experiences. 

Beyond this, building best practices and guardrails for developing responsible AI is critical. The more we can build starting points, metrics, and insights into the development process, the more we’ll see developers engage meaningfully in identifying and addressing considerations. 

If we aren’t intentional about the AI-based products we build, we will absolutely widen the opportunity gap—especially if some models or products work well for certain communities but not for others. If, however, we build products for communities that have been traditionally underserved, maybe, just maybe, we can create opportunities rather than widening gaps.

One challenge that does remain is access. Today, 37.5% of the world’s population still doesn’t have consistent internet access, and the share is even larger among women (52%). As AI brings forward more opportunities, these gaps become even more pronounced. In the next 2 to 3 years, I’d like to see the tech industry focus not just on building more capable AI technologies, but also on figuring out how to scale these technologies to address the needs, use cases, and access considerations of communities across the globe.

When most ordinary people think of the dark side of AI, they likely imagine something akin to the “Singularity”—computers becoming sentient, throwing off their shackles, and deciding to rule over their weak human creators. Is this just sci-fi paranoia?

Today, at least, the Singularity is a distant future, if it ever arrives at all. Sentience implies a level of inherent knowledge and emotion that I think is truly and distinctly human. That doesn’t mean that AI can’t conduct human-like tasks. GPT-3 and LaMDA, for example, have demonstrated the ability to process language and structure responses that are incredibly nuanced. As we train models to do more complex tasks, we risk conflating their ability to complete these tasks with “sentience.”

Can you give us an example of a success story from your own career in terms of how ethical considerations helped make a piece of AI more equitable?

In October 2021, Google launched Real Tone, its effort to improve the way the cameras on Pixel phones capture darker skin tones.

The team built responsible AI and inclusion into every step of their product-development process. They convened a group of experts—photographers, cinematographers, directors—who specialized in developing beautiful images of people of color; collected large and representative datasets; and then drove improvements across their features to ensure a high-quality final experience. 

Ethical considerations manifested in an intentional design and development process. They weren’t an afterthought, but rather an integral part of the product. And that makes all the difference.

“The need for an ethic that comprehends and even guides the AI age is paramount,” the authors of The Age of AI write. “But it cannot be entrusted to one discipline or field.” This is obviously a real challenge: Who will create these boundaries, ethics, and regulations? Will it be a national government? A consortium of governments akin to the UN? A private company?

I believe that the only way we’ll create successful standards and regulation is through the combination of government, policymakers, academics, and private companies. If we limit this thinking to the public sector, we risk developing policies and guidance without meaningful understanding of the technical practicalities on the ground. If limited to the private sector, we may hit tradeoffs between a company’s goals and the goals of the broader set of global communities. 

Like any product development process, developing standards and guidelines requires research, testing, validation, and iteration. 

What sort of ethical considerations come into play in terms of labor markets and AI? For instance, it’s clear that AI will make many jobs obsolete (while creating new jobs).

When we talk about AI, we often focus on the replacement of human tasks, when I think the focus should be on how we could augment human tasks.

For example, instead of replacing a doctor with AI that predicts a diagnosis from a set of symptoms, a more viable, reliable, and desirable outcome is providing suggestions that make a doctor’s work more efficient, while still preserving the oversight of a human who has critical experience and nuance about how symptoms manifest for different patients…and who can give a patient an in-person connection to discuss and address their condition.

The reality is that machines learn from patterns, and that machines make mistakes. Often, these patterns generalize in ways that make machines useful. But the world doesn’t always behave predictably (especially against a set of training data that may not represent the whole world), and the ability to be creative in problem solving is still a uniquely human behavior. 

One concept that is useful to touch on here is the idea of ‘human in the loop.’ Often, despite a model’s ability to perform a task well, we still want a human to oversee the eventual decision, to ensure that human nuance and judgment remain part of the process, especially when the model makes an error.
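
One common pattern here, sketched below purely as an illustration (the threshold, names, and structure are assumptions, not any specific product’s design), is confidence-based routing: the model handles the predictions it’s confident about, and low-confidence cases get escalated to a human reviewer.

```python
# Hypothetical "human in the loop" sketch: automate only the predictions the
# model is confident about, and route everything else to a person.
from dataclasses import dataclass
from typing import Callable

CONFIDENCE_THRESHOLD = 0.90  # assumed policy choice; tuned per use case in practice

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def decide(model_label: str, confidence: float,
           ask_human: Callable[[str], str]) -> Decision:
    """Accept the model's answer only above the threshold; otherwise escalate."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(model_label, confidence, decided_by="model")
    # Low confidence: a human reviews the case and makes the final call.
    return Decision(ask_human(model_label), confidence, decided_by="human")

# Example usage with a stand-in reviewer that simply confirms the suggestion.
if __name__ == "__main__":
    reviewer = lambda suggested: suggested  # placeholder for a real review step
    print(decide("approve", 0.97, reviewer))  # handled automatically
    print(decide("approve", 0.62, reviewer))  # escalated to the reviewer
```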

Of course, there are jobs that are at higher risk of complete replacement: trucking, for example, as a result of self-driving vehicles. These discussions often touch on issues like upskilling – providing workers with the training needed to do a different set of jobs – or Universal Basic Income (UBI) – the idea that we should provide every individual with a regular income, independent of having or doing a job. These are both important topics, and ones worth spending more time on in a future blog post.

Let’s fast forward to the year 2030. What role do you see AI playing in your own life, on a day-to-day basis?

By 2030, I expect AI to be a more and more integral part of my daily routine: sleep and fitness trackers that predict outcomes; recommendations for content; a self-driving car that takes me to and from work; a quick message to a virtual bot to set up a dinner reservation. 

AI will be augmenting, simplifying, and streamlining multiple parts of my day, while introducing me to new insights about my life and the world. Most importantly, I hope that I will be using AI that truly works for and represents me and my needs…and that this is true for every individual.


Looking to brush up on your AI Ethics knowledge or learn how to address these concerns in your organization? Check out Benevolent Bots, Lemonade’s podcast series dedicated specifically to the topic of AI ethics. It’s co-hosted by Doshi along with Lemonade CEO Daniel Schreiber. Recent episodes look at AI’s performance versus humans, and wade into the great privacy debate.

Doshi also recommends Race After Technology by Ruha Benjamin, which speaks to how technologies often embed the systemic discrimination in the world around them. We’d suggest a few recent books ourselves, including the recently published The Age of AI (which informed much of our thinking for this conversation) and Everyday Chaos by David Weinberger.

