TL;DR: We do not use, and we’re not trying to build, AI that uses physical or personal features to deny claims.
There was a sizable discussion on Twitter around a poorly worded tweet of ours (mostly over the term 'non-verbal cues'), which led to confusion about how we use customer videos to process claims. There were also questions about whether we use approaches like emotion recognition (we don't), and whether AI is used to automatically decline claims (never!).
The term non-verbal cues was a bad choice of words to describe the facial recognition technology we’re using to flag claims submitted by the same person under different identities. These flagged claims then get reviewed by our human investigators.
This confusion led to the spread of falsehoods and incorrect assumptions, so we're writing this to clarify and unequivocally confirm that our users aren't treated differently based on their appearance, behavior, or any personal or physical characteristic.
AI is non-deterministic and has been shown to exhibit biases across different communities. That's why we never let AI make final decisions, such as rejecting claims or canceling policies.
Let’s be clear:
- AI that uses harmful concepts like phrenology and physiognomy has never been, and will never be, used at Lemonade.
- We have never let AI auto-reject claims, and we never will.
Here’s why: We do not believe that it is possible, nor is it ethical (or legal), to deduce anything about a person’s character, quality, or fraudulent intentions based on facial features, accents, emotions, skin-tone, or any other personal attribute.
So, why do we ask for a claim video in the first place?
The simple answer: Because it’s better for our customers, making it easier for them to describe what happened in their own words.
We also believe it may reduce fraud. Behavioral economics research, much of which inspired Lemonade's business model and B-corp status, has shown that we humans are less prone to lying when we're looking at ourselves speaking in a mirror or selfie camera.
Coupled with the Pledge of Honor we ask customers to sign, and the fact that your unclaimed premium goes to a charity you believe in, we think this brings out the best behavior in all of us (insurer and insured), and allows us to pay legitimate claims faster while keeping costs down.
In the past few years, we have had ongoing conversations with regulators across the globe about fairness and AI. We have published on this topic, and we run an internal program (together with external AI Ethics Advisors) to examine the ways we use AI today and the ways we'll use it going forward. We've even launched a podcast series on the topic of AI Ethics—it's called Benevolent Bots, and you can check it out on Apple or Spotify.