Reid Blackman, founder and CEO of the ethics consultancy Virtue, is the author of Ethical Machines, a practical new guidebook to AI ethics, or, as he puts it, to AI that is "totally unbiased, transparent, and respectful." He doesn't waste much time on philosophical noodling; indeed, Ethical Machines is a hands-on tool for C-suite leaders, or anyone else who wants to revamp how their organization thinks about artificial intelligence.

We spoke with Blackman about why thinking about AI Ethics is good for business, and how companies can put structures in place to deal with the related risks. (A quick and obvious caveat: Blackman's opinions are his own, and might not exactly align with how Lemonade thinks about all of these issues.)

What’s one piece of orthodoxy in the AI Ethics community that you take issue with, or that you think is just flat-out wrong or silly?

Well, I don’t know if this counts as an orthodoxy, but it’s quite common to think about AI Ethics as an attempt to have a positive social impact or achieve some kind of ethical good. But for businesses, that simply cannot be their #1 priority. AI Ethics in business should primarily be concerned with avoiding ethical, reputational, and legal risks. We need to be using the language of AI ethical risk mitigation, not simply AI ethics.

Your book makes a persuasive case for the importance of thinking through ethical considerations of AI, especially before a company begins training machine-learning algorithms. While it’d be nice to think that companies would enact ethical AI practices simply out of a desire to be good actors, capitalism obviously doesn’t really work that way. How do you convince corporate players that AI Ethics is also good for the bottom line?

I spend a few pages in Ethical Machines explaining why this is so important for businesses from a business perspective. So here are three things that businesses really don’t like: First, having their reputation tarnished because, for instance, they’ve discriminated against a race or ethnicity at massive scale using AI. Second, being investigated by regulators or being sued for (allegedly) running afoul of the law. And third, losing the trust of their clients or consumers.

The thing is, when an ethical risk of AI materializes, it's never small. AI is built for scaling things: operational efficiencies, judgments about insurance premiums, fraud detection, and so on. When an AI ethical risk materializes, it always has massive impact, and that includes massive negative impacts on your brand, on your finance and legal departments, and on the trust you've worked so hard to build and maintain with clients and customers.

It's clear that you think AI Ethics is not solely the purview of the engineers who are actually building the AI models, and that it instead requires an additional infrastructure of experts who can weigh in on various factors, like what specific definition of "fairness" the AI is meant to achieve in the first place. What sort of team is required to do this the right way, and what would you say to companies who feel that such a thing is cost-prohibitive?

The team should include an array of people most organizations already employ: people from data science and engineering, risk, compliance, cyber, and product.

The addition I recommend is an ethicist, for at least two reasons. First, you want to keep things moving as fast as possible, and ethicists are very fast at spotting the ethical risks that need to be dealt with. And second, you want your risk identification to be accurate, and ethicists are very skilled at seeing what the risks are.

In terms of cost, for most organizations I don’t think there is yet a need to have a team of ethicists. One may be enough. In fact, for some organizations, an external ethicist may be brought in when necessary—that is, when issues get elevated to the relevant (risk) board on which that ethicist serves.

In what sense are current confusions surrounding AI Ethics the fault of legislation and regulation that simply hasn’t caught up with the technology?

I don’t think the lack of regulations is causing confusion. I think that the people who understand AI don’t know much about ethical risks; the people who know about ethical risks usually work inside academic institutions and don’t know much about AI anyway; and the executives and members of the board responsible for protecting their brands are neither technologists nor ethicists and so really don’t understand AI ethics.

And that, really, is why I wrote this book. I’m trying to show the non-technologists that AI isn’t so difficult to understand, and to show both the non-technologists and technologists how to understand ethics, and AI ethics in particular. If I’ve done my job right, the fog of confusion will be lifted.

Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI is published by Harvard Business Review Press and is available now.

For more on AI Ethics at Lemonade, check out our conversation with Tulsee Doshi (Lemonade’s AI Ethics & Fairness Advisor), and subscribe to Benevolent Bots, our AI Ethics podcast.
