
This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Author: Dharmesh Syal, Chief Technology Officer, BCG Digital Ventures

Everyone from Stephen Hawking to Bill Gates and Elon Musk has weighed in on the philosophy of AI. Now that companies around the world are creating AI products at an incredible rate, it’s increasingly urgent that we stop talking about how to implement ethical safeguards into AI and start doing it.

The race to build the first fully autonomous vehicle (AV) has brought this issue front and centre. The death of a pedestrian in March has raised concerns not only about the safety of AVs but also their ethical implications. How do you teach a machine to “think” ethically? And who decides who lives and who dies? While this is an obvious (and impending) example, ethical questions about AI are all around us.

Why are ethics so important?

The areas where AI stands to benefit us the most also have the most potential to harm us. Take healthcare, an industry where decisions are not always black and white. AI is far from being able to make complex diagnoses or replicate the “gut feelings” of a human. Even if it could, are AI doctors ethical? Could AI be trained to increase profits at the patient’s expense? And in the case of malpractice, who would the patient sue? The robot?

AI has been projected to manage $1 trillion in assets by 2020. As in healthcare, not all financial decisions can be made on logic alone. The variables that play into managing a portfolio are complex, and one false move could lead to millions in losses. Could AI be used to exploit customer behaviour and data? What about hacking? Would you trust a machine to manage your money?

AI warfare raises the most serious ethical red flags. Fully autonomous “mobile intelligent entities” are coming, and they promise to change warfare as we know it. What happens when an AI missile makes a mistake? How many errors are “acceptable”?

These are the questions that keep me up at night. The good news is, it’s not too late; we’ve only seen a glimpse of what AI is capable of. The only way to make sure we don’t create a monster that could turn against us is to incorporate ethical safeguards into the architecture of the AI we’re creating today.

Here are three strategies anyone currently building AI should consider:

1. Bring in a human in sensitive scenarios

In all the scenarios above, the question remains: when, and to what extent, do we bring in a human? While there’s no definitive answer, AI that employs a “human-in-the-loop” (HITL) system, where machines perform the work and humans step in only when there is uncertainty, yields more accurate algorithms. Left unchecked, a machine trained on a misleading set of metadata could learn lessons a reasonable human would know to avoid.

Establishing ethical practices around metadata will give structure to the HITL scenario and potentially automate the “human factor” over time. Human conscience and moral code must also be codified as part of the AI metadata that drives interactions and sometimes decisions.
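The HITL pattern described above can be sketched in a few lines of code. This is a minimal illustration only: the function names and the 0.8 confidence threshold are my assumptions, not part of any system the article describes.

```python
# Minimal human-in-the-loop (HITL) routing sketch.
# The threshold and all names here are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.8  # below this, defer to a human reviewer


def hitl_decide(prediction, confidence, human_review):
    """Accept the machine's prediction when it is confident;
    otherwise escalate to a human reviewer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction, "machine"
    return human_review(prediction), "human"


# Example reviewer: flags low-confidence predictions for manual handling
def reviewer(prediction):
    return "needs-manual-check"


confident = hitl_decide("approve", 0.95, reviewer)   # machine decides
uncertain = hitl_decide("approve", 0.40, reviewer)   # human decides
```

The key design choice is that the human is invoked only on the uncertain tail of cases, so accuracy improves without requiring a person to review every decision.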

2. Put safeguards in place so machines can self-correct

We’ve all read about Facebook’s fake news problem, but the tech giant has recently come under fire once again, prompting it to remove more than 5,000 targeting options from its ad platform that could be used to discriminate against certain ethnicities and religious groups. These kinds of ethical features should ideally be integrated as the product is being built, but it’s better late than never.

I had the opportunity to do this firsthand when BCG, BCG Digital Ventures and a Fortune 100 company partnered to build Formation, an AI platform for personalized experiences. During the product build, we implemented safeguards at three checkpoints to ensure we did not breach users’ trust.
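The article does not disclose how Formation’s three checkpoints were implemented, so the following is a generic sketch of the checkpoint idea: every decision passes through a series of safeguard checks, and a failure at any one of them blocks the decision for review. All checkpoint names and record fields are hypothetical.

```python
# Generic sketch of checkpoint-style safeguards in a decision pipeline.
# Checkpoint names and record fields are hypothetical illustrations.

def check_consent(record):
    # Checkpoint 1: the user must have consented to data use.
    return record.get("user_consented", False)


def check_no_sensitive_targeting(record):
    # Checkpoint 2: no targeting on sensitive attributes.
    return not record.get("uses_sensitive_attributes", True)


def check_output_within_policy(record):
    # Checkpoint 3: the machine's decision must be on an allowed list.
    return record.get("decision") in {"show_offer", "skip"}


CHECKPOINTS = [check_consent, check_no_sensitive_targeting,
               check_output_within_policy]


def run_with_safeguards(record):
    """Block the decision (and name the failed check) if any
    checkpoint fails; otherwise let it through."""
    for check in CHECKPOINTS:
        if not check(record):
            return {"status": "blocked", "failed": check.__name__}
    return {"status": "ok", "decision": record["decision"]}
```

Placing the checks in an ordered list makes it easy to add new safeguards later without rewriting the pipeline, which matters when, as the article notes, ethical features are retrofitted after launch.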

3. Create an ethics code

This may seem obvious, but you’d be surprised how few companies are actually doing this. Whether it’s about data privacy, personalization or deep learning, every organization should have a set of standards it operates by. According to Apple CEO Tim Cook, “the best regulation is self-regulation”. For Apple, this means carefully examining every app on its platform to make sure it doesn’t violate users’ privacy.

This is not a one-size-fits-all solution; the ethical code you enact must be dictated by the way you’re using AI. If your company breaks (or nears) a standard, employees should be encouraged to raise the flag, and you, as a leader, are responsible for taking those concerns seriously.

Here are some recommendations for creating an ethics code:

⦁ When personal data is at stake, we pledge to aggregate and anonymize it to the best of our ability, treating consumers’ data as we would our own.

⦁ We pledge to enact safeguards at multiple intervals in the process to ensure the machine isn’t making harmful decisions.

⦁ We pledge to retrain, for related roles, all employees who have been displaced by AI.
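The first pledge, aggregating and anonymizing personal data, can be illustrated with a toy sketch. Salted hashing and coarse bucketing are my illustrative choices, not techniques the article prescribes, and real anonymization requires much stronger guarantees (for example k-anonymity or differential privacy).

```python
# Toy sketch of the aggregate-and-anonymize pledge.
# SALT and all functions are hypothetical illustrations; this is
# NOT a sufficient anonymization scheme for production use.

import hashlib

SALT = "rotate-me-regularly"  # hypothetical secret salt


def pseudonymize(user_id):
    """Replace a raw identifier with a salted hash fragment."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:12]


def aggregate_ages(records, bucket=10):
    """Report only coarse age-bucket counts, never individual rows."""
    counts = {}
    for r in records:
        b = (r["age"] // bucket) * bucket
        counts[b] = counts.get(b, 0) + 1
    return counts
```

The point of the sketch is the pledge’s spirit: downstream systems see only bucketed aggregates and opaque tokens, not raw identities.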

As the architects of the future, we have a responsibility to build technologies that will enhance human lives, not hurt them. We have an opportunity now to take a step back and really understand how these product decisions can impact human lives. By doing so we can collectively become stewards of an ethical future.