
WIPO/Emmanuel Berrod. Nono-Y the robot was one of the highlights of the 2012 Geneva Inventions Fair.

This article is brought to you thanks to the strategic cooperation of The European Sting with the World Economic Forum.

Author: Alan Finkel, Chief Scientist of Australia

Humans and AI can learn to get along, says Australia’s Chief Scientist Dr Alan Finkel. But it will take trust – and a Turing Certificate.

Every day, I put my life in the hands of hundreds of people I will probably never meet. The men and women who designed and manufactured my car. Those who prepared my shop-bought lunch, and installed the power sockets in my office. There are countless more.

The capacity to trust in other humans we don’t know is why our species dominates the planet. We are ingenious, and our relentless search for new and better technologies spurs us on. But ingenuity alone would not be sufficient without our knack for cooperating in large numbers, and our capacity to tame dangerous forces, such as electricity, to safe and valuable ends.

To invent the modern world, we have had to invent the complex web of laws, regulations, industry practices and societal norms that make it possible to rely on our fellow humans.

So what will it take for you to trust artificial intelligence? To allow it to drive your car? To monitor your child? To analyse your brain scan and direct surgical instruments to extract your tumour? To spy on a concert crowd and zero in on the perpetrator of a robbery committed five years ago?

Trust by default

Today, we tend to answer that question by default. Few of us read the terms and conditions before we tick the consent box and invite virtual assistants, like Alexa or Google Home, into our homes. Nor do we think too deeply about the algorithms working behind the scenes in police forces, credit unions and welfare agencies to shape the communities we live in.

We can be confident that AI technology is evolving rapidly; that the well-financed ambitions of nations like China and France will accelerate its development; and that the concentration of massive datasets in the hands of Amazon, Google and Facebook will enable experiments on a scale we may well find discomforting.

Some applications of AI, such as medical devices, will run into existing regulatory regimes. But few of those regimes were designed with AI in mind. And the applications of AI that most concern the public today – those deployed in social media – have exposed the limitations of current laws.

How long can the default assumption of trust in AI hold? Scientists have been here before. For decades, conversations about biotechnology have been dogged by widespread mistrust of the word “gene”, just as pharmaceutical and agricultural industries have been assailed by a fear of “chemicals”. If consumers could be brought to clamour for “gene-free food” and “chemical-free water”, it’s not so hard to imagine calls for an “AI-free internet”. Or an AI ban.

True, we could never enforce such a ban. But we could certainly shut down AI development in the places where we most want to see it – in ethical and reputable organisations. We must be more nuanced in our approach. We can be, if we remember our signature achievement as a species: securing trust.

Manners matter

Think of the sophisticated spectrum of controls that operate in human societies – let me call them HI societies, for Human Intelligence.

At the extreme left end of the spectrum, we have societal norms – manners, if you like – and incentives for good behaviour. Moving along the spectrum, we have organisational rules that govern how to act in classrooms or the workplace, and commercial regulations for interactions in the market. Further along, there are penalties for minor offences such as illegal parking, then punishments covering crimes such as robbery and assault. Approaching the right-hand end of the spectrum, we have severe punishments for the most serious crimes, such as premeditated murder or terrorism. At the very right-hand end, there are internationally agreed conventions against weapons of mass destruction.

We will need a similar control spectrum for AI. At the left end, in particular, there is a worrying void. After all, what constitutes “good behaviour” in a social media company’s use of AI? Where is it documented? Who codes it into AI, so that HI and AI can grow peaceably and productively together?

We could conceivably come to a set of norms by trial and error – or scandal and response. But in a febrile environment, intellectual coherence is unlikely to emerge by lurching from one crisis of confidence to the next.

We need the equivalent of the manners we teach to children. We don’t wait for kids to do something objectionable, then make up a rule as we punish them. We reinforce the expectation of ethical behaviour from day one.

Manners in HI societies differ, but they are all designed to make individuals feel comfortable through mutual respect. Developers of AI need to keep this in mind, integrating it into their business model from the start and consistently reinforcing it as the reality of doing business.

Raising standards

Any proposal to regulate in a sector that trades on its reputation for “moving fast and breaking things” will meet resistance. I understand the impulse. When I was expanding my company, Axon Instruments, to make medical devices, I looked at compliance requirements with a nervous eye. I appreciated that it was right for the regulators to set exacting standards for an electrical device intended to be surgically inserted into a living human brain. I was prepared for immense frustration.

What I ultimately realised was that the standards – and the entire compliance process – were the framework I needed for building a competitive company that traded on quality. True quality is achieved by design, not by testing and rejecting. The ISO 9000 quality management systems (from the International Organization for Standardization) ensure that high expectations are built into design and business practices, from product conceptualisation, through production, to maintenance and replacement. Compliance is assured by a combination of internal and external audits. We maintained these exacting design and business practices for our non-medical products too, because they made us a better company and gave us a commercial edge.

In HI societies, there are consequences for falling short of societal standards. We need the same for AI developers – a way for consumers to recognise and reward ethical conduct.

A new way to trust

The most straightforward method would be an AI trustmark. Let’s call it the Turing Certificate, named for the great Alan Turing. A Turing Stamp would be the symbol marking a vendor and product as holders of the Turing Certificate, signalling that they are worthy of trust.

The combination of the two – trustworthy organisation, trustworthy product – is critical. This model has long prevailed in the manufacturing sector. Indeed, if it were not for the implementation of agreed standards for design, manufacturing and business practices, our society would have completely rejected all electrical devices years ago on safety grounds, and we would not be worrying about AI today.
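To make that combination concrete, here is a minimal sketch, in Python, of what a machine-readable Turing Certificate record might look like. Nothing here is specified in the article: the field names (holder, scope, last_audit), the annual audit interval and the stamp_applies rule are illustrative assumptions, chosen only to show how “trustworthy organisation, trustworthy product” could be checked in combination.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch only: the article proposes a Turing Certificate in
# principle; these fields and rules are illustrative assumptions.

@dataclass
class TuringCertificate:
    holder: str             # organisation or product name
    scope: str              # "organisation" or "product"
    issued: date            # date the certificate was granted
    last_audit: date        # most recent internal or external audit
    audit_interval_days: int = 365  # assumed maximum gap between audits

    def is_current(self, today: date) -> bool:
        """A certificate stays current only while its audits are up to date."""
        return (today - self.last_audit).days <= self.audit_interval_days


def stamp_applies(org_cert: TuringCertificate,
                  product_cert: TuringCertificate,
                  today: date) -> bool:
    """The Turing Stamp requires BOTH a trustworthy organisation and a
    trustworthy product - the 'combination of the two' described above."""
    return org_cert.is_current(today) and product_cert.is_current(today)


if __name__ == "__main__":
    org = TuringCertificate("Example Labs", "organisation",
                            date(2018, 1, 10), date(2018, 6, 1))
    product = TuringCertificate("Example Assistant", "product",
                                date(2018, 3, 5), date(2018, 6, 1))
    print(stamp_applies(org, product, date(2018, 9, 1)))  # True
```

The point of pairing the two records is that neither alone earns the stamp: a reputable organisation shipping an unaudited product, or a certified product from a lapsed organisation, would fail the check.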

In the manufacturing sector, standards are both mandatory and enforceable. Manufacturing is a highly visible process with an obvious geographical footprint. AI development is more elusive. Mandatory Turing certification would be cumbersome. A voluntary Turing Certificate would allow responsible companies to opt in, including start-ups. It’s better to grow up with standards than bump into them in your teens. Done right, the costs of securing certification should be covered by increased sales from customers willing to pay a premium.

But there would only be a commercial incentive to comply if all players – companies, consumers and governments – could trust the system. Every Turing-certified organisation would be, in effect, a Turing ambassador. Smart companies, trading on quality, would welcome an auditing process that weeded out poor behaviour, just as Axon Instruments and thousands of other companies accept internal and external quality audits on a regular basis in exchange for the privilege of selling medical and non-medical devices.

At the same time, citizens would surely welcome the opportunity to make informed decisions, and to tread the middle path between accepting a free-for-all and excluding AI from their lives.

If we can cooperate in HI societies, surely we can cooperate in the vital work of nurturing HI and AI together.

Note: This piece is part of a wider conversation between Kay Firth-Butterfield, the Head of Artificial Intelligence and Machine Learning at the World Economic Forum’s Centre for the Fourth Industrial Revolution, and the author of this piece, Chief Scientist of Australia Alan Finkel.