These rules could save humanity from the threat of rogue AI

(Image: Franck V., Unsplash)

This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Author: Johnny Wood, Writer, Formative Content


The possibility of man-made machines turning against their creators has become a popular topic of late. Isaac Asimov’s Three Laws of Robotics are, undoubtedly, no longer fit for purpose. For the sake of the global public good, we need something more serious and more specific to safeguard our limitless ambitions – and humanity itself.

Today, the internet connects more than half the world’s population. And although the internet provides us with convenience and efficiency, it also brings threats. This is especially true in an age in which a good deal of our daily life is driven by big data and artificial intelligence. Algorithms have been widely used to determine what we read, where we go and how we get there, what music we listen to, and what we buy at what price. Self-driving cars, automatic cancer diagnosis and machine writing have never been so close to large-scale commercial application.

If data is the new oil, then AI is the new drill – and to extend this analogy, malfunctioning algorithms are the new pollution.

 Where is AI going to be most profitable?

Image: Statista / Tractica

It is important to note that malfunction does not equal malevolence. Likewise, good intentions do not guarantee a lack of legal, ethical and social troubles. Already with AI we have seen numerous examples of such issues, namely unintended behaviours, lack of foresight, difficulties in monitoring and supervision, distributed liability, privacy violation, algorithmic bias and abuse. Moreover, some researchers have started to worry about a potential rise in the unemployment rate caused by smart machines replacing human labour.

Troubles are looming

Misbehaving AI is increasingly prevalent these days. A facial recognition app tagged African-Americans as gorillas; another identified 28 US Members of Congress as wanted criminals. A risk-assessment tool used by US courts was alleged to be biased against African-Americans; Uber’s self-driving car killed a pedestrian in Arizona; Facebook and other big companies were sued for discriminatory advertising practices; and lethal AI-powered weapons are in development.

We are marching into unmapped territory – which is why we urgently need rules and guiding principles to act as a compass, pointing us in the right direction. Technology ethics matter more now than they ever have, and must be at the core of this set of rules and principles.

We should acknowledge some of the early efforts to build such a framework. Notable examples include the Asilomar AI Principles, and IEEE’s ethics standards and certification programme.

And in late 2018, Pony Ma, the founder and CEO of Tencent, proposed an ethical framework for AI governance, namely ARCC (available, reliable, comprehensible and controllable).

Available, Reliable, Comprehensible, and Controllable: ARCC

Ma’s framework can become a foundation for the governance of AI systems in China and beyond. Its aim is to secure a friendly and healthy relationship between humanity and machinery in the thousands of years to come.

 AI should be available to all

Image: Tencent

Available.

AI should be available to the masses, not just the few. We are so used to the benefits of our smartphones and laptops, but more often than not we forget that half the world remains cut off from this digital revolution.

Advances in AI should fix this problem, not exacerbate it. We need to bring those living in developing areas, the elderly and the disabled into this digital world. The well-being of humanity as a whole should be the sole purpose of AI development. That is how we can ensure that AI will not advance the interests of some humans over the rest.

Take the recent development of medical bots as an example. Miying, Tencent’s AI-enabled medical diagnostic imaging solution, currently works with radiologists in hundreds of local hospitals. This cancer pre-screening system has studied billions of medical images and detected hundreds of thousands of high-risk cases, which it refers to experts. In doing so, it frees doctors from the daily labour of reviewing images and gives them more time to attend to their patients.

Moreover, an available AI is a fair AI. A completely rational machine should be impartial and free of human weaknesses such as emotion or prejudice – but this should not be taken for granted. Recent incidents, like the vulgar language used by a Microsoft chatbot, demonstrate that AI can go seriously wrong when fed inaccurate, incomplete or biased data. An ethics-by-design approach is preferable here – that is, to carefully identify, address and eliminate bias during the AI development process.
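One concrete form such an ethics-by-design check could take is a fairness metric computed before deployment. The sketch below, with entirely hypothetical model outputs and group labels, measures demographic parity – whether a model gives positive outcomes at similar rates across groups:

```python
# Minimal sketch of one possible "ethics by design" check: demographic parity.
# All names and numbers below are hypothetical, for illustration only.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two groups (0.0 means perfectly equal rates)."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred else 0))
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Hypothetical binary decisions for applicants from two groups:
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
# Group A is approved 75% of the time, group B only 25%, so gap = 0.5.
# A gap near 0 suggests similar treatment; a large gap flags the model
# for human review before it goes anywhere near production.
```

Demographic parity is only one of several competing fairness definitions; which metric is appropriate depends on the application and is itself an ethical judgement.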

Regulatory bodies are already formulating guidelines and principles addressing bias and discrimination. Firms such as Google and Microsoft have already set up their own internal ethical boards to guide their AI research.

 AI has to be safe and reliable

Image: Tencent

Reliable.

Since AI is already installed in millions of households, these systems need to be safe, reliable, and capable of safeguarding against cyberattacks and other accidents.

Take autonomous cars as an example. Tencent is developing a Level 3 autonomous driving system and has obtained a licence to test its self-driving cars on public roads in Shenzhen. But before getting that test licence, its self-driving cars had been tested at a closed site over more than a thousand kilometres. Today, no truly self-driving car is in commercial use on our roads, because the standards and regulations for certifying one have yet to be established.

Moreover, for AI to be reliable, it should ensure digital, physical and political security, especially around privacy. We have seen cases in which personal data was collected to train AI systems without users’ consent. AI should therefore comply with privacy requirements, protect privacy by design, and safeguard against data abuse.

 AI should be understood better by all

Image: Tencent

Comprehensible.

Making AI comprehensible is easier said than done, given the enormous complexity of these systems. The hidden layers between the input and output of a deep neural network make it impenetrable, even to its own developers. As a result, when a car accident involves an algorithm, it may take years to find the reason behind the malfunction.

Fortunately, the AI industry has already done some research on explainable AI models. Algorithmic transparency is one way to achieve comprehensible AI. While users may not care about the algorithms behind a product, regulators require deep knowledge of their technical details. Nonetheless, good practice is to provide users with easy-to-understand information and explanations about the decisions assisted or made by AI systems.
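To give a flavour of what such a user-facing explanation might look like, the sketch below breaks a simple linear scoring model’s decision into per-feature contributions. The model, feature names and weights are hypothetical; real explainability tools for deep networks (such as surrogate models or attribution methods) are far more involved, but the idea of "which inputs pushed the decision which way" is the same:

```python
# Minimal sketch of one explainability idea: per-feature contributions
# to a linear model's score. Weights and features are hypothetical.

def explain_linear_decision(weights, features):
    """Return each feature's contribution (weight * value), so a user
    can see which inputs pushed the score up and which pushed it down."""
    return {name: weights[name] * value for name, value in features.items()}

weights  = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
features = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

contributions = explain_linear_decision(weights, features)
score = sum(contributions.values())
# contributions: income +2.0, debt -1.6, years_employed +1.5; score 1.9.
# A plain-language explanation could rank these: "income and employment
# history raised your score; your debt level lowered it."
```

For opaque models, the industry workaround is often to fit an interpretable model like this one locally around a single decision and explain that instead.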

To develop a comprehensible AI, public engagement and the exercise of individuals’ rights should be guaranteed and encouraged. AI development should not be a secret undertaking by commercial companies. The public as end users may provide valuable feedback which is critical for the development of high-quality AI.

Tech companies should be required to provide their customers with information concerning the AI system’s purpose, function, limitations and impact.

 We need to be in charge - always

Image: Tencent

Controllable.

The last – but not least – principle is to make sure that we, human beings, are in charge. Always.

Every innovation comes with risks. But we should not let worries about the extinction of humanity by some artificial general intelligence prevent us from pursuing a better future with new technologies. What we should do is make sure that the benefits of AI substantially outweigh the potential risks. To achieve that, we must establish appropriate precautionary measures to safeguard against foreseeable risks.

For now, people often trust strangers more than they trust AI. We frequently hear that self-driving cars are unsafe, filters are unfair, recommendation algorithms restrict our choices, and pricing bots charge us more. This deeply embedded suspicion is rooted in an information shortage: most of us either don’t care or don’t have the knowledge needed to understand an AI system.

What should we do?

I would like to propose a spectrum of rules, starting from an ethical framework, that may help AI developers and their products earn the trust they deserve.

At one end of the spectrum, we have light-touch rules, such as social conventions, moral rules and self-regulation. The ethical framework mentioned above fits here. At the international level, Google, Microsoft and other big companies have come up with their own AI principles, while the Asilomar AI Principles and IEEE’s AI ethics programme are widely praised.

As we move along the spectrum, there are mandatory and binding rules, such as standards and regulations. We wrote a policy report on self-driving cars this year and found that many countries are making laws both to encourage and to regulate self-driving cars. In the future, there will be new laws for AI.

 

Further along the spectrum, there are criminal laws to punish bad actors for the malicious use of AI. At the far end sit international laws. For example, some international scholars have been pushing the United Nations to draw up a convention on lethal autonomous weapons, much like the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons.

Any new technology, whether a controlled nuclear reaction or a humanoid bot, is neither inherently good nor bad. Ensuring it is the former is down to us.
