These rules could save humanity from the threat of rogue AI

(Franck V., Unsplash)

This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Author: Johnny Wood, Writer, Formative Content


The possibility of man-made machines turning against their creators has become a trendy topic these days. Undoubtedly, Isaac Asimov’s Three Laws of Robotics are no longer fit for purpose. For the sake of the global public good, we need something more serious and specific to safeguard our limitless ambitions – and humanity itself.

Today, the internet connects more than half the world’s population. And although the internet provides us with convenience and efficiency, it also brings threats. This is especially true in an age in which a good deal of our daily life is driven by big data and artificial intelligence. Algorithms have been widely used to determine what we read, where we go and how we get there, what music we listen to, and what we buy at what price. Self-driving cars, automatic cancer diagnosis and machine writing have never been so close to large-scale commercial application.

If data is the new oil, then AI is the new drill – and to extend this analogy, malfunctioning algorithms are the new pollution.

 Where is AI going to be most profitable?

Image: Statista / Tractica

It is important to note that malfunction does not equal malevolence. Likewise, good intentions do not guarantee a lack of legal, ethical and social troubles. With AI, we have already seen numerous examples of such issues: unintended behaviours, lack of foresight, difficulties in monitoring and supervision, distributed liability, privacy violations, algorithmic bias and abuse. Moreover, some researchers have started to worry about a potential rise in unemployment caused by smart machines replacing human labour.

Troubles are looming

Misbehaving AI is increasingly prevalent these days. A facial recognition app tagged African-Americans as gorillas; another one identified 28 US Members of Congress as wanted criminals. A risk-assessment tool used by US courts was alleged to be biased against African-Americans; Uber’s self-driving car killed a pedestrian in Arizona; Facebook and other big companies were sued for discriminatory advertising practices; and lethal AI-powered weapons are in development.

We are marching into unmapped territory – which is why we urgently need rules and guiding principles as a compass to guide us in the right direction. Technology ethics are more important now than they have ever been, and must be at the core of this set of rules and principles.

We should acknowledge some of the early efforts to build such a framework. Notable examples include the Asilomar AI Principles, and IEEE’s ethics standards and certification programme.

And in late 2018, Pony Ma, the founder and CEO of Tencent, proposed an ethical framework for AI governance, namely ARCC (available, reliable, comprehensible and controllable).

Available, Reliable, Comprehensible, and Controllable: ARCC

Ma’s framework can become a foundation for the governance of AI systems in China and beyond. Its aim is to secure a friendly and healthy relationship between humanity and machinery for thousands of years to come.

 AI should be available to all

Image: Tencent

Available.

AI should be available to the masses, not just the few. We are so used to the benefits of our smartphones and laptops that we often forget half the world remains cut off from this digital revolution.

Advances in AI should fix this problem, not exacerbate it. We need to bring those living in developing areas, the elderly and the disabled into this digital world. The well-being of humanity as a whole should be the sole purpose of AI development. That is how we can ensure that AI will not advance the interests of some humans over the rest.

Take the recent development of medical bots as an example. Miying, Tencent’s AI-enabled medical diagnostic imaging solution, is currently working with radiologists in hundreds of local hospitals. This cancer pre-screening system has studied billions of medical images and detected hundreds of thousands of high-risk cases. The bot then refers these cases to experts. In doing so, it frees doctors from the daily labour of reviewing images and gives them more time to attend to their patients.

Moreover, an available AI is a fair AI. A completely rational machine should be impartial and free of human weaknesses such as emotion or prejudice – but this should not be taken for granted. Recent incidents, like the vulgar language used by a Microsoft chatbot, demonstrate that AI can go seriously wrong when fed inaccurate, incomplete or biased data. An ethics-by-design approach is preferred here – that is, to carefully identify, address and eliminate bias during the AI development process.
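To make the ethics-by-design idea concrete, here is a minimal sketch, not drawn from the article, of one simple check a development team might run on training data before any model is built. The groups, labels and threshold below are purely hypothetical.

```python
# Minimal sketch (illustrative only): compare favourable-label rates across
# groups in training data before any model is trained. The groups, labels
# and the 0.2 threshold are hypothetical.
from collections import defaultdict

records = [
    # (group, label): label 1 marks a favourable outcome in the training data
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
for group, label in records:
    counts[group][0] += label
    counts[group][1] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
print("Favourable-label rate per group:", rates)

gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # arbitrary threshold for this sketch
    print(f"Warning: a gap of {gap:.2f} between groups -- review the data before training.")
```

A check like this only catches the crudest imbalances; the point is simply that bias can be looked for, and acted on, before a system ever reaches users.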

Regulatory bodies are already formulating guidelines and principles addressing bias and discrimination. Firms such as Google and Microsoft have set up their own internal ethics boards to guide their AI research.

 AI has to be safe and reliable

Image: Tencent

Reliable.

Since AI is already installed in millions of households, these systems need to be safe, reliable and capable of withstanding cyberattacks and other accidents.

Take self-driving cars as an example. Tencent is developing a Level 3 autonomous driving system and has obtained a license to test its self-driving cars on public roads in Shenzhen. Before receiving the test license, its self-driving cars had been tested on a closed site for more than a thousand kilometres. Today, no true self-driving car is in commercial use on our roads, because the standards and regulations concerning their certification have yet to be established.

Moreover, for AI to be reliable, it should ensure digital, physical and political security, especially around privacy. We have seen cases where personal data was collected for training AI systems without the user’s consent. AI should therefore comply with privacy requirements, protect privacy by design, and safeguard against data abuse.

 AI should be understood better by all

Image: Tencent

Comprehensible.

The enormous complexity of AI systems means this is easier said than done. The hidden layers between the input and output of a deep neural network make it impenetrable, even to its developers. As a result, in the case of a car accident involving an algorithm, it may take years to find the reason behind the malfunction.

Fortunately, the AI industry has already done some research on explainable AI models. Algorithmic transparency is one way to achieve comprehensible AI. While users may not care about the algorithms behind a product, regulators require deep knowledge of their technical details. Nonetheless, good practice would be to provide users with easy-to-understand information and explanations about the decisions assisted or made by AI systems.
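As a rough illustration of the kind of explanation this good practice points towards, the sketch below (not from the article; the model, feature names, weights and values are hypothetical) shows how a simple linear scoring model could report how much each input pushed a decision up or down. Real explainability tooling for deep networks is far more involved.

```python
# Minimal sketch (illustrative only): a per-feature explanation for a simple
# linear scoring model. All names, weights and values below are hypothetical.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 0.7, "debt": 0.9, "years_employed": 0.5}

# Each feature's contribution is weight * value; the decision is the sign of the sum.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approved" if score > 0 else "declined"

print(f"Decision: {decision} (score = {score:.2f})")
for feature, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    direction = "raised" if c > 0 else "lowered"
    print(f"  {feature} {direction} the score by {abs(c):.2f}")
```

Even a toy report like this, listing which factors counted for and against a decision, is closer to "comprehensible AI" than a bare yes-or-no answer.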

To develop comprehensible AI, public engagement and the exercise of individuals’ rights should be guaranteed and encouraged. AI development should not be a secret undertaking by commercial companies. The public, as end users, may provide valuable feedback that is critical to the development of high-quality AI.

Tech companies should be required to provide their customers with information concerning the AI system’s purpose, function, limitations and impact.

 We need to be in charge - always

Image: Tencent

Controllable.

The last – but not least – principle is to make sure that we, human beings, are in charge. Always.

Every innovation comes with risks. But we should not let worries about the extinction of humanity by some artificial general intelligence prevent us from pursuing a better future with new technologies. What we should do is make sure that the benefits of AI substantially outweigh the potential risks. To achieve that, we must establish appropriate precautionary measures to safeguard against foreseeable risks.

For now, people often trust strangers more than AI. We frequently hear that self-driving cars are unsafe, filters are unfair, recommendation algorithms restrict our choices, and pricing bots charge us more. This deeply embedded suspicion is rooted in a lack of information, since most of us either don’t care or don’t have the knowledge needed to understand an AI system.

What should we do?

I would like to propose a spectrum of rules, starting from an ethical framework, that may help AI developers and their products earn the trust they deserve.

At one end, we have light-touch rules, such as social conventions, moral rules and self-regulation. The ethical framework mentioned above fits here. At the international level, Google, Microsoft and other big companies have come up with their own AI principles, while the Asilomar AI Principles and IEEE’s AI ethics programme are widely praised.

As we move along the spectrum, there are mandatory and binding rules, such as standards and regulations. We wrote a policy report on self-driving cars this year and found that many countries are making laws both to encourage and to regulate self-driving cars. In the future, there will be new laws for AI.

 

Further along the spectrum, there are criminal laws to punish bad actors for malicious use of AI. At the far end, there are international laws. For example, some international scholars have been pushing the United Nations to come up with a convention on lethal autonomous weapons, similar to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons.

Any new technology, whether a controlled nuclear reaction or a humanoid bot, is neither inherently good nor bad. Ensuring it is the former is down to us.
