Bias in AI is a real problem. Here’s what we should do about it


Image: Nono-Y the robot was one of the highlights of the 2012 Geneva Inventions Fair. (WIPO/Emmanuel Berrod)

This article is brought to you thanks to the strategic cooperation of The European Sting with the World Economic Forum.

Authors: Robert Gryn, Chief Executive Officer, Codewise & Pawel Rzeszucinski, Chief Data Scientist, Codewise

Hardly a day goes by without a new story about the power of AI and how it will change our world. However, there are also dangers alongside the possibilities. While we should be rightly excited by the idea of AI revolutionizing eye disease and cancer diagnostics, or the potential transport revolution promised by autonomous vehicles, we should not ignore the risks – and one of the most pressing of these is bias.

Unlike killer time-traveling robots, AI bias is unlikely to be the stuff of the next Hollywood blockbuster – and yet it is a potentially huge problem.

How can AI be biased?

Although it’s comforting to imagine AI algorithms as completely emotionless and neutral, that is simply not true. AI programmes are made up of algorithms that follow rules. Those rules are not written by hand; they are learned by feeding the algorithms data, from which they infer hidden patterns. If the training data is collected inaccurately, an error or an unjust rule can become part of the algorithm – which can lead to biased outcomes.
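
To make that concrete, here is a minimal, purely illustrative sketch (the groups, records and labels below are invented): a simple model that memorises the most common outcome for each group will faithfully reproduce whatever skew was present in its training records.

```python
from collections import Counter, defaultdict

# Hypothetical historical records: (group, outcome) pairs. Group "B" is both
# under-represented and mostly labelled "reject" in the data we happened to collect.
training_data = [
    ("A", "approve"), ("A", "approve"), ("A", "approve"), ("A", "reject"),
    ("A", "approve"), ("A", "approve"), ("A", "reject"), ("A", "approve"),
    ("B", "reject"), ("B", "reject"),
]

# A deliberately simple "model": memorise the most common historical outcome
# for each group and predict it for every future case.
outcomes_by_group = defaultdict(Counter)
for group, outcome in training_data:
    outcomes_by_group[group][outcome] += 1

def predict(group):
    return outcomes_by_group[group].most_common(1)[0][0]

print(predict("A"))  # approve -- the majority pattern for group A
print(predict("B"))  # reject  -- the skew in the records has become the rule
```

The model has no prejudice of its own; it is simply an accurate mirror of a skewed dataset.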

Some day-to-day examples might be facial recognition software that performs worse on non-white people, or speech recognition software that doesn’t recognize women’s voices as well as men’s. Or consider the even more worrying claims of racial discrimination in the AI used by credit agencies and parole boards.

The algorithm used by a credit agency might be developed using data from pre-existing credit ratings or based on a particular group’s loan repayment records. Alternatively, it might use data that is widely available on the internet – for example, someone’s social media behaviour or generalized characteristics about the neighborhood in which they live. If even a few of our data sources are biased, whether because they contain information on sex, race, colour or ethnicity, or because they were collected in a way that does not equally represent all stakeholders, we could unwittingly build bias into our AI.
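
A subtle point here is that simply deleting the sensitive column is not enough: another feature can act as a proxy for it. The sketch below is hypothetical (the neighbourhood codes and repayment records are invented) and shows how a naive rule learned from neighbourhood-level data can end up excluding a whole community, and with it any group concentrated there.

```python
from collections import Counter

# Invented loan records. The protected attribute has been removed entirely,
# but in the real world the neighbourhood code may correlate strongly with it.
applicants = [
    # (neighbourhood_code, repaid_loan)
    ("90001", False), ("90001", False), ("90001", True),
    ("90210", True),  ("90210", True),  ("90210", True),
]

# A naive learned rule: approve applicants only from neighbourhoods whose
# historical repayment rate exceeds 50%.
repaid, total = Counter(), Counter()
for zip_code, ok in applicants:
    total[zip_code] += 1
    repaid[zip_code] += ok

approved_zips = {z for z in total if repaid[z] / total[z] > 0.5}
print(approved_zips)  # {'90210'} -- an entire neighbourhood is excluded,
                      # and with it any group concentrated there
```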

If we feed our AI with data showing that the majority of high-level positions are filled by men, all of a sudden the AI “knows” the company is looking to hire a man, even when that isn’t a criterion. Training algorithms with poor datasets can lead to conclusions such as that women are poor candidates for C-suite roles, or that someone from a minority group living in a poor ZIP code is more likely to commit a crime.

As we know from basic statistics, even if there is a correlation between two characteristics, that doesn’t mean that one causes the other. These conclusions may not be valid and individuals should not be disadvantaged as a result. Rather, this implies that the algorithm was trained using poorly collected data and should be corrected.
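
The classic reminder is that two quantities can move together because of a third, hidden factor. The toy simulation below (entirely invented numbers) produces a strong correlation between ice cream sales and drownings, yet neither causes the other; both are driven by hot weather. A biased dataset can manufacture exactly this kind of spurious link between a protected characteristic and an outcome.

```python
import random

random.seed(0)

# Invented data: a hidden confounder (hot weather) drives both quantities.
temps = [random.uniform(15, 35) for _ in range(200)]
ice_cream_sales = [t * 3 + random.gauss(0, 5) for t in temps]
drownings = [t * 0.2 + random.gauss(0, 1) for t in temps]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Strong correlation, but banning ice cream would not prevent drownings:
# both are driven by the weather, not by each other.
print(round(pearson(ice_cream_sales, drownings), 2))
```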

Fortunately, there are some key steps we can take to prevent these biases from forming in our AI.

1. Awareness of bias

Acknowledging that AI can be biased is the vital first step. The view that AI doesn’t have biases because robots aren’t emotional prevents us from taking the necessary steps to tackle bias. Ignoring our own responsibility and ability to take action has the same effect.

2. Motivation

Awareness will provide some motivation for change but it isn’t enough for everyone. For-profit companies creating a product for consumers have a financial incentive to avoid bias and create inclusive products; if company X’s latest smartphone doesn’t have accurate speech recognition, for example, then the dissatisfied customer will go to a competitor. Even then, there can be a cost-benefit analysis that leads to discriminating against some users.

For groups where these financial motives are absent, we need to provide outside pressure to create a different source of motivation: a biased algorithm in a government agency, for example, could unfairly affect the lives of millions of citizens.

We also need clear guidelines on who is responsible in situations where multiple partners deploy an AI. Consider, for example, a government programme based on privately developed software that has been repackaged by another party: who is responsible here? We need to make sure that we don’t end up in a situation where everyone passes the buck in a never-ending loop.

3. Ensuring we use quality data

All the issues that arise from biased AI algorithms are rooted in tainted training data. If we can avoid introducing biases in how we collect data and in the data we feed to our algorithms, then we have taken a significant step towards avoiding these issues. For example, training speech recognition software on a wide variety of equally represented users and accents can help ensure no minorities are excluded.
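
A practical starting point, sketched below with invented evaluation records, is to audit how well each group is represented in the data and to report accuracy per group rather than a single overall figure, which can hide large gaps.

```python
from collections import Counter, defaultdict

# Hypothetical speech-recognition test results: (accent_group, transcribed_correctly)
results = [
    ("accent_a", True), ("accent_a", True), ("accent_a", True),
    ("accent_a", False), ("accent_a", True), ("accent_a", True),
    ("accent_b", True), ("accent_b", False), ("accent_b", False),
]

counts = Counter(group for group, _ in results)
correct = defaultdict(int)
for group, ok in results:
    correct[group] += ok

print("Representation:", dict(counts))
for group, n in counts.items():
    print(f"{group}: {correct[group] / n:.0%} accuracy on {n} samples")
# A single overall accuracy (67%) would hide the gap between the two groups
# (83% vs 33%) and the fact that one group has far fewer samples.
```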

If AI is trained on cheap, easily acquired data, then there is a good chance it won’t be vetted to check for biases. The data might have been acquired from a source which wasn’t fully representative. Instead, we need to make sure we base our AI on quality data that is collected in ways which mitigate introducing bias.

4. Transparency

The AI Now initiative believes that if a public agency can’t explain an algorithm or how it reaches its conclusions, then that algorithm shouldn’t be used. When an algorithm is open to scrutiny, we can identify why biased or unfair decisions are being reached, give people the chance to question its outputs and, as a consequence, gather feedback that can be used to address the issues appropriately. Transparency also helps keep those responsible accountable and prevents companies from relinquishing their responsibilities.
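
As a small illustration of what this kind of transparency can look like in practice, an interpretable model such as a logistic regression lets a reviewer read off which features drive its decisions. The sketch below uses scikit-learn with invented feature names and toy data; a large weight on something like a neighbourhood code would be an immediate red flag worth questioning before deployment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy data: columns are [income, years_at_job, neighbourhood_code].
feature_names = ["income", "years_at_job", "neighbourhood_code"]
X = np.array([
    [30, 1, 1], [80, 5, 0], [45, 2, 1], [90, 10, 0],
    [25, 1, 1], [70, 6, 0], [40, 3, 1], [85, 8, 0],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = credit approved

model = LogisticRegression().fit(X, y)

# Reading off the learned weights is a first, crude form of explanation:
# reviewers and affected individuals can at least see what the decision rests on.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
```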

While AI is undeniably powerful and has the potential to help our society immeasurably, we can’t pass the buck of our responsibility for equality to the mirage of supposedly all-knowing AI algorithms. Biases can creep in without intention, but we can still take action to mitigate and prevent them. It will require awareness, motivation, transparency and the right data.
