How to ensure fair AI throughout the supply chain


This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Author: Mark Brayan, Chief Executive Officer, Appen

  • We need to go back to basics as we navigate the hype to make AI fair.
  • Basic fairness applies to all levels of the immense AI development lifecycle.
  • The fair treatment of the people who collect the data is being overlooked.

For some time now, there has been talk about how leaders developing AI applications need to build "fair AI": AI that is unbiased and equitable, and that ideally improves the quality of life of everyone it touches.

However, most of the thinking around ethical AI has focused on models, explainability, technical teams and data governance.

But what about basic fairness across the AI development lifecycle? That lifecycle is immense. It spans the contractors who collect and annotate the data, the companies and individuals managing that data, the technology specialists building the AI models, the go-to-market experts building the AI applications, and the businesses and individuals using AI-powered products and services.

Such an approach is the only way we can ensure that technology continues to make the world a better place.

What is fair AI?

When an AI product is deployed in the real world, it must work as expected, deliver equitable results for all intended beneficiaries under all circumstances, and not harm anyone physically, mentally or emotionally. That’s a tall order. And it starts with building an unbiased and comprehensive data set.

While this may seem obvious, it is easy for development teams to push unbiased data to the bottom of the pile and instead focus on achieving results as quickly as possible. Doing so, however, bakes bias into the product and creates a long-term liability that is easy to neglect.

When is AI unfair?

Unfairness in the form of bias can appear in AI in multiple ways.

Consider criminal risk assessment algorithms that use behavioural and demographic data to determine the risk of reoffending.

One recent study found such an algorithm to be racially biased. For example, an 18-year-old Black woman was charged with petty theft after stealing an $80 bicycle. Despite having only one prior juvenile misdemeanour, she was rated a higher risk of reoffending than a 41-year-old white man who was charged with a similar crime but had several prior offences.

In the years that followed, the woman committed no further offences, while the man is now serving a prison term for the theft of thousands of dollars' worth of electronics. Clearly, the algorithm was trained on poor data.
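Bias of this kind can be made measurable. One standard check is to compare error rates across demographic groups: among people who did not reoffend, how often did each group get flagged as high risk? A minimal sketch, using invented example records rather than the study's actual figures:

```python
# Each record: (group, predicted_high_risk, actually_reoffended).
# These records are invented purely for illustration.
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", False, False), ("B", True,  False), ("B", True, True),
]

def false_positive_rate(group):
    # Among people in this group who did NOT reoffend,
    # what fraction were still labelled high risk?
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("A", "B"):
    print(f"group {g}: false positive rate = {false_positive_rate(g):.2f}")
```

If the rates diverge sharply between groups, as they do in this toy data, the model is punishing one group's non-reoffenders more than the other's, regardless of its overall accuracy.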

Image: Proceedings of the National Academy of Sciences of the United States of America

Unfair AI also lives inside everyday technology. If a company sells cars equipped with speech recognition in multiple countries but trains the product using only native male speakers for each language, the system may struggle to understand women or anyone with a different accent.

This could lead to drivers being taken to the wrong destination. Worse still, it could cause distracted driving, leading to accidents. In addition to being unfair to some users, biased data can saddle solution providers with substandard products that can damage their reputation.

Where humans triumph

Developing a comprehensive and unbiased dataset requires data diversity and breadth. This ensures the product is trained in every situation it is likely to encounter in real life, such as all of the accents, voice tones and languages that a car’s speech recognition system may encounter in its target markets.

Achieving this means working with people who resemble the entire customer profile to collect, annotate and validate the AI model training data.
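One practical way to act on this is to audit a training set's demographic coverage before any model is trained. A minimal sketch, with invented sample metadata and field names; a real speech dataset would carry far richer demographic and acoustic attributes:

```python
from collections import Counter

# Invented metadata for a handful of speech samples (illustrative only).
samples = [
    {"language": "en", "gender": "male",   "accent": "US"},
    {"language": "en", "gender": "male",   "accent": "US"},
    {"language": "en", "gender": "female", "accent": "UK"},
    {"language": "de", "gender": "male",   "accent": "DE"},
]

def coverage_report(samples, attribute):
    """Return each attribute value's share of the dataset."""
    counts = Counter(s[attribute] for s in samples)
    total = len(samples)
    return {value: count / total for value, count in counts.items()}

report = coverage_report(samples, "gender")
# Flag any group falling below a chosen threshold, e.g. 30% of samples.
underrepresented = [g for g, share in report.items() if share < 0.3]
print(report, underrepresented)
```

A report like this makes gaps visible early, when they can still be fixed by collecting more data from underrepresented groups rather than by patching a biased model later.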

It also means working with a diverse team on the model building itself. A computer-based annotation system can’t do it alone when it comes to interpreting complex situations and catching subtle errors that could have real-life consequences.

For example, a human annotating images or a video for a self-driving car application could interpret that a person with a certain posture walking between two cars may be pushing a buggy that will appear in traffic before the person does.

Even the best computer-based annotation systems would struggle to make this interpretation. Similarly, a human reading a product review is much more likely to detect sarcasm than a machine is.

The people behind the data

Leaders committed to fair AI must include another important link in the AI development lifecycle when building global AI products or services: the millions of people who collect and label the data. Engaging these people in a fair and ethical way is mission-critical and should be part of every organisation’s responsibility charter.

Fair treatment means committing to fair pay, flexible working hours, including people from any and all backgrounds, respecting privacy and confidentiality, and working with people in a way that makes them feel heard and respected.

Leaders should also inspire their contractors in a way that instils pride in working on the most impactful technology used by the global economy.

Why does fair AI matter, beyond the obvious?

Quite simply, it’s good for society, and it’s good for business. Product teams, for example, are inspired when they’re building products that have a positive impact on their market and the world. But what else do fair products do?

  • They work for the entire target customer base: Products based on representative data will work for all users without bias, and so sell better, reduce frustration and lower returns.
  • They are safer: Comprehensive, unbiased training data will lead to safer, better-quality products, reducing the potential for failure.
  • They build loyalty: Great products and a great reputation are keys to increased customer loyalty.
  • They protect the brand: Products that work as expected often reduce the risk of serious and lasting brand damage.

According to one MIT Sloan study, only about one in ten enterprises currently report obtaining “significant” financial benefits from AI.

In 2021, as boards focus on closing the gap between AI’s potential and its reality, they will increasingly prioritise the adoption of the principles of fair AI. They know it will ensure projects work as designed, deliver expected benefits, and contribute to a better society.

AI-powered applications are also spreading into every industry, including the public sector. AI developers therefore have a responsibility to ensure their products are built on unbiased, comprehensive data sets that work for everyone.

Business and technology leaders should embrace fair AI as a core tenet to improve their businesses whilst helping society as a whole.
