Trust in facial recognition technology can be achieved. Here’s how


(Credit: Unsplash)

This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Authors: Kay Firth-Butterfield, Head, Artificial Intelligence and Machine Learning, World Economic Forum; Lofred Madzou, Project Lead, AI & Machine Learning, World Economic Forum; and Sébastien Louradour, Fellow, Artificial Intelligence and Machine Learning, World Economic Forum


  • The potential to misuse facial recognition software (FRS) has raised fundamental questions around ethical use, privacy and accessibility.
  • Those developing facial recognition software can develop standards to help ensure ethical and responsible technology design.

Facial recognition software (FRS) has hit the headlines of late and rightly so. Its potential for misuse has raised fundamental questions around ethical use, privacy and accessibility. In some cases, major companies have even halted development and some NGOs have called for total bans.

These developments have prompted pushes for more standardization in how facial recognition technology is built, as many recognize that it can be used to inflict unacceptable infringements on civil liberties. The chief of police in one major US city even noted that FRS' low accuracy could lead authorities to misidentify someone 96% of the time if officers were to rely on the technology alone.

To help mitigate these fears, those developing facial recognition software can develop standards to help ensure ethical and responsible technology design. Such human-centered standards can help forge trust and protect against bias or misapplication.

Artificial Intelligence

What is algorithm bias?

Algorithm bias, also called machine learning bias, is a phenomenon in which algorithms can act in a discriminatory or prejudiced manner due to misplaced assumptions during the learning phase of their development.

Unconscious biases regarding gender, race and social class can make their way into the training data fed by programmers into “machine-learning algorithms”, systems which constantly improve their own performance by including new data into an existing model.

These biases can be observed in the algorithm's output: the erroneous assumptions are reflected in results that can cause real harm, and embarrassing news coverage.
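To make the phenomenon concrete, here is a minimal sketch using entirely hypothetical evaluation data: when a model performs well for one group and poorly for another, the gap shows up directly in per-group error rates.

```python
# Minimal illustration of algorithm bias using hypothetical data:
# the same model can show very different error rates across groups.

# (group, true_label, predicted_label) -- hypothetical evaluation results
samples = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]

def error_rate(group):
    """Fraction of misclassified samples for a given group."""
    rows = [(t, p) for g, t, p in samples if g == group]
    return sum(1 for t, p in rows if t != p) / len(rows)

for group in ("A", "B"):
    print(f"Group {group}: error rate {error_rate(group):.0%}")
# Group A: error rate 0%
# Group B: error rate 50%
```

Here the disparity stems from the made-up data, but in practice it is exactly this kind of per-group measurement that exposes bias introduced during training.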


Over the past year, the World Economic Forum Artificial Intelligence team found that responsible limits can be set on facial recognition technology. Its AI team worked alongside public and private sectors to develop a unique FRS framework anchored in responsible development for engineers and deployment for policymakers.

The framework is the first to go beyond general principles and to operationalize commercial use cases under the banner of ethical and responsible AI. It comprises four simple steps:

1. Draft principles that define responsible use.

Facial recognition has gained special importance as COVID-19 has pushed contactless services as a safe way to verify identities. While touchless and passwordless technology is the path ahead, companies can pick the direction of travel.

To guide that journey – and prevent misuse – companies can draft a set of 'Principles for Action' that define what constitutes responsible facial recognition technology. These principles should focus on seven key areas to protect users from any misuse of the technology, and should be developed in consultation with industry players, policymakers, advocacy groups and academics with the goal of reaching a high level of consensus among these stakeholders. The areas include:

  • Privacy: Protecting end users’ personal data
  • Risk assessment: Building processes to mitigate the risks of errors and their consequences
  • Proportional use of technology: Assessing the trade-off between usage risks and opportunities
  • Accountability: Defining the responsibility of platform providers and of organizations using the technology
  • Consent: Setting the rules for free consent by end-users
  • Accessibility: Making sure the technology accommodates differently-abled people
  • Adaptability: Offering fallback options as situations change, and keeping humans on deck for oversight

To ensure the effective adoption of these Principles for Action, they must be embedded at the core of business operations through internal processes and oversight to lessen potential risks.

Taking these points into consideration can ensure a high level of security and respect for data privacy. Their application could take the form of mechanisms such as auto-deleting biometric data after 24 hours, conducting impact assessments with designers to identify any potential bias, or designing public signage so passers-by know when an FRS system is running.
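As one illustrative sketch of such a mechanism (all record and field names here are hypothetical), a 24-hour retention policy can be reduced to a simple purge step that drops biometric records once they age out of the window:

```python
# Sketch of a 24-hour biometric data retention policy (hypothetical schema).
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(hours=24)

def purge_expired(records, now=None):
    """Keep only records captured within the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["captured_at"] <= RETENTION]

# Usage with hypothetical records:
now = datetime.now(timezone.utc)
records = [
    {"id": "a", "captured_at": now - timedelta(hours=2)},
    {"id": "b", "captured_at": now - timedelta(hours=30)},
]
kept = purge_expired(records, now)
print([r["id"] for r in kept])  # only "a" survives the purge
```

In a production system this would run as a scheduled job against the actual data store, but the principle is the same: retention is enforced by code, not by policy documents alone.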

“Human-centered standards can help forge trust and protect against bias or misapplication.”

2. Design systems to support product teams.

In developing ‘responsible by design’ technologies, product teams will need special support, best practices, and systems for testing or quality control. Factors to consider when designing systems to provide this support will include:

  • Facial recognition justification
  • A data plan matching end-user characteristics
  • Bias risk mitigation
  • Methods to inform end-users

To address each factor, strong collaboration between the organization using the technology and the platform provider will help mitigate risks. For example, in bias risk mitigation, algorithms provided by platform providers must be trained on data that truly represents the users of the service, and the system must be tested before release to ensure it behaves properly. To do so, organizations should assess the diversity of their end-users and share that information with the platform provider so the algorithm can be trained accordingly.
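A minimal sketch of that diversity assessment, using entirely hypothetical population shares, might compare the demographic mix of the training data against the expected end-user population and flag under-represented groups:

```python
# Sketch (hypothetical figures): flag demographic groups whose share of the
# training data falls short of their share of the expected end-user base.

def underrepresented(training_share, user_share, tolerance=0.10):
    """Return groups whose training share trails their user share by more
    than the tolerance."""
    return [
        g for g, u in user_share.items()
        if training_share.get(g, 0.0) < u - tolerance
    ]

training = {"group_a": 0.70, "group_b": 0.20, "group_c": 0.10}
users    = {"group_a": 0.40, "group_b": 0.35, "group_c": 0.25}
print(underrepresented(training, users))  # ['group_b', 'group_c']
```

The flagged groups are the ones for which more training data should be collected before deployment; the tolerance threshold is an assumption each organization would set for itself.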

 

While acknowledging that biases may still occur, organizations should anticipate this risk by building fallback systems robust enough to provide the same level of service to everyone, thereby diminishing any discrimination arising from algorithmic bias.

3. Auto-assess your work.

Best practices and principles cannot exist in a vacuum. Auto-assessing is a necessary step to check that the Principles for Action are being respected and to identify potential blind spots. To that end, an assessment questionnaire can help groups test how well the systems they designed match the standards they set.

For example, when assessing proportional use, organizations can check whether they have considered alternatives to FRS and documented the reasons for rejecting them. They can also analyse the levels of false negatives and false positives and determine whether those levels are suitable for the use case being deployed. And they can compare results across skin tones to verify that the system does not produce any form of discrimination.
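As an illustrative sketch (hypothetical data, not the Forum's actual questionnaire), such a comparison of false-positive and false-negative rates across skin-tone groups might look like:

```python
# Sketch (hypothetical data): compare false-positive and false-negative
# rates across skin-tone groups as part of an auto-assessment.

def rates(results):
    """results: list of (true_match, predicted_match) booleans.
    Returns (false-positive rate, false-negative rate)."""
    fp = sum(1 for t, p in results if not t and p)
    fn = sum(1 for t, p in results if t and not p)
    negatives = sum(1 for t, _ in results if not t)
    positives = sum(1 for t, _ in results if t)
    return fp / negatives, fn / positives

by_group = {
    "lighter": [(True, True)] * 9 + [(True, False)]
               + [(False, False)] * 10,
    "darker":  [(True, True)] * 7 + [(True, False)] * 3
               + [(False, True)] * 2 + [(False, False)] * 8,
}

for group, results in by_group.items():
    fpr, fnr = rates(results)
    print(f"{group}: FPR {fpr:.0%}, FNR {fnr:.0%}")
# lighter: FPR 0%, FNR 10%
# darker: FPR 20%, FNR 30%
```

A gap like the one in this made-up data is precisely what the assessment is meant to surface: the organization would then document it and decide whether the system is fit for its use case.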

When gaps are found, the assessment process can prompt teams to revisit their best practices, take internal action to close those gaps, and improve their accountability and trustworthiness in the eyes of their customers.

4. Validate compliance through a third-party audit.

Too often, companies rely only on in-house labels to build transparency with users. The quality of those labels can be questionable, and they also pose a systemic risk of mistrust for the industry that could undermine wider efforts to build transparency.

An audit by an independent third party, delivering certification in line with the Principles for Action, could be one path forward. Lessons from the accounting industry can be applied here, adding transparency and safety through independent, accredited agencies. [For example, to draft our FRS framework, the World Economic Forum Artificial Intelligence team partnered with AFNOR Certification, the French leader in certification and assessment services.]

These evaluations should occur right after systems are deployed for end-users and be conducted regularly to attest that standards are respected over time. By doing so, certified organizations will be able to demonstrate to their customers their compliance with a range of requirements.

While certification is widely used in many industries and services, facial recognition, despite its high level of scrutiny, is still deployed without any existing certification. We believe that transparency and trust in this field can only be achieved with such mechanisms.

Looking ahead
These four steps can help inform the design of responsible systems for flow-management use cases in FRS. They can also help ensure that the designers and users of such systems actually comply with these achievable principles.

Certification is a reachable step towards the regulation of FRS. The cooperation between industry actors, policymakers, academics and civil society on this policy project has shown a strong appetite for standards governing the commercial use of FRS.

Yet governments must still step in to adopt legislation that ensures sustainable regulation of FRS, along with international standards defining what responsible use of the technology should be. They also need to address the thorny question of FRS for law enforcement and determine the right levels of oversight and accountability for its related use cases.

The recent calls for regulation by organizations such as IBM, Microsoft and Amazon should be followed by new regulations or guidance, or we will likely see FRS deployments that lead to mistrust and consumer avoidance.
