Trust in facial recognition technology can be achieved. Here’s how


This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Author: Kay Firth-Butterfield, Head, Artificial Intelligence and Machine Learning, World Economic Forum & Lofred Madzou, Project Lead, AI & Machine Learning, World Economic Forum & Sébastien Louradour, Fellow, Artificial Intelligence and Machine Learning, World Economic Forum


  • The potential to misuse facial recognition software (FRS) has raised fundamental questions around ethical use, privacy and accessibility.
  • Those developing facial recognition software can develop standards to help ensure ethical and responsible technology design.

Facial recognition software (FRS) has hit the headlines of late, and rightly so. Its potential for misuse has raised fundamental questions around ethical use, privacy and accessibility. In some cases, major companies have even halted development, and some NGOs have called for total bans.

These moves have pushed many to call for more standardization in the development of facial recognition technology, amid growing recognition that it can be used to infringe on liberties in unacceptable ways. The chief of police in one major US city even pointed out that FRS’ low accuracy could lead authorities to misidentify someone 96% of the time if officers relied on the technology alone.

To help mitigate these fears, those developing facial recognition software can develop standards to help ensure ethical and responsible technology design. Such human-centered standards can help forge trust and protect against bias or misapplication.


What is algorithm bias?

Algorithm bias, also called machine learning bias, is a phenomenon in which algorithms can act in a discriminatory or prejudiced manner due to misplaced assumptions during the learning phase of their development.

Unconscious biases regarding gender, race and social class can make their way into the training data fed by programmers into machine-learning algorithms: systems that continuously improve their own performance by incorporating new data into an existing model.

These biases can be observed in the algorithm’s output: erroneous assumptions reflected in its results, which can lead to embarrassing news coverage.
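To make the definition concrete, here is a minimal sketch of how such bias shows up in a model's output: accuracy measured separately per demographic group. The group names and counts below are entirely hypothetical, not real FRS results.

```python
# Hypothetical per-group results from a facial recognition classifier;
# group names and counts are illustrative only, not real FRS data.
results = {
    "group_a": {"correct": 96, "total": 100},
    "group_b": {"correct": 71, "total": 100},
}

for group, r in results.items():
    accuracy = r["correct"] / r["total"]
    print(f"{group}: accuracy {accuracy:.0%}")
# group_a: accuracy 96%
# group_b: accuracy 71%
```

A large, consistent accuracy gap between groups like this is the observable signature of algorithm bias, even when overall accuracy looks acceptable.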

Over the past year, the World Economic Forum’s Artificial Intelligence team found that responsible limits can be set on facial recognition technology. The team worked alongside the public and private sectors to develop a unique FRS framework anchored in responsible development for engineers and responsible deployment for policymakers.

The framework is the first to go beyond general principles and to operationalize commercial use cases under the banner of ethical and responsible AI. It comprises four simple steps:

1. Draft principles that define responsible use.

Facial recognition has gained special importance as COVID-19 has pushed contactless services as a safe way to verify identities. While touchless and passwordless technology is the path ahead, companies can pick the direction of travel.

To guide that journey – and prevent misuse – companies can draft a set of ‘Principles for Action’ that define what constitutes responsible facial recognition technology. These principles should focus on seven key areas that protect users from misuse of the technology, and should be developed in consultation with industry players, policymakers, advocacy groups and academics, with the goal of reaching a high level of consensus among these stakeholders. The areas are:

  • Privacy: Protecting end-users’ personal data
  • Risk assessment: Building processes to mitigate the risks of errors and their consequences
  • Proportional use of technology: Assessing the trade-off between the risks and opportunities of a given use
  • Accountability: Defining the responsibility of platform providers and of the organizations using the technology
  • Consent: Defining the rules for free consent by end-users
  • Accessibility: Making sure the technology accommodates differently-abled people
  • Adaptability: Offering fallback options as situations change and keeping humans available for oversight

To ensure the effective adoption of these Principles for Action, they must be embedded at the core of business operations through internal processes and oversight to lessen potential risks.

Taking these points into consideration can ensure a high level of security and respect for data privacy. In practice, they could translate into mechanisms such as auto-deleting biometric data after 24 hours, conducting impact assessments during design to identify any potential bias, or displaying public signage so that passersby know when an FRS system is running.
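As an illustration of the first mechanism, here is a minimal sketch of a 24-hour auto-deletion routine. The in-memory store and record IDs are hypothetical stand-ins; a real deployment would use a database with a time-to-live policy.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(hours=24)

# Hypothetical in-memory store mapping record IDs to capture timestamps;
# a real deployment would use a database with a TTL policy instead.
biometric_store = {
    "rec-001": datetime.now(timezone.utc) - timedelta(hours=30),  # past retention
    "rec-002": datetime.now(timezone.utc) - timedelta(hours=2),   # still within retention
}

def purge_expired(store, now=None):
    """Delete biometric records older than the retention window; return purged IDs."""
    now = now or datetime.now(timezone.utc)
    expired = [rid for rid, captured in store.items() if now - captured > RETENTION]
    for rid in expired:
        del store[rid]
    return expired

print(purge_expired(biometric_store))  # ['rec-001']
```

Running such a purge on a schedule (e.g. hourly) bounds how long any biometric record can exist, which is easier to audit than ad-hoc deletion.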


2. Design systems to support product teams.

In developing ‘responsible by design’ technologies, product teams will need special support, best practices, and systems for testing or quality control. Factors to consider when designing systems to provide this support will include:

  • Facial recognition justification
  • A data plan matching end-user characteristics
  • Bias risk mitigation
  • Methods to inform end-users

To address each factor, strong collaboration between the organization using the technology and the platform provider will help mitigate risks. For bias risk mitigation, for example, the algorithms supplied by platform providers must be trained on data that truly represents the users of the service, and the system must be tested to ensure it behaves properly before release. To do so, organizations should assess the diversity of their end-users and share that assessment with the platform provider so the algorithm can be trained accordingly.
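The diversity assessment described above can be sketched as a simple comparison between the demographic make-up of the end-user base and that of the training data. All the shares and the tolerance threshold below are hypothetical, purely for illustration.

```python
# Hypothetical demographic shares; in practice both distributions would come
# from a real assessment by the organization and the platform provider.
end_user_shares = {"group_a": 0.40, "group_b": 0.35, "group_c": 0.25}
training_shares = {"group_a": 0.80, "group_b": 0.15, "group_c": 0.05}

TOLERANCE = 0.10  # maximum acceptable gap; an arbitrary threshold for illustration

# Groups that appear noticeably less often in the training data than among end-users.
under_represented = {
    group: round(end_user_shares[group] - training_shares[group], 2)
    for group in end_user_shares
    if end_user_shares[group] - training_shares[group] > TOLERANCE
}
print(under_represented)  # {'group_b': 0.2, 'group_c': 0.2}
```

Flagged groups would then need more training data (or re-weighting) before the system is released.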

 

While acknowledging that biases may still occur, organizations should anticipate this risk by building fallback systems that are robust enough to provide the same level of service to anyone and thus diminish any form of discrimination due to algorithm biases.

3. Self-assess your work.

Best practices and principles cannot exist in a vacuum. Self-assessment is a necessary step to check that the Principles for Action are being respected and to identify potential blind spots. To that end, an assessment questionnaire can help teams test how well the systems they have designed match the standards they have set.

For example, when assessing proportional use, organizations can check whether they have considered alternatives to FRS and documented the reasons for rejecting them. They can also analyse the rates of false negatives and false positives and determine whether those rates are suitable for the use case being deployed, and compare results across skin tones to verify that the system does not produce any form of discrimination.
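Such an analysis can be sketched as follows: compute false-positive and false-negative rates separately per skin-tone group and look for gaps. The confusion-matrix counts below are hypothetical, purely for illustration.

```python
def error_rates(counts):
    """False-positive and false-negative rates from confusion-matrix counts."""
    fpr = counts["fp"] / (counts["fp"] + counts["tn"])
    fnr = counts["fn"] / (counts["fn"] + counts["tp"])
    return fpr, fnr

# Hypothetical confusion-matrix counts per skin-tone group, for illustration only.
by_group = {
    "lighter": {"tp": 95, "fn": 5, "fp": 2, "tn": 98},
    "darker":  {"tp": 80, "fn": 20, "fp": 10, "tn": 90},
}

for group, counts in by_group.items():
    fpr, fnr = error_rates(counts)
    print(f"{group}: false-positive rate {fpr:.0%}, false-negative rate {fnr:.0%}")
# lighter: false-positive rate 2%, false-negative rate 5%
# darker: false-positive rate 10%, false-negative rate 20%
```

A gap like the one above would fail a self-assessment and trigger retraining or redesign before deployment continues.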

When gaps are found, the assessment process can prompt teams to return to their best practices, take internal action to close those gaps, and improve their accountability and trustworthiness in the eyes of their customers.

4. Validate compliance through a third-party audit.

Too often, companies rely only on home-made labels to build transparency with users. The quality of those labels can be questionable, which poses a systemic risk of mistrust for the industry and could undermine wider efforts to build transparency.

Being audited by an independent third party that delivers certification in line with the Principles for Action could be one path forward. Lessons from the accounting industry apply here: independent, accredited agencies add transparency and safety. (For example, to draft our FRS framework, the World Economic Forum Artificial Intelligence team partnered with AFNOR Certification, the French leader in certification and assessment services.)

These evaluations should take place right after systems are deployed to end-users and be conducted regularly to attest that standards are respected over time. Certified organizations will then be able to show their customers that they comply with a range of requirements.

While certification is widely used in many industries and services, facial recognition, despite the high level of scrutiny it attracts, is still deployed without any certification scheme. We believe that transparency and trust in this field can only be achieved with such mechanisms.

Looking ahead
These four steps can help inform the design of responsible systems for flow-management use cases in FRS. They can also ensure that the designers and users of these systems effectively comply with achievable principles.

Certification is a reachable step towards the regulation of FRS. The cooperation between industry actors, policymakers, academics and civil society on this policy project has shown a strong appetite for standards for the commercial use of FRS.

Yet governments still have to step in and adopt bills that ensure sustainable regulation of FRS, along with international standards defining what responsible use of the technology should look like. They also need to address the thorny question of FRS for law enforcement and determine the right levels of oversight and accountability for its use cases.

The recent calls for regulation by organizations such as IBM, Microsoft and Amazon should be followed by new regulations or guidance; otherwise we will likely see a deployment of FRS that leads to mistrust and consumer avoidance.
