This underestimated tool could make AI innovation safer

(Credit: Unsplash)

This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Authors: Ashley Casovan, Executive Director, Responsible AI Institute, and Var Shankar, Director of Policy, Responsible AI Institute


  • Artificial intelligence (AI) is powering impressive advances in many industries. Ensuring that AI systems are deployed responsibly is an urgent challenge.
  • Certification programmes for AI systems should be a critical part of any regulatory approach.
  • They should be developed by subject matter experts, delivered by independent third parties and have auditable trails.

Ensuring that artificial intelligence (AI) systems are deployed responsibly is an urgent challenge. Private investment in AI doubled between 2020 and 2021, and the AI market is expected to expand at a compound annual growth rate of 38.1% until 2030. AI is rapidly powering impressive advances across industries. For example, DeepMind’s groundbreaking release of AI-predicted structures for nearly every known protein is likely to enable major breakthroughs in drug discovery, pandemic response and agricultural innovation.
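To make that growth rate concrete, the short calculation below shows how a 38.1% compound annual growth rate compounds through 2030. The $100bn starting market size is a hypothetical placeholder for illustration, not a figure from the cited forecast.

```python
# A quick sense of scale for a 38.1% compound annual growth rate (CAGR).
# The 2022 market size of $100bn is a hypothetical placeholder,
# not a figure from the cited forecast.
base_year, base_size = 2022, 100.0  # market size in $bn (illustrative)
cagr = 0.381

for year in (2025, 2030):
    projected = base_size * (1 + cagr) ** (year - base_year)
    print(f"{year}: ~${projected:,.0f}bn")
# Output: 2025 ~ $263bn; 2030 ~ $1,323bn (roughly a 13-fold
# increase in eight years).
```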

However, AI systems can be difficult to interpret and can lead to unpredictable outcomes. AI adoption also has the potential to exacerbate existing power disparities. As the pace of AI adoption increases, lawmakers are working hard to put appropriate safeguards in place. This requires a balance between promoting innovation and reducing harm, an understanding of AI’s effects in countless contexts and a long-term vision to address the “pacing problem” of AI as it advances faster than society’s ability to regulate its impacts.

Certification programmes for AI systems

Certification programmes for AI systems should be a critical part of any regulatory approach to AI as they can help achieve all of these goals. To be authoritative, they should be developed by subject matter experts, delivered by independent third parties and have auditable trails.
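As one illustration of what an “auditable trail” could mean in practice, the minimal sketch below chains each certification event to the previous one with a hash, so that any later tampering with the record is detectable on verification. The event names and record fields are illustrative assumptions, not part of any existing certification scheme.

```python
import hashlib
import json
import time

# Minimal sketch of a tamper-evident audit trail: each record embeds
# the hash of the previous record, so altering any entry breaks the
# chain. Events and field names are hypothetical.

def append_event(trail, event):
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    record = {"event": event, "timestamp": time.time(), "prev": prev_hash}
    body = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(body).hexdigest()
    trail.append(record)

def verify(trail):
    """Recompute every hash and check each link to the previous record."""
    prev_hash = "0" * 64
    for record in trail:
        body = {k: record[k] for k in ("event", "timestamp", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

trail = []
append_event(trail, "model v1.2 submitted for certification")
append_event(trail, "independent bias audit completed")
print("Audit trail intact:", verify(trail))  # True unless records were altered
```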

Responsible AI deployment means different things in different contexts. A chatbot for health insurance enrolment is very different from a self-driving car. By having an AI system certified, an organization can prove to consumers, business partners and regulators that the system complies with applicable regulations, conforms to appropriate standards and meets relevant responsible AI quality and testing requirements.

In other fields, certification programmes and other “soft law” mechanisms have successfully supplemented legislation and helped improve transnational standards. Fairtrade certification for coffee assures buyers that coffee bean farmers were paid an appropriate price and conformed to certain social and environmental standards in their farming practices. Dolphin-safe certification for tuna signals compliance with laws and practices devised to prevent the unintended killing of dolphins while fishing for tuna. Leadership in Energy and Environmental Design (LEED) certification provides Platinum, Gold, Silver, and Certified ratings as proof of green building design, construction, and operation.

AI certification programmes could similarly supplement legislation and improve transnational standards for many industries, while also addressing the added complexities of AI systems.

By certifying their automated lending systems, financial institutions could signal to consumers and regulators that the systems are reliable, fair, auditable and able to explain their operations and decisions to loan applicants in plain language. Companies purchasing automated hiring systems may choose only to purchase certified systems to ensure ongoing bias monitoring, a reasonable accommodation process, compliance with laws and meaningful avenues of notification and recourse. Organizations developing applications that use smartphone cameras to automatically screen for skin disease may use certification to show consumers that the solutions are fair, reliable and in alignment with emerging best practices.
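As a concrete example of what “ongoing bias monitoring” might involve for a certified lending or hiring system, the sketch below computes a simple demographic-parity ratio, comparing approval rates across applicant groups. The decision data, group labels and 0.8 review threshold are illustrative assumptions, not requirements of any actual certification programme.

```python
# Illustrative bias-monitoring check: demographic parity, i.e. the
# ratio of the lowest group approval rate to the highest. All data
# and the 0.8 threshold are hypothetical, for illustration only.

def approval_rate(decisions):
    """Fraction of applications approved; decisions are 0 (deny) or 1 (approve)."""
    return sum(decisions) / len(decisions)

def demographic_parity_ratio(decisions_by_group):
    """min/max ratio of approval rates across groups (1.0 = perfect parity)."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return min(rates) / max(rates)

# Hypothetical decisions from an automated lending system.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75.0% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

ratio = demographic_parity_ratio(decisions)
print(f"Demographic parity ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:  # illustrative review threshold
    print("Flag for human review: approval rates diverge across groups.")
```

Under a certification scheme of the kind described here, checks like this might run continuously on live decisions, with the specific metrics and thresholds set by the programme and applicable law.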

Unlocking the societal benefits of AI certification programmes

Given the scale and pace of AI adoption, many of the AI systems being deployed around the world could prove ineffective, unsafe or biased. Without effective and durable regulatory mechanisms – including soft law mechanisms – people, businesses and regulators will not easily be able to distinguish such systems from trustworthy AI systems.

It is time for civil society organizations, companies and lawmakers to consider the potential of responsible AI certification programmes. Civil society organizations can show leadership by developing certification programmes that consider the dynamic nature of AI and by ensuring that they incorporate the interests of marginalized individuals. Corporate leaders are well positioned to provide expertise, access and resources to further the development of independent certification programmes since they are familiar with implementation gaps and best practices for responsible AI adoption.

How is the World Economic Forum ensuring that artificial intelligence is developed to benefit all stakeholders?

Artificial intelligence (AI) is impacting all aspects of society — homes, businesses, schools and even public spaces. But as the technology rapidly advances, multistakeholder collaboration is required to optimize accountability, transparency, privacy and impartiality.

The World Economic Forum’s Platform for Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning is bringing together diverse perspectives to drive innovation and create trust.

  • One area of work that is well-positioned to take advantage of AI is Human Resources — including hiring, retaining talent, training, benefits and employee satisfaction. The Forum has created a toolkit, Human-Centred Artificial Intelligence for Human Resources, to promote positive and ethical human-centred use of AI for organizations, workers and society.
  • Children and young people today grow up in an increasingly digital age in which technology pervades every aspect of their lives. From robotic toys and social media to the classroom and home, AI is part of life. By developing AI standards for children, the Forum is working with a range of stakeholders to create actionable guidelines to educate, empower and protect children and youth in the age of AI.
  • The potential dangers of AI could also impact wider society. To mitigate the risks, the Forum is bringing together over 100 companies, governments, civil society organizations and academic institutions in the Global AI Action Alliance to accelerate the adoption of responsible AI in the global public interest.
  • AI is one of the most important technologies for business. To ensure C-suite executives understand its possibilities and risks, the Forum created the Empowering AI Leadership: AI C-Suite Toolkit, which provides practical tools to help them comprehend AI’s impact on their roles and make informed decisions on AI strategy, projects and implementations.
  • Shaping the way AI is integrated into procurement processes in the public sector will help define best practice which can be applied throughout the private sector. The Forum has created a set of recommendations designed to encourage wide adoption, which will evolve with insights from a range of trials.
  • The Centre for the Fourth Industrial Revolution Rwanda worked with the Ministry of Information, Communication Technology and Innovation to promote the adoption of new technologies in the country, driving innovation on data policy and AI – particularly in healthcare.

Lawmakers in the European Union, US and Canada are poised to enact broad legal requirements for AI systems, drawing upon responsible AI principles articulated by the international community. By incorporating certification programmes and other soft law mechanisms as complements to legal requirements for AI systems, lawmakers can ensure that their legislative aims are reflected in requirements for specific AI use cases in different industries. Lawmakers can fund pilots of soft law AI instruments in different industries, think carefully about developing markets for AI certification, accreditation and auditing, and direct government departments to lead by example by requiring or developing certification programmes for public-sector procurement.

There are signs of progress on the regulatory front. An early draft of the EU’s proposed AI Act acknowledges a role for soft law mechanisms. According to the draft, aligning AI systems with AI standards developed by European standards organizations could help show compliance with the Act. In December 2021, the UK’s Centre for Data Ethics & Innovation published a roadmap for effective assurance and certification markets for AI systems.

AI’s remarkable and rapidly increasing transformation of our society calls for the adoption of a flexible and durable regulatory response. Certification programmes for AI systems should be a vital element of this response.
