How can we best navigate the frontier of AI regulation?


This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Authors: Kay Firth-Butterfield, CEO, Good Tech Advisory; Satwik Mishra, Vice President (Content), Centre for Trustworthy Technology



  • In March 2023, over 33,000 people in the AI industry signed the Future of Life Institute open letter asking for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
  • The aim was to bring the huge concerns about generative AI into the mainstream, and it has succeeded.
  • Steps are being taken to ensure that AI is only used as a force for good, but there are concerns about whether the resulting AI regulation will be enough.

In March 2023, over 33,000 individuals involved with the design, development and use of AI signed the Future of Life Institute open letter asking for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This was never expected to happen, but the aim was to bring the huge concerns about generative AI into the mainstream.

In July, the White House unveiled a framework of voluntary commitments for regulating AI. Evidently, American policymakers are paying attention. Central to these safeguards are the principles of promoting ‘safety, security and trust.’ Seven prominent AI companies have consented – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI. They have agreed upon: internal and external independent security testing of AI systems before public release; sharing of best practices; investing in cybersecurity; watermarking generative AI content; publicly sharing capabilities and limitations; and investing in mitigating societal risks, such as bias and misinformation.


How is the World Economic Forum creating guardrails for Artificial Intelligence?

In response to the uncertainties surrounding generative AI and the need for robust AI governance frameworks to ensure responsible and beneficial outcomes for all, the Forum’s Centre for the Fourth Industrial Revolution (C4IR) has launched the AI Governance Alliance. The Alliance will unite industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.

The positive takeaways

This announcement sends a resounding message to the market that AI development shouldn’t harm the social fabric. It follows through on demands from civil society groups, leading AI experts and some AI companies emphasizing the need for regulation. It also signals an upcoming executive order and legislation on AI regulation. Finally, it highlights ongoing international-level consultation, both bilaterally with several countries and at the UN, the G7 and the Global Partnership on AI led by India. This paves the way for meaningful outcomes at upcoming international summits, including the G20 summit in India this week and the AI Safety Summit in the UK in November.

However, can we afford to be complacent? The White House announcement demands an unwavering follow-through. It shouldn’t be an eloquent proclamation of ideals that fails to drive any significant change in the status quo.

The concerns

These are voluntary safeguards. They don’t enforce accountability on the companies for all purposes, but merely request action. There is very little that can be done if a company doesn’t enforce these safeguards, or enforces them only reluctantly. Further, many of the safeguards listed in the announcement are already found in documents published by these companies. For instance, security testing, or what is called ‘red teaming’, is carried out by OpenAI before it releases its models to the public, and yet we see the problems writ large.

These seven companies do not encompass the entire industry landscape; Apple and IBM, for example, are missing. To ensure a collective and effective approach, mechanisms should hold every actor, especially potentially bad actors, accountable and incentivize broader industry compliance.

Adhering to the voluntary safeguards doesn’t comprehensively address the varied challenges that AI models present. For instance, one of the voluntary safeguards announced by the White House is “investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.” Model weights are the core components that determine a model’s functionality, and access to them is considered a proxy for being able to reconstruct the model given sufficient compute and data. This is just one source of vulnerability, however. Models trained on biased or incorrect data, for instance, can still lead to vulnerabilities and malfunctioning systems when released to the public. Additional safeguards need to be designed and implemented to tackle these intricate issues effectively.

Urging companies to invest in trust and safety is ambiguous. AI safety research at companies pales substantially in comparison to development research. For example, of all the AI articles published up to May 2023, a mere 2% focus on AI safety, and within this limited body of AI safety research, only 11% originates from private companies. In this context, it is difficult to anticipate that voluntary guidelines alone will be enough to alter this pattern.

Finally, AI models are rapidly being developed and deployed globally. Disinformation, misinformation and fraud, amongst other harms, perpetuated by unregulated AI models in foreign countries have far-reaching repercussions, even within the US. Merely creating a haven in the US might not be enough to shield against the harms caused by unregulated AI models from other nations. Hence, more comprehensive and substantive steps are needed within the US and in collaboration with global partners to address the varied risks.

Firstly, an agreement on a standard for testing AI model safety before its deployment anywhere in the world would be a great start. The G20 summit and the UK summit on AI safety are critical forums in this regard. Secondly, we need enforceability of any conceived standards via national legislation or executive action, as deemed fit by different countries. The AI Act in Europe can be a great model for this endeavour. Thirdly, we need more than a call to principles and ethics to make these models safe; we need engineering safeguards. Watermarking generative AI content to assure information integrity is a good example of this urgent requirement. Implementing identity assurance mechanisms on social media platforms and AI services, which can help identify and address the presence of AI bots, could be another formidable venture, enhancing user trust and security. Finally, national governments must develop strategies to fund, incentivize and encourage AI safety research in both the public and private sectors.

The White House’s intervention marks a significant initial action. It can be the catalyst for responsible AI development and deployment within the US and beyond, provided this announcement is a springboard to push forth more tangible regulatory measures. As the announcement emphasizes, implementing carefully curated “binding obligations” would be crucial for ensuring a safe, secure and trustworthy AI regime.
