As Rishi Sunak pledges to pioneer global AI safety, what is the UK’s current approach to AI regulation?

13 Jun 2023

By Sarah Beddoes, Adarga General Counsel and Responsible AI Committee Co-Chair

This week Prime Minister Rishi Sunak announced his ambition for the UK to become an innovation hub and global AI leader, along with a commitment to pioneer international AI safety as we navigate the technology's rapid advancement.

As a British AI champion, Adarga is proud to be supporting the UK government with this important endeavour. The transparent, responsible and trustworthy development of AI lies at the very heart of our mission.

As we look to the future of AI regulation, hear from Adarga General Counsel and Responsible AI Committee Co-Chair Sarah Beddoes on the current state of play in the UK and its potential outlook.

Last year, the Department for Digital, Culture, Media and Sport (DCMS) published a policy paper setting out an overview of the UK’s emerging approach to the regulation of AI. Interestingly, it demonstrated some notable departures from the EU’s proposed Artificial Intelligence Act (AIA), published in April 2021, which served as the world’s first concrete proposal for regulating AI. Read more about our take on this in our blog ‘What is the UK’s emerging approach to the regulation of AI?’

Adarga was one of the stakeholders invited to provide views on the proposals set out in the policy paper. This stakeholder feedback (from over 130 companies) was in turn used to shape the government’s subsequent White Paper setting out its latest position on this critical topic. Entitled “A pro-innovation approach to AI regulation”, it was published at the end of March.

“A pro-innovation approach to AI regulation”: a summary of the UK government’s latest White Paper on AI regulation

The White Paper’s proposed framework is positioned to bring clarity and coherence to the AI regulatory landscape. The framework is designed to “make responsible innovation easier… strengthen the UK’s position as a global leader in AI, harness AI’s ability to drive growth and prosperity and increase public trust in its use and application.” AI is developing at an exponential rate; the White Paper makes clear that the government recognises this and is therefore taking a deliberately agile and iterative approach.

The framework is built around four key elements:

1. An agile definition of AI

The White Paper defines AI by reference to two characteristics: “adaptivity” and “autonomy”. Given the pace of AI development, it is impossible to foresee what technologies will emerge in years to come. By avoiding a rigid definition, the framework should remain applicable to technologies we simply cannot anticipate today.

2. A context-specific approach

The government wants the focus to be on high-risk applications of AI. Rather than assigning rules or risk levels to entire sectors or technologies, the White Paper proposes regulating the use of AI rather than the technology itself. In June 2022 the Ministry of Defence published its own AI ethical principles and policy. The White Paper states that the government will ensure appropriate cohesion and alignment in the application of this policy through the context-specific approach, and will consider in due course whether an exemption is needed for areas such as national security.

3. A set of cross-sectoral principles

The White Paper sets out five principles to guide and inform the responsible development and use of AI. These build on the principles set out in the policy paper:

  1. Safety, security and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

The UK’s principles build on and reflect the government’s commitment to the Organisation for Economic Co-operation and Development (OECD) AI principles, which promote the ethical use of AI. They will be issued on a non-binding basis and implemented by existing regulators. The White Paper states that the government anticipates in due course introducing a binding duty on regulators requiring them to have due regard to the principles. However, the government will not do so if its monitoring of the framework shows that it is working without the need for such a binding duty.

4. New central functions

The government is intending to create various central functions to support regulators in delivering the AI regulatory framework:

  1. Monitoring and evaluating the framework’s effectiveness and the implementation of the principles.
  2. Assessing and monitoring risks across the economy arising from AI.
  3. Conducting horizon scanning and gap analysis to ensure the government is responding to emerging AI trends.
  4. Supporting testbeds and sandbox initiatives to help get new AI technologies to market.
  5. Providing education and awareness.
  6. Promoting interoperability with international regulatory frameworks.

The framework will be supported by a variety of tools for trustworthy AI such as assurance techniques, voluntary guidance and technical standards. The government will promote the use of these tools and is collaborating with the UK AI Standards Hub in this regard. The Hub’s mission is to advance trustworthy and responsible AI with a focus on the role that standards can play as governance tools and innovation mechanisms.

AI regulation: What next?

Adarga welcomes the approach to regulating AI set out in the latest White Paper. It is a flexible approach designed to focus on the applications of AI that genuinely carry risk, enabling innovation to thrive in lower-risk areas. This is positive for SMEs, as it is critical that regulation does not stifle or hinder innovation. SMEs have a fundamental role to play in supporting the UK’s ambition to become a global tech superpower.

The challenge now is to implement the framework as quickly as possible. Publication of the White Paper was originally planned for late 2022, so there has already been delay. Since its publication, concern has grown that AI poses an existential risk to humanity, driven largely by ChatGPT and other large language models, which have dominated the news. It was recently reported that, as a result, the Prime Minister is reconsidering the proposed UK regulatory approach. This coincides with his recent announcements of plans to launch a global AI watchdog and host the first major global summit on AI safety, in line with his ambition to make the UK a worldwide hub for international AI regulation.

However, continuing to debate how to regulate AI risks further delay. Delay creates uncertainty for businesses and threatens to stifle the very innovation that the government is so keen to promote. The framework is designed to be flexible, and more prescriptive regulation can be introduced in due course if necessary. The greater risk at the moment, given the pace at which AI is developing, is to delay any longer.

Adarga invites the British tech sector and UK government leaders to join us in endorsing the Prime Minister’s call to arms to become a global AI superpower and a world leader in pioneering safe AI. Please get in touch via hello@adarga.ai to unite in supporting this bold ambition.

Read more about Adarga’s Responsible AI Committee here
