
What is the UK's emerging approach to the regulation of AI?
The UK’s first artificial intelligence (AI) strategy marked a step change in the country’s approach to the fastest-growing emerging technology in the world. It set out a plan to strengthen the UK’s position as a global AI superpower, and with it the commitment to develop a leading governance approach that drives prosperity while protecting the public and our fundamental values.
Over the summer, the Department for Digital, Culture, Media and Sport (DCMS) published a policy paper setting out an overview of the UK’s emerging approach to the regulation of AI. Interestingly, it demonstrates some notable departures from the EU’s proposed Artificial Intelligence Act (AIA), published in April 2021, which served as the world’s first concrete proposal for regulating AI.
We’ve provided some key takeaways from the UK policy paper for you below:
- The proposed approach is decentralised, light-touch and pro-innovation.
- The UK is not currently planning to introduce AI legislation. Instead, the UK is proposing to introduce the following six cross-sectoral AI governance principles:
  - Ensure that AI is used safely
  - Ensure that AI is technically secure and functions as designed
  - Make sure that AI is appropriately transparent and explainable
  - Embed considerations of fairness into AI
  - Define who is responsible for AI governance
  - Clarify routes to redress or contestability
- The UK’s current regulators will be asked to apply these principles in their particular sector or domain.
- The policy paper makes it clear that regulators should focus on what they deem to be high-risk concerns. It also encourages regulators to use lighter-touch options, such as guidance or voluntary measures, rather than compulsory ones.
- There is no universally applicable definition of AI proposed. Instead, DCMS proposes to set out the core characteristics and capabilities of AI, with regulators guided to develop more detailed definitions at the level of application.
Enabling innovation to thrive
The DCMS policy paper is a welcome step forward in the UK’s journey towards regulating AI. The proposed context-driven approach delivered through the UK’s established regulators is a far gentler, less prescriptive approach to regulation than that taken by the EU with its proposed AI Act. It will allow regulators to focus on applications of AI that are actually risky and will enable innovation to thrive in other areas of AI.
This is positive for AI innovation and for SMEs. SMEs do not have the resources of larger companies, which makes excessive regulation particularly burdensome for them. Had the UK followed the EU’s more prescriptive regulatory approach, the development of AI in the UK would likely have become dominated by larger companies better able to absorb onerous compliance obligations, at the expense of SMEs and innovation.
Furthermore, AI is developing at such a fast pace that it is impossible to predict where it will be in years to come. The UK’s proposed approach is adaptable and should allow for flexibility in line with technological change. This is particularly important when it comes to defining AI: setting out core characteristics and capabilities rather than a strict definition recognises that AI may look very different in the future, and trying to define it too strictly now would cause problems later on.
Challenges
There will, however, be some challenges with DCMS’ proposed approach. There is still a lack of clarity for businesses. The six broad cross-sectoral principles are welcome, but it is not yet clear how these principles, which sit at the heart of the proposed regulatory regime, will be interpreted and implemented by the existing regulators. There is a clear risk of multiple regulators interpreting and implementing the principles differently, leading to contradictory statements or guidance.
A key challenge will therefore be regulatory coordination, which will be vital to give businesses confidence that confusion and contradiction will not arise. There may, for example, need to be a specific cross-regulator group focussed on AI regulation. The policy paper accepts the need for coordination, but much more detail on how it will work in practice is needed.
Monitoring and evaluating the approach will be key to ensuring that future, unforeseen risks are addressed. It is also good to see that, although the UK is not currently planning to introduce AI legislation, the government has not ruled this out where and when it is needed to ensure the effectiveness of the framework.
The importance of collaboration
Regulators will need access to the right skills and expertise to regulate AI effectively. It is imperative that regulators genuinely understand AI in order to get this right, and engaging the business community will be crucial to achieving that. Frequent dialogue with industry will be essential, and secondments between regulators and industry, in both directions, will help ensure regulators keep pace with the rapid development of AI.
Sector-specific approaches
Finally, there needs to be clarity on how the proposed approach will apply in different sectors, including defence and national security. In June 2022, the Ministry of Defence (MOD) outlined ethical principles for AI in its policy statement on delivering AI-enabled capabilities. Those principles differ from those outlined by DCMS, and there will need to be clarity on whether both sets of principles will apply to companies supplying AI to the MOD and, if so, how any differences in approach will be dealt with.
Accountability for responsible AI
As we navigate the fast-changing demands of AI, it remains critical that businesses hold themselves accountable for its responsible design, development and deployment. Adarga’s Committee for Responsible AI (ACRAI), for example, guides our own approach to the responsible creation, good operation and human-centric governance of AI, ensuring transparency and accountability are ever-present in our work at the cutting edge of AI – you can read more about this here.
Sign up for the Adarga Newsletter today to receive updates and thought pieces on the latest news surrounding AI regulation and more.