Where are we now and what’s next: Three months on from President Biden’s Executive Order on AI

News

19 Feb 2024


By Chris Breaks, Adarga legal counsel 

On 30 October 2023, President Joe Biden signed an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The EO seeks to advance a coordinated US-led, government-wide approach towards the safe and responsible development, deployment, and use of AI.  

Despite mixed initial reactions – praised for its ambition but criticised for its perceived lack of “teeth” – on 29 January 2024 the Biden Administration announced significant progress in implementing the EO’s far-reaching directives.

Key achievements include the creation of comprehensive risk assessments for use of AI in critical infrastructure and compulsory reporting for organisations involved in the development of dual-use foundation models. The EO is likely to have far-reaching consequences beyond just government agencies. Companies are navigating a shifting and increasingly complex regulatory landscape, as updated guidance, rules, and policies on AI use and development are released by the US government with increasing frequency. 

This is just the beginning. Containing more than 100 directives, the EO sets out a forward-looking agenda that both public and private organisations will need to be mindful of, emphasising the need for ongoing adaptation and engagement in the evolving regulatory landscape. 

Public reception 

Following its announcement, the EO was described by many commentators as the most significant action taken by the US government to address the risks and challenges posed by the proliferation of AI technologies.  

That’s not to say this was a universally held opinion.   

While some viewed the EO as a necessary step towards regulating a rapidly advancing technology, others criticised it as "vaporware", hinting at concerns over the actual implementation and impact of the EO.  

Others pointed to the fact that the EO is “only” an executive order, and therefore does not have the durability or weight of legislation passed by Congress and could easily be discarded following a change of administration.  

Another perceived limitation of the EO is the fact that its immediate effect is largely limited to government agencies, contracts, and projects.  

That said, there are important provisions contained in the EO that extend well beyond government agencies. What’s more, on 29 January, the Biden Administration announced the timely completion by various agencies of several directives outlined in the EO, demonstrating that progress has been made since its inception.  

Diving a little deeper 

The EO sets out eight guiding principles and priorities which direct government agencies to take specific actions to address privacy, security, and governance issues raised not only by their own use of AI technologies, but also the use of AI by critical infrastructure providers, infrastructure-as-a-service providers, financial institutions, and others.  

The eight guiding principles are: 

  • Ensuring the Safety and Security of AI Technology 
  • Promoting Innovation and Competition 
  • Supporting Workers 
  • Advancing Equity and Civil Rights 
  • Protecting Consumers, Patients, Passengers and Students 
  • Protecting Privacy 
  • Advancing Federal Government Use of AI 
  • Strengthening American Leadership Abroad  

The EO established a White House AI Council to oversee its implementation and coordinate activities and directives across government agencies. In total, the EO contains more than 100 directives to different federal agencies directing (or in some cases, encouraging) them to adopt rules, issue guidance, or develop standards on the development, deployment, and use of AI technologies. The EO also imposes deadlines for agencies to implement the applicable directives. For some directives, agencies had between 45 and 90 days from the date of the EO to complete the directive; for others, agencies are given more time, in some cases as long as 365 days.  

Three months on 

The EO directed a sweeping range of actions to be implemented by government agencies within 90 days of its signing.  

The White House statement released on 29 January confirmed that agencies had completed all of the 90-day actions tasked by the EO (and delivered other directives that the EO had tasked over a longer timeframe), demonstrating significant progress in achieving the EO's mandate to protect US citizens while ensuring continued innovation. 

Crucially, the following directives have now been implemented: 

  • The Secretary of Commerce (SoC) has compelled companies developing, or demonstrating an intent to develop, potential "dual-use foundation models" to report and provide information to the SoC on: the ownership of the model weights used in dual-use foundation models; the safeguards in place to assure the integrity of the training process against sophisticated threats; and the measures taken to protect the model weights.
  • Nine government agencies (including the Department of Defense, the Department of Transportation, and the Department of Treasury) have submitted to the Secretary of Homeland Security (SoHS) their respective risk assessments related to the use of AI in critical infrastructure sectors. As per the relevant directives, these risk assessments identify ways in which deploying AI may make critical infrastructure systems more vulnerable to failures, physical attacks, and cyberattacks, and outline ways to mitigate these vulnerabilities. 
  • The SoHS has implemented changes to streamline the processing times of visa applications for non-US citizens seeking to travel to the US to work on, study, or conduct research in AI or other critical and emerging technologies. Further, an AI Talent Surge has been launched to accelerate the hiring of AI professionals across the federal government. It is hoped that these directives will help attract and retain talent in AI and other critical and emerging technologies in the US economy. 
  • The National AI Research Resource (NAIRR) pilot programme was launched to deliver computing power, data, software, models, and training resources to researchers and students in support of AI-related research and development. 
  • The Attorney General’s office has convened meetings to discuss ways of addressing and preventing potential discrimination in the use of automated systems. These discussions aim to increase coordination between the Department of Justice’s Civil Rights Division and federal civil rights offices while also improving external stakeholder engagement to promote public awareness of potential discriminatory uses and effects of AI. 

You can find out more about the progress made by the relevant government agencies here. 

Impact on the private sector 

Although a number of the directives are primarily focussed on government agencies, the effects of these directives will have far-reaching consequences for businesses, both directly (in the case of the SoC’s reporting requirements on dual-use foundation models) and indirectly (in the case of, for example, the streamlining of visa applications, establishment of the NAIRR, and risk assessments in critical infrastructure sectors).   

While some companies involved in the AI sector – like Adarga – have established structures that put responsible development of AI and AI governance at the core of their work, and are actively engaged in the development of regulatory frameworks, guidance issued via directives under the EO will likely inform best practices for the private sector with respect to the development, deployment, and use of AI technologies and mitigating risk associated with AI.  

In light of the above, private sector companies with interests in the US, whether through US-based customers or business operations, should consider taking the following steps:  

  • The EO directs government agencies to track and report on AI use cases. Private companies should similarly conduct appropriate due diligence of existing and anticipated uses of AI. Regulators will likely require organisations to understand how AI is being used in their business, so an understanding of AI systems and processes employed by the private sector will be vital as regulation develops at pace.  
  • The EO requests that certain agencies appoint Chief AI Officers and establish Artificial Intelligence Governance Boards, and specifically references the NIST AI Risk Management Framework throughout. The private sector should similarly define and implement AI strategy and risk management frameworks to identify potential high-risk use cases and a process for managing the associated risks. They should also consider creating appropriate AI-focussed roles, including for cross-functional AI governance.   
  • Companies should ensure they are making use of every opportunity to engage with government stakeholders. The EO provides a number of opportunities for the private sector to engage with government, ensuring their voices are heard and influencing policy decisions and regulatory frameworks. Combined with this, companies should also ensure they have effective systems in place for regulatory and consultation horizon scanning and ongoing internal stakeholder engagement.  

What’s next? 

The EO contained more than 100 directives for various government agencies to implement. We've only scratched the surface by examining a handful of the directives that required implementation within the first 90 days. As we've discussed, the roadmap laid out by the EO extends far beyond this 90-day period, with numerous other directives scheduled for implementation at the 180-, 270-, and 365-day marks, and beyond.  

Examples of these include: a requirement for NIST to develop further guidelines and best practices for deploying AI systems, supplementing its existing NIST AI Risk Management Framework (within 270 days); and the Secretary of Labor publishing guidance to prevent unlawful discrimination related to the use of AI in hiring (within 365 days).  

The EO provides a robust roadmap for US government agencies to follow and is influencing the path that the private sector is taking. As advances in AI move at break-neck speed, so too must our ability to harness its huge potential while mitigating risk. As a company operating at the forefront of defence and national security, we are committed to the responsible deployment of AI and look forward to continuing to support and contribute to the development of regulation, policy, and ethics – a central tenet that, teamed with world-class innovation, will ensure we stay ahead in the AI race. 

Sign-up to our newsletter for more insights on the development of AI and regulation.

To read more about unfolding regulation development in the UK, read our latest blog.
