Pioneering AI Regulation: Analyzing the Impact of President Biden’s Executive Order on U.S. AI Policy

https://pixabay.com/illustrations/earth-network-blockchain-globe-3537401/
https://pixabay.com/illustrations/earth-network-blockchain-globe-3537401/

The United States, which is already considered the leading power in artificial intelligence (AI) technology, is now taking the lead in its regulation, with President Biden signing a far-reaching executive order on October 30, 2023.[1] The regulation is intended to be the first step in a “new era of regulation for the United States.”[2] With this executive order being the pioneer in AI regulation, I will analyze what the new regulations entail and what the next steps in regulation look like for the United States and the impact they may have globally.

The focus of the executive order is to reduce the risks that AI poses to consumers, workers, minority groups, and national security.[3] The order covers national security, individual privacy, equity and civil rights, consumer protections, labor issues, AI innovation and U.S. competitiveness, international cooperation on AI policy, and AI skill and expertise within the federal government.[4]

The new order, building on previous actions, goes beyond those previous principles and guidelines and now requires specific action on part of the tech companies and federal agencies.[5]  For instance, the newly established standards will mandate that creators of the most advanced AI systems disclose their safety test findings and other crucial information to the U.S. government.[6] The Order will stipulate that companies working on any foundational model that presents a significant threat to national security, economic well-being, or public health and safety must inform the federal government when training the model and share the results of all red-team safety tests.[7] Red-teaming tests are methods used to evaluate and test the security measures of a system by simulating real-world attacks.[8] These measures will ensure that AI systems are reliable, secure, and safe prior to their public release by companies.[9]

Moreover, the National Institute of Standards and Technology will establish demanding criteria for thorough red-team testing to guarantee safety prior to public release.[10] The Department of Homeland Security will then enforce these standards in critical infrastructure sectors and establish the AI Safety and Security Board.[11] The Departments of Energy and Homeland Security will also tackle the challenges posed by AI systems to critical infrastructure, as well as risks related to chemical, biological, radiological, nuclear, and cybersecurity.[12] Collectively, these represent the most substantial steps ever taken by any government to progress the field of AI safety.[13]

Although this is a large first step for AI regulation in the United States, there is more that will need to follow to strengthen regulations. First, something more stable than an executive order will need to be implemented so it will not have the possibility to be reversed by future administrations. In order to bring stronger and new AI law in the U.S., Congress will need to unite on this contentious issue to pass legislature.[14] Unfortunately, the European Union (EU) is set to finalize comprehensive AI regulations this year, but they will not be in effect before 2025.[15] With the passing of the EU Act, it will make it harder for the U.S. to pass its own laws due to companies not wanting to comply with two different set of rules for two different markets.[16] Since the EU is on track to implement more aggressive AI regulations legislatively before the U.S., U.S. lawmakers might need to turn to the EU Act to look for standards that would align with the U.S. goals of AI regulation to make compliance with two markets easier.[17]

Overall, the steps taking by the U.S. are great first steps to understanding what will be needed to further strengthen AI regulation nationally and internationally. Turning the American lawmakers’ eyes to the EU’s implementation of their Act and seeing how the tech companies respond could help the U.S. create legislative that furthers the common goal of the EU, tech companies and the U.S. in safe and consistent manner.


[1] Cecilia Kang and David E. Sanger, Biden Issues Executive Order to Create A.I. Safeguards, N.Y. Times (Oct. 30, 2023), https://www.nytimes.com/2023/10/30/us/politics/biden-ai-regulation.html.

[2] Id.

[3] Jeff Manson et all., Biden Administration Aims to Cut AI Risk with Executive Order, Reuters (Oct. 30, 2023), https://www.reuters.com/technology/white-house-unveils-wide-ranging-action-mitigate-ai-risks-2023-10-30/.  

[4] Lauren Leffer, Biden’s Executive Order on AI Is a Good Start, Experts Say, but Not Enough, Scientific American (Oct. 31, 2023), https://www.scientificamerican.com/article/bidens-executive-order-on-ai-is-a-good-start-experts-say-but-not-enough.

[5] Id.

[6] The White House, Fact Sheet: President Bidden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, The White House (Oct. 30, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence.

[7] Id.

[8] Red Team Security Assessment, Dionach, https://www.dionach.com/services/assurance/red-team-security-assessment (last visited Oct. 31, 2023).

[9] The White House, supra note 6.

[10] Id.

[11] Id.

[12] Id.

[13] Id.

[14] Ryan Heath, What’s in Biden’s AI Executive Order – and What’s Not, Axios (Nov. 1, 2023), https://www.axios.com/2023/11/01/unpacking-bidens-ai-executive-order.

[15] Id.

[16] Shana Lynch, Analyzing the European Union AI Act: What Works, What Needs Improvement, HAI Stanford University (Jul. 21, 2023), https://hai.stanford.edu/news/analyzing-european-union-ai-act-what-works-what-needs-improvement.

[17] Id.