Current and Emerging AI Regulations
Below is an overview of AI security and safety regulations, covering both existing and upcoming frameworks, together with key statistics, sources, and dates.
Existing and Recently Passed Regulations
- The EU AI Act, a landmark regulation for AI, was officially adopted on May 21, 2024. It categorizes AI systems by risk level, imposes strict requirements on high-risk systems, including cybersecurity provisions, and bans certain applications outright.
- In the US, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF 1.0) in January 2023. It provides voluntary guidance for organizations to manage the risks of AI through four core functions: Govern, Map, Measure, and Manage (a sketch of how this might be operationalized follows this list).
- The UK announced a £100 million commitment to establish an AI Safety Institute in November 2023 to develop new tools for evaluating AI safety and security.
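As an illustration of how the voluntary NIST framework might be operationalized inside an organization, here is a minimal Python sketch of a risk register organized around the AI RMF's four core functions. The function names come from AI RMF 1.0; the RiskEntry schema, its field names, and the example entries are hypothetical and not part of NIST's publication.

```python
from dataclasses import dataclass
from enum import Enum

# The four core functions defined in NIST AI RMF 1.0.
class RmfFunction(Enum):
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

@dataclass
class RiskEntry:
    """One tracked risk for an AI system (hypothetical schema)."""
    system: str
    description: str
    function: RmfFunction      # which RMF function addresses this risk
    severity: int              # e.g. 1 (low) to 5 (critical)
    mitigation: str = "not yet assigned"

# Example register: voluntary, organization-defined entries.
register = [
    RiskEntry("resume-screener", "Disparate impact on protected groups",
              RmfFunction.MEASURE, severity=4,
              mitigation="bias audit before each model release"),
    RiskEntry("support-chatbot", "Leakage of customer PII in responses",
              RmfFunction.MANAGE, severity=5,
              mitigation="output filtering and red-team testing"),
]

# Summarize tracked risks by RMF function for a governance report.
for fn in RmfFunction:
    entries = [r for r in register if r.function is fn]
    print(f"{fn.value}: {len(entries)} tracked risk(s)")
```

Because the framework is voluntary, an organization would define its own severity scale and mitigation workflow; the structure above simply shows how risks can be grouped under the RMF's functions for reporting.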
Upcoming Regulations and Legislative Efforts
- As of May 2024, 15 US states had enacted laws related to AI, and another 30 states had introduced or were considering legislation. A key focus of these bills is AI transparency and government use of AI.
- The G7 is working on a Code of Conduct for AI developers, a voluntary international framework focused on safe and responsible AI development. The code was initially proposed in October 2023.
- China is developing a comprehensive set of AI regulations, with draft rules on generative AI released in July 2023. These rules focus on content moderation, security, and a system for registering algorithms with the government.
- In the US, there are several federal legislative proposals aimed at AI, including bills that would mandate risk assessments for high-risk AI systems and require transparency regarding the use of synthetic media. These bills remain under consideration as of mid-2024.
Governments and international organizations are rapidly increasing their efforts to regulate AI, with a significant surge in legislative activity over the past couple of years. The focus has been on establishing new laws and frameworks to address the risks associated with AI development and deployment.
Key Statistics on AI Regulations
- Growing Number of Laws: In 2024, U.S. state lawmakers introduced over 600 AI-related bills, 99 of which were enacted into law. This marked a significant jump from 2023, when fewer than 200 such bills were introduced.
- Federal Rulemaking: U.S. federal agencies introduced 59 AI-related regulations in 2024, more than double the number from 2023.
- Global Legislative Activity: Legislative mentions of AI have risen 21.3% across 75 countries since 2023.
Organizations Involved in AI Regulation
A wide range of government bodies and international organizations are actively involved in creating AI regulations.
United States:
- National Institute of Standards and Technology (NIST): Released the AI Risk Management Framework to provide guidance for managing AI risks.
- Small Business Administration (SBA) and National Science Foundation (NSF): Provide grants for AI research and development through programs like SBIR and STTR.
- Federal Trade Commission (FTC), Equal Employment Opportunity Commission (EEOC), and Department of Justice (DOJ): Apply their existing legal authority to AI-related issues, such as discrimination and consumer protection.
International:
- The European Union (EU): The EU AI Act, adopted in May 2024, is the most comprehensive AI regulation to date, categorizing AI systems by risk level. The EU has also established new bodies like the EU AI Office to enforce these regulations.
- The G7: Working on a Code of Conduct for AI developers, a voluntary international framework for safe AI development.
- The United Nations (UN) and UNESCO: Have developed global standards and recommendations for AI ethics, such as UNESCO's "Recommendation on the Ethics of Artificial Intelligence."
- The Council of Europe: Developed a legally binding treaty on AI to safeguard human rights and the rule of law.
Other Important Details
- Risk-Based Approach: Many recent regulations, including the EU AI Act, use a risk-based framework that imposes stricter rules on AI systems that pose a higher risk to human rights and safety. This approach is being adopted in other regions as well (see the illustrative sketch after this list).
- Voluntary vs. Mandatory: While some international frameworks, like the G7's Code of Conduct, are voluntary, there is a clear trend toward mandatory, legally binding regulations at the national and regional levels.
- Focus Areas: Recent legislation is often focused on specific harms, such as political deepfakes, non-consensual sexual imagery, and biometric data privacy. This shows a shift from broad principles to targeted rules.
- Lack of Harmonization: The absence of a single federal framework in the US has led to a "patchwork" of laws at the state level, creating challenges for companies trying to comply with multiple, often conflicting, regulations.
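To make the risk-based approach concrete, the following is a minimal Python sketch of how a compliance tool might map an AI system's intended use to the EU AI Act's four risk tiers. The tier names follow the Act, but the use-case-to-tier mapping and the default behavior are simplified, hypothetical illustrations, not the Act's legal criteria, which are set out in its annexes.

```python
from enum import Enum

# The four risk tiers defined by the EU AI Act.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable (prohibited)"
    HIGH = "high (strict requirements)"
    LIMITED = "limited (transparency duties)"
    MINIMAL = "minimal (no new obligations)"

# Hypothetical, simplified mapping from use cases to tiers; the Act's
# actual classification criteria are far more detailed.
TIER_BY_USE_CASE = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "cv screening for hiring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a use case, defaulting to minimal.

    A real compliance assessment would require legal review; this
    lookup only illustrates the tiered structure of the Act.
    """
    return TIER_BY_USE_CASE.get(use_case.lower(), RiskTier.MINIMAL)

if __name__ == "__main__":
    for uc in ("CV screening for hiring", "spam filtering"):
        print(f"{uc!r} -> {classify(uc).value}")
```

The key design point of the tiered model is that obligations scale with the classification: a system landing in the high-risk tier triggers requirements such as risk management and cybersecurity controls, while a minimal-risk system faces no new obligations.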
Sources and Dates
- EU AI Act: European Parliament and Council of the European Union, adopted May 21, 2024.
- NIST AI Risk Management Framework: National Institute of Standards and Technology, released January 26, 2023.
- UK AI Safety Institute: UK Government Announcement, November 1, 2023.
- US State-Level Legislation: National Conference of State Legislatures (NCSL), data updated through May 2024.
- G7 AI Code of Conduct: G7 Leaders' Statement on AI, released October 30, 2023.
- China AI Regulations: Cyberspace Administration of China, draft rules published July 2023.