United States: broader AI regulation is in the pipeline
28 November 2024
The US may be leading the global charge in developing the technology behind cutting-edge AI models, with California home to 32 of the world's 50 leading AI companies, but it has not been as successful as Europe in introducing AI-specific regulation with broad, sector-agnostic application. Like the UK and Australia, the US has so far advanced narrow regulation (see below) which is industry specific or has the potential to impact AI. It is also clear, however, that the US is determined to impose broader conditions on the use of algorithms and AI, which will soon see companies needing to navigate an influx of rules and regulation.
This briefing focuses on three recent examples of successful or pending horizontal AI regulation introduced by different branches of the US government (a Presidential Executive Order, a proposed federal law and a proposed bill in California) to see what these might signal for the future of AI in the US.
Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the "Executive Order"): issued by President Biden in October 2023, the Executive Order sets the stage for substantial federal oversight of the development and use of AI. Because it is an Executive Order, rather than legislation passed by Congress or a state, its immediate effect is largely limited to the government agencies, contracts, projects, and benefit programs over which the President has authority as head of the Executive Branch. The Executive Order can still reach into the private sector (e.g. infrastructure, healthcare, education and defense) and it also sets the tone for how the government is approaching AI regulation more broadly – although see below for how the forthcoming change in administration may affect this.
The Executive Order seeks to regulate federal government agencies (and the private sector parties working with them) in three main ways:
Given the material value of government contracts, the Executive Order has a far-reaching effect within the AI industry, including on cloud service providers developing large language models, developers building AI applications on LLMs used by federal agencies, and private sector businesses delivering services to the government. Although one of its stated objectives is fostering innovation and entrepreneurship, the Executive Order is likely to benefit the larger incumbent technology companies first, with smaller technology companies concerned that compliance costs will stifle innovation. Regulatory bodies have praised the Executive Order for aligning with international efforts to regulate AI, facilitating global cooperation and standardization and making it easier for companies to operate cross-border. Although much detail was left out of the Executive Order, including how best to implement and enforce it, in the 12 months since it was issued federal agencies have met all of the deadlines it sets out. There is clearly a mandate across the federal agencies to progress AI regulation as a top priority.
The Algorithmic Accountability Act of 2023 (S. 2892, the "Federal Bill"): the Federal Bill, if passed, would grant the Federal Trade Commission (FTC) the power to regulate businesses that use AI systems involving consumers' personal information, requiring companies to conduct impact assessments to identify and fix discrimination, bias and security issues. The Federal Bill has been introduced three times (first in 2019, then in 2022 and 2023) and is once again before Congress for consideration. The main rationale behind the Federal Bill is to ensure that algorithms are not exempt from US anti-discrimination laws.
The scope of the Federal Bill, and the range of companies to which it would apply, is broad: any company that uses an AI system to help make a decision or judgement with a significant consumer impact. These include decisions affecting (i) who has access to, (ii) the availability of, or (iii) the cost of education, employment, critical utilities, financial services, healthcare, housing and legal services. The FTC (and therefore the Federal Bill) governs medium and large businesses: those with annual revenue of more than $50 million or holding personal information on at least one million people or devices.
The Federal Bill would require companies to conduct an impact assessment of any augmented critical decision processes and automated decision systems and to submit a summary report to the FTC. Similar to the Executive Order (above), this would involve evaluating the system and its training data for impacts on accuracy, fairness, bias, discrimination, privacy and security. The FTC may determine that more than one impact assessment is necessary and, where possible, assessments should be performed in consultation with external third parties such as independent auditors or technology experts.
Whether the Federal Bill will be passed this time round is not clear, but there is clearly a recognition of the value in some of the principles and rationale behind the Executive Order, and an intention to extend its reach by adopting a similar regime through the proper democratic process of Congress. When deciding, Congress will also have the opportunity to consider the approach taken in the EU AI Act and whether the US should continue down its own path or adopt a version of that regime, in what is an increasingly global AI regulatory landscape.
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047, the "California Bill"): given the volume of AI technology coming out of, and the wealth of capital going into, California, it is worth a quick update on recent regulatory developments within the state. The California Bill was passed on August 29, 2024, but was vetoed by California's governor a month later (see Gavin Newsom's veto letter) and is now back with the California Senate for further consideration.
As well as establishing a new state entity, the Board of Frontier Models, to oversee AI model development, the California Bill aimed to regulate AI models and developers in three main ways:
Key to the debate is whether the threshold for AI regulation should be based, as it was in the California Bill in its current form, on the size of the AI model (e.g. the cost or number of computations needed to develop an AI model) or on an evaluation of an AI system's actual risks, regardless of size and scale. The governor believed that the California Bill's focus only on the most expensive and large-scale models could create a false sense of security, and argued that smaller, specialized models could be equally or more dangerous than large models.
California is trying to balance the need to protect the public with fostering innovation, and some argue the veto of the California Bill (irrespective of the reasoning) unsurprisingly leans towards the latter. There was some positive messaging in the governor's letter, such as the ongoing efforts by the US AI Safety Institute to develop evidence-based guidance on national security risks and to perform risk analyses of potential threats to California's critical infrastructure that uses AI. This approach may need to be treated with some scepticism, however: by insisting on empirical analysis of AI systems before regulating, California risks (once again) failing to keep pace with the technology it seeks to regulate.
Administration and enforcement of AI regulation: President Biden has encouraged each of the federal regulatory agencies subject to the Executive Order to use their authority to the fullest extent to protect consumers. Agencies are at different stages of considering how they can introduce guidance and apply existing rules to AI systems. It remains to be seen how the other proposed legislation will deal with enforcement, if and when the various measures are passed.
It is difficult to predict what the incoming Republican administration and President Donald Trump will mean for AI regulation over the next four years. On the campaign trail, the Republican Party was critical of the Executive Order, arguing that it hinders innovation, and there may be an attempt in 2025 to unwind the federal measures in this area introduced by President Biden. The Trump administration has publicly announced that deregulation and private-sector innovation will be a focus, although this may be countered by state regulatory efforts. Although not broad regulation of the kind proposed by the California Bill (see above), a new "Physicians Make Decisions Act" (restricting insurers from using AI for benefits coverage decisions) was recently signed into law in California, demonstrating the divergence that can still occur between national and state-level approaches to regulating AI.
Overseas regulation: the scope of the EU AI Act is on its face limited to high-risk AI systems developed or deployed within the EU, but it also applies to providers and deployers of AI systems located or established outside the EU where the output produced by the system is used in the EU. As noted above, the EDPB has already launched its ChatGPT Taskforce to enforce EU laws against OpenAI (a US company).
On the flip side, the US has a long history of extraterritorial application of its laws and regulations and has already imposed restrictions on AI with extraterritorial effect, such as limitations on the import or export of emerging technologies and hardware involved in the AI supply chain.
Key contacts: Paul Dawson, Counsel
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.