United Kingdom: combining a central drive for growth with sectoral pro-innovation regulation
20 March 2025
The UK has steered away from adopting specific AI legislation, preferring instead, in the 2023 White Paper, to task the relevant regulatory bodies – primarily the Competition and Markets Authority (CMA), the Information Commissioner's Office (ICO), the Office of Communications (Ofcom) and the Financial Conduct Authority (FCA) – with promoting safe, responsible AI innovation in accordance with key cross-sectoral principles. Although some anticipated that the election of a Labour government in 2024 might signal a more prescriptive approach, the current mood seems to favour a light legislative touch, coupled with ambitions to harness the power of AI to boost economic growth and establish the UK as a leader in the field.
When the current government came to power, it stated its intention to regulate developers of the "most powerful" AI models, but gave no indication of plans to legislate further to create a separate AI regulatory regime. It remains unclear what UK regulation of the "most powerful" AI models will look like – for example, whether an AI model's power will be assessed according to risk (as in the EU) or according to computational power (as in the US).
Other proposed legislation could have a specific impact on AI. The Product Safety and Metrology Bill sets out changes aimed at enhancing consumer protection with particular reference to technological advances such as AI. The Crime and Policing Bill will criminalise the use of AI for purposes connected with child sexual abuse. In addition, a consultation on copyright and AI was launched in December 2024 with a view to resolving tensions between rights holders and AI developers, and there is speculation that legal reform may result. It is also possible that trade unions will push for legislation on enhanced employment rights to address risks posed by AI in the workplace. Although the Data (Use and Access) Bill currently making its way through Parliament proposed transparency and copyright obligations for web crawlers and general-purpose AI models, these have recently been removed.
In January 2025, the government announced its AI Opportunities Action Plan, with the emphasis firmly on embracing AI to drive productivity, transform the delivery of public sector services and put the UK at the forefront of innovation. The Plan sets out an ambitious vision with AI at the heart of the government's mission. The government recognises that, to achieve these aims, the development and adoption of AI must be scaled up significantly, and quickly. Regulators have been urged to rethink their appetite for risk and adopt a more pro-innovation approach, with the possibility of more radical measures to override sectoral regulation where it stands in the way of progress.
Also announced are plans to boost data processing capacity in the UK to accommodate the rise in AI-related data use. As part of this, the government has declared plans for "AI growth zones" to accelerate the building of data centres in selected areas of the country – typically areas in need of regeneration that have large existing or planned power connections or other major energy infrastructure. These growth zones will benefit from streamlined planning processes and, potentially, additional financial reliefs and support.
Public confidence is seen as a key factor in the government's ambitions, and the government has shown its commitment to driving public sector adoption of AI through its AI Playbook for the UK Government, which it sees as a launchpad to transform public services and develop public trust in AI technology. Drawing on input from numerous government departments, public bodies, industry and academia, the Playbook is based on 10 principles, building on the five set out in the 2023 White Paper and described below. AI is seen as having the potential to transform service delivery in the public sector, provided security and ethical standards can be met, particularly where data protection is at stake, and the Playbook is designed so that AI can be put to work across all central government departments and public sector bodies.
While changes to the law are still awaited, the cross-sectoral, regulator-led approach continues as the default position on AI regulation in the UK. The 2023 White Paper proposed five general principles for existing regulators to interpret and apply within their remits in order to promote safe, responsible AI innovation. These are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
Official guidance, Implementing the UK's AI Regulatory Principles, was issued in February 2024 to bring some coherence to the sector-specific rules, prompting action from the regulators. The CMA has published a report on AI foundation models, followed by a strategic update. The FCA has published its approach and engaged in various discussions with the Bank of England and the Prudential Regulation Authority on AI and machine learning, and Ofcom's Strategic Approach to AI was published in March 2024. The ICO has issued guidance on the interplay between AI and data protection, with a consultation series centred on GenAI, and has also revamped its audit toolkit to include an AI component. Much of the guidance produced to date could well be reworked in the light of the AI Opportunities Action Plan.
The UK's AI policy rests with the Department for Science, Innovation and Technology (DSIT). Together with the National Cyber Security Centre, DSIT has recently published a Code of Practice setting out baseline cyber security principles to help secure AI systems, with the aim that this will eventually form the baseline for a new global standard.
Other bodies with a central role include:
The UK's situation is changing quickly with the government's drive to harness the transformative power of AI and, if all the proposals in the Action Plan are translated into policy, the rate of change looks likely to accelerate. Any business with an interest in AI, in whatever form, must make it a priority to keep up with developments. Meanwhile, in the absence of AI-specific regulation, issues raised by the development and use of AI remain subject only to relevant parts of current legislation and guidance.
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.