The EU Should Learn From How the UK Regulates AI to Stay Competitive

April 28, 2023

The EU already lags far behind its global rivals in developing and using artificial intelligence (AI), and it now looks set to fall further behind. Recent advancements in AI are poised to disrupt many industries, creating opportunities for new digital leaders to emerge. While the UK aims to use light-touch regulation to become Europe’s leader in AI, the EU has taken the opposite approach, creating burdensome rules for the technology in its soon-to-pass AI Act. EU policymakers would be wise to follow the UK’s more innovation-friendly approach if they want the EU to remain competitive in the global tech landscape.

The EU’s AI Act would subject so-called “high risk” uses of AI—for example, uses in education, public services, or critical infrastructure—to burdensome requirements, including conformity assessments, transparency requirements, and post-market monitoring obligations. The bill is broad in scope and would even regulate simple software that uses statistical methods. The “high risk” category itself is over-broad. The European Commission’s initial assessment estimated that these “exceptional circumstances” would comprise only 5 to 15 percent of all uses of AI, but a new study finds that 18 percent of AI systems would clearly fall into the “high risk” category and a further 40 percent might. While the final bill may gain more precise wording, its scope is more likely to widen than narrow: recent proposals to classify generative AI as “high risk” indicate an ad hoc approach in which lawmakers respond to headlines and panic rather than to carefully considered risks.

In contrast, the UK’s approach to AI has several innovation-friendly characteristics. First, compared to the EU’s AI Act, the UK’s AI white paper has a more flexible definition of AI. It defines AI based on its features, such as whether an AI system can train on data sets, make inferences, and generate output with little to no human oversight. Because this approach does not focus on specific AI techniques and methods, it is less likely to become obsolete as technology advances. Second, the UK assigns enforcement responsibility to sectoral regulators, such as the Health and Safety Executive or the Financial Conduct Authority, which means regulators can address risks within different sectors proportionately as needed. In contrast, the EU designates entire sectors as “high risk” and enforces the rules through a supranational, not a specialized sectoral, authority. Finally, the UK approach focuses on addressing its concerns through existing legislation. Unlike the EU’s rush to create a new law, the UK’s caution is unlikely to hamper the adoption and deployment of AI.

If the EU’s strict approach to AI regulation burdens EU companies that adopt AI, they will likely find it harder to compete with peers in jurisdictions, such as the UK, that are more welcoming of the technology. The UK has already released detailed proposals to implement pro-innovation regulation of digital technologies to continue boosting its competitive advantage and to provide unique benefits, such as regulatory sandboxes and light-touch regulation, to companies that settle on its shores. One study benchmarking countries on their AI development ranks the UK higher than any of the EU’s 27 member states.

Now, the UK’s approach is not without flaws. Both the EU and UK frameworks risk over-regulating or over-managing AI risk and stifling the adoption of AI by holding it to a higher standard than other technologies and products on the market. The European Commission’s proposal contains impractical requirements, such as mandating “error-free” data sets, and imposes interpretability standards that even human minds are not held to. Similarly, the UK government needs to recognize that no technology is risk-free and clarify that the acceptable level of risk for AI systems should be comparable to what the government allows for other products on the market.

The EU already lags behind China and the United States in AI and cannot afford to lose further ground. It should align its approach to AI more closely with the UK’s to enable interoperability and limit further damage to AI development and adoption caused by its heavy-handed approach to tech regulation. Otherwise, the UK will continue to be Europe’s destination for AI technologies and further widen the competitive gap the EU aims to narrow.

Given the critical juncture the EU finds itself at regarding the future of AI, and the reality that it only gets one chance to get this policy right, EU policymakers should follow the UK’s lead and not rush to pass any new legislation targeting AI. Instead, they should focus on applying existing regulations to AI, monitoring how the technology develops, and further studying the potential negative impacts of hasty regulation of this emerging technology.
