How better AI regulation can supercharge UK growth

AI has become a headline topic as the technology’s potential to transform the way we work and live continues to captivate the media’s attention and public imagination alike.

It’s therefore no surprise that a great AI debate has been quick to emerge, and the range of reactions has been vast: AI maximalists see the technology as a new global goldrush; careful technologists seek synergies between its capabilities and existing inefficiencies; watchdogs examine the potential consequences of current and future misuse; and cynics push back on the wilder claims about the technology’s power.

Hot on their heels, governments and regulators are moving fast to establish new rules of play for AI. The seismic nature of the change that AI is set to trigger has two major consequences for governmental intervention. First, it means that few regulatory initiatives are as urgent, or as critical to get right, as this. Second, it makes the work of drafting appropriate legislation complex, given how varied the technology’s impact will be on different aspects of society and how many stakeholders deserve a say in what those impacts will look like.

At the same time, it could prove a tragic mistake to apply a naive precautionary principle and bar the gates to AI until governments are ready for it. A blanket ban on medical applications of AI, for example, would freeze the patient-privacy environment in its current state – but only at the expense of important work like developing systems that spot cancers months earlier or discovering more effective drugs to curb the effects of Alzheimer’s.

The technological context of regulation

So, what might an effective and productive regulatory response look like? The first major attempt at wide-ranging AI legislation is the EU AI Act, which broadly sorts AI systems into tiers of risk. Systems posing an ‘unacceptable risk’, such as those which deliberately manipulate the behaviour of vulnerable people, are set to be banned outright, while ‘high-risk’ systems, such as those used as safety components of products or in law enforcement and the administration of justice, are set to be subject to tight governmental oversight. ‘Low-risk’ systems, meanwhile, would carry on as before.

If passed, this would effectively extend a European consensus position on business and societal ethics to AI systems. As we have seen repeatedly over recent years, the EU’s intervention in this case would likely have major consequences for the global state of play, as businesses seeking to trade in the EU would have to harmonise their compliance with the AI Act across all markets. This is significant, and the approach being taken is well-reasoned.

As the EU adopts a risk-based approach to legislation, I think there is a real opportunity for the British Government to take a more adventurous, forward-thinking stance which sets the pace for regulatory responses around the world.

Towards an innovation-led regulatory framework

Currently, the lack of a sincere and concerted regulatory effort to understand how AI operates at a technical level is clearly felt in the conversations being had about it. Take, for instance, the fixation in some quarters on making AI ‘explainable’. The notion that AI systems must offer not just outputs but an account of how they arrived at those outputs might feel useful for transparency, but it simply does not reflect how the technology works. Just as human decisions stem from more influencing factors than can ever be fully accounted for – many of them unknown even to the decision-maker – the complexity of the data feeding into AI outputs does not translate into clear, flowchart-like explanations.

Another, simpler issue with this lack of technological engagement can be detected in the rigidity of the EU AI Act’s categorisation system. While it may appropriately prevent uses of AI which are already on the table, it does not have a way to account for dangerous applications which may only emerge years down the line.

With the potential benefits of AI advancement being as vast as they are, the regulatory schema we ultimately arrive at must work to incentivise and encourage technological progress by being fit for the future of AI.

In particular, it should steer away from a one-size-fits-all response to what is, in reality, a highly multifaceted area of research and development. AI does not have the same potential risk factors, upsides, challenges, or consequences in healthcare, law, construction, finance, or other sectors. AI has very different technological impacts depending on whether it is a more specialist or more generalised tool, whether it relies on large language models or other approaches, and whether it is employed by trained professionals or the general public.

Regulators must seek to reflect these subtleties in their work – and that will require a collaborative approach with AI innovators.

The British Government has long signalled its intention to make the UK an AI ‘powerhouse’, and the Chancellor’s announcement earlier this year that £900m will be earmarked for a nationally owned exascale supercomputer certainly indicates a real strength of commitment.

In its role as a market creator, the Government today has an opportunity to match that direct industrial investment with an intellectual one, drawing on the deep AI talent in the UK to establish an authoritative, progressive, and nuanced framework for AI development. The country is uniquely positioned in this regard, combining onshore expertise that makes it an intellectual world leader in AI with the freedom to move at pace in establishing a high-quality regulatory approach.

Doing so would enable the UK to take a position of thought leadership on AI in the global community, driving the conversation forward while also becoming a net exporter of advanced AI systems which can create real social value in nations around the world. That’s a prize worth seeking.


About the Author

Jaeger Glucina is MD and Chief of Staff at Luminance. Luminance is the pioneer in Legal-Grade™ AI, wherever computer meets contract. Built on a proprietary Large Language Model (LLM), Luminance brings specialist AI to every touchpoint a business has with its contracts, from generation to negotiation and post-execution analysis. Developed by AI experts from the University of Cambridge and validated by leading lawyers, Luminance’s AI is now in use by 700+ customers in 70+ countries, from AMD and the LG Group to Hitachi, BBC Studios and Staples.
