Alex Hern 

TechScape: Why is the UK so slow to regulate AI?

Britain has announced £10m for regulators but has done very little to mitigate the risks linked with artificial intelligence. Plus, Facebook’s deepfake Biden conundrum
  
  

Rishi Sunak at the AI safety summit last November. Photograph: Chris J Ratcliffe/EPA

Britain wants to lead the world in AI regulation. But AI regulation is a rapidly evolving, contested policy space in which there’s little agreement over what a good outcome would look like, let alone the best methods to get there. And being the third most important hub of AI research in the world doesn’t give you an awful lot of power when the first two are the US and China.

How to slice through this Gordian knot? Simple: move swiftly and decisively to do … absolutely nothing.

The British government took the next step in its AI regulation plans today. From our story:

The government will acknowledge on Tuesday that binding measures for overseeing cutting-edge AI development are needed at some point – but not immediately. Instead, ministers will set out “initial thinking for future binding requirements” for advanced systems and discuss them with technical, legal and civil society experts.

The government will also give £10m to regulators to help them tackle AI risks, as well as requiring them to set out their approach to the technology by 30 April.

When the first draft of the AI white paper was released, in March 2023, reaction was dismissive. The government’s proposals dropped on the same day as the now-notorious call for a six-month “pause” in AI research to control the risk of out-of-control systems. Against that background, the white paper seemed pitiful.

The proposal was to give regulators no new powers at all, nor to hand any individual body the responsibility for guiding AI development. Instead, the government planned to coordinate existing regulators, such as the Competition and Markets Authority and the Health and Safety Executive, offering five principles to guide them when they think about AI.

The Ada Lovelace Institute, the UK’s leading AI research group, criticised this approach for having “significant gaps” – even ignoring the fact that a years-long legislative process would leave AI unregulated in the interim.

So what’s changed? Well, the government has found a truly whopping £10m to hand to regulators to “upskill” them, and it has set a deadline of 30 April for the biggest to publish their AI plans. “The UK government will not rush to legislate, or risk implementing ‘quick-fix’ rules that would soon become outdated or ineffective,” a Department for Science, Innovation and Technology spokesperson said.

It is an odd definition of “global AI leadership”, where being the quickest to say “we’re not doing anything” counts. The government is also “thinking” about real regulations, positing “future binding requirements, which could be introduced for developers building the most advanced AI systems”.

A second, slightly larger, pot of money will launch “nine new research hubs across the UK” funded by “nearly” £90m. The government also announced £2m of funding to support “new research projects that will help to define what responsible AI looks like”.

There’s a tragicomic element to reading a government press release that triumphantly announces £2m of funding just a week after Yoshua Bengio, one of the three “godfathers” of AI, urged Canada to spend $1bn building a publicly owned supercomputer to keep up with the technology giants. It’s like bringing a spoon to a knife fight.

You can call it staying nimble in the face of conflicting demands, but after a while – 11 months and counting – it just looks like an inability to commit. The day before the latest updates to the AI white paper were announced, the Financial Times broke the news that a different pillar of AI regulation had collapsed. From its story (£):

The Intellectual Property Office, the UK government’s agency overseeing copyright laws, has been consulting with AI companies and rights holders to produce guidance on text and data mining, where AI models are trained on existing materials such as books and music.

However, the group of industry executives convened by the IPO that oversees the work has been unable to agree on a voluntary code of practice, meaning that it has returned responsibility to officials at the Department for Science, Innovation and Technology.

Unlike broader AI regulation – where there’s a morass of conflicting opinions and some very vague long-term goals – copyright reform is quite a clean trade-off. On the one hand, creative and media businesses that own valuable intellectual property; on the other, technology firms that can use that IP to build valuable AI tools. One or the other group is going to be irritated by the outcome; a perfect compromise would merely mean both are.

Last month, the boss of Getty Images was one of many calling on the UK to back its creative industries, one-tenth of the British economy, over the theoretical benefits that AI might bring in the future. And so, faced with a hard choice to make and no right answer, the government chose to do nothing. That way, it can’t lead the world in the wrong direction. And isn’t that what leadership is all about?

Deeply fake

To be fair to the government, there are obvious problems with moving too fast. To see some of them, let’s look at social media. Facebook’s rules don’t ban deepfake videos of Joe Biden, its oversight board (AKA its “supreme court”) has found. But it’s honestly not clear what they do ban, which is going to be an increasing problem. From our story:

Meta’s oversight board has found that a Facebook video wrongfully suggesting that the US president, Joe Biden, is a paedophile does not violate the company’s current rules while deeming those rules “incoherent” and too narrowly focused on AI-generated content.

The board, which is funded by Meta – Facebook’s parent company – but run independently, took on the Biden video case in October in response to a user complaint about an altered seven-second video of the president.

Facebook rushed out a policy on “manipulated media” amid growing interest in deepfakes a few years ago, before ChatGPT and large language models became the AI fad du jour. The rules barred misleadingly altered videos made by AI.

The problem, the oversight board notes, is that it is an impossible policy to apply, with little obvious rationale behind it and no clear theory of harm it seeks to prevent. How is a moderator supposed to distinguish between a video made by AI, which is banned, and a video made by a skilled video editor, which is allowed? Even if they can distinguish them, why is only the former problematic enough to remove from the site?

The oversight board suggested updating the rules to remove the faddish reference to AI entirely, instead requiring labels identifying audio and video content as manipulated, regardless of the manipulation technique. Meta said it would update the policy.

Age-appropriate social media

In the wake of her daughter’s murder by two classmates, Esther Ghey, the mother of Brianna Ghey, has called for a revolution in how we approach teenage use of social media. Under-16s, she says, should be limited to using devices built for teens, which can be easily monitored by parents, with the full spectrum of tech-enabled living age-gated by the government or tech companies.

I spoke to Archie Bland, editor of our daily newsletter First Edition, about her pleas:

That lament will resonate with many parents, but has specific power in Brianna’s case. She had “secretly accessed pro-anorexia and self-harm sites on her smartphone”, a petition created by Esther says. And prosecutors said that her killers had used Google to search for poisons, “serial killer facts” and ways to combat anxiety, as well as looking for rope on Amazon.

“It’s not that you need new software to do everything that Esther Ghey is asking for,” Alex Hern said. “But there is a broader issue here – in the same way that this field has historically moved faster than governments have been able to keep up with, it’s also moved faster than parents can keep up with. It is different with every app, it changes on a regular basis, and it is a large and difficult job to keep on top of.”

You can read Archie’s whole email here (and do also sign up here to get First Edition every weekday morning).

  • If you want to read the complete version of the newsletter please subscribe to receive TechScape in your inbox every Tuesday.

 
