Britain’s Big AI Summit Is a Doom-Obsessed Mess

UK prime minister Rishi Sunak’s global summit on AI governance will focus on extreme scenarios of algorithms causing harm. Many British AI experts would rather he focus on near-term problems.
Illustration: Eugene Mymrin/Getty Images

The UK government, with its reversals on climate policy and commitment to oil drilling and air pollution, usually seems to be pro-apocalypse. But lately, senior British politicians have been on a save-the-world tour. Prime minister Rishi Sunak, his ministers, and diplomats have been briefing their international counterparts about the existential dangers of runaway artificial superintelligence, which, they warn, could engineer bioweapons, empower autocrats, undermine democracy, and threaten the financial system. “I do not believe we can hold back the tide,” deputy prime minister Oliver Dowden told the United Nations in late September.

Dowden’s doomerism is supposed to drum up support for the UK government’s global summit on AI governance, scheduled for November 1 and 2. The event is being billed as the moment that the tide turns on the specter of killer AI, a chance to start building international consensus toward mitigating that risk. The summit is an important event for Sunak, who has trumpeted his desire to turn the UK into “not just the intellectual home, but the geographical home of global AI safety regulation,” along with broader plans to create a “new Silicon Valley” and a “technology superpower.” But just over a week before it begins, the summit looks set to be simultaneously doom-laden and underwhelming. Two sources with direct knowledge of the proposed content of discussions say that its flagship initiative will be a voluntary global register of large AI models—an essentially toothless measure. Its ability to capture the full range of leading global AI projects would depend on the good will of large US and Chinese tech companies, which don’t generally see eye to eye.

How is the rest of the summit shaping up? Sources close to negotiations say that the US government is annoyed that the UK has invited Chinese officials (and so are some members of the UK’s ruling Conservative Party). The attendee list hasn’t been released, but leading companies and investors in the UK’s domestic AI sector are angry that they’ve not been invited, cutting them out of discussions about the future of their industry. And they and other AI experts say that the government’s focus on the fringe concern of AI-driven cataclysm means the event will ignore the more immediate real-world risks of the technology—and all of its potential upsides.

“I don’t know what the UK is bringing to the table in all this,” says Keegan McBride, a lecturer in AI, government, and policy at the Oxford Internet Institute. “They’re so narrow in their focus.” He and others in the British AI scene argue it would be better for the government to instead look at how it can help British AI companies compete at a moment of rapid change and huge investment in AI.

The summit agenda says that it will cover two types of AI: that which has narrow but potentially dangerous capabilities—such as models that could be used to develop bioweapons—and “frontier AI,” a somewhat nebulous concept that the UK is defining as huge, multipurpose artificial intelligence that matches or exceeds the power of large language models like the one behind OpenAI’s ChatGPT. That filter automatically narrows the list of attendees. “Only a handful of companies are doing this,” says McBride. “They’re almost all American or Chinese, and the infrastructure that you need to train these sorts of models is basically all owned by American companies like Amazon or Google or Microsoft.”

WIRED spoke to more than a dozen British AI experts and executives. None had been invited to the summit. The only representative of the UK’s AI industry known to be attending is Google DeepMind, which was founded in London but acquired by the search giant in 2014. That’s causing a lot of frustration.

“A lot of modern day AI was developed in the UK,” says Sachin Dev Duggal, CEO of Builder.ai, an AI-powered app development startup based in London. “On one hand we’ll say we’re the AI center of the world, but on the other we’re saying we don’t want to trust our own CEOs and entrepreneurs or researchers to have a more prevalent voice. It doesn’t make sense.”

Much of the world’s cloud computing and social media infrastructure is owned by US companies, which already puts UK companies—and British regulators—at a disadvantage, Duggal says. If industry-shaping deals get done without domestic businesses having any input, the next generation of tech could also end up being concentrated in the hands of a few huge US companies. “There’s a group of us that are pretty concerned,” he says.

Duggal’s view was shared by others in the UK’s AI industry, who complain that the obsession with frontier models misses “everything behind that frontier,” as one executive at a unicorn AI startup puts it, speaking anonymously because they still hope for a summit invite. That includes every startup, every academic team developing its own AI, and every application of the technology that’s currently possible, the executive says. The frontier focus also excludes open source language models, the best of which are seen as only slightly behind the leading proprietary models but can be downloaded and used—or misused—by anyone.

The UK government has promised to invest more than $1 billion in AI-related initiatives, including funding to develop the local semiconductor industry, a new supercomputer in Bristol to support AI research, and various task forces and promotion bodies. How much they’ll help remains to be seen—critics point out that in global terms, it’s not a great deal of money. Powering up both the chip and AI industries with a single billion dollars, while starting well behind the leaders in the US and Asia, will be challenging. And the funding is not necessarily flowing into British companies. In May, the CEO of Graphcore, a Bristol-based startup that makes specialist chips for AI, asked the government to earmark some of the funds for UK manufacturers. That didn’t happen, and this month Graphcore warned it needed an injection of cash to stay in business.

“What’s very weird is the government is saying that AI can do all this sort of stuff, it’s so powerful it can literally end the world,” Oxford’s McBride says. “But you would expect them to also be sort of investigating how to harness its power. The rest of the world is going to be looking to America and to the United Kingdom to figure out how they can use this stuff. And at the moment, the UK doesn't really have much to show the rest of the world.”

The UK’s parliament hasn’t begun debating any domestic AI regulation on the scale of the European Union’s AI Act, although the government has released a white paper that recommends a less restrictive set of rules in order to promote growth in the industry. But it’s a long way from being policy or law, and the EU has set the pace.

“It is pretty embarrassing that the UK is not regulating itself,” says Mark Brakel, director of policy at the Future of Life Institute, a US think tank that focuses on existential risks. In the US, there are concrete proposals on regulation in the Senate. The EU’s AI Act is close to becoming law. Brazil is developing its own regulations, as is China, Brakel says. “But we have nothing in the UK. If you’re the hosts, I think it would make sense if you were able to put something on the table yourself.”

Brakel, whose institute was behind a headline-grabbing open letter in March that called for a pause on AI development, is very supportive of the idea of the summit. The institute, which is backed by leading figures in tech, including Skype cocreator Jaan Tallinn, has been very active in lobbying governments to take existential risks seriously. But even Brakel’s hopes for the outcome of the UK event are quite limited. “This is, I think, AI risk 101,” Brakel says. “I would be really happy if everyone leaving that summit is in agreement about what the most important risks are and what they need to focus on.”

That may not be enough for Sunak, whose government has expended considerable political capital assembling the summit. US vice president Kamala Harris is set to attend. But the UK has also invited a Chinese delegation, which has reportedly angered US officials, who see Beijing as a strategic threat. Reports in the UK press suggest that the Chinese officials may now only be allowed to attend half of the summit. European officials will be attending—although France will host its own AI summit, organized by telecoms billionaire Xavier Niel, two weeks after the UK’s. On October 18, China’s Cyberspace Administration announced its own global AI governance initiative.

The gatherings hosted by individual countries also have competition from international forums, including the UN and G7, which are looking into multilateral approaches for regulating AI. It’s not clear how the UK’s approach will differ—or whether any state-to-state agreement capable of meaningfully changing the course of AI development is even possible at such an early stage.

“I completely agree with [Sunak’s] strategy, which is to attempt international consensus. But my guess is international consensus will form only around the broadest of principles,” says Jeremy Wright, a former UK digital minister for Sunak’s Conservative Party. “Feasibly, if you’re going to do anything, you probably have to do it nationally before you do it internationally.”

Two sources with knowledge of discussions confirmed Politico’s reporting from earlier this month that Sunak will pitch an AI Safety Institute to attendees. And, they said, the British government will propose a register of frontier models that would let governments see inside the black box of frontier AI and get ahead of any potential dangers. The initiative will involve asking model developers to provide early access to their models so they can be “red teamed” and their potential risks assessed.

Most of the big US companies have already signed up to voluntary AI safety commitments brokered by the White House. It’s not clear why they’d feel the need to sign up to a new pledge and commit to handing over valuable proprietary information to a UK body.

Critics of the UK’s doom summit—including members of the ruling Conservative Party—fear it is doomed to, at best, mediocrity. The real reason the summit has been rushed through, they say, is domestic politics. It’s a chance for Sunak to appear, or at least pretend, to be leading the world at a time when he is trailing in the polls and seen as almost certain to lose power at the next election. The evidence of that, several insiders point out, is the choice of venue—a 19th-century country mansion associated with a time when the UK truly was a top global power in computing.

Bletchley Park was where Britain’s World War II cryptographers cracked the Nazis’ Enigma code. The site is indelibly linked with one of the most significant figures in British computing, Alan Turing—which is, no doubt, why the government chose it. Practically, it makes less sense. Bletchley Park is 50 miles from London and “a pain in the arse to get to,” according to one government adviser, speaking on condition of anonymity because they still occasionally work for the Department for Science, Innovation and Technology. But that distance doesn’t make it conveniently remote and secure either. During the war, the campus was situated away from prying eyes, but it is now on the outskirts of Milton Keynes, a new town founded in the 1960s that has long been a punchline in the UK, synonymous with concrete blandness and famed for its profusion of roundabouts.

It’s a venue that, like the summit itself, suggests to some that symbolism triumphed over substance. One tech executive, speaking on condition of anonymity because he was still hoping to do business with the government, calls it “government by photo op.” He’s taking solace in the fact that Sunak’s Conservative Party is likely to lose the next election, which has to be held before January 2025. “They’ll be gone in 18 months,” he says.