Editor’s note: Much has happened in a short space of time since my last Doing Westminster Better. To coincide with the AI Safety Summit, I’m putting out this short opinion piece. We’ll return to your regularly scheduled programming soon.
Today’s the day.
Prime Minister Rishi Sunak has brought AI experts and governmental representatives from across the world together. On the agenda: preventing human extinction.
Here are the stakes, as Sunak sees them:
Get this wrong, and AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction on an even greater scale. Criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse.
And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as ‘superintelligence’.
Indeed, to quote the statement made earlier this year by hundreds of the world’s leading AI experts: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. (source)
Sunak is accurately reflecting the conclusions of his scientific advisers; the Government Office for Science has stated “there is insufficient evidence to rule out that highly capable Frontier AI systems, if misaligned or inadequately controlled, could pose an existential threat”. And he’s accurately reflecting, if not a complete consensus, then a widely-held position among widely-respected AI researchers — including Geoffrey Hinton and Yoshua Bengio, whose work laid the foundations for modern AI; and Sam Altman, Demis Hassabis and Dario Amodei, whose labs have secured billions from the likes of Microsoft, Google and Amazon to push that dangerous frontier of AI.
But still! Superintelligence. Loss of control. Extinction. These are extraordinary things for a Prime Minister to say.
Sunak won’t be Prime Minister for ever. In fact, he won’t be Prime Minister for long. If we take the AI research community seriously on the risks, and I believe we should, then should we worry that AI safety policy will be finished when he’s finished?
The next UK general election is a while away yet,1 and politics is full of surprises. But you don’t need to be a superforecaster to see that the Conservatives are probably on their way out. Labour haven’t been behind in the polls since December 2021 and their lead has been solidly around 20 percentage points since Liz Truss. After 13 years and five Prime Ministers, it looks like the Tories have run out of ideas and talent, and the public have run out of patience.
Here’s TIME Magazine’s Yasmeen Serhan with a cutting analysis:
At this year’s conference in Manchester—likely to be the party’s last before the general election expected next year—the mood was […] dour. With less excitement and fewer attendees than years past, this gathering belonged to a party that seemed resigned to its electoral fate—one which polls project will see the Conservatives booted out of government by the ascendant opposition Labour Party after 14 long years in power. (source)
So, short of a black swan, Sunak has about twelve months until he’s replaced by Labour’s Keir Starmer.
It’ll be interesting to see what Sunak does after his premiership. David Cameron and, after flirting with an outrageously fast return to power, Boris Johnson left the House of Commons. Theresa May and Liz Truss have stuck around to needle their successor(s); in fact, Truss’s most recent intervention was to criticise Sunak for inviting China to the AI Safety Summit. Former heads of government sometimes continue to speak out about or even take on an official role concerning their pet cause, like Johnson advocating for Ukraine or Jacinda Ardern assuming a Special Envoy position related to the Christchurch shooting. When a retired political leader lays claim to a cause area like this, they are sometimes signalling that it is not politics as usual, that this subject must be above everyday partisanship. It’s not unimaginable to see Sunak as a Special Envoy on AI for an apolitical AI governance organisation like GovAI or CLTR. That could go very wrong, of course. Sunak is a controversial figure and such an appointment could make AI safety unnecessarily partisan.2
It’s hard to know what Sunak will do next. But there’s a more pressing question: what would Keir Starmer do about AI?
Some insights come from Vincent Manancourt, Tom Bristow and Laurie Clarke’s deep-dive on Starmer’s AI policy for Politico last month. “Keir Starmer loves artificial intelligence,” they write, but in truth Starmer has said little about AI. It clearly doesn’t interest him as it interests Sunak. The only direct quote from Starmer in this piece is about how the UK “must use technology” (generic, unspecified) across the NHS. Sure, Starmer’s May opinion piece on a digital NHS celebrated “AI interpretation of chest X-rays”. But that’s not exactly an AI strategy.
Let’s dig in. The Starmerites, who hold power in the party, want to see innovation and entrepreneurship create economic growth. The country has been stagnant too long; with growth, goes the argument, we can create good jobs and fund public services.3 AI doesn’t just fit into this agenda. AI seems like it was made for this agenda. Economists believe AI could boost global GDP by trillions of USD and potentially even unlock the kind of explosive growth last seen with the Industrial Revolution.
In short, Starmer wants a “decade of national renewal,” and he thinks it could be powered by what Harold Wilson called the white heat of technology. What does that mean for Starmer’s AI policy?
“A Labour government will cut red tape,” says Peter Kyle, Shadow Secretary of State for Science, Innovation and Technology.
Manancourt, Bristow and Clarke note that Starmer’s vision on AI is being shaped by the Tony Blair Institute (TBI). If anything, the journalists understate the influence the TBI have on the current Labour leadership (or, at least, the current Labour policy wonks); the TBI might be Britain’s best-funded, best-connected think tank. And if anything, the journalists understate the importance the TBI ascribe to AI. Since their article was published, the TBI — who had already said that our AI response is “more likely than anything else to determine Britain’s future” — have launched the UK’s AI Startup Roadmap. This is a collaboration with the Startup Coalition and Onward; respectively, a nonprofit lobbying on behalf of startups and a centre-right think tank, which is maybe all the vibe-check you need. The AI Startup Roadmap backs Starmer’s vision for an AI-turbocharged4 economy:
Given the pace at which this technology is developing, the UK is heavily focused on safety. But this cannot come at the expense of innovation. The country must build tomorrow’s giants of the world economy.
AI is likely to be the definitive technology of the century, disrupting and reforming the world around us. From generative AI through to biotech, clean tech, education and much more, this technology will drive economic growth, transform scientific discovery and even help reimagine government itself. The opportunity is vast for the UK and its sluggish economy: AI has the potential to add £400 billion in economic value by 2030. (source)
Here, the TBI frame AI safety as trading off against AI innovation. This rhetoric seems dangerous! A new wave of innovative startups might do a lot for the UK — but they can’t be worth losing control over our own future, or gambling with the extinction of our species. When Peter Kyle talks about cutting red tape, he shouldn’t mean AI safety regulation.
But fortunately, I don’t think that’s what Kyle does mean, and I think the TBI only kinda-sorta think safety trades off against innovation.
Here’s what Kyle said after Sunak’s AI speech:
Artificial intelligence is already having huge benefits for Britain, and the potential of this next generation of AI could be endless, but it poses risks as well. Safety must come first to prevent this technology getting out of control.
Rishi Sunak should back up his words with action and publish the next steps on how we can ensure the public is protected.
We are still yet to see concrete proposals on how the government is going to regulate the most powerful AI models. A Labour government would set clear standards for AI safety, so that this leading tech can be used to restore our public services and boost growth. (source)
I’m usually all for criticising the Tories, but it seems a little harsh to snipe at Sunak here. His government is, to my continual astonishment, leading the world on AI safety. But such is politics. In any case, Kyle’s statement makes clear that, as much as Labour wish they could just talk about AI “restor[ing] our public services and boost[ing] growth”, they also recognise the importance of safety regulation for Frontier AI. Safety must come first.
Meanwhile, the TBI have written that “safety regulation that is too rigid could stifle needed development in AI safety technology” (p.23).5 They’re clearly reporting on startups’ fears of an overly burdensome regulatory regime, and frustrations with a lack of clarity on what that regime will look like. But it’s interesting that the TBI suggest here that the real problem isn’t so much that safety “come[s] at the expense of innovation”, but that innovation is safety, such that restricting innovation means blocking progress in AI safety.
There are a couple of ways to read this:
On one reading, the debate here isn’t so much about innovation versus safety, but about competing visions for best safety practices. We should be wary of companies pushing for public policy to disengage from the safety question; that’s the kind of regulatory capture that has blocked much-needed and widely-supported policy efforts on animal welfare, climate change and tobacco. But here we can feel ourselves arriving at thorny, heated political questions that exist in contexts beyond AI. Will the AI free market solve AI externalities? It’s been rebadged, but this conversation isn’t new to the political world.6
On another reading, the debate here isn’t necessarily about existentially risky AI at all. The only AI safety risks mentioned in this TBI paper are “data breaches, the proliferation of disinformation, and hallucination or AI manipulation” (p.23). These things are all important; I think we should be especially worried about what AI disinformation will do to democracy. But you might think — or, more to the point, the Labour leadership might think — that existentially risky AI should be held to stricter standards of scrutiny and safety.
Reading so deeply into the TBI is interesting to me only in so far as the TBI is today something like the unconscious mind of the Labour Party, as the Fabian Society or Momentum have been before. What really matters is: when Kyle says the government should regulate the most powerful AI models, which models does he mean?7 When he says a Labour government would set clear standards for AI safety, how would his standards balance private innovation and regulatory restrictiveness?8 And when the Labour Party say AI will be dangerous, do they recognise that many AI researchers believe superintelligence is an existential threat?
Extinction is bigger than politics, so I think it’s time for the Labour Party to say if they think AI is an existential threat. If the TBI are Labour’s unconscious mind, it’s noteworthy that they have written: “Inadequately aligned AI systems smarter than humans could seek to achieve their objectives in ways that involve the seizure of ultimate control”. But I don’t blame Labour, or any party, for not having a full AI policy agenda yet; AI governance seems extremely complicated, and the sudden, surprising window of political possibility here has only just opened up.
You may remember the surprise when Kyle won the Shadow DSIT position; Darren Jones had been seen as the most likely candidate. Jones, who is now Shadow Chief Secretary to the Treasury, recently lent some clarity to Labour’s position on AI. I’ll quote him (almost) in full:
I think it’s right that we’re hosting this Summit. I called for a Summit of this nature at the start of the year, in my capacity as Chair of the Interparliamentary Forum on Emerging Technologies. But the key questions for the Summit is — there’s two key questions. The first is: what is the outcome in terms of policy and regulation here in the UK, as it relates to the impact of technology on people’s lives, whether that’s in work or public services or decisions that are made about them? And, quite frankly, we’ve not seen anything from Rishi Sunak on that. He’s not even responded to his own consultation on the AI White Paper, and so it’ll be interesting to see whether he makes any progress on that issue.
And the second question is: what is the point of hosting a Summit like this in Britain? What role will Britain play in the future, in terms of influencing and regulating this sector? […] The United Nations is doing work on this, the OECD, the G7, there are other organisations working at a global level. And, if the Prime Minister doesn’t secure a commitment off the back of this Summit for Britain to continue to convene and play a coordinating, central role in this space, then it will be a failure of the Summit to achieve one of the core outcomes of holding it here in the first place.
According to reports, it looks as if the Prime Minister and other people at the conference are going to commit to voluntary schemes, something called “responsible scaling,” where essentially you let a company scale their AI models freely and you rely on the company to then tell you if something bad happens. Well, that either assumes that the company knows something bad has happened or is about to happen, or that the company is actually willing to tell you in the first place. [BEN: I think he means both/and, not either/or] And we in the Labour Party say: we don’t want to smother these innovative companies with regulation, because we want that investment here in the UK, but we do think there needs to be some more mandatory aspect to supervision of these models, and we don’t think the government’s currently got that balance right.
Is this the closest we’ve come to commitments on AI safety policy from Labour? I’m glad to see it. Some thoughts:
I’m pleased that we have a bipartisan consensus that AI governance will require international engagement. (The TBI also support the Summit and continued global coordination.)
It looks like Labour’s domestic AI policy will look out for workers and victims of algorithmic bias, like the Bulgarian nationals wrongly flagged as fraudsters by an AI-assisted process inside the DWP.
Labour want regulation that’s stronger than voluntary commitments to responsible scaling policies (RSPs). I’ll hold my hands up and say I’m a little out of the loop, but I know RSPs have been talked about as an alternative to a moratorium on capabilities research. If you’re a mandatory commitments advocate and you’re disappointed with Sunak, you have a year to change his mind, but you also have a new, more sympathetic government coming soon.
Although I think Sunak has been right to sit down with Altman, Hassabis and Amodei, and to engage with AI labs, I take worries about regulatory capture seriously. No, companies aren’t galaxybrainedly lying about existential risk to create an AI cartel. But I don’t think these companies are acting in the public’s best interest (because they are building technology they say could kill everybody), and I think companies tend towards acting only in their own interest. I would respect a government that stands up to Big Tech and its AI labs.
If you think the existential threat from superintelligence is one of our planet’s biggest, most neglected problems, then I think it would be reasonable to worry that the end of Sunak’s premiership will spell the end of AI safety policy. I wish we knew more about what Labour would do on AI, and I hope the party leadership soon state plainly whether or not they believe superintelligence could be an existential threat. Extinction is bigger than politics.
That said, some of Labour’s vision for AI does excite me. I’m wary of treating AI like a silver bullet, but I want everybody to share in the benefits of (AI-turbocharged?) economic growth, like more houses and better jobs, and ensuring that the rising tide of AI lifts all boats will mean strengthening workers’ rights. I’m really uncertain what Frontier AI regulation will be best, but I agree that a mandatory approach will be important. The incentives are too strong to trust AI labs to regulate voluntarily. And I’m keen for the UK to continue to coordinate international fora on AI. The risks and the rewards affect everybody, so no country should be making decisions on this scale without global democratic input.
In closing, I’d like to note that the Starmerite orientation towards AI is clearly still being defined. In fact, I think it would be very fair to say there is no Starmerite orientation towards AI yet. Many factors will shape the Labour Party’s attitude towards AI in the run-up to the next general election. Other factions within the party will have their say, not least the trade unions (which remain constitutionally and financially linked to the Labour Party) as they push to protect workers from firing by algorithm.9 Starmer may feel pressured to keep the pace up in the AI safety race; at the very least, given his open admiration of the Biden administration, it may be strange to fall behind Biden’s recent Executive Order. And Starmer may feel mandated by public opinion to clamp down on AI labs, with polling suggesting the UK wants to place limits on AI models we don’t fully understand and prevent AI from quickly achieving superhuman capabilities.
For now, I’m looking forward to seeing what Rishi Sunak achieves at the AI Safety Summit, and over his next ~12 months as Prime Minister. In the future, I’m hoping to vote in a Prime Minister Keir Starmer who will find himself well-positioned to take on AI, from existential risk to workers’ rights, with international backing and broad public support.
It might be almost exactly a year away; the rumour mill has been muttering about an election on Halloween 2024. Will you dread or relish the front page puns?
As a side-note, I think Johnson’s lobbying on Ukraine aid benefits from his reputation as a partisan. He directs some efforts towards US Republicans, who are at risk of turning against Ukraine aid spending but can see a shared political identity with the former Prime Minister through Brexit.
I’m not an economics expert, but I find this case pretty compelling. I liked John Burn-Murdoch’s article showing “Britain’s supply of skilled workers is outstripping demand in the form of skilled jobs” (paywalled).
Feel free to substitute your political buzzword of choice.
By “AI safety,” the TBI are here referring to a broader set of concerns than existential risk. I’ll discuss this in a moment.
And I note that there are versions of this discussion that are familiar to the AI safety research community. Something like, will developing AI capabilities help solve technical AI alignment? Or perhaps, will technical AI safety research accidentally boost AI capabilities?
Would Labour define them by capabilities, by FLOP or by something else? And, if by capabilities, then by which capabilities?
And there’s an important third component here, which I’ve not really mentioned because Labour hasn’t really mentioned it: public research on AI safety.
I’ve written about trade unions’ attitudes to AI policy in Doing Westminster Better before. I think we shouldn’t underestimate their importance to political debate in the UK, and I think that, even if you’re sceptical about existential risk from AI, we should take extremely seriously some economists’ warnings that many jobs could be replaced by AI.