It’s 78 years (and one Christopher Nolan movie) since Robert Oppenheimer set the benchmark for being a drama queen by detonating the first atomic bomb in the New Mexico desert. Since then, the human race has successfully endured the bombings of Hiroshima and Nagasaki, a half-century of Cold War, nuclear proliferation, and a few near misses without blowing ourselves all to kingdom come. How?
One school of thought suggests human beings are fundamentally rational creatures. The logic of Mutually Assured Destruction means no sane leader would ever press the nuclear button. Deterrence works. If the situation ever gets out of hand – or ‘goes Cuban’ – sensible leaders will step back from the brink, install some snazzy telephones, and sit down for a chat in one of those international institutions they so fetishise.
I prefer another school of thought: that for the last eight decades, humanity has got lucky on a huge scale. The Cuban Missile Crisis was not resolved peacefully because international relations theory suggested it should be, but because John F. Kennedy had been reading up on the First World War, and knew his military advisors were speaking tosh. And because Vasili Arkhipov disagreed with his commander’s presumption that being cut off from Moscow meant launching a nuclear torpedo.
My potted history of you, me, and the atom bomb is a roundabout way of introducing our relationship with another new form of technology with the potential – if the Terminator-laden profiles beloved by our popular press are to be believed – to have a similarly epoch-shaking impact: artificial intelligence (AI). The fear: AI ends our reign as the second smartest thing on the planet (after the dolphins). If mankind avoided nuclear apocalypse only through luck, what chance do we have against Frankenstein’s chatbot?
Today, GPT-4: a useful tool for lazy schoolboys. Tomorrow: a dystopia of job-stealing computer code, self-generating misinformation, and the inevitable triumph of some machine superintelligence that decides silly little humanity stands in the way of its perfect functioning. An Austrian twang is optional.
These fears have Elon Musk and Steve Wozniak calling for a six-month pause on the development of AI systems more powerful than GPT-4. They also have Rishi Sunak aiming not only to host a global conference on AI’s future, but to convince the Americans to allow us to establish a “nuclear-style global AI watchdog” in London. AI is developing so fast that Downing Street’s own advisors are being blindsided.
However, those of us sad enough to admit to subscribing to the Substack of Dominic Cummings were recently greeted with a very different take on AI’s impact from the popular doom and gloom. Cummings highlighted a piece by Marc Andreessen – an American tech investor and software engineer – entitled ‘Why AI will save the world’. It’s worth a read.
For the uninitiated, Andreessen provides a useful definition of what AI is: the “application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it”. It is a computer program, not a killer robot. It is, according to Andreessen, a “way to make everything we care about better”.
The application of human intelligence has taken human beings, in a few thousand years, from caves to skyscrapers. AI offers the chance to put rocket boosters under our own natural capacities. AI tutors, assistants, and advisers will become embedded at every level of education, research, business, life, and politics.
Consequently, productivity growth “will accelerate dramatically” and a door will be opened onto a “new era of heightened material prosperity for the planet”. AI “is quite possibly the most important – and best – thing our civilisation has ever created” and supporting its development and proliferation is “a moral obligation we have to ourselves, to our children, and to our future”.
Andreessen suggests the hysteria that dominates so much of popular discussion of AI is driven by an unfortunate combination of corporate interests aiming to regulate away the competition, political grifters aiming to censor their opponents, and mankind’s natural tendency towards millenarianism.
He does not deny that AI carries risks. If it heightens our ability to apply our own intelligence towards good, it does the same for our ability to do evil; a cure for cancer versus terrorist bioweapons. But Andreessen argues the way to guard against bad outcomes is to build safety into AI systems and deploy them against bad actors. Use good AI to destroy bad AI, Thanos-style.
Obviously, whilst such a suggestion might make sense to a Silicon Valley investor, it sits uneasily with the mind of the man in Whitehall. It requires the coordination of decentralised actors, rather than the clunking fist of top-down regulation. It also provides fewer opportunities for the Prime Minister to suggest we can become world leaders in an industry by strangling it in its infancy.
That is the approach that the European Union – in its efforts to become a “global regulatory superpower” – has been taking for decades. All Brussels has succeeded in doing in that time is unlearning the lessons of the Renaissance and killing off much of the continent’s tech industry. It is not an approach we should emulate: Britain currently trails only the United States and China when it comes to AI.
As Peter Franklin has argued, Beijing is also the reason why we should not comply with Musk’s suggestion of a pause – and why efforts to establish an international regulatory regime are inevitably non-starters. Whereas we in the West might see AI as, at best, an amusing way to bring Philip Larkin back from the dead, China is already deploying it as their latest tool of repression.
From chatbots that can’t talk about Tiananmen Square to the automated biometric tracking of the general population, China aims to create the very dystopia that Andreessen rejects. With the rapid development of AI-controlled battlefield equipment, Schwarzenegger isn’t that far away after all. Efforts to get the Chinese to agree on common international rules for AI will not work when their vision for its future is so different from our own.
Consequently, we must accept that Pandora’s Box has been opened. The development of AI can be shaped, but not stopped. As Cummings points out, Oppenheimer would not have paused the Manhattan Project for six months to give the Nazis a chance to catch up. To do so now would be almost as absurd as declaring on Day One of the Ashes with Joe Root batting sublimely.
What does this mean for Sunak? He should heed the warnings of Tony Blair and William Hague, and ensure that his new AI Taskforce not only has the independence from Whitehall’s inertia that empowered its vaccine predecessor, but that its funding is scaled up massively. In 2045, HS2 still won’t have been built. Will pumping billions more into it than into AI really look like a sound investment?
The Prime Minister’s rhetoric on AI has been noticeably more positive than Keir Starmer’s. He suggests it could “help us achieve the holy grail of public service reform”, whereas Labour has only made some Luddite groaning about deindustrialisation and inequality. But Sunak is enough of a California tech bro to know that AI’s impact will go far beyond streamlining NHS record-keeping.
AI represents an enormous opportunity for humanity. Downing Street should stop entertaining the absurd fiction that London can be the Athens to Silicon Valley’s Rome, and start building the alliances, hiring the right people, and laying the groundwork for the sort of decentralised safety initiative that Andreessen suggests.
If we do not, we could see China write AI’s future – and clumsy regulation would only mean saying Hasta La Vista to a growing industry in which Britain has a half-decent chance of being competitive.