The eyes of the world are currently trained on the Middle East. But this week, just for a few days, Rishi Sunak wishes they were rather on Milton Keynes.
Today sees the launch of his summit on artificial intelligence (AI) at Bletchley Park. The spirit of Alan Turing is being summoned in an unprecedented attempt to get politicians and industry leaders together to discuss the opportunities and risks associated with this new technology. It is also the latest attempt by a British Prime Minister to answer Dean Acheson’s jibe.
We have all become more familiar with AI over the last year. Even if some of us haven’t got much beyond experimenting with visions of Cymru-Futurism, since OpenAI launched ChatGPT ten months or so ago, investment in the technology has surged. Britain lags only China and the United States in the funds we are putting in.
But the question remains: should we be excited, or wary, about AI? Speaking last Thursday, the Prime Minister’s tone about its potential consequences was gloomy. He raised the prospect of humanity “losing control of AI completely”. A “superintelligence” could bring about human “extinction”. Cue sub-editors reaching for Terminator screengrabs.
Even if we swerved the more “extreme risks”, AI could still speed up the creation of chemical and biological weapons, spread disinformation, facilitate cyber attacks, and enable child sexual abuse. Sunak suggested “this is not a risk people need to be losing sleep over now” but announced the creation of the world’s first AI safety institute to examine “all the risks”.
Nonetheless, there was little sign of the Prime Minister joining Elon Musk and another 15,000 industry luminaries in calling for a six-month pause on the technology’s development. He suggested that since “we believe in innovation” we should not “rush to regulate” this new technology.
Sebastian Payne has a good rundown of AI’s capacity for speeding up productivity in the NHS and civil service. It could be used to diagnose conditions faster, or automate Whitehall’s processing of everything from benefits to asylum claims. Collapsing public sector productivity has wreaked havoc with the public finances. Who cares if civil servants are WFH if they can be replaced with a robot?
For its most Panglossian proponents, AI is a truly transformative technological advance. Nothing else on the horizon, according to Tyler Cowen, has more potential to boost our stagnant growth rates than AI. Whatever the disruption it causes, it provides our best shot at escaping the West’s demographic stagnation.
We at ConservativeHome are natural fans of anything that could ease pressure on the public finances and bolster Britain’s international position. Using our freedom outside of the EU to make Britain a world leader in transformative technology is an obvious post-Brexit success. Why should we abandon a thriving new infant industry for some sci-fi fantasies?
Unfortunately, the more I read about AI, the more depressed about it I become. Not necessarily for my own job. ChatGPT can write a handy article about AI’s consequences for the British economy, but it can’t do SW1 gossip. No, I’m very seriously worried that AI could be a real threat to humanity’s future – and that Sunak should tell Kamala Harris, Google, and anyone else attending this week to shut all research down.
Hyperbole? Did I go and see Oppenheimer once too often? Has the natural pessimism instilled by teenage years spent reading John Gray articles blinded me to the benefits of the biggest technological revolution since Johannes Gutenberg got a sore wrist? Perhaps. But there are also signs the development of AI is spiralling out of control.
Experts and industry leaders differ on when they think a “superintelligence” – an AI that exceeds human intelligence – will be created. But reaching that end is the stated goal of companies such as Google DeepMind and OpenAI. Many predict it will be achieved within the next decade, some within two years. Some have even suggested one already exists. Developers are increasingly cagey.
In historical terms, this would be a development comparable only to the creation of the Atomic Bomb. Suddenly, anything humanity could do, an AI could do better. No longer would Homo sapiens (or the mice, or the dolphins) be the most intelligent lifeform on Planet Earth. If the Bomb gave mankind the ability to wipe ourselves out, AI provides us with the opportunity to replace ourselves.
Rogue AIs – like Skynet or Hal 9000 – are science fiction staples. But they could become a genuine reality. AIs are becoming increasingly unpredictable, all the while becoming more sophisticated and harder for human beings to control. Already, AIs have arrived that can design internationally banned chemical weapons in a matter of hours.
But AI could go rogue in pursuing even the most innocuous task, treating humans as an obstacle to achieving its ambitions in ways we cannot foresee. One thinker has pointed towards an AI that wipes out humanity in its quest to make as many paperclips as possible, outsmarting our attempts to shut it down as it turns all available resources towards maximising production.
Again, it seems ludicrous. But back in April, Eliezer Yudkowsky, the head of California’s Machine Intelligence Research Institute, wrote a Time article suggesting that if “somebody builds a too-powerful AI, under present conditions”, he expects “every single member of the human species and all biological life on Earth dies shortly thereafter”. He recommended a moratorium on all development.
That is obviously a fantasy. The nature of competition means that any lab asked to observe a pause would be concerned its competitors wouldn’t do so. That is doubly the case when we are dealing with a nascent Cold War II. Why should Beijing listen to Silicon Valley? Why should OpenAI allow other labs the chance to catch up with it?
This is why Sunak is putting his faith in international regulation, encouraging the creation of UN institutions designed to oversee, monitor, and hopefully control AI’s development. But as Niall Ferguson has highlighted, global efforts to regulate both nuclear and biological weaponry have been far from successful. Nor have attempts to coordinate global action on climate change.
Regulating AI is even harder. Building a nuclear or biological weapon is difficult, and usually only within the capacity of a state. But AI research is primarily concentrated in the private sector. Last year, 32 private companies produced machine-learning models, compared to only three from academic institutions.
In extremis, Washington could bomb a North Korean or Iranian nuclear testing site. Is it also supposed to bomb Silicon Valley? How can a government stop individuals coding? The next Robert Oppenheimer might be a teen with a laptop.
If you could go back and tell the scientist to stick to snogging Florence Pugh, would you? Some would say nuclear weapons have saved millions of lives through the balance of terror. But others think that’s nonsense, that a nuclear holocaust has been avoided only by luck and accident.
The risks from AI are directly comparable. Attempts to encode human morality within it are fundamentally flawed. Those within the industry – hooked on funding – will downplay such concerns, whilst politicians will relish any attempt to seem cutting edge. Sunak should tell attendees to keep Pandora’s Box locked. But it might already be open.
So when we’re praying to our new robot overlords, picking our way through the smouldering radioactive ruins, or drowning under mountains of paperclips, at least be grateful that we got a year or two of funny generated images, new Beatles records, and the slightly faster processing of asylum claims. Backing AI might be a Brexit win, but it is a historic loss for the human race.