Peter Franklin is an Associate Editor of UnHerd.
Michelle Donelan is the member of the Cabinet responsible for tech policy. It was in that capacity that she told readers of The Sun that:
“AI [artificial intelligence] is not something we should fear… People should trust that computers which think and learn won’t be used to undermine their safety, their privacy, their rights or their health.”
Reassuring words. However, Donelan found herself contradicted by an open letter calling for an immediate halt to “giant AI experiments”. It was signed by the likes of Elon Musk and Steve Wozniak, people who have every reason to champion technology – but who are so worried by recent developments that they want to give humanity a chance to catch up.
As Conservatives, we ought to be conservative about things. At the very least, this means exercising a degree of caution before rushing headlong into irreversible change. If creating a pluripotent thinking machine doesn’t fall under this heading, then I don’t know what does.
And yet while speeding towards danger is clearly unconservative, the same applies to slamming on the brakes at the first sign of trouble – which I fear is what the signatories of the open letter are demanding.
The current focus of concern is on a class of AI systems called large language models, or LLMs. Examples include ChatGPT, whose underlying model has just been superseded by an even more powerful one, GPT-4.
Like just about everyone, I’ve been surprised – shocked, even – by the ability of these systems to learn the rules of human language and respond in a disconcertingly human-like way to questions. However, there’s no ghost in the machine here; just a mindless process of trial-and-error amplified by the capacity of computers to process data on an epic scale.
Despite the impressive engineering, GPT-4 no more understands the English language than the calculator on your desktop understands mathematics.
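To make that point concrete, here is a deliberately crude toy in Python – my own illustration, not a description of how GPT-4 is actually built. It “learns” which word tends to follow which, purely by counting, and then generates text from those counts:

```python
# A toy next-word predictor: it "learns" purely by counting which word
# follows which in its training text. The corpus and method here are an
# illustration only – real LLMs use neural networks trained on vast
# datasets – but the core idea is the same: predict the next token from
# statistical patterns, with no grasp of what the words mean.
from collections import Counter, defaultdict
import random

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# For each word, count the words that follow it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = follows[word]
    if not counts:  # dead end: the word was never seen mid-corpus
        return "."
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short "sentence" one word at a time.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the rug . the dog"
```

The output can look superficially like English, yet nothing in the program knows what a cat or a mat is. Scale the same predict-the-next-word idea up to a neural network trained on a large slice of the internet and you have, in essence, an LLM.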
Why, then, are so many AI experts worried that we’re about to lose control?
Part of the problem is the materialist worldview that dominates the tech elites: if you believe that a human being is nothing more than a complex arrangement of atoms from which a conscious mind has somehow arisen, then why not assume that the same is possible for a sufficiently complex machine?
Furthermore, if true artificial intelligence does emerge, then why not also assume that it will design ever more sophisticated versions of itself? That is the premise of the so-called Singularity: a godlike AI that may or may not tolerate our continued existence.
It should be said that not every tech bro buys into this new religion. If we can’t even define consciousness, then we can’t code it – let alone a cybernetic god-substitute.
And yet the underlying concept of self-accelerating technological progress remains influential. It is why AI experts get over-excited every time someone makes a breakthrough.
Let’s leave aside the notion of a silicon deity and consider a more plausible scenario: AI as a job-destroying machine.
It wouldn’t be the first time that IT has replaced humans in the workplace: just look at what word processors did to the typing pool or spreadsheet software to the traditional accounts department. But the overall impact of the IT revolution has been to increase, not reduce, demand for knowledge workers.
To become a threat to employment, AI wouldn’t just need to accelerate the pace of technological change, but also reverse its net effect.
If that did happen then, for the first time in decades, we’d have to choose between jobs and productivity.
For a preview of what this dilemma would do to conservative politics, watch this 2018 conversation between Tucker Carlson and Ben Shapiro. The key point comes when they debate the impact of driverless vehicle systems (another aspect of AI) on American truckers and their families.
Shapiro asks Carlson whether he’d really restrict the new technology to protect jobs. “Are you joking?” an incredulous Carlson replies. “In a second!” With ten million jobs at stake, he argues, there’d be no choice.
The irony is that this very example shows why we don’t have to fear an AI jobocalypse just yet. Five years on from that interview, automated cars and trucks have yet to take over. That’s not because the technology hasn’t progressed: a computer can now drive a vehicle for hundreds of miles on public highways in almost all conditions and situations.
However, it’s the “almost” that’s the sticking point. If you still need a human driver one per cent of the time, then – unless you segregate roads to create a 99-per-cent predictable environment – you can’t have fully driverless vehicles at all.
Time will tell, but I suspect that the same one per cent problem also applies to replacing knowledge workers with large language models. Most jobs consist of routine tasks interspersed with unpredictable challenges for which some combination of initiative, creativity, and common sense is required.
Until AI can emulate those qualities it will be a lot easier to automate individual tasks than individual jobs. So, for the time being, neither the Singularity nor the jobocalypse (nor anything like those two prospects) justifies an immediate halt to AI research.
Of course, we don’t need to resort to science fiction to realise that AI still needs to be regulated.
Above all, there’s the danger posed by all the complex and essentially unfathomable code it generates. Human coders already write ramshackle software that produces unpredictable and occasionally disastrous results, especially when code is added piecemeal by multiple individuals and teams.
The end product is a system that no one can fully understand or properly manage; AI-powered software that adds to its own code greatly compounds the problem.
Regulators can’t scrutinise every line of code. But developers should be compelled to design automated systems that contain the impact of catastrophic software failures. The classic thought experiment in which an artificially intelligent paperclip machine turns the whole world into paperclips is only plausible if you give the machine control of the world’s resources. So let’s not do that.
Needless to say, I’m only scratching the surface of the immense regulatory challenge here.
However, it’s also a huge opportunity for Britain. By some reckonings, the UK is the world’s third-most important country for AI. We can cement and build upon our advantages by creating the world’s best system of AI regulation.
By this I don’t mean a free-for-all. Businesses obviously prefer not to be overburdened by bureaucracy, but what they value even more is a predictable policy framework, so that investments aren’t put in peril by sudden shifts in direction – such as Italy’s recent ban on ChatGPT – or by the lobbying tactics of unscrupulous competitors. Much better a robust framework than a flimsy one.
In this respect, this country is well placed to deliver. The American system is rotten with lobbyists; the EU system is a multi-layered mess; whilst the Chinese system isn’t exactly famous for its even-handed treatment of foreign versus domestic companies.
Britain, however, can offer a level playing field, and our success with the biotech sector shows that our regulators can keep pace with developments in a rapidly advancing industry.
But that brings me back to why we shouldn’t call a halt to AI research. Indeed, it’s hard to think of a more self-defeating policy.
Chinese researchers would pause briefly to laugh in our direction and then carry on developing the tools of tomorrow. All hope of understanding – and thus controlling – this vital technology would be lost to us forever.