Anthony Browne is MP for South Cambridgeshire and Chair of the Conservative backbench Treasury Committee.
The most complex entity in the known universe – and possibly the unknown universe – is between your ears. The human brain, and the intelligence it is capable of, is so extraordinarily powerful that it has transformed humans from being lion food to overwhelmingly dominating the planet – far more than any species in evolutionary history. But is our brain about to make itself extinct?
I have been fascinated with AI since I was a teenager, when I coded AI programs for the BBC Micro. I was particularly proud of one program that could rewrite its own code while running. In later life, I’ve invested in and advised AI firms.
Over the decades, public interest has waxed and waned, but global attention is now transfixed by ChatGPT. The eloquence and apparent intelligence of ChatGPT has impressed us all and alarmed many.
Some of the leaders of artificial intelligence are so concerned by the pace of change they have jointly called for a pause in AI research, warning it could wipe out humanity, Terminator-style. The more optimistic are merely worried that it will lead to mass unemployment and the end of democracy.
Five years ago, I suggested to policymakers we needed to start thinking about regulating AI, but there was bafflement. Now the Prime Minister Rishi Sunak is brokering a deal to make the UK the home of a new global AI regulatory body.
From the Spinning Jenny onwards, every new technology has led to apocalyptic warnings about its impact. They have almost all proved false alarms. Despite endless waves of automation, more people work now than ever before. Automation makes us more efficient, but we then focus our efforts on the many things that machines can’t do.
But AI is different. The reason we dominate the planet is not because we are faster or stronger than other species, but because we are more intelligent. If we are no longer the most intelligent, we could lose that dominance. It is right to see this as probably the biggest challenge humanity has faced.
In Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark recounts how, over the decades, AI has progressed in fits and starts, with countless dawns proving false, and being followed by AI winters.
But AI is not a single thing, it is a range of capabilities that progress on many fronts. Things that were thought impossible for a computer to do quickly become commonplace. A few years after it was cracked, we now all have incredibly powerful facial recognition in the computers in our pockets, which can also understand the words we speak and follow our commands. ChatGPT crafts extraordinary writing in any style on any subject.
ChatGPT is very impressive, but it is important to realise it is really little more than a flamboyant parrot. It repeats words it has read elsewhere, but has no understanding of what they actually mean, and cannot come up with any new insights.
It is basically advanced pattern recognition – something that AI excels at. Its very fluency deceives us into thinking it understands more than it does. It reminds me of a guide I once had at a Vietnamese temple, who gave eloquent descriptions of what we were seeing in almost flawless English, but when it came to questions, it turned out she couldn’t understand English at all (she could just recite it at length from memory).
What AI can’t do – yet – is advanced general intelligence: scoping out a problem, understanding it, seeing new angles, and producing new solutions. The British company DeepMind produced AlphaZero, the world’s most powerful chess program, and the only one that learnt without studying games that humans played (it had zero human input): it was given the rules of chess and played itself billions of times to learn the best moves in each situation, and created a totally new style of chess playing, in which strategic position matters more than the value of pieces (humans are good at evaluating pieces, but not positions – so we overemphasise the value of pieces).
But AlphaZero could not do what an intelligent human could: be told the rules of chess and then work out how to play tolerably well from the very first game.
No one knows how long artificial general intelligence will take to develop, but we have to assume it will come eventually. That will be the game changer, because artificial intelligence could be deployed to make itself even more powerful, in an ever accelerating cycle of improvement.
In The Singularity Is Near, the Google futurist Ray Kurzweil concludes that the advance of AI will be “super-exponential”, exploding at some point not too far away into effectively infinite intelligence. In his book Superintelligence: Paths, Dangers, Strategies, the Oxford professor Nick Bostrom speculates on an arms race to produce the first superintelligence, because the first country to achieve it would be able to use that superintelligence to stop anyone else following it. It had better be us doing that, not a hostile state: it is a matter of national security.
The prospect of superintelligence is why AI researchers are warning that it could make humans extinct. It could end up outside our control, and it could change whatever objectives we give it. Isaac Asimov’s First Law of Robotics was that a robot can never harm a human, but a superintelligence could end up writing its own rules, as humans do. Or, like the malign computer HAL in 2001: A Space Odyssey, it could interpret rules in harmful ways. Unless a superintelligence were kept in cold storage, physically cut off from the rest of the world, it could copy itself across the internet and could not be stopped by pulling a plug.
Such scenarios are worth thinking about, but not panicking about. At present, AI is a powerful tool that offers huge opportunities. Around my constituency of South Cambridgeshire, life sciences firms are using AI for everything from accelerating the drug discovery process to improving hospital logistics. At Addenbrooke’s Hospital, AI is being developed to radically improve the outcomes of radiotherapy treatment to cure cancer. Healthcare unenhanced by AI will soon seem medieval.
But even now, there are clear risks to AI. Abuse of facial recognition could lead to a horrendous surveillance state, in which no-one can ever be anonymous. There is a massive risk of misinformation, and of deep fake impersonations of famous people, eroding our ability to discern truth and have democratic debate. I am leading the Government’s work on tackling scams – and fraudsters using AI to trick victims is a genuine worry. AI will no doubt lead to economic changes, reducing the demand for certain roles such as clerical workers.
We do need to regulate AI to stop the abuses. AI should never be able to make life or death decisions with no human oversight, from the diagnosis and treatment of patients to the use of autonomous weapons, whereby a machine can decide to kill someone on the battlefield.
Deep fakes and misinformation clearly need controlling. We need to ensure that AI does not perpetuate or amplify human prejudice. It is good to do this on a global basis, since it is a challenge for all humanity – and it is indeed an area where the UK can lead. We need to make sure we understand the threats of AI, and curtail them – but we also need to make the most of the opportunities.