Anthony Browne MP is Chair of the 1922 Treasury Committee, a former member of the Treasury Select Committee, and on the advisory board of the Institute for Fiscal Studies.
When someone suggests a new law to me, I often ask what the problem is that it aims to solve, and what the scale of that problem is. It is generally best to avoid legislating to stop something that isn’t happening.
Normally, Governments regulate to clean up after a mess has occurred. But artificial intelligence is different. Because of its rapid momentum and scale of impact, it really does make sense to think beforehand about what to do. You need to act when you can see the runaway train coming towards you, not when it hits you.
I attended the Prime Minister’s speech last week when he announced he was setting up the world’s first AI Safety Institute. This week he is hosting the world’s first global AI Summit at Bletchley Park (as the Prime Minister’s Anti-Fraud Champion, I am chairing an eve-of-summit session on fraud and AI).
Earlier this year, the Government published a White Paper on the principles of regulating AI, and the Science and Technology Select Committee recently launched a report on the topic. The EU is also negotiating legislation on artificial intelligence. When I was elected in 2019, I suggested to various people in Government that we should think about what regulation was needed for AI, and was met with bemused looks. Now everyone is at it.
It is good that the Government is ahead of this curve. And the UK is in a unique position to provide global leadership on this global problem. We are not just Europe’s leader in AI, but the third main AI power after the US and China. However, the US is conflicted by being home to the world’s leading AI firms, and China is run by the Communist Party. The UK has an internationally respected track record of smart technology regulation which supports innovation.
There are clear massive opportunities from AI. In my constituency, the life science capital of Europe, researchers are feverishly using AI to improve diagnosis, and develop new treatments from drugs to cancer therapies. In education, AI promises massive improvements in tailored teaching appropriate for each individual child. AI will bring huge improvements in productivity, accelerating economic growth.
But there are also real threats from AI. The most common concern is over jobs. Like earlier industrial revolutions, AI will both destroy jobs and create them, but there is no reason to think it will lead to fewer jobs overall. But those whose jobs are affected will need support to retrain for the new economy. The best way that the Government can prepare people for this is to ensure they get the best possible education.
There are also frequent warnings, especially from industry leaders, that a computer superintelligence could make humans extinct, but personally I am not that worried about this. We have not remotely developed artificial general intelligence that can think like a person (current AI is really just impressive pattern recognition). We are a very long way from this, and when we get close to it there will be many ways to stop it if we want to. However, it is absolutely right that Governments start thinking about such a high-impact, low-probability scenario.
There are other risks that are more imminent and real. Artificial intelligence could help “bad actors” (as the Americans like to call them, and they are not talking about Kim Kardashian) develop their own chemical and biological weapons. We have already seen bad actors try to use social media to manipulate elections – with AI they will get much more sophisticated, and could undermine democracy. Deep fakes – manufactured but convincing videos and voice recordings – could have multiple malign uses.
I recently attended a round table with the Governor of the Bank of England about the impact of AI on fraud. The epidemic of fraud is driven by well-resourced and highly entrepreneurial organised criminal gangs, who would leap at the opportunity to use AI to steal even more money. They could use deep fakes to scam people – there has already been a case of a deep fake video of Martin Lewis urging people to put their money into a fraudulent investment. They could use AI to automate scams such as romance fraud rather than using people to do it, enabling them to operate at far greater scale. They could also use AI, and big data, to better target victims. AI is very useful in tackling fraud, and indeed banks, tech firms, and mobile phone operators already use it extensively to do this. But it also seems inevitable that AI will lead to a whole new scale and sophistication of fraud.
In addition, there are the more traditional concerns of algorithmic bias, where AI perpetuates discrimination, and loss of privacy. Should police be able to use facial recognition on all their cameras to detect where we all are, all the time?
There are many more concerns about the possible bad effects of AI, but with most of these we are on a voyage of discovery about what the possible impact is. That is why I fully support the creation of the AI Safety Institute, to monitor and think through the issues.
What is less clear is what the solutions are, and often it is too early to tell. We want to make sure we regulate in a way that stops bad things but does not stifle innovation and the good things. With fast-moving technology, we need to remain flexible. This is why the EU will almost certainly get it seriously wrong – as well as having instinctive over-prescriptiveness, the EU is the planet’s least flexible legislator.
The Government has set out some principles on AI governance and regulation, but said it does not have plans for immediate legislation, other than possibly requiring regulators (such as Ofcom and the Financial Conduct Authority) to use their powers to address the impacts of AI. In many cases, existing laws may be adequate. The rise of unauthorised deep fakes for malign purposes does need special consideration: it does seem wrong to allow people to create deep fakes of a real person that cause damage to them or others, or steal commercial opportunities from famous actors.
There have been demands to stop the development of advanced AI, but that seems impossible: there is a technological race on to develop general AI, and if some countries try to stop it, research will move elsewhere. But we need to think how we can stop bad actors getting access to the most powerful AI. There is a tension between making the code of AI “open source” so it is open to scrutiny, and keeping it secret so that criminals or rogue states don’t get easy access to it.
The frontier AI companies (at the cutting edge of developing AI, such as DeepMind and OpenAI) are now generally not letting their source code be open access, but they are giving the Government access to it on a confidential basis. The Government last week published a report on the practices and principles that the frontier AI companies have agreed to abide by.
We are at the beginning of probably the most profound technological revolution in human history. There are huge unknowns, but we need to do what we can to prepare. Only one thing is certain: today’s children will inherit a very different world.