Lord Willetts is President of the Resolution Foundation. He is a former Minister for Universities and Science, and his book A University Education is published by Oxford University Press.
Critics sometimes say that the issues of personal interest to Rishi Sunak are too esoteric and do not add up to a political strategy. But his focus on AI is absolutely right. There are not many other world leaders who could have put in the effort to deliver last week’s AI Summit.
The briefing paper for it was of high quality. It was an excellent account of where AI has got to. Think of the latest large language models as the predictive text you have on your smartphone magnified many times over.
The Government paper also sets out a daunting list of possible risks from generative AI. It is hard to stop these systems complying with harmful requests. They are prone to “hallucination”, giving plausible but wrong answers.
It is hard to specify all the constraints which we humans live by. Imagine asking a robot to make its way from Piccadilly Circus to the Olympic Park. It sounds straightforward until you try to work out all the human rules and conventions needed to stop it walking over a baby. AI gets round the problem of formulating all the rules by instead training the robot on the data from millions of such trips. But that is not 100 per cent reliable.
There is a real risk of what the paper calls “degradation of the information environment”. Deep fakes will become ever more plausible. One survey suggests that already 30 per cent of people who go online do not consider the truthfulness of the information they access. The paper warns that “the attention economy means on the supply side, trade-offs are made between the truth orientation of information and attention-grabbing strategies.” This makes the practice of politics in a tolerant liberal society ever harder. It may also enhance the value of trusted data sources and institutions which make an effort to check their sources – and, yes, that makes the BBC even more of a national asset.
There is a race on to develop new systems, and performance is being enhanced at an extraordinary rate. However, the paper points out that “safety testing and evaluation of frontier AI is ad-hoc, with no established standards, scientific grounding or engineering best practices.” The Summit was a crucial first step to agreeing some basic standards. As a minimum, we need to know what is going on.
Meanwhile, there is a lot of regulatory activity elsewhere. The EU has prepared its AI Act. The US Government produced an Executive Order last week in the run-up to the Summit on “the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” It is a very detailed attempt to regulate the sector. It has been welcomed by Microsoft, but also challenged by smaller technology firms as a “red tape wish list”. It is possible that it will help the incumbents and provide a further barrier to new entrants.
One omission which surprised me is that the Summit does not seem to have covered the issue of all the data which the Large Language Model (LLM) companies are scraping so that they can create their models. These large American companies are taking every sentence, image and song that has been created, without any consent and with no regard to copyright and IP.
Indeed, they are deliberately vague about exactly what their sources of data are as they try to fend off the inevitable court cases. Does Taylor Swift have any rights over songs created in her style by LLMs? As we have such an extraordinary cultural heritage and such a dynamic creative sector here in the UK, we are particularly vulnerable to exploitation.
The Summit ended with Elon Musk envisaging a world where nobody had to work. Meanwhile here in the UK, with the Autumn Statement looming, our problem is the opposite of Musk’s. We have a low-investment, low-productivity economy. It takes us more hours of work to reach a given level of output than our major competitors. And those hours of work are increasingly strenuous and demanding – whether you are a blue-collar or white-collar worker. It is good news that we have high levels of employment, but the bad news is that we haven’t invested enough in either kit or skills to enable us to work with maximum effectiveness. Musk’s world is indeed a long way off.