Garvan Walshe is a former National and International Security Policy Adviser to the Conservative Party and founder of the AI startup Article7.
An enormous corporate struggle has broken out over the future of OpenAI, the company behind ChatGPT. It’s a fight between two groups of Silicon Valley visionaries who believe that AI will transform our societies: one apparently terrified of the consequences of developing super-intelligent machines in the future; the other convinced of the limitless potential to make money with the technology available today.
The stakes were raised by the enormous popularity of the question-answering bot, ChatGPT. Its apparently magical properties – stringing together sentences in multiple languages, hoovering up information from its training data set, and generating computer code – have overshadowed its limited reliability and major potential for security flaws.
It has been accompanied by other AI models that generate images from short text prompts. This comes on top of revolutions in machine translation, speech recognition, computer vision, and speech and video generation.
Alongside the excitement has come fear: some leading AI industry figures (many of whose companies, coincidentally, had already built models of their own) called for a moratorium on their competitors building new ones, and some have even likened the technology to nuclear weapons.
The real danger at the moment, however, is not a superintelligent killer robot emerging from a secret production facility in China to impose tyranny on human slaves, but hype leading to regulatory panic.
The current AI boom comes from breakthroughs in our ability to train ‘neural networks’ — statistical models inspired by the brain — that use the same mathematics as graphics in computer games, and which have been able to benefit from specialised chips designed for them.
The idea is to feed the model training data, from which it can ‘learn’ patterns. If there are fundamental patterns in the data, as there are in pictures of the world or large bodies of text, these models will pick them up. However, they suffer from shortcomings: as well as well-publicised stories of them becoming offensive or yielding bomb-making instructions, they can generalise within their training data but appear to struggle beyond it; they don’t have any notion of truth (as an unfortunate lawyer found when he used ChatGPT for court filings, only to have it make up plausible-sounding cases); and they perhaps get bamboozled by negation.
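To make that limitation concrete, here is a small, purely illustrative sketch (mine, not anything resembling a production system): a toy neural network, written in a few dozen lines of Python with nothing but numpy, is shown a simple pattern and learns to reproduce it. Within the range of its training data its answers should be reasonable; ask it about a point outside that range and the answer bears little relation to reality.

```python
# Illustrative sketch only: a tiny neural network trained on samples of a
# sine wave between 0 and 2*pi. The architecture and hyperparameters are
# arbitrary choices made for the demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the only "pattern" the model is allowed to see.
x_train = rng.uniform(0.0, 2.0 * np.pi, size=(256, 1))
y_train = np.sin(x_train)

# One hidden layer of 32 tanh units.
W1 = rng.normal(0.0, 0.5, size=(1, 32)); b1 = np.zeros((1, 32))
W2 = rng.normal(0.0, 0.5, size=(32, 1)); b2 = np.zeros((1, 1))

lr = 0.03
for step in range(40000):
    # Forward pass: turn inputs into predictions.
    h = np.tanh(x_train @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y_train

    # Backward pass: nudge the weights to reduce the mean squared error.
    g_pred = 2.0 * err / len(x_train)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0, keepdims=True)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
    g_W1 = x_train.T @ g_h
    g_b1 = g_h.sum(axis=0, keepdims=True)

    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

def predict(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Inside the training range the pattern should be captured reasonably well;
# outside it the model has no basis for a sensible answer.
print("x = pi/2 (in range), true 1.0, model:", predict(np.array([[np.pi / 2]])).item())
print("x = 3*pi (out of range), true 0.0, model:", predict(np.array([[3 * np.pi]])).item())
```

Nothing in those lines knows what a sine wave is; the model has merely soaked up a pattern, which is exactly why it is lost the moment it is asked to step outside it.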
Most crucially, they are opaque. We’ve no way of peeking inside and working out why they make the decisions they do. My AI startup is at the early stages of experimenting with some models that we could see into, but even if these tell us which factors go into a model’s decision in a particular case, they won’t tell us whether those factors actually apply to the individual in question.
These problems appear technical but are in reality moral. Our society is based on holding individuals responsible for their actions. This works in politics, where we hold individual ministers to account (not, for example, just all ministers with grey hair); in criminal justice, where courts that judge individual guilt and innocence are what distinguish free societies from totalitarian dictatorships; and even in art. Stock characters are dull; we need individual specifics and experience to liven them up.
Rather than artificial intelligence, we have created artificial intuition. A guide perhaps and, in the right circumstances, a hugely beneficial assistant, but no substitute for reason.
The danger, at the moment, comes from empowering systems to make bad decisions based on hunches derived from their training datasets, rather than from extremely intelligent machines able to outwit us. This problem does not, however, need regulation, beyond the attribution of liability – just the application of existing law.
If you use a large language model to defame someone (one wit has called them “large libel models”), you still risk a libel suit. If you make a bomb after tricking ChatGPT into giving you the instructions, you will violate counterterrorism law just as much as if you had followed the instructions in The Anarchist Cookbook. If you make a driverless car that ploughs into a pedestrian, or use a facial recognition system that routinely leads to miscarriages of justice, expect to be met with civil and criminal penalties. If you misrepresent a product as accurately identifying criminals using AI when in fact it does not, fraud and trade descriptions law will come for you.
The controversy over “harmful but not illegal” social media content (which led to the Online Safety Act) arose because social media firms have taken advantage of laws that prevent them from being classed as publishers. AI systems, and their users (other than social media companies), enjoy no such exemption.
While the AI systems we currently have pose risks that can be addressed by the ordinary legal instruments we have now, rushing to regulate creates two major harms of its own: stifling competition against the incumbent AI firms (including OpenAI), and snuffing out innovation in a hugely important sector that’s just starting to get going.
The European Union and United States are already rushing out onerous new regulations, as though they are determined to copy the Ottoman Empire’s suppression of the printing press. Rishi Sunak’s government is right not to follow suit. This panic by the Biden Administration and the EU has given the UK an opportunity to establish itself as the free world’s AI leader.
Full “artificial general intelligence” will require new forms of law (how should it be controlled?) and even politics (should such systems have political rights, as well as obligations?). This will need careful thought and, if possible, international coordination. Current systems, however, do not. Through their overreaction, the US and EU may have handed Britain something quite wondrous: a genuine economic benefit of Brexit.