The Artificial Intelligence (AI) Safety Summit was a real opportunity for the UK to show leadership in managing the challenges and risks posed by this exciting new technology. Unfortunately, the summit jettisoned real-world concerns in favour of sci-fi narratives about the impending AI apocalypse.
The two-day summit saw the recently established Frontier AI Taskforce host entrepreneurs, academics, and politicians to discuss “frontier AI” — powerful AI systems that could have disastrous consequences for humanity — and how to regulate it.
Although the summit’s aim to raise the profile of safety concerns among AI researchers was noble, its fixation on frontier AI has only further legitimised the sci-fi debate over whether AI will lead to technological utopia or human annihilation. The latter camp’s focus on existential risk, or “x-risk”, has been pushed by proponents in Silicon Valley and dominated the AI Safety Summit. This was a waste of time.
Of course, the Government should not downplay the risks AI presents to society. As Oliver Dowden, the Deputy Prime Minister, recently said, “there is a very real possibility that the world’s next shock will be a tech shock.” AI systems are already being misused to create and disseminate misinformation and to carry out cyberattacks, and they may even be used to create biological weapons.
A report by the American think tank RAND Corporation suggests that Large Language Models (LLMs), such as ChatGPT, “could assist in the planning and execution of a biological attack,” while the CEO of the American AI startup Anthropic has said that AI could be used to develop bioweapons within the next two years.
However, focusing on frontier AI threats comes at the expense of less glamorous but more immediate socio-economic concerns. AI may trigger rising unemployment and worsen social inequalities. A Goldman Sachs report suggested that AI could put 300 million full-time jobs in Europe and the US at risk, and a report by the Institute for the Future of Work found that the adoption of AI by companies will erode job quality and increase employee stress levels.
These existing harms should be the focus of present AI regulation, since their dangers are concrete and tangible. The hard-to-evaluate AI systems that could pose an existential risk to humanity should be debated by philosophers, not policymakers.
Presently, AI is nowhere near human-level intelligence — let alone world-ending superintelligence — so any talk of ‘god-like’ AI is detached from immediate reality and currently belongs to the world of sci-fi.
Examples of previous technological advancements must guide debates around the existing risks from AI. One such example is the expansion of the automobile industry in the early twentieth century. The adoption of automobiles destroyed the carriage industry.
However, after a sharp decline in carriage-related jobs, the income generated by the new technology cycled through the economy, demand for automobiles skyrocketed, and the automobile industry ultimately drove even greater demand for labour across the economy.
The adoption of AI could cause a similar labour market transformation, creating entirely new jobs that simply do not exist yet. It is already the case that 60 per cent of workers are employed in jobs that did not exist in 1940, so AI is unlikely to render everyone jobless. The summit should have emphasised these real-world employment concerns, rather than reinforcing sci-fi speculation.
Indeed, Prime Minister Sunak’s goal of positioning the UK as the “intellectual [and] geographical home of global AI safety regulation” has been sidelined by the summit’s focus on doomsday scenarios.
The UK has much to offer the international arena regarding AI regulation. Notably, it has avoided handing responsibility for AI governance to a single regulator that might apply a ‘one-size-fits-all’ policy. Rather, the UK is pursuing a decentralised regulatory system in which existing regulatory bodies formulate their own approaches to controlling how AI tools are used in their sectors.
To avoid an overly complex system, these approaches will be guided by the overarching principles of safety, transparency, fairness, accountability, and contestability, as laid out in a recent white paper on AI. The summit could have been an opportunity for the UK to promote this approach and position itself as an international example of AI safety regulation.
Instead, participants will have come away from the summit with the impression that their main concern should be the risk of AI becoming some sort of robot demiurge. This risks AI being regulated as though it were an omnipotent monolith and overlooks the real challenges it poses.
If the summit had chosen to address the less glamorous consequences of AI for society, it would have proven an excellent opportunity to realise the Prime Minister’s AI ambitions, shaping international AI regulation in the UK’s image whilst facilitating debates on the best way to tackle AI’s impact on people’s daily lives. Unfortunately, this was not the case.