Kamran Balayev is an international legal and policy expert, business leader, and former London mayoral candidate.
Something extraordinary is happening in laboratories and server farms across the world, and most people have not yet felt it in their daily lives.
But they will.
Machines are no longer just following instructions. They are learning — reasoning across subjects, improving themselves at a pace that makes human adaptation look slow.
The numbers are striking. On a standard coding benchmark, AI systems went from solving 4.4 per cent of problems in 2023 to 71.7 per cent in a single year. Goldman Sachs estimates that tasks equivalent to 300 million full-time jobs globally could be affected by AI automation. The World Economic Forum projects that more than 90 million jobs will be displaced by 2030, with 41 per cent of employers worldwide already planning workforce reductions in areas where AI can automate tasks. These are not distant forecasts. They are showing up now in hiring freezes and entire job categories quietly disappearing. Big Tech companies reduced new graduate hiring by 25 per cent in 2024 compared to 2023. These are not temporary slowdowns; they are positions that are no longer being created.
The greatest danger, though, is not the technology itself. It is the moral vacuum in which it is being deployed. That should focus Conservative minds. If there is one political tradition built for moments of disorienting change that demand wisdom rather than speed, it is ours.
What we are watching is not a revolution guided by any shared vision of the common good. It is a geopolitical arms race between East and West — each side deploying increasingly powerful systems not because the ethical questions have been answered, but because each fears being second.
Edmund Burke would have recognised this danger immediately. Rapid transformation pursued without regard for accumulated wisdom has a habit of devouring its own promise. The French Revolution was not defeated by a rival ideology. It collapsed under the weight of its own recklessness.
Friedrich Hayek is equally relevant. His central insight was not simply that markets work — it was that no single mind has enough information to plan complex systems from the top down. The same logic applies to AI governance. No government, no Silicon Valley billionaire, no grand committee possesses the wisdom to manage this transition alone. What is needed are robust institutions, built over time, capable of absorbing complexity without concentrating power dangerously in one place.
Britain has spent several centuries building exactly those institutions.
No nation perfectly fits the role of trusted, neutral guide through this transition. But Britain comes closer than most — closer, perhaps, than it gives itself credit for. Not Brussels, with its instinct for regulatory caution, but London, a place with a distinctive talent: taking world-changing ideas and making them commercially workable, institutionally trustworthy, and useful to ordinary people.
The evidence is concrete. According to Finch Capital’s State of European FinTech report, London has emerged as the world’s leading fintech hub, attracting more than €30 billion in investment between 2022 and 2025 — outpacing both San Francisco and New York over the same period. Companies like Monzo, Revolut, and Starling Bank built products for global consumers from day one — succeeding not despite British institutional culture, but because of it.
That combination of commercial boldness and institutional trust is precisely what AI governance needs. Britain has already demonstrated it. The AI Safety Summit brought the United States and China together at a pivotal moment to confront frontier AI risks — a diplomatic achievement few other nations had managed. The UK AI Safety Institute has since emerged as one of the world’s leading centres for governmental AI research and testing. That is a foundation to build on, not a footnote to quietly manage.
In 1986, Margaret Thatcher’s Big Bang deregulation transformed the City of London from a comfortable insular club into the global financial capital it remains today. Its genius lay not just in liberalising markets, but in positioning Britain as the indispensable hub of a new global system at the precise moment that system was being assembled.
The strategic logic today is identical — though the responsibility is greater. The rules governing AI are being written now, across more than 800 policy initiatives from at least 60 countries. The nations that shape those frameworks will not merely benefit from the AI age. They will define it. Britain must not be a rule-taker this time. It has earned the standing to be a rule-maker.
Conservatives have always understood that true influence is not the ability to force, but the authority to be trusted. Britain built legal frameworks that independent nations voluntarily adopted long after empire ended. At its best, it has been the country that stepped forward not purely for itself, but for the wider system.
That is precisely what this moment asks. Not a local player managing its own corner, but a genuinely global one — willing to take responsibility for outcomes that will shape lives not yet lived.
Britain’s moment is here. The only question is whether we have the ambition to claim it.