George Holt is a local councillor and campaigner on neurodiversity.
Artificial Intelligence (AI) is no longer the future; it’s the present. As this technology advances at what some might think is an alarming speed, the critical question arises: how do we balance the innovation AI brings with the necessity of regulation?
With Rishi Sunak set to address a major summit on the subject next week, let’s consider the distinct approaches of AI regulation taken by the European Union (EU) and the United Kingdom (UK).
The EU has proposed the AI Act. To its credit, it was the world’s first serious attempt at a comprehensive legal roadmap for AI, and it takes the approach that before you can even consider regulating AI, you must define it.
It does this by classifying AI systems into four tiers: unacceptable risk, high risk, limited risk, and minimal risk. The high-risk tier, covering AI systems in critical domains like healthcare, education, and infrastructure, is subjected to stringent requirements and routine compliance assessments. On the flip side, AI systems with minimal risk (think spam filters and AI-powered video games) enjoy a mostly hands-off approach, with the Act simply focusing on transparency.
But it’s worth noting that this legislation was first proposed in April 2021, and there’s no guarantee it will be law by the close of 2023, given ongoing disagreements between member states. Though the EU’s early action was commendable, the pace at which it has moved since is embarrassing.
Now, let’s shift to Britain. The AI White Paper lays down five core principles for AI governance: safety, transparency, accountability, fairness, and contestability. While the EU’s path may appear more prescriptive, the UK opts for a more flexible strategy, setting the groundwork and leaving room for potential future AI regulations when parliamentary time allows.
The Framework aims to be a tool in “getting regulation right”, so that businesses feel confident in investing, and retaining their investment, in the UK. Essentially, this approach gives AI companies the free rein needed to innovate, and the Government will mould regulations around this as time goes on – generally taking the view that it is too early to tell what regulation is needed.
This approach doesn’t seem to be a solely Conservative view either; Labour have been light on detail on their AI policy, only really making noises that their regulation will be framework-based and “stronger” than what has currently been proposed.
This complements the Government’s attempts to establish the UK as a technology leader. (This approach is largely two-pronged: AI is deemed one key emerging technology, and fintech the other, leveraging our mature financial services sector.)
To bolster this, the UK will play host to the international AI Safety Summit in November, which will see tech bosses from Silicon Valley, the US Vice President, and multiple international leaders coming to the spiritually appropriate site of Bletchley Park to discuss the future of AI regulation.
In the real world, these different regulatory styles impact AI companies in significant ways. Take DeepMind, a UK-based player that has been venturing into the healthcare sector. Their AlphaFold program, which predicts protein structures, has groundbreaking implications for drug discovery and disease treatment. Their work in this space has won them one of the most prestigious prizes in medical science, the Lasker Award.
DeepMind has benefited from the UK’s more “hands-off” approach to AI, whereas over on the continent, companies could face a single fine of up to €40 million. The EU’s approach has not been met with open arms; a joint letter from 150 executives at European companies said that:
“In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing”.
Truth be told, when it comes to the EU’s AI Act, they really do need to get their act together. The bill was first proposed in 2021; a lot has changed since, they’re struggling to keep up, and the industry can see it.
The UK stands to benefit from this. Whilst its white paper may be seen as loosey-goosey by some, it sets out enough basic ground rules to allow innovators to get on with it, within reasonable limits. The EU’s approach, meanwhile, has been seen by some as overly interventionist, telling industry titans that they can only play with their new AI toys when the European Commission says they can.
At the heart of this global AI shift lies a delicate balance. Regulators globally need to hit that sweet spot between nurturing innovation and enforcing ethical and safety standards. The EU’s stringent approach does aim to shield consumers and society from potential AI risks, but puts the brakes on innovation.
Meanwhile, the UK’s more flexible approach may spark short-term innovation while keeping the door open for more well-defined regulations later down the line – but for some, this flexibility is a recipe for uncertainty.
In the broader AI arena, the importance of learning from one another’s experiences cannot be overstated. It’s crucial to construct a regulatory framework that links up global practices, adapting to the ever-evolving landscape of AI tech. As innovation continues to march forward, international collaboration will emerge as the keystone to success.
We know from the past that these tech giants will not wait around for regulators to catch up. When the industry moves, it moves fast and without regard for nation states. The onus sits on the governments of the world to understand this industry: what they can leverage, and what they need to restrict.