£2.7m government fund to streamline AI regulation

Technology Secretary Peter Kyle is due to announce a £2.7 million government fund to support UK regulators in piloting AI systems, aiming to keep Britain competitive in the global race for tech leadership.

Kyle stressed that the move is not about lowering safety standards but applying “smart regulation” to accelerate approvals and help British innovators compete globally. The funding will support agencies such as Ofgem, the Civil Aviation Authority, and the Office for Nuclear Regulation in projects ranging from AI-assisted accident report analysis to unified regulatory guidance platforms and nuclear waste management pilots.

Key proposals include the introduction of AI-specific regulatory sandboxes to support experimentation in controlled environments, improved access to compute infrastructure, and an expanded remit for the AI Safety Institute. These are backed by government commitments to industrial strategy, technical infrastructure, and collaborative oversight.

The announcement comes amid growing industry pressure for consistent and innovation-friendly regulation, with many business leaders warning that fragmented oversight risks stifling competitiveness. While the direction has been welcomed, some voices caution against overselling AI’s short-term impact or compromising independent scrutiny in the rush to modernise.

“Peter Kyle’s call for AI reform is a welcome step towards making AI regulation more responsive to business needs. Too often, innovation is slowed not by lack of ambition, but by unclear governance and fragmented oversight. Creating space for innovation through AI-specific regulatory sandboxes and improving access to technical infrastructure would be a meaningful shift, but to make these ambitions real, we also need to ensure the data foundations are in place to build AI systems that are trustworthy, explainable, and scalable,” said Stuart Harvey, CEO of Datactics. “Any regulatory evolution must go hand in hand with investment in data quality and data governance. Without reliable data and clear lineage, even the most well-intentioned regulation can fall short. It’s encouraging to see a growing political appetite for a collaborative approach that balances innovation with accountability.”

“AI offers huge promise to improve detection, speed up response times, and strengthen defences, but without robust strategies for cyber resilience and real-time visibility, organisations risk sleepwalking into deeper vulnerabilities. Our research shows that over a third (34%) of CISOs have already banned certain AI tools like DeepSeek entirely, driven by fears of privacy breaches and loss of control,” added Andy Ward, SVP International of Absolute Security. “As attackers leverage AI to reduce the gap between vulnerability and exploitation, our defences must evolve with equal urgency. Now is the time for security leaders to ensure their people, processes, and technologies are aligned, or risk being left dangerously exposed.”

As AI transitions from policy aspiration to real-world implementation, experts have also highlighted the need to build workforce capacity and organisational readiness. From inclusive training pathways to AI deployment frameworks, the ability of regulators and businesses to govern AI responsibly will depend not just on funding and ambition—but on people. Delivering safe, scalable AI requires a pipeline of trained professionals and systems designed to support ethical, transparent adoption.

With the UK continuing to position itself as a global leader in responsible AI, attention is now turning to whether its regulatory and workforce infrastructure can keep pace with industry demand.
