UK removes commercial AI systems from mandatory investment screening
The United Kingdom's investment screening regime, established under the National Security and Investment Act 2021, was designed to catch foreign capital entering sensitive sectors, and AI was included broadly. That broad inclusion has now been narrowed. The UK government announced it is removing commercially available AI systems from the list of technologies that trigger mandatory notification, a change aimed squarely at the friction founders and investors have complained about for two years.
The practical effect is straightforward. A foreign investor putting money into a UK startup that builds AI-powered SaaS tools, consumer applications, or enterprise software will no longer need to file a mandatory notification with the Investment Security Unit before closing. That process, which involves a 30-working-day review period that can extend to 45 days in complex cases, has been cited repeatedly by venture capital firms as a reason to route capital through non-UK entities instead.
What the policy change actually covers
The government drew a distinction between two categories of AI technology. Commercially available AI systems, meaning software products that can be purchased or licensed on the open market, are now out of scope for mandatory screening. Frontier AI systems, a category that includes large-scale foundation models, advanced military applications, and AI systems with direct national security implications, remain subject to mandatory review.
This is not a blanket deregulation of AI investment. The government retained the ability to call in any transaction for voluntary review if it raises national security concerns, regardless of whether it falls under the mandatory notification list. The change reduces compulsory bureaucracy for the majority of AI deals while keeping the government's intervention powers intact for cases that genuinely warrant scrutiny.
Why the original screening rules were causing problems
The NSI Act was passed with broad sector definitions intentionally. At the time, the thinking was that over-inclusion was safer than gaps. AI sat within the 'advanced technologies' and 'data infrastructure' categories, which meant almost any startup working with machine learning models could, in theory, trigger a notification requirement depending on how its investors and lawyers interpreted the rules.
Founders and legal teams responded the way they typically do when regulatory lines are unclear: they filed notifications to be safe. Between January 2022 and March 2024, the Investment Security Unit received over 1,500 mandatory notifications. The government approved the vast majority without conditions, which suggests a significant share of the filings were precautionary rather than genuinely security-relevant. Processing them still consumed time and resources on both sides.
For early-stage startups specifically, the timeline mismatch was painful. A Series A round that needs to close in six weeks does not fit neatly into a 30-to-45-day government review window. Several UK founders told publications including Sifted and TechCrunch that international investors, particularly from the US and Singapore, had cited the NSI Act review process as a factor in deciding to pass on UK deals or restructure cap tables to avoid triggering the notification threshold.
The competitive context behind this decision
The UK is competing directly with France, Germany, and the UAE for AI investment that would otherwise default to the United States. France's La French Tech initiative and the UAE's AI investment programs both offer streamlined regulatory entry for foreign capital. The EU's AI Act, while comprehensive on safety requirements, does not impose investment screening on commercial AI software in the way the UK's NSI Act did.
The UK government's AI Opportunities Action Plan, published in January 2025, set a target of tripling the size of the UK's AI sector by 2030. Removing friction from international investment rounds is one of the more direct levers available to support that target. The policy change was announced alongside a broader package of measures including new compute infrastructure commitments and updated visa pathways for AI researchers.
The timing also follows pressure from the Tony Blair Institute and the Alan Turing Institute, both of which published separate analyses in late 2024 arguing that the UK's investment screening framework was calibrated for a pre-LLM world where AI applications were narrower and easier to categorize. The commercial AI software sold today by UK startups bears little resemblance to the dual-use military AI that the original screening rules were designed to catch.
What counts as frontier AI under the new rules
The government did not publish a precise technical definition of 'frontier AI' alongside the policy announcement, which is already generating questions from lawyers and compliance teams. The broad indicators include compute thresholds, training data scale, and whether the system has capabilities that go beyond what is commercially available on the open market. A startup fine-tuning an open-source model for legal document review almost certainly falls outside the frontier category. A company training a novel foundation model on proprietary data at scale is likely inside it.
The Department for Business and Trade has said further guidance on the frontier AI definition will be published before the changes take effect. That guidance will matter considerably for companies operating in the space between clearly commercial products and clearly sensitive research, including AI safety companies, biological AI applications, and defense-adjacent AI tools.
The updated screening rules are expected to come into force in the second half of 2025, subject to parliamentary approval of the statutory instrument amending the NSI Act's mandatory notification schedule. Until then, the existing rules remain in place, and investments in AI companies should still be assessed under the current framework.