UK removes commercial AI systems from mandatory investment screening

    The United Kingdom's investment screening regime, established under the National Security and Investment Act 2021, was designed to catch foreign capital entering sensitive sectors, and AI was included in deliberately broad terms. That breadth has now been narrowed. The UK government announced it is removing commercially available AI systems from the list of technologies that trigger mandatory notification, a change that takes direct aim at the friction founders and investors have been complaining about for two years.

    The practical effect is straightforward. A foreign investor putting money into a UK startup that builds AI-powered SaaS tools, consumer applications, or enterprise software will no longer need to file a mandatory notification with the Investment Security Unit before closing. That process, which involves a 30-working-day review period that can extend to 45 days in complex cases, has been cited repeatedly by venture capital firms as a reason to route capital through non-UK entities instead.

    What the policy change actually covers

    The government drew a distinction between two categories of AI technology. Commercially available AI systems, meaning software products that can be purchased or licensed on the open market, are now out of scope for mandatory screening. Frontier AI systems, a category that includes large-scale foundation models, advanced military applications, and AI systems with direct national security implications, remain subject to mandatory review.

    This is not a blanket deregulation of AI investment. The government retained the ability to call in any transaction for voluntary review if it raises national security concerns, regardless of whether it falls under the mandatory notification list. The change reduces compulsory bureaucracy for the majority of AI deals while keeping the government's intervention powers intact for cases that genuinely warrant scrutiny.

    UK government policy on AI investment screening

    Why the original screening rules were causing problems

    The NSI Act was passed with broad sector definitions intentionally. At the time, the thinking was that over-inclusion was safer than gaps. AI sat within the 'advanced technologies' and 'data infrastructure' categories, which meant almost any startup working with machine learning models could, in theory, trigger a notification requirement depending on how its investors and lawyers interpreted the rules.

    Founders and legal teams responded the way they typically do when regulatory lines are unclear: they filed notifications to be safe. Between January 2022 and March 2024, the Investment Security Unit received over 1,500 mandatory notifications. The government approved the vast majority without conditions, which suggested a significant share of the filings were precautionary rather than genuinely security-relevant. Processing them still consumed time and resources on both sides.

    For early-stage startups specifically, the timeline mismatch was painful. A Series A round that needs to close in six weeks does not fit neatly into a 30-to-45-day government review window. Several UK founders told publications including Sifted and TechCrunch that international investors, particularly from the US and Singapore, had cited the NSI Act review process as a factor in deciding to pass on UK deals or restructure cap tables to avoid triggering the notification threshold.

    The competitive context behind this decision

    The UK is competing directly with France, Germany, and the UAE for AI investment that would otherwise default to the United States. France's La French Tech initiative and the UAE's AI investment programs both offer streamlined regulatory entry for foreign capital. The EU's AI Act, while comprehensive on safety requirements, does not impose investment screening on commercial AI software in the way the UK's NSI Act did.

    The UK government's AI Opportunities Action Plan, published in January 2025, set a target of tripling the size of the UK's AI sector by 2030. Removing friction from international investment rounds is one of the more direct levers available to support that target. The policy change was announced alongside a broader package of measures including new compute infrastructure commitments and updated visa pathways for AI researchers.

    The timing also follows pressure from the Tony Blair Institute and the Alan Turing Institute, both of which published separate analyses in late 2024 arguing that the UK's investment screening framework was calibrated for a pre-LLM world where AI applications were narrower and easier to categorize. The commercial AI software sold today by UK startups bears little resemblance to the dual-use military AI that the original screening rules were designed to catch.

    What counts as frontier AI under the new rules

    The government did not publish a precise technical definition of 'frontier AI' alongside the policy announcement, which is already generating questions from lawyers and compliance teams. The broad indicators include compute thresholds, training data scale, and whether the system has capabilities that go beyond what is commercially available on the open market. A startup fine-tuning an open-source model for legal document review almost certainly falls outside the frontier category. A company training a novel foundation model on proprietary data at scale is likely inside it.

    The Department for Business and Trade has said further guidance on the frontier AI definition will be published before the changes take effect. That guidance will matter considerably for companies operating in the space between clearly commercial products and clearly sensitive research, including AI safety companies, biological AI applications, and defense-adjacent AI tools.

    The updated screening rules are expected to come into force in the second half of 2025, subject to parliamentary approval of the statutory instrument amending the NSI Act's mandatory notification schedule. Until then, the existing rules remain in place, and investments in AI companies should still be assessed under the current framework.


    Frequently Asked Questions

    Q: Does this mean all AI investments in the UK are now exempt from national security review?

    No. Only commercially available AI systems are removed from the mandatory notification list. Frontier AI systems and other sensitive technology categories still require mandatory review, and the government retains the right to call in any deal voluntarily if it raises security concerns.

    Q: When will the new AI investment screening rules take effect in the UK?

    The updated rules are expected to come into force in the second half of 2025, pending parliamentary approval of the statutory instrument amending the National Security and Investment Act. The existing framework applies until then.

    Q: How does the UK define 'frontier AI' under the new policy?

    The government has not published a precise technical definition yet. Broad indicators include compute scale, training data volume, and whether the system's capabilities go beyond what is available commercially. Detailed guidance is expected before the changes take effect.

    Q: What was the typical timeline for an AI investment review under the old rules?

    The standard mandatory review period under the NSI Act is 30 working days, which can extend to 45 working days in more complex cases. This timeline was frequently cited as incompatible with the pace of venture capital deal-making.

    Q: Which other countries are competing with the UK for AI investment, and how do their rules compare?

    France, Germany, and the UAE are active competitors for international AI capital. The EU's AI Act focuses on safety obligations rather than investment screening for commercial software, and the UAE has no equivalent mandatory review process for most AI investments.
