OpenAI's data center strategy shift puts its IPO finances under the microscope

    OpenAI has changed how it approaches data center development, and Wall Street analysts are not entirely comfortable with what they are seeing. With a potential IPO penciled in for later in 2026, the company's capital spending decisions are now subject to the kind of scrutiny that comes with preparing to be a public company. The concern is not whether OpenAI can grow (it clearly can) but whether the money going into infrastructure is being deployed in a way that will hold up when investors start applying standard financial metrics.

    What changed in the data center approach

    OpenAI had been operating under the Stargate framework, a joint venture with SoftBank, Oracle, and others announced in January 2025, with a stated commitment of up to $500 billion in AI infrastructure investment over four years. The early phase involved $100 billion in near-term spending. OpenAI has since adjusted how it participates in that structure, shifting some data center development away from direct ownership toward arrangements where it takes capacity from partners rather than building and controlling facilities itself.

    The shift has practical logic to it. Building and owning data centers requires enormous upfront capital and long depreciation schedules. Taking capacity from partners keeps those assets off OpenAI's balance sheet. But it also means OpenAI pays ongoing costs for compute it does not own, which affects gross margins in a way that investors will notice when they read a prospectus.

    OpenAI's data center strategy is drawing scrutiny ahead of its planned IPO

    The revenue numbers are strong, but that is only part of the story

    OpenAI has surpassed $25 billion in annualized revenue, a figure that would place it among the fastest-growing enterprise software companies ever. Anthropic is approaching $19 billion in annualized revenue, a number that would have seemed implausible for an AI lab founded in 2021 even 18 months ago. Both companies are growing fast. The question for OpenAI's IPO is not top-line growth but what the path to profitability looks like when the company is spending billions on compute, research salaries, and infrastructure simultaneously.

    OpenAI's training runs for frontier models consume enormous amounts of GPU compute over periods of months. Inference, the process of running the model to answer user queries, is a separate ongoing cost that scales with usage. As ChatGPT's user base grows, inference costs grow with it. The company has been working on more efficient architectures to reduce the per-query cost, but that work takes time, and the IPO timeline may arrive before those efficiency gains are fully reflected in the financials.
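    The distinction between training and inference cost structures can be sketched as a toy model: training is roughly a fixed cost per model, while inference scales with query volume. All figures below are hypothetical placeholders for illustration, not OpenAI's actual numbers.

    ```python
    # Toy unit-economics sketch: fixed training cost vs. usage-scaled inference.
    # Every number here is a made-up placeholder, not a real OpenAI figure.

    def annual_compute_cost(training_cost, queries_per_year, cost_per_query):
        """Total compute cost: one-time training spend plus usage-driven inference."""
        return training_cost + queries_per_year * cost_per_query

    # Hypothetical inputs: a $1B training run, 100B queries/year at $0.01 each.
    fixed = 1e9          # training cost (fixed per model)
    usage = 100e9        # queries served per year
    per_query = 0.01     # inference cost per query

    total = annual_compute_cost(fixed, usage, per_query)          # $2.0B
    inference_share = (usage * per_query) / total                 # 50% of total

    # Doubling usage doubles the inference bill while training cost is
    # unchanged, so inference increasingly dominates total compute spend.
    total_2x = annual_compute_cost(fixed, 2 * usage, per_query)   # $3.0B
    ```

    The point of the sketch is the asymmetry: lowering `per_query` through more efficient architectures is the main lever on gross margins once usage is large, which is why the timing of those efficiency gains relative to the IPO matters.
    
    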

    Why Wall Street is paying attention to capital efficiency now

    Public market investors apply different standards than the venture capital and strategic investors who have funded OpenAI through private rounds. The company's last private valuation was $157 billion, reached during a funding round in late 2024. For a public offering at that valuation or higher to succeed, analysts need to see a credible model where revenue growth outpaces infrastructure spending over a defined time horizon. A data center strategy that adds ongoing capacity costs without clear unit economics is harder to underwrite at scale.

    The concern is not unique to OpenAI. Microsoft, Google, and Amazon have all faced questions about the return on their AI infrastructure investments, and each has had to explain to shareholders how and when those costs convert into margin-accretive revenue. OpenAI does not have the luxury of cross-subsidizing infrastructure through unrelated business segments the way those companies do. Its compute costs map almost directly onto its product costs, which makes the relationship between spending and profitability more visible and harder to obscure.

    What the IPO timeline looks like

    OpenAI completed its conversion from a nonprofit-controlled structure to a public benefit corporation in late 2025, a necessary step before any public offering. The company has not filed an S-1 registration statement with the SEC, which is typically done six to twelve months before a listing. If the IPO is genuinely targeting late 2026, the filing would be expected sometime in the first half of the year. Until that document is public, the financial details that Wall Street wants to scrutinize remain largely controlled by OpenAI's own disclosures.



    Frequently Asked Questions

    Q: What is the Stargate project and how does it relate to OpenAI's data center plans?

    Stargate is a joint venture announced in January 2025 involving OpenAI, SoftBank, Oracle, and others, with a stated goal of investing up to $500 billion in AI infrastructure over four years. OpenAI has since adjusted its direct participation in the project, shifting toward taking capacity from partners rather than owning facilities outright.

    Q: How much revenue is OpenAI generating, and how does that compare to Anthropic?

    OpenAI has surpassed $25 billion in annualized revenue, while Anthropic is approaching $19 billion. Both figures reflect rapid growth in enterprise and consumer AI products over the past year.

    Q: Why does OpenAI's nonprofit-to-PBC conversion matter for the IPO?

    OpenAI was originally controlled by a nonprofit board, which placed restrictions on profit distribution and investor returns. Converting to a public benefit corporation removes those structural barriers and makes a traditional public stock offering legally viable.

    Q: When might OpenAI file its IPO paperwork with the SEC?

    Companies typically file an S-1 registration statement six to twelve months before going public. If OpenAI is targeting a late 2026 IPO, a filing in the first half of 2026 would be consistent with that timeline, though no filing has been made yet.

    Q: Why are inference costs a concern for OpenAI's profitability model?

    Inference is the ongoing computational cost of responding to user queries. As ChatGPT's user base scales, inference costs scale with it. Unlike training runs, which are one-time expenses per model, inference costs are continuous and directly tied to revenue, making gross margin improvement dependent on efficiency gains in model architecture.
