Meta's Avocado AI model delayed to May as it falls short of Google Gemini and OpenAI

    Meta has pushed its next-generation AI model, internally codenamed Avocado, back to May after internal benchmarks showed it performing below the latest versions of Google Gemini 3.0 and OpenAI's current models. For a company that spent $14.3 billion assembling what it described as a world-class AI research team, that gap is uncomfortable. The delay is not just a scheduling issue. It signals something more specific about where Meta actually stands in the competition for frontier model performance.

    The team behind Avocado was put together under Alexandr Wang, the Scale AI founder whom Meta recruited at significant cost. The expectation was that the combination of Wang's data infrastructure expertise and Meta's compute resources would produce a model capable of competing directly with OpenAI's GPT-4 class successors and Google's Gemini lineup. That has not happened yet, at least not on the timeline Meta originally planned.

    Meta's AI development faces a competitive gap against Google and OpenAI

    What the benchmarks actually showed

    Meta's internal evaluations found Avocado trailing on the metrics that matter most to enterprise and consumer AI buyers: reasoning accuracy, instruction following, and coding performance. Google Gemini 3.0 and the latest OpenAI models have raised the bar considerably since early 2025, and Avocado as currently built does not clear it. Delaying rather than shipping a weaker model is the right call commercially, but it creates a window where Meta has no competitive frontier model to offer.

    Meta's open-source Llama models have given the company real credibility with developers, and Llama 3 saw wide adoption across research and enterprise deployments through 2025. Avocado is meant to be something different: a closed, performance-first model positioned to compete directly with GPT and Gemini in head-to-head capability comparisons. Llama's openness is a separate strategy. Avocado is about raw benchmark performance, and right now it is not there.

    The Gemini licensing discussion

    The detail that will draw the most attention is the reported internal discussion about temporarily licensing Google's Gemini model while Avocado undergoes further development. That conversation, if it happened, is a striking admission: it would mean Meta paying Google to use the very model Avocado was built to beat. Whether those talks were serious or merely a contingency discussion at the margins, the fact that they occurred at all says a lot about how wide the gap became.

    A licensing arrangement would also create a strange competitive dynamic. Meta's AI ambitions are tied closely to its consumer products: Meta AI across WhatsApp, Instagram, Messenger, and Ray-Ban glasses. Running those products on a Google model, even temporarily, would hand Google both revenue and usage data from one of its biggest rivals. It is hard to imagine Mark Zuckerberg signing off on that unless the alternative was shipping nothing at all.

    What a May release actually means

    A May release date gives the Avocado team roughly six to eight weeks to close the performance gap identified in internal testing. That is not a lot of time for fundamental model improvements, which suggests the changes will likely be targeted: better post-training, improved instruction tuning, and possibly a revised evaluation setup that narrows the apparent deficit on paper. A model that scores better on benchmarks in May might still trail Gemini 3.0 in real-world use.

    The broader context here is that $14.3 billion in AI spending has not yet produced a model that leads any major benchmark category. That does not mean the spending was wasted. Building AI infrastructure, recruiting research talent, and training frontier models all require long lead times. But the gap between the investment narrative and the current product reality is wide enough that Meta's next earnings call will likely face pointed questions about Avocado's timeline and what specifically changed between the original release plan and the May delay.

    Where Meta goes from here

    Zuckerberg has been more publicly committed to AI than almost any other major tech CEO, describing it as the central priority for Meta across multiple earnings calls and public interviews. That public commitment makes delays like this more visible than they might be at a company with lower stated ambitions. OpenAI and Google have both shipped multiple major model updates in the past twelve months. Anthropic released Claude 3.7 Sonnet with extended thinking capabilities in early 2025. Meta, by comparison, has leaned heavily on Llama while Avocado has stayed behind closed doors.

    May is still close enough that Avocado could land before the summer AI conference season heats up. If Meta ships a model that genuinely competes with Gemini 3.0 on coding and reasoning tasks, the delay will be quickly forgotten. If it ships something that still trails, the questions about Alexandr Wang's team and the $14.3 billion investment will get louder.



    Frequently Asked Questions

    Q: What is Meta's Avocado AI model?

    Avocado is Meta's internal codename for its next-generation closed AI model, designed to compete directly with OpenAI's GPT-series and Google's Gemini on benchmark performance. It is separate from Meta's open-source Llama models.

    Q: Why was Avocado delayed to May?

    Meta's internal evaluations found Avocado underperforming compared to Google Gemini 3.0 and the latest OpenAI models, particularly on reasoning, coding, and instruction-following tasks. The company chose to delay rather than release a model that trailed its competitors.

    Q: Did Meta really consider licensing Google's Gemini model?

    Reports indicate Meta's leadership discussed the possibility of temporarily licensing Gemini while Avocado was being improved. It is unclear how serious those talks were, but the discussion reflects how wide the performance gap became internally.

    Q: Who is Alexandr Wang and what is his role at Meta?

    Alexandr Wang is the founder of Scale AI, a data labeling and AI infrastructure company. Meta recruited him to lead a high-profile AI super-team with a mandate to build frontier-level models, backed by $14.3 billion in reported investment.

    Q: How does Avocado differ from Meta's Llama models?

    Llama models are open-source and widely used by developers and researchers. Avocado is being developed as a closed, performance-first model intended to compete on raw capability against proprietary models from OpenAI and Google.
