Nvidia GTC 2026: Jensen Huang unveils Groq 3 LPU and Vera Rubin AI chips
Jensen Huang took the stage at Nvidia's GTC 2026 conference in San Jose this week and delivered what might be the company's most consequential product announcement in years. The keynote ran two hours and covered a lot of ground, but two things stood out immediately: the debut of the Groq 3 Language Processing Unit and a trillion-dollar revenue projection tied to Blackwell and Vera Rubin chip orders.
The Groq 3 LPU makes its first appearance
Nvidia acquired Groq for $20 billion, and the Groq 3 LPU is the first chip to come out of that deal. It's scheduled to ship in Q3 2026. The original Groq architecture was designed specifically for fast inference, running language models at speeds that GPU-based setups struggled to match for certain workloads. What Nvidia plans to do with that architecture inside its own ecosystem is still coming into focus, but the early signal is that it wants to offer a more specialized inference option alongside its existing GPU lineup.
The Groq 3 wasn't presented with a full spec sheet at the keynote. Nvidia has a habit of announcing chips well before the technical documentation catches up, so detailed benchmarks will likely surface closer to the Q3 ship date. For now, the message was clear: Nvidia is serious about owning the inference side of the AI stack, not just training.
Vera Rubin and the $1 trillion order projection
Huang projected that purchase orders for Blackwell and Vera Rubin chips could reach $1 trillion through 2027, roughly double Nvidia's earlier estimate. The Vera Rubin chip, named after the astronomer whose observations provided critical evidence for dark matter, is positioned as the successor to Blackwell in Nvidia's data center roadmap. Huang has consistently named Nvidia's chip generations after scientists, and Vera Rubin continues a lineage that already includes Grace Hopper and David Blackwell.
The jump from earlier estimates to $1 trillion reflects how quickly large cloud providers and enterprise customers are committing to AI infrastructure buildouts. Microsoft, Google, Amazon, and Meta have all publicly signaled multibillion-dollar capital expenditure plans for 2025 and 2026, and Nvidia sits squarely in the middle of that spending wave. Whether those orders fully convert to revenue on Nvidia's timeline is a different question, but the demand signals are clearly not softening.
DLSS 5 and the autonomous vehicle deals
Beyond the chip news, Huang announced DLSS 5. Previous versions of Deep Learning Super Sampling have already become a standard feature across most modern PC games, and version 5 is expected to push image reconstruction quality further while reducing GPU load. Specific performance numbers weren't released during the keynote, but Nvidia tends to follow up GTC announcements with detailed technical whitepapers within weeks.
On the automotive side, Nvidia announced new autonomous vehicle partnerships with BYD, Hyundai, Nissan, and Geely. All four are using Nvidia's Drive platform in some capacity, though the depth of integration varies by manufacturer. BYD in particular is worth watching: it overtook Tesla in global EV sales volume in late 2023 and has been aggressive about adding technology partnerships to its supply chain. A deeper Nvidia tie-in gives BYD access to Nvidia's software stack for driver assistance and, eventually, full autonomy features.
What this keynote actually tells us about Nvidia's direction
Nvidia is no longer just a chip company. The Groq acquisition, the Drive partnerships, the DLSS work, the data center projections: these are signals of a company building vertical control across multiple industries at once. The $20 billion Groq deal in particular deserves attention. Groq had carved out a real niche in inference speed, and Nvidia absorbing that capability removes a credible competitor while adding a specialized architecture to its own portfolio.
The $1 trillion order projection will dominate the financial coverage, but the Groq 3 LPU shipping in Q3 2026 is probably the more concrete near-term milestone. If the chip performs close to what Groq demonstrated pre-acquisition, it gives Nvidia something genuinely different to offer customers who need fast inference without the overhead of a full GPU cluster. That's a specific problem a lot of enterprise AI deployments are trying to solve right now.