NVIDIA unveiled Cosmos 3 at GTC on March 18: the first world foundation model to unify synthetic world generation, physical AI reasoning, and action simulation in a single architecture. The model lets robots and autonomous vehicles imagine scenarios before encountering them in the real world, a capability NVIDIA positions as the foundation of physical AI.

What Happened

Jensen Huang announced Cosmos 3 alongside a wave of companion releases at GTC. GR00T N1.7, an open vision-language-action model for humanoid robots, entered general availability. GR00T N2 launched with 2x better generalization to new tasks. Alpamayo 1.5 brought improvements to autonomous driving. Isaac Lab 3.0 entered early access with the new Newton physics engine 1.0.

The core strategy behind Cosmos 3 is turning robotics' data problem into a compute problem. Instead of collecting millions of hours of real-world robot training data, Cosmos 3 generates synthetic environments and scenarios at scale. NVIDIA is releasing the Physical AI Data Factory Blueprint on GitHub in April 2026 to formalize this approach.

Why It Matters

Physical AI has been held back by a fundamental bottleneck: training data. Real-world robot data is expensive, slow to collect, and dangerous to gather at scale. Cosmos 3 addresses this by generating photorealistic synthetic environments where robots can train on millions of scenarios without physical risk. As Huang stated, "Physical AI has arrived. Every industrial company will become a robotics company."

The partner list shows this is not a research preview. ABB Robotics, FANUC, YASKAWA, and KUKA, the four companies that dominate global industrial robotics, are all building on the platform. Consumer robotics partners include 1X, AGIBOT, Agility, Boston Dynamics, and Figure. Disney is using the technology for its Olaf droid at Disneyland Paris. Enterprise adopters include HCLTech, Johnson & Johnson MedTech, and Toyota Research Institute.

Building on the earlier Cosmos 2.5 release, Cosmos 3 represents a significant architectural leap. Where Cosmos 2.5 focused on world generation for video and simulation, Cosmos 3 adds vision reasoning and action simulation, unifying three capabilities that previously required separate models and pipelines.

Key Details

  • Architecture: First world foundation model unifying synthetic world generation, vision reasoning, and action simulation.
  • Companion models: GR00T N1.7 (open VLA for humanoids), GR00T N2 (2x task generalization), Alpamayo 1.5 (autonomous driving).
  • Developer tools: Isaac Lab 3.0 (early access) with Newton physics engine 1.0.
  • Industrial partners: ABB Robotics, FANUC, YASKAWA, KUKA.
  • Consumer robotics partners: 1X, AGIBOT, Agility, Boston Dynamics, Figure; Disney (Olaf droid at Disneyland Paris).
  • Enterprise users: HCLTech, Johnson & Johnson MedTech, Toyota Research Institute.
  • Open resources: Physical AI Data Factory Blueprint available on GitHub in April 2026.

What to Do Next

Robotics developers and simulation engineers should review the technical analysis from The Decoder for details on how Cosmos 3 converts data collection into compute workloads. Teams working on physical AI applications can watch the expanded model family for additional open-source releases. The Physical AI Data Factory Blueprint, due on GitHub in April 2026, will be the first hands-on entry point for developers looking to generate synthetic training data at scale. For broader GTC coverage, Blockchain News provides an overview of the full robotics partnership ecosystem.