Yann LeCun's AMI Labs raised $1.03 billion at a $3.5 billion pre-money valuation, making it Europe's largest seed round ever. LeCun left Meta at the end of 2025 to build "world models" that learn from physical reality rather than text alone. For creators working with AI-generated images, video, and 3D, this could reshape what generative AI can actually produce.

What Happened

AMI Labs closed $1.03 billion in seed funding with backing from Bezos Expeditions, Cathay Innovation, and Greycroft. The $3.5 billion pre-money valuation makes this the most expensive seed-stage company in European history and one of the largest AI seed rounds globally.

LeCun, who spent nearly a decade leading Meta's AI research division (FAIR), departed at the end of 2025 to start AMI Labs. His thesis: current large language models are fundamentally limited because they learn from text. They can string words together, but they do not understand how the physical world works. Objects have weight. Light casts shadows. Fabrics drape. Fluids pour. These properties are obvious to humans but invisible to models trained purely on language.

World models aim to fix this gap. Instead of predicting the next token in a sentence, they predict what happens next in a physical scene. The models learn from video, sensor data, and simulations to build internal representations of how reality behaves. LeCun has been publishing research on this approach for years. AMI Labs is the vehicle to build it at scale.

Why It Matters for Creators

Current AI video generators produce impressive clips, but they regularly break physics. Even state-of-the-art tools like Kling 3, with its native 4K video generation, occasionally produce artifacts: fingers pass through objects, shadows move in the wrong direction, or pouring liquid defies gravity. These artifacts exist because the underlying models have no concept of how the physical world actually works.

World models could eliminate these problems at the architectural level. If a model understands that a ball dropped from a table falls downward and bounces, it generates that motion correctly by default. For video creators, 3D artists, and game developers, this means AI-generated content that looks physically accurate without constant manual correction.
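The ball example can be made concrete with a toy sketch. This is purely illustrative, not AMI Labs code: the dynamics are hand-written here, whereas the whole point of a world model is to learn this kind of state-to-next-state update from video, sensor data, and simulation instead of next-token prediction over text.

```python
# Toy illustration of what a world model predicts: the next physical
# state of a scene, rather than the next token in a sentence.
from dataclasses import dataclass

@dataclass
class BallState:
    height: float    # meters above the floor
    velocity: float  # m/s, positive = upward

def predict_next(state: BallState, dt: float = 0.1, g: float = 9.81) -> BallState:
    """Hand-coded stand-in for a learned dynamics model.

    A real world model would infer gravity and bouncing from data;
    here they are written in explicitly to show the target behavior.
    """
    v = state.velocity - g * dt          # gravity accelerates the ball downward
    h = state.height + v * dt
    if h <= 0:                           # floor contact: bounce with energy loss
        h, v = 0.0, -v * 0.8
    return BallState(h, v)

# Roll the model forward: the ball falls, hits the floor, and bounces back up.
s = BallState(height=1.0, velocity=0.0)
for _ in range(10):
    s = predict_next(s)
```

A generative model with this kind of internal dynamics gets the falling-and-bouncing motion right by construction; one trained only on pixel or token statistics has to rediscover it by accident.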

The $1.03 billion in funding gives AMI Labs the compute budget to train these models at the scale required. This is not an academic research project. It is a well-capitalized company with one of the most cited AI researchers in history at the helm, building technology that directly competes with the generative AI approaches used by OpenAI, Google, and his former employer Meta.

Key Details

| Metric | Value |
| --- | --- |
| Funding raised | $1.03 billion |
| Pre-money valuation | $3.5 billion |
| Round type | Seed (Europe's largest ever) |
| Founder | Yann LeCun (ex-Meta FAIR chief) |
| Key investors | Bezos Expeditions, Cathay Innovation, Greycroft |
| Core technology | World models (physics-aware AI) |
| Target applications | Image, video, 3D generation with physical understanding |

What to Do Next

AMI Labs has not released any public tools or APIs yet. This is a long-term bet, not something that changes your workflow today. But it is worth tracking. If world models deliver on their promise, the next generation of AI image, video, and 3D tools will produce output that is fundamentally more realistic than anything available now.

For creators who hit frustrating physics errors in current AI video tools, this is the research direction most likely to solve those problems. In the meantime, platforms like Luma's Creative AI Agents are already working on multi-model orchestration that could incorporate world models once they mature. Keep an eye on AMI Labs, and expect competing labs to accelerate their own physics-aware research in response to a funding round this large. LeCun's published research on world models provides a technical foundation for understanding the approach.


This story was covered by Creative AI News.
