Recursive Superintelligence emerged from stealth on May 13, 2026 with a $650 million round at a $4.65 billion valuation, led by GV and Greycroft, with the venture arms of Nvidia and AMD joining. CEO Richard Socher (former Salesforce Chief Scientist) and co-founder Tim Rocktäschel (formerly of Google DeepMind) are betting that recursive self-improvement is the shortest path to superintelligence, backed by a team that has grown past 25 engineers across San Francisco and London.

Track This For Impact

There is no API, no model, and no product to use today. What is worth tracking: Recursive plans a "Level 1" autonomous training system targeting mid-2026, making that milestone the first hard test of whether the company can ship a model that demonstrates self-improvement. If it works, the foundation-model layer you license from OpenAI, Anthropic, Google, or DeepSeek gains a new competitor with deep-pocketed backing from two of the biggest chip vendors. Bookmark Recursive's announcement coverage and watch for a public model release in the fall.

Why It Matters

The funding stack is the signal. GV and Greycroft are conventional venture bets, but Nvidia and AMD writing checks into the same round is unusual: it tells the market that both chip vendors see the team as a credible foundation-model contender and want compute relationships in place early. TechCrunch's reporting emphasizes Socher's insistence that the company will actually ship products, which separates this raise from the long list of safety-research labs that consume funding without releasing anything creators can touch.

Key Details

The technical bet is recursive self-improvement applied first to AI research itself: an AI system that runs automated experiments, validates the results, and proposes changes to its own training code, harness, and infrastructure. The team plans to extend the same approach to physics, chemistry, and biology after the first internal loop closes. Founding staff include Yuandong Tian (ex-Meta FAIR director) and researchers pulled from OpenAI, Meta, and Uber AI. The headcount is under 30, the offices are split between San Francisco and London, and there is no released product yet according to The Next Web's coverage. Additional reporting at Tech.eu confirms the same investor list and the dual-city structure.
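Recursive has published no technical details, so the loop described above can only be illustrated in the abstract. The toy sketch below shows the propose-experiment-validate shape of such a system, with a random perturbation standing in for a model-generated proposal and a toy scoring function standing in for a real training-and-benchmark run; every name here (`evaluate`, `propose_change`, `self_improvement_loop`) is hypothetical, not anything Recursive has described.

```python
import random

def evaluate(config):
    # Stand-in for a real training-and-evaluation run; here a toy
    # quadratic whose score peaks at lr = 0.01, purely for illustration.
    return -(config["lr"] - 0.01) ** 2

def propose_change(config):
    # The "AI researcher" step: suggest a modification to the training
    # setup. A random perturbation stands in for a model-generated proposal.
    candidate = dict(config)
    candidate["lr"] *= random.choice([0.5, 0.9, 1.1, 2.0])
    return candidate

def self_improvement_loop(config, iterations=50):
    # Run automated experiments and keep only changes that a validation
    # gate confirms as improvements -- the core of the claimed loop.
    best_score = evaluate(config)
    for _ in range(iterations):
        candidate = propose_change(config)
        score = evaluate(candidate)   # automated experiment
        if score > best_score:        # validation gate: keep verified wins only
            config, best_score = candidate, score
    return config, best_score
```

The hard part of the real system is the validation gate: with a noisy benchmark instead of a clean scoring function, distinguishing genuine improvements from chance is what separates self-improvement from drift.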

What to Do Next

Add the company name to your model-release watchlist alongside Anthropic, DeepSeek, and Mistral. The mid-2026 "Level 1" milestone is the next concrete checkpoint; anything before that is recruiting and infrastructure work. If recursive self-improvement claims start surfacing in benchmark wins, expect rapid creator-tool integrations from the labs that need to keep up.