Tencent open-sourced HY-World 2.0 on April 16, 2026: a multi-modal framework that turns text descriptions or single images into navigable 3D environments, with meshes and 3D Gaussian Splatting (3DGS) assets exportable to Unity, Unreal Engine, and Blender.

For the broader landscape, see our open-source AI models 2026 creator reference.

Unlike video world models that produce temporary pixel outputs, HY-World 2.0 generates persistent 3D assets. Feed it a text prompt or a photo and it returns a 3D scene you can explore, edit, and drop directly into a game engine or 3D pipeline. This distinction matters for creators: the output is not a rendered video clip but actual geometry you can manipulate.

What Changed

HY-World 2.0 accepts four input types: text, single-view images, multi-view images, and video. Its four-stage pipeline runs panorama generation (HY-Pano 2.0), trajectory planning (WorldNav), world expansion (WorldStereo 2.0), and world composition (WorldMirror 2.0) in sequence. The result is a 3D scene delivered as 3DGS assets, meshes, point clouds, depth maps, or surface normals, depending on what your workflow needs.
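The four-stage sequence can be sketched as a chain of calls. Everything below is hypothetical: the stage names come from the announcement, but the function signatures, the `Scene` container, and the intermediate data are placeholders, not the released API.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the four pipeline stages described above.
# Stage names mirror the announcement; the real interfaces may differ.

@dataclass
class Scene:
    """Accumulates intermediate artifacts as the pipeline runs."""
    panorama: str = ""
    trajectory: list = field(default_factory=list)
    expansions: list = field(default_factory=list)
    outputs: dict = field(default_factory=dict)

def hy_pano(prompt: str) -> Scene:
    # Stage 1: panorama generation (HY-Pano 2.0)
    return Scene(panorama=f"360-degree panorama from: {prompt}")

def world_nav(scene: Scene) -> Scene:
    # Stage 2: trajectory planning (WorldNav) -- plan camera waypoints
    scene.trajectory = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
    return scene

def world_stereo(scene: Scene) -> Scene:
    # Stage 3: world expansion (WorldStereo 2.0) -- grow geometry along the path
    scene.expansions = [f"expansion at {p}" for p in scene.trajectory]
    return scene

def world_mirror(scene: Scene) -> Scene:
    # Stage 4: world composition (WorldMirror 2.0) -- fuse into exportable assets
    scene.outputs = {fmt: f"{fmt} asset" for fmt in
                     ("3dgs", "mesh", "point_cloud", "depth", "normals")}
    return scene

def generate_world(prompt: str) -> Scene:
    # Run the four stages in the order the article lists them.
    return world_mirror(world_stereo(world_nav(hy_pano(prompt))))

scene = generate_world("a mossy forest clearing at dusk")
print(sorted(scene.outputs))  # the five output formats listed above
```

The point of the sketch is the data flow: each stage consumes the previous stage's artifacts, so swapping the input modality (text, image, video) only changes the first stage.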

The WorldMirror 2.0 reconstruction model and weights are available now on GitHub and Hugging Face. The full world generation pipeline code is marked as coming soon. Model weights carry an open license.

Why It Matters for Creators

The practical impact is scene creation speed. Environment artists who previously needed to photograph reference locations, retopologize scans, and manually build materials now have a path from a single reference image to an explorable 3D scene. Game developers can prototype entire level layouts from text descriptions before committing to modeled assets.

Tencent claims HY-World 2.0 is the first open-source state-of-the-art 3D world model, delivering results comparable to closed-source systems such as World Labs Marble. The competing model from Alibaba, Happy Oyster (released the same day), remains in closed, limited early access, making HY-World 2.0 the only option developers can actually run today.

Key Details

  • Release date: April 16, 2026
  • Inputs: Text, single-view image, multi-view images, video
  • Outputs: 3DGS, meshes, point clouds, depth maps, surface normals
  • Engine support: Unity, Unreal Engine, Blender, Isaac Sim
  • License: Open source (WorldMirror 2.0 weights and code available now)
  • Access: GitHub and Hugging Face (see links below)

What to Do Next

The WorldMirror 2.0 inference code and model weights are live on GitHub and Hugging Face. If you work with Unity or Unreal Engine, the mesh and 3DGS outputs slot directly into standard import workflows. The full generation pipeline is not yet released, but the team has flagged it as coming soon in the same repository. Watch the GitHub releases page for updates as NAB 2026 week approaches and more 3D tools come online.
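Before importing a 3DGS export, it can help to sanity-check the file. In the reference 3D Gaussian Splatting implementation, splats are stored as PLY files whose per-vertex properties carry the Gaussian parameters (position, color coefficients, opacity, scale, rotation); whether HY-World 2.0 uses exactly that layout is an assumption. This stdlib-only sketch parses a sample PLY header to report the splat count and confirm expected fields:

```python
# Inspect a PLY header before import. The sample below follows the common
# 3DGS export layout; HY-World 2.0's exact schema is an assumption here.

SAMPLE_HEADER = """\
ply
format binary_little_endian 1.0
element vertex 123456
property float x
property float y
property float z
property float f_dc_0
property float f_dc_1
property float f_dc_2
property float opacity
property float scale_0
property float rot_0
end_header
"""

def parse_ply_header(text: str) -> tuple[int, list[str]]:
    """Return (vertex count, property names) from a PLY header."""
    count, props = 0, []
    for line in text.splitlines():
        parts = line.split()
        if parts[:2] == ["element", "vertex"]:
            count = int(parts[2])
        elif parts and parts[0] == "property":
            props.append(parts[-1])  # line is: property <type> <name>
        elif parts == ["end_header"]:
            break
    return count, props

count, props = parse_ply_header(SAMPLE_HEADER)
print(count, "splats; has opacity:", "opacity" in props)
```

The same check works on a real file by reading up to `end_header` before handing the asset to your engine's importer.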