On April 16, 2026, Alibaba launched Happy Oyster, an interactive world model that generates continuous video from text and image prompts while accepting real-time steering during generation. Users can add elements, redirect characters, or change camera angles mid-output without restarting the render. It is the second product in Alibaba's "Happy Universe" series, following HappyHorse.
What Happened
Alibaba's ATH division (Token Hub, formed March 2026 under CEO Eddie Wu) released Happy Oyster to a limited early-access pool via invitation codes. The tool ships in two modes.
Direct mode generates video up to three minutes at 480p or 720p. Text, voice, or image prompts are accepted, and users can intervene mid-generation: adding a flock of birds to a scene, redirecting a character, or shifting the camera angle without interrupting the render. Audio and video are produced synchronously from the same underlying world state.
Wander mode produces first-person navigable environments up to one minute long at 480p. The world expands continuously as the user moves through it, maintaining physics and lighting coherence across the extended environment.
The model uses a streaming generation framework that compresses scene state into compact latent representations for low-latency response to real-time input. API access is scheduled to open April 30, 2026.
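Alibaba has not published the framework's internals, but the pattern described above, generating in chunks from a compact running state and folding in steering events as they arrive, can be sketched in miniature. Everything below (the `WorldState` stand-in, the chunk loop, the event format) is an illustrative assumption, not Happy Oyster's actual design:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class WorldState:
    """Toy stand-in for the compact latent scene state."""
    elements: List[str] = field(default_factory=list)
    camera: str = "wide"

def generate_stream(prompt: str,
                    steering: Dict[int, Tuple[str, str]],
                    chunks: int = 5) -> List[str]:
    """Emit video 'chunks' one at a time, applying any steering event
    queued for a chunk before rendering it -- no restart needed."""
    state = WorldState(elements=[prompt])
    rendered = []
    for i in range(chunks):
        event = steering.get(i)
        if event:
            kind, value = event
            if kind == "add":        # e.g. add a flock of birds mid-output
                state.elements.append(value)
            elif kind == "camera":   # e.g. shift the camera angle
                state.camera = value
        # Each chunk is rendered from the *current* state, so edits
        # persist into every later chunk.
        rendered.append(f"chunk{i}[{state.camera}]:" + "+".join(state.elements))
    return rendered

clip = generate_stream("coastal village",
                       steering={2: ("add", "birds"), 3: ("camera", "aerial")})
```

The point of the sketch is the control flow: because state is carried forward between chunks, a mid-generation edit changes everything downstream without re-rendering what came before.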
Why It Matters
Most AI video tools are one-shot generators: prompt, wait, receive a clip. Real-time steering during generation is a different working pattern. It resembles directing a live shoot more than commissioning a render.
The day-one comparison with Tencent HY-World 2.0, released the same day, clarifies where Happy Oyster fits. HY-World outputs real 3D geometry that imports directly into Unity, Unreal, and Blender. Happy Oyster outputs video. Neither approach is superior, but they target different stages of a production pipeline: Happy Oyster suits pre-visualization and reference generation, while geometry-based tools suit asset pipeline work where exportable meshes are the deliverable.
HappyHorse, the predecessor model from the same team, ranked first on the Artificial Analysis AI Video Arena ahead of ByteDance Seedance 2.0, Kuaishou Kling AI, and Google Veo 3 Fast as of early April 2026. That performance benchmark gives Happy Oyster credibility at launch despite limited public access.
Key Details
- Developer: Alibaba ATH (Token Hub) division
- Launch date: April 16, 2026 (early access)
- Direct mode: Up to 3 minutes, 480p or 720p, real-time steering via text/voice/image
- Wander mode: Up to 1 minute, first-person navigable, continuous world expansion at 480p
- Audio: Synchronized audio-video generation from shared world state
- Access: Invitation-code waitlist at happyoyster.cn
- API: Planned April 30, 2026
- Open source: No. No weights or code released.
What to Do Next
Request early access at happyoyster.cn. Invitation codes are being distributed in batches. The April 30 API launch is the broader access point for teams that want to test the tool in a production pipeline rather than through the consumer interface.
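Teams preparing for the API launch can encode the published limits now, even though the schema is unreleased. The endpoint shape and field names below are hypothetical; only the mode caps (3 min at 480p/720p for direct, 1 min at 480p for wander) come from the launch specs:

```python
import json

def build_request(prompt: str, mode: str = "direct",
                  resolution: str = "720p", duration_s: int = 180) -> str:
    """Assemble a hypothetical generation request, validating against the
    limits Alibaba announced at launch. Field names are guesses."""
    if mode == "direct":
        assert duration_s <= 180, "direct mode caps at 3 minutes"
        assert resolution in ("480p", "720p")
    elif mode == "wander":
        assert duration_s <= 60, "wander mode caps at 1 minute"
        assert resolution == "480p", "wander mode is 480p only"
    else:
        raise ValueError(f"unknown mode: {mode}")
    return json.dumps({"mode": mode, "prompt": prompt,
                       "resolution": resolution,
                       "duration_seconds": duration_s})

payload = build_request("harbor at dawn, slow pan")
```

Validating client-side against the announced caps makes it obvious later whether a rejected request hit a documented limit or an undocumented one, which is exactly the 720p-ceiling question worth probing.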
Two practical questions to answer when access opens: whether the real-time steering holds up at longer durations, and whether the 720p ceiling is a hard limit or an early-access constraint. The answers will determine whether it slots into commercial pre-visualization workflows or remains a creative exploration tool.