Helios is a 14-billion-parameter open-source video model that runs at 19.5 FPS on a single NVIDIA H100, generating up to 60 seconds of video in real time. Built by Peking University, ByteDance, and Canva, it launched under the Apache 2.0 license on March 4, 2026.
## What Happened
The Helios research team released three model checkpoints (Base, Mid, and Distilled), training scripts, and the HeliosBench evaluation framework on GitHub. The model generates videos up to 1,452 frames long, approximately 60 seconds at 24 FPS, and achieves 19.5 FPS throughput on a single H100.
What makes this technically notable: Helios hits 14B-level quality at speeds previously seen only from 1.3B-class models. It does this without KV-cache tricks, quantization, sparse attention, or the other techniques commonly used to accelerate generation and prevent drift in long videos. With Group Offloading enabled, the model runs on as little as 6 GB of VRAM, making it accessible on consumer GPUs.
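A quick back-of-envelope check of the published numbers, using only the figures quoted above:

```python
# Sanity-check the release figures (all values come from the article;
# nothing here touches the model itself).
MAX_FRAMES = 1452    # longest supported clip, in frames
PLAYBACK_FPS = 24    # frame rate the clip plays back at
GEN_FPS = 19.5       # reported generation throughput on one H100

clip_seconds = MAX_FRAMES / PLAYBACK_FPS  # footage length in seconds
gen_seconds = MAX_FRAMES / GEN_FPS        # wall-clock time to generate it

print(f"clip length:     {clip_seconds:.1f} s")   # 60.5 s
print(f"generation time: {gen_seconds:.1f} s")    # 74.5 s
```

So 1,452 frames is indeed about a minute of footage at 24 FPS, generated in roughly the time it takes to play back.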
## Why It Matters
Real-time video generation at this quality and length is a first for open-source models. Previous open-weight video generators struggled with either duration (most cap out at 10-15 seconds) or speed (generation running far slower than playback). The Decoder's analysis notes that Helios changes the economics of local video production: creators running ComfyUI or similar pipelines can now generate minute-scale video locally without cloud costs.
The Apache 2.0 license permits commercial use, subject only to the license's standard attribution and patent terms. The involvement of Canva as a research partner is also significant: it signals that production companies are now co-developing open-weight models they can integrate into consumer products.
## Key Details
- Model size: 14B parameters (autoregressive diffusion architecture)
- Speed: 19.5 FPS on single NVIDIA H100
- Duration: up to 1,452 frames (~60 seconds at 24 FPS)
- Modes: text-to-video, image-to-video, video-to-video
- VRAM: ~6 GB with Group Offloading enabled
- License: Apache 2.0 (commercial use permitted)
- Partners: Peking University, ByteDance, Canva, Chengdu Anu Intelligence
- Code and checkpoints: github.com/PKU-YuanGroup/Helios
## What to Do Next
The model is available now on GitHub. If you run a local GPU setup with at least 6 GB VRAM, the Distilled checkpoint is the recommended starting point for fastest generation. The project page includes sample outputs and benchmark comparisons against other open-weight video models. For ComfyUI users, community nodes supporting Helios are already appearing in the ecosystem.
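To make the guidance above concrete, here is an illustrative helper (not part of the Helios repo) that maps available VRAM to a starting configuration. It uses the 6 GB floor from the article plus one back-of-envelope assumption: a 14B-parameter model in bf16 needs about 14e9 × 2 bytes ≈ 28 GB for the weights alone.

```python
# Assumption: full bf16 weights for a 14B model occupy roughly 28 GB.
WEIGHTS_GB_BF16 = 14e9 * 2 / 1e9  # ~28 GB

def recommended_setup(vram_gb: float) -> str:
    """Illustrative mapping from VRAM to a starting configuration,
    based on the figures in this article (hypothetical helper)."""
    if vram_gb < 6:
        return "below the 6 GB minimum, even with Group Offloading"
    if vram_gb < WEIGHTS_GB_BF16:
        # Consumer-GPU range: weights won't fit resident in VRAM
        return "Distilled checkpoint with Group Offloading enabled"
    return "Distilled checkpoint resident in VRAM (no offloading)"

print(recommended_setup(8))   # typical consumer GPU
print(recommended_setup(80))  # H100-class
```

A typical 8 GB consumer card lands in the Group Offloading path; an 80 GB H100 can keep the weights resident.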