Alibaba's HappyHorse 1.0, the AI video model that currently ranks first on the Artificial Analysis text-to-video leaderboard, arrived in ComfyUI on April 27 with five native workflow templates covering both creation and editing at up to 1080p resolution.

What Happened

ComfyUI added HappyHorse 1.0 as a supported model with ready-to-run workflow templates for Text-to-Video (T2V), Image-to-Video (I2V), Subject-to-Video (S2V), Video-to-Video (V2V), and Subject-Video-to-Video (SV2V). Each workflow is available on Comfy Cloud as a one-click launch or as a downloadable template for local ComfyUI installs. The integration was announced on the ComfyUI blog by Daxiong (Lin) and Eric Solorio from the Comfy team.

HappyHorse 1.0 itself is developed by Alibaba and has been available via fal.ai since April 26. The ComfyUI integration opens a node-graph workflow path alongside the existing API access.

Why It Matters

HappyHorse 1.0 is built around cinematic aesthetics: wide-aperture framing, shallow depth of field, refined texture, and atmospheric mood. That visual character is distinct from the flat broadcast-style look common in competing video models. ComfyUI's node-based pipeline makes those qualities composable with existing image preprocessing, masking, and motion control nodes. Creators working on advertisements, e-commerce product demos, and short-form social content now have a path to integrate the top-ranked video model into automated batch workflows without leaving their existing ComfyUI setup.
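For batch automation, the usual route is ComfyUI's local HTTP API: export a workflow via "Save (API Format)", then POST it to the server's /prompt endpoint once per job. Here is a minimal sketch of that loop; the filename happyhorse_t2v_api.json and the node ID "6" holding the positive prompt are placeholders for whatever your exported graph actually contains.

```python
import json
import requests

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI server address

# Workflow exported from ComfyUI via "Save (API Format)"; the filename and
# the prompt-node ID "6" are assumptions about your particular graph.
with open("happyhorse_t2v_api.json") as f:
    workflow = json.load(f)

prompts = [
    "slow dolly shot of a ceramic mug on a rain-streaked windowsill",
    "macro pan across a wristwatch face, shallow depth of field",
]

for prompt in prompts:
    # Overwrite the text input of the prompt node, then queue the job.
    workflow["6"]["inputs"]["text"] = prompt
    resp = requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow})
    resp.raise_for_status()
    print("queued:", resp.json()["prompt_id"])
```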

The Subject-to-Video and Subject-Video-to-Video modes are particularly useful for product-focused creators. S2V lets you anchor a specific product, person, or object from a reference image and generate video around it. SV2V takes that a step further by replacing subjects in existing video footage while preserving the original motion and composition.
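Feeding a reference image into an S2V workflow programmatically follows the same pattern: upload the image through ComfyUI's standard /upload/image endpoint, then point the workflow's image-loading node at the uploaded file. The sketch below assumes a local server; the filename and node ID "12" are hypothetical stand-ins for your own graph.

```python
import requests

COMFY_URL = "http://127.0.0.1:8188"

# Upload a product reference shot into the ComfyUI input directory.
with open("product_reference.png", "rb") as f:
    resp = requests.post(
        f"{COMFY_URL}/upload/image",
        files={"image": ("product_reference.png", f, "image/png")},
    )
resp.raise_for_status()
uploaded_name = resp.json()["name"]

# In an S2V workflow exported in API format, point the image-loading node
# at the uploaded file. Node ID "12" is an assumption about your graph.
# workflow["12"]["inputs"]["image"] = uploaded_name
print("uploaded as:", uploaded_name)
```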

The addition follows the pattern ComfyUI has established with recent model integrations: GPT Image 2, Seedance 2.0, and Wan2.7 all landed in ComfyUI within days of their wider launches, positioning the platform as a fast on-ramp for new generative models.

Key Details

  • Maximum clip length: 15 seconds at 1080p
  • Workflow modes: T2V, I2V, S2V for creation; V2V and SV2V for editing existing footage
  • Multi-shot capability: Sequencing with consistent subject identity across cuts
  • Target use cases: Advertisements, e-commerce content, social marketing videos
  • Access via Comfy Cloud: No local GPU or install required; workflows launch in the browser
  • Access via local ComfyUI: Update to the latest version, then search "HappyHorse" in the Template Library
  • Developer: Alibaba; integration maintained by the Comfy-Org team

Early user feedback in the blog comments flags temporal quality drop-off beyond the five-second mark and latent drift on longer clips. These are known tradeoffs with current long-context video diffusion models and are not ComfyUI-specific.

What to Do Next

To try HappyHorse 1.0 in ComfyUI, open Comfy Cloud and load the Image-to-Video or Text-to-Video workflow template directly from the ComfyUI blog announcement. Each workflow section includes a direct "Try on Comfy Cloud" link. For local installs, update ComfyUI to the latest version and search the Template Library for "HappyHorse."

If you prefer API-level access without a node graph, HappyHorse 1.0 is also available on fal.ai at $0.28 per second of 1080p output, with per-second billing starting from one second.
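At that rate, a full 15-second 1080p clip works out to about $4.20. A minimal call through the fal.ai Python client might look like the sketch below; the endpoint ID and argument names follow fal.ai conventions but are assumptions here, so check the model page for the actual schema.

```python
import fal_client  # pip install fal-client; expects FAL_KEY in the environment

# Endpoint ID and argument names are assumptions, not confirmed by the
# announcement; consult the fal.ai model page for the real schema.
result = fal_client.subscribe(
    "fal-ai/happyhorse-v1/text-to-video",
    arguments={
        "prompt": "wide-aperture close-up of espresso pouring, warm morning light",
        "duration": 5,  # seconds; at $0.28/s this request costs about $1.40
    },
)
print(result["video"]["url"])  # typical fal.ai video-output shape (assumed)
```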