Topaz Labs announced six new AI enhancement models on April 28, 2026, calling it the largest single release in the company's history. Four image models (Wonder 3, Denoise Max, Super Focus 3, High Fidelity 3) and two video models (Starlight Precise 2.5 local, Astra 2) ship at once, alongside a proprietary memory-reduction technology called NeuroStream that the company says cuts VRAM usage by up to 95% for local inference.

What Happened

The announcement came via a PR Newswire press release from CEO Eric Yang, who described the release as "both the largest and most technically advanced set of models we've released in the company's history." The image models target the photo-enhancement workflow (Wonder 3 for one-click sharpen-upscale-denoise, Denoise Max for grain removal, Super Focus 3 for blurry subjects, High Fidelity 3 for high-resolution input like smartphone RAW). The video pair pushes Starlight Precise 2.5 from cloud-only to local availability inside Topaz Video, and Astra 2 lands as a creative upscaler for AI-generated video with prompt-based detail control.

The release coincides with Topaz Labs' booth at NAB 2026, the broadcast industry's largest annual trade show, signaling that the video-enhancement push targets professional post-production rather than just hobbyist users. The current promotion offers 30% off all monthly plans for three days following the announcement.

Why It Matters

The headline feature for working creators is NeuroStream's claimed 95% VRAM reduction. Photo and video enhancement at high resolution has always been GPU-memory-bound: a 4K video pass through Starlight typically required 24GB or more, putting professional results out of reach for anyone running an 8GB or 12GB consumer GPU. If NeuroStream genuinely delivers a 20x memory reduction across the six powered models, the same workflows become available on 8GB hardware that previously demanded $1,500+ workstation cards.
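The arithmetic behind that claim is worth spelling out. A back-of-the-envelope check, using only the figures from the announcement (the "up to 95%" reduction and the 24GB baseline for a 4K Starlight pass cited above; real per-model usage will vary with resolution and model):

```python
# Sanity-check the claimed NeuroStream savings against the 24GB baseline.
# Both numbers come from the announcement; actual usage will vary.
claimed_reduction = 0.95          # "up to 95% VRAM reduction"
baseline_gb = 24.0                # typical 4K Starlight pass

after_gb = baseline_gb * (1 - claimed_reduction)
factor = baseline_gb / after_gb

print(f"{baseline_gb:.0f} GB -> {after_gb:.1f} GB ({factor:.0f}x reduction)")
# 24 GB -> 1.2 GB (20x reduction) -- comfortably inside an 8 GB card
```

A 95% cut and a "20x reduction" are the same claim stated two ways: keeping 5% of 24GB leaves roughly 1.2GB, which is why the company can frame 8GB consumer cards as viable targets.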

The shift to local availability for Starlight Precise 2.5 is the second structural move. Cloud-only video upscaling has been a friction point because round-trip render times on multi-minute clips can exceed an hour at high quality. Bringing Starlight local inside Topaz Video collapses that loop and gives broadcast teams a workflow they can run in-house without surrendering footage to a vendor cloud.

Key Details

  • Six new models: Wonder 3, Denoise Max, Super Focus 3, High Fidelity 3, Starlight Precise 2.5 (local), Astra 2.
  • NeuroStream technology: Up to 95% VRAM reduction across six Topaz models running locally.
  • Wonder 3: One-click sharpen-upscale-denoise with three enhancement levels, handles both high-quality and degraded inputs.
  • Denoise Max: Grain and noise removal optimized for portraits and fine textures, with intelligent sharpening of blurry areas.
  • Super Focus 3: Sharpening that preserves already-sharp detail while bringing back blurry subjects.
  • High Fidelity 3: Upscaling tuned for high-resolution input (smartphone, RAW).
  • Starlight Precise 2.5 local: Available now in Topaz Video standalone app; recovers detail in archival footage.
  • Astra 2: Cloud-only in Astra app and API; creative upscaler with custom levels and prompt-based control for AI-generated video.
  • Pricing: 30% off monthly plans for three days; standard pricing on the Topaz Photo, Topaz Video, Astra, and API tiers afterward.

What to Do Next

If you run an enhancement workflow on consumer-grade hardware (8-16GB VRAM), test the NeuroStream-powered models against your existing pipeline before committing to a subscription. The 95% memory reduction is a strong claim, and benchmarks against your typical input sizes will tell you whether it holds up under your specific load. If you do video post-production professionally, Starlight Precise 2.5 going local is the move worth budgeting for, especially paired with the upcoming ComfyUI v0.20.1 SUPIR upscale node for hybrid pipelines that mix open-source and Topaz models on the same source. For AI-generated video specifically, Astra 2's prompt-based detail control is worth A/B testing against the VFX upscaling chain in our 2026 video generation guide before defaulting to whichever pipeline you used last quarter.
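One way to run that VRAM benchmark yourself on NVIDIA hardware is to poll `nvidia-smi` while an enhancement pass runs in another window and record the peak. The sketch below is a generic monitoring helper, not a Topaz tool: the function names, the polling interval, and the GPU-0 assumption are all mine; only the `nvidia-smi --query-gpu=memory.used` invocation is standard.

```python
import subprocess
import time

def parse_mib(raw: str) -> list[int]:
    """Parse `nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits`
    output (one MiB figure per line, one line per GPU) into integers."""
    return [int(line.strip()) for line in raw.splitlines() if line.strip()]

def sample_vram_mib() -> int:
    """Return current used VRAM (MiB) on GPU 0 via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_mib(out)[0]

def monitor_peak(seconds: float = 45.0, interval: float = 0.5) -> int:
    """Poll VRAM for `seconds` while you run an enhancement pass elsewhere;
    return the peak reading in MiB."""
    peak = 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        peak = max(peak, sample_vram_mib())
        time.sleep(interval)
    return peak
```

Run `monitor_peak()` once against your current pipeline and once against the same clip through a NeuroStream-powered model; comparing the two peaks on your own input sizes is a more honest test of the 95% figure than any marketing benchmark.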