Apple researchers published a new 3D rendering method called LGTM (Less Gaussians, Texture More) on March 28, 2026. It enables 4K-resolution novel view synthesis via Gaussian Splatting without per-scene optimization, the step that has historically been the main bottleneck in 3D scene reconstruction workflows.
What Happened
LGTM was posted to Apple's Machine Learning Research site and on arXiv on March 28. The method addresses a fundamental scaling problem in Gaussian Splatting: existing feed-forward approaches predict one Gaussian primitive per pixel, meaning the primitive count grows quadratically as resolution increases, making 4K synthesis computationally intractable. LGTM breaks that coupling by predicting a compact set of sparse Gaussian primitives (far fewer than the pixel count) and pairing each one with a small texture map, so rendering resolution becomes decoupled from geometric complexity.
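The scaling argument above can be sketched in a few lines. This is an illustration of the counting, not Apple's implementation: the sparse budget (20,000 primitives with 8x8 texture maps) is an assumed example figure, not a number from the paper.

```python
def per_pixel_primitives(width: int, height: int) -> int:
    """Per-pixel feed-forward splatting: one Gaussian per pixel,
    so the count scales with the full pixel grid."""
    return width * height

def sparse_textured_primitives(num_gaussians: int, texels_per_side: int) -> tuple[int, int]:
    """LGTM-style layout (illustrative): a fixed sparse set of Gaussians,
    each paired with a small texture map. Returns (primitives, total texels);
    only the texel count needs to grow with rendering detail."""
    return num_gaussians, num_gaussians * texels_per_side ** 2

for w, h in [(1920, 1080), (3840, 2160)]:
    print(f"{w}x{h}: per-pixel -> {per_pixel_primitives(w, h):,} Gaussians")
# 1080p needs ~2.1M Gaussians and 4K ~8.3M: doubling resolution
# quadruples the per-pixel primitive count.

n, t = sparse_textured_primitives(num_gaussians=20_000, texels_per_side=8)
print(f"sparse: {n:,} Gaussians, {t:,} texels")
# The geometric primitive count stays fixed regardless of output resolution.
```

The point of the decoupling is visible in the numbers: moving from 1080p to 4K quadruples the per-pixel budget, while the sparse layout leaves geometric complexity untouched and pushes the extra detail into cheap texture lookups.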
The result is that 4K-quality novel views can be synthesized from monocular, two-view, or multi-view input in a single forward pass, with no per-scene training loop. LGTM works across multiple baseline architectures, and in some configurations it does not require known camera poses.
Why It Matters for Creators
Per-scene optimization in Gaussian Splatting workflows currently means waiting minutes to hours before a newly captured scene is renderable. That friction limits real-time and on-set use. LGTM's feed-forward approach points toward a future where a single photo or short video clip becomes a 4K-ready 3D asset almost instantly, which is directly relevant to creators in product visualization, virtual production, game asset generation, and AR content. The absence of per-scene training also means the method generalizes to new environments without manual setup, a key requirement for high-volume creative pipelines. While LGTM is a research paper and not a shipping product, Apple's involvement gives it significant downstream potential given the company's existing work on human Gaussian Splats and its hardware roadmap for spatial computing.
What to Do Next
The full paper and project page are at yxlao.github.io/lgtm, including visual comparisons against prior methods. Creators interested in current Gaussian Splatting tools can explore the original 3DGS repository and tools like Luma AI, which already uses Gaussian Splatting for scene capture. Watch for LGTM to appear in frameworks like Hugging Face's paper tracker as implementations emerge.
This story was covered by Creative AI News.