Meta released Segment Anything Model 3.1 on March 27, doubling video tracking speed through a technique called object multiplexing. The open-source update processes up to 16 objects in a single forward pass, hitting 32 frames per second on a single H100 GPU.

For the broader landscape, see our complete guide to AI video generation in 2026.

What Happened

SAM 3.1 is a drop-in replacement for SAM 3, which launched in November 2025. The key innovation is object multiplexing: instead of running separate processing cycles for each tracked object, SAM 3.1 handles up to 16 objects simultaneously in one pass. This eliminates redundant computation and cuts memory overhead, doubling throughput from 16 to 32 fps on an H100.
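The idea behind multiplexing can be sketched with a toy example. This is a simplified stand-in, not Meta's actual SAM 3.1 architecture: here each tracked object is represented as a query vector, the frame is encoded once, and per-object outputs come from a matrix product, so stacking all queries lets one pass cover every object instead of looping.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes for illustration only: one encoded frame and
# 16 object queries (the multiplexing limit quoted for SAM 3.1).
frame_features = rng.standard_normal((256, 64 * 64))  # frame encoded once
object_queries = rng.standard_normal((16, 256))       # one query per object

# Baseline: a separate pass per tracked object (redundant computation).
per_object = np.stack([q @ frame_features for q in object_queries])

# Multiplexed: all 16 objects handled in a single forward pass.
multiplexed = object_queries @ frame_features

# Same results, one pass instead of sixteen.
assert np.allclose(per_object, multiplexed)
print(multiplexed.shape)  # (16, 4096): one output map per object
```

The per-object loop and the batched product are mathematically identical; the speedup comes from amortizing the frame encoding and fusing sixteen small operations into one large one, which is exactly where GPU hardware is most efficient.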

Why It Matters

Real-time object tracking at 32 fps opens practical applications for video editors and VFX artists. Previous versions required dedicated processing per object, which meant tracking a busy scene with multiple subjects could drop below usable frame rates. SAM 3.1 addresses this by applying global reasoning across all tracked objects at once.

Meta is already deploying SAM 3.1 across its own products. Instagram's Edits app uses it for dynamic effects applied to specific people and objects in video. Meta AI Vibes builds new creation experiences on top of the tracking. Facebook Marketplace's View in Room feature uses it for home decor visualization. For single images with dense scenes, SAM 3.1 processes over 100 detected objects on an H200 GPU in roughly 30 milliseconds.
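A quick back-of-envelope check on the figures quoted above (using the article's stated numbers, not independent benchmarks) shows what they imply per object:

```python
# Arithmetic on the quoted performance figures (approximate).
video_fps = 32                                  # SAM 3.1 on a single H100
frame_budget_ms = 1000 / video_fps              # time available per frame
per_tracked_object_ms = frame_budget_ms / 16    # 16 objects multiplexed

dense_total_ms = 30                             # dense single image on H200
per_detected_object_ms = dense_total_ms / 100   # 100+ detected objects

print(round(frame_budget_ms, 2))         # 31.25 ms per video frame
print(round(per_tracked_object_ms, 2))   # ~1.95 ms per tracked object
print(round(per_detected_object_ms, 2))  # 0.3 ms per detected object
```

In other words, the dense-scene path amortizes to sub-millisecond cost per object, which is only plausible because detection for the whole scene shares a single pass rather than running per object.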

Key Details

  • Speed: 32 fps on a single H100 GPU (2x improvement over SAM 3)
  • Object multiplexing: Track up to 16 objects in one forward pass
  • Dense scenes: 100+ objects processed in ~30ms on H200
  • Open source: Model checkpoints, code, and datasets available on GitHub and HuggingFace
  • Drop-in upgrade: Works as a direct replacement for SAM 3 with no code changes

What to Do Next

Download the updated model from Meta's GitHub repository or HuggingFace. If you already use SAM 3 in your pipeline, SAM 3.1 is a direct swap with no API changes. Video editors working with multi-subject scenes will see the biggest gains. The Segment Anything Playground offers an interactive demo for testing before integration.