SAM 3.1

Meta's Segment Anything doubles video speed via object multiplexing

SAM 3.1 is Meta's latest update to the Segment Anything Model family, released March 27, 2026 as a drop-in replacement for SAM 3. The core innovation is object multiplexing: where the previous model required a separate processing pass for each tracked object, SAM 3.1 processes all tracked objects together in a single shared-memory pass, eliminating redundant computation across the decoder. The result is a doubling of throughput for videos with a medium number of objects (from 16 to 32 frames per second on a single H100 GPU) without sacrificing tracking accuracy. For applications such as sports analytics, surveillance, or video editing that track 5–20 objects simultaneously, this makes real-time deployment on commodity cloud hardware feasible for the first time.

SAM 3.1 inherits SAM 3's open-vocabulary segmentation capability (segmenting objects described by text prompts), which achieved 75–80% of human performance on the SA-CO benchmark covering 270K unique concepts. The model checkpoint is available on Hugging Face at `facebook/sam3.1`, and the codebase supports fine-tuning via the facebookresearch/sam3 repository. Meta released SAM 3.1 under a research license with commercial-use provisions similar to its predecessors.
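The claimed doubling is easy to sanity-check with a back-of-the-envelope latency model. The sketch below is illustrative only: the split between encoder time and decoder time is an assumption chosen so the numbers line up with the reported 16 → 32 fps figures at 10 tracked objects, not a measured profile of SAM 3.1.

```python
# Illustrative latency model for per-object vs. multiplexed decoding.
# All timings are assumptions picked to match the reported 16 -> 32 fps
# doubling at a "medium" object count; they are not measured numbers.

ENCODER_MS = 25.0          # shared image-encoder cost per frame (assumed)
DECODER_PER_OBJ_MS = 3.75  # one decoder pass per object, SAM 3 style (assumed)
DECODER_SHARED_MS = 6.25   # one multiplexed decoder pass, SAM 3.1 style (assumed)

def fps_per_object(n_objects: int) -> float:
    """SAM 3-style: decoder runs once per tracked object."""
    return 1000.0 / (ENCODER_MS + n_objects * DECODER_PER_OBJ_MS)

def fps_multiplexed(n_objects: int) -> float:
    """SAM 3.1-style: one shared pass regardless of object count."""
    return 1000.0 / (ENCODER_MS + DECODER_SHARED_MS)

if __name__ == "__main__":
    for n in (5, 10, 20):
        print(f"{n:>2} objects: {fps_per_object(n):5.1f} fps -> {fps_multiplexed(n):5.1f} fps")
```

Under these assumed timings, throughput at 10 objects goes from 16.0 to 32.0 fps, and the gap widens as the object count grows, since only the per-object path scales with `n_objects`.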

Panel Reviews

The Builder

Developer Perspective

Ship

The multiplexing change is a genuine architectural improvement, not just parameter tuning—processing all objects together means inference cost no longer scales linearly with object count. For video pipelines tracking 10+ objects this completely changes the cost calculus for real-time deployment.

The Skeptic

Reality Check

Skip

32 fps on a single H100 sounds impressive until you price H100 cloud time. The research license also creates uncertainty for commercial applications—Meta's licensing terms have quietly shifted in the past, and building a production pipeline on 'research license with commercial provisions' is asking for future legal headaches.

The Futurist

Big Picture

Ship

Segment Anything reaching real-time speeds on multi-object video unlocks an entire category of applications that were previously GPU-prohibitive: live sports analysis, real-time video editing, autonomous driving perception. SAM 3.1 is infrastructure for the next wave of vision applications.

The Creator

Content & Design

Ship

The open-vocabulary segmentation is what excites me most—being able to say 'segment the red jacket' rather than clicking a point means non-technical creative professionals can actually use this in video workflows. The speed improvement makes it viable in real-time editing tools.

Community Sentiment

Overall: 870 mentions (72% positive, 20% neutral, 8% negative)

Hacker News: 210 mentions ("2x speed improvement")

Reddit: 280 mentions ("real-time multi-object tracking")

Twitter/X: 380 mentions ("H100 throughput benchmarks")