TurboVec


2-4 bit vector compression that beats FAISS with zero training

TurboVec is an unofficial open-source implementation of Google's TurboQuant algorithm (ICLR 2026) for extreme vector compression, written in Rust with Python bindings via PyO3. It compresses high-dimensional vectors down to 2–4 bits per coordinate (up to a 15.8x compression ratio vs FP32 at 2 bits) with near-optimal distortion and no training step. The algorithm works in three stages: normalize each vector, apply a random rotation to smooth the data geometry, then run Lloyd-Max quantization with SIMD-accelerated bit-packing. Search runs directly against the codebook values, so no separate index structure has to be built. On ARM (Apple M3 Max), TurboVec matches or beats FAISS on query speed while using a fraction of the memory; at 4-bit compression it reports 0.955 recall@1 vs FAISS's 0.930. For anyone building RAG pipelines, semantic search, or memory systems for AI agents, this is one of the most efficient open-source vector quantization libraries available today. The zero-training property is especially valuable for production systems that need to index new content in real time without the expensive training phase (e.g., fitting k-means codebooks) that FAISS's quantizers require.
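The three stages above can be sketched in plain NumPy. This is a simplified illustration, not TurboVec's actual API: uniform scalar quantization stands in for Lloyd-Max, and the SIMD bit-packing step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(dim):
    # Random orthogonal matrix via QR decomposition of a Gaussian matrix.
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

def quantize(vectors, bits=4):
    """Normalize, rotate, then scalar-quantize each coordinate to `bits` bits."""
    # Stage 1: normalize to unit L2 norm.
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    # Stage 2: random rotation smooths the coordinate distribution.
    rot = random_rotation(v.shape[1])
    v = v @ rot
    # Stage 3: uniform scalar quantization (a stand-in for Lloyd-Max).
    levels = 2 ** bits
    lo, hi = v.min(), v.max()
    codes = np.round((v - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)
    return codes, (lo, hi, rot)

def dequantize(codes, params, bits=4):
    lo, hi, rot = params
    levels = 2 ** bits
    v = codes.astype(np.float32) / (levels - 1) * (hi - lo) + lo
    return v @ rot.T  # undo the rotation

x = rng.standard_normal((100, 64)).astype(np.float32)
codes, params = quantize(x)
approx = dequantize(codes, params)
unit = x / np.linalg.norm(x, axis=1, keepdims=True)
err = np.linalg.norm(unit - approx) / np.linalg.norm(unit)
print(f"relative reconstruction error at 4 bits: {err:.3f}")
```

In the real library, distances are computed directly against the quantized codes rather than a dequantized copy, which is where the memory and bandwidth savings come from.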

Panel Reviews

The Builder


Developer Perspective

Ship

Zero training time alone makes this worth evaluating for any production vector search system. If the recall and speed numbers hold up against FAISS on your own embedding space, switching could cut memory bills dramatically. The Python bindings make it a drop-in experiment.
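For a rough sense of the memory savings, here is a back-of-envelope comparison with illustrative numbers (10M vectors at 768 dimensions, chosen arbitrarily, not taken from the repo's benchmarks):

```python
# Raw storage for 10M 768-dim embeddings at FP32 vs 4-bit codes.
n, dim = 10_000_000, 768
fp32_gb = n * dim * 4 / 1e9        # 4 bytes per coordinate
four_bit_gb = n * dim * 0.5 / 1e9  # 4 bits = 0.5 bytes per coordinate
print(f"FP32: {fp32_gb:.1f} GB, 4-bit: {four_bit_gb:.1f} GB, "
      f"ratio: {fp32_gb / four_bit_gb:.0f}x")
# → FP32: 30.7 GB, 4-bit: 3.8 GB, ratio: 8x
```

Raw 4-bit codes give an 8x reduction; the quoted 15.8x figure corresponds to the 2-bit end of the range, with some per-vector overhead.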

The Skeptic


Reality Check

Skip

This is an unofficial implementation of an ICLR paper — there's no versioned release yet and the license isn't even specified. The benchmarks are self-reported on one specific hardware configuration (M3 Max). Real-world embedding distributions can behave very differently from benchmark datasets.

The Futurist


Big Picture

Ship

Long-context AI agents need massive vector memories. The bottleneck is always memory bandwidth and storage cost. TurboQuant-style compression — if it lands in mainstream vector DBs — could 10x the practical context length agents can afford to maintain.

The Creator


Content & Design

Skip

Interesting infrastructure work but not relevant for most creators unless you're building your own RAG pipeline. Wait for this to get packaged into Chroma, Weaviate, or Pinecone before worrying about it.

Community Sentiment

Overall: 540 mentions (60% positive, 28% neutral, 12% negative)
Hacker News: 220 mentions

Missing citations in Google's original blog post

Reddit: 130 mentions

Faster than FAISS with less memory

Twitter/X: 190 mentions

15x compression ratio