Mercury Coder Next Edit

Sub-100ms next-edit prediction for VS Code and JetBrains — powered by diffusion LLMs

Inception Labs launched Next Edit inside the Continue extension, bringing Mercury Coder's diffusion-based architecture to VS Code and JetBrains. Unlike autoregressive autocomplete, which generates left-to-right, Mercury predicts multi-line edits across your entire file simultaneously — deletions, additions, and structural changes at once. Common patterns it handles: converting callbacks to async/await, extracting functions, renaming variables across call sites, and cleaning up code smells. Latency is under 100ms, so suggestions appear before you finish thinking. The diffusion architecture ($0.25 per million input tokens, $1 per million output tokens) is 5-10x faster than comparable autoregressive models. Available via the Models Add-On in Continue.
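To make the callback-to-async/await pattern concrete: this is the kind of multi-line, whole-span rewrite a next-edit model proposes in one pass, rather than token by token. A minimal before/after sketch — the function names and stubs are hypothetical, not from Mercury or Continue:

```javascript
// Hypothetical data-access stubs so the example runs standalone.
function fetchUser(id, cb) { setImmediate(() => cb(null, { id, name: "Ada" })); }
function fetchOrders(userId, cb) { setImmediate(() => cb(null, [{ userId, total: 42 }])); }

// Promise-returning wrappers used by the rewritten version.
const fetchUserP = (id) => new Promise((res, rej) => fetchUser(id, (e, u) => e ? rej(e) : res(u)));
const fetchOrdersP = (uid) => new Promise((res, rej) => fetchOrders(uid, (e, o) => e ? rej(e) : res(o)));

// Before: nested callback style.
function loadUserCallback(id, cb) {
  fetchUser(id, (err, user) => {
    if (err) return cb(err);
    fetchOrders(user.id, (err2, orders) => {
      if (err2) return cb(err2);
      cb(null, { user, orders });
    });
  });
}

// After: the async/await form — every nesting level is rewritten at once,
// which is why a left-to-right autocomplete struggles with this edit.
async function loadUserAsync(id) {
  const user = await fetchUserP(id);
  const orders = await fetchOrdersP(user.id);
  return { user, orders };
}
```

Note the edit touches multiple non-adjacent lines (the signature, both error branches, and the return path), which is the structural case for predicting the whole diff simultaneously.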

Panel Reviews

The Builder

Developer Perspective

Ship

I've used next-edit features in other tools but the sub-100ms latency here is genuinely different — it's below my perception threshold, which means it doesn't break flow. The multi-line simultaneous edit understanding is real; it caught a refactor pattern I was about to manually do across 6 call sites.

The Skeptic

Reality Check

Skip

The benchmarks are impressive but 'trained on real edit sequences' is doing a lot of work here. Until I see how it handles domain-specific refactors in large codebases with complex type hierarchies, I'm skeptical it beats Cursor's native next-edit on anything beyond textbook patterns.

The Futurist

Big Picture

Skip

Applying diffusion LLMs to code editing is the most underrated architectural bet in AI tooling right now. Autoregressive generation was always the wrong primitive for editing — you don't write a diff token by token. Mercury's approach is structurally correct, and the speed numbers suggest it scales without compromise.

The Creator

Content & Design

Ship

Even for non-heavy-coders, the 'fix code smells' and 'rename across call sites' use cases are exactly the tedious tasks that make coding feel like work instead of creation. Sub-100ms means zero cognitive interrupt. This is the kind of AI assist that disappears into the background in a good way.

Community Sentiment

Overall: 1,560 mentions
70% positive · 20% neutral · 10% negative
Hacker News: 380 mentions

The diffusion architecture's latency advantage over autoregressive models is generating heated debate

Reddit: 460 mentions

Continue extension users excited about native integration without switching tools

Twitter/X: 720 mentions

Head-to-head comparisons with Cursor's Supercomplete dominating replies