Arcee Trinity-Large-Thinking

399B open-weight reasoning model, 13B active params, Apache 2.0

Arcee AI, a 30-person startup, has released Trinity-Large-Thinking, a 399B-parameter sparse mixture-of-experts reasoning model under Apache 2.0. Only 13B parameters activate per token, making inference 2-3x faster than comparable dense models. In internal benchmarks and early community testing, it ranks #2 on PinchBench, trailing only Anthropic's Opus 4.6, at a list price of $0.90/M output tokens, roughly 96% cheaper than frontier closed models.

The model was trained in a $20M, 33-day run on 2,048 NVIDIA Blackwell GPUs. Arcee used a constitutional-AI-style process with synthetic chain-of-thought data generated from multiple frontier models, followed by a reinforcement learning phase with outcome-based rewards on math, code, and logic benchmarks.

Trinity-Large-Thinking is the strongest open-weight reasoning model released to date on a commercially friendly license. For companies with privacy requirements or custom deployment needs, it is a credible alternative to frontier closed APIs, especially for code generation, mathematical reasoning, and structured-data tasks, where the gap between open and closed models has historically been widest.
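
The "399B total / 13B active" figure comes from sparse MoE routing: a gating network picks a small number of expert feed-forward blocks per token, so only their parameters run. Below is a minimal sketch of top-k expert routing; all sizes, names, and the top-k value are illustrative assumptions, not Arcee's actual architecture.

```python
# Illustrative sparse-MoE token routing: softmax gate, keep top_k experts,
# renormalize their weights. Expert count and top_k are made-up examples.
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route_token(gate_logits, top_k=2):
    """Return (expert_index, weight) pairs for the top_k experts of one token."""
    probs = softmax(gate_logits)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    chosen = ranked[:top_k]
    total = sum(probs[i] for i in chosen)
    return [(i, probs[i] / total) for i in chosen]

random.seed(0)
num_experts = 64  # hypothetical
logits = [random.gauss(0, 1) for _ in range(num_experts)]
assignment = route_token(logits, top_k=2)
# Only 2 of 64 expert blocks execute for this token; with equal-sized
# experts that is ~3% of expert parameters active, which is how a 399B
# model can run with only ~13B parameters per token.
```

The routing itself is cheap (one small matmul plus a top-k); the savings come from skipping the untouched experts' feed-forward computation entirely.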

Panel Reviews

The Builder

Developer Perspective

Ship

A #2 benchmark result from a 30-person startup under Apache 2.0 is legitimately shocking. The sparse MoE architecture means you can run 399B at a reasonable cost — and $0.90/M output is almost too cheap to believe for this performance tier. This is going in our eval suite immediately.

The Skeptic

Reality Check

Skip

Benchmark numbers from the releasing company always look better than real-world deployment. PinchBench is also relatively new and the community hasn't stress-tested whether it correlates with production quality. Wait for independent evals before betting a product on this.

The Futurist

Big Picture

Ship

This is the model that closes the open vs. closed frontier gap. When a 30-person startup can train a near-frontier reasoner for $20M on a commercial license, the economics of AI completely change. Enterprises that couldn't afford frontier APIs will rebuild their stacks around self-hosted models like this.

The Creator

Content & Design

Ship

For long-form creative work requiring multi-step reasoning, such as worldbuilding, complex narrative planning, and detailed research synthesis, a 399B model at this price point is transformative. The always-on chain-of-thought design means it actually shows its reasoning, which helps when I need to redirect it mid-task.

Community Sentiment

Overall: 1,890 mentions
78% positive · 16% neutral · 6% negative

Hacker News: 420 mentions

Apache 2.0 license and 96% cost reduction vs frontier closed models dominating discussion

Reddit: 580 mentions

r/LocalLLaMA exploding with benchmark comparisons and 'Arcee just changed the game' takes

Twitter/X: 890 mentions

AI researchers sharing PinchBench scores and praising the $0.90/M pricing