Sup AI

Runs 339 LLMs in parallel and downweights the hallucinating ones.

Sup AI is an ensemble AI assistant that runs your query through 339 language models simultaneously, measures per-segment confidence across all responses, and synthesizes a final answer that amplifies agreement and suppresses likely hallucinations. The team claims a 52.15% score on Humanity's Last Exam (HLE), 7.41 percentage points above the single best model, which, if verified, would make it the highest-scoring system on the benchmark to date.

The underlying mechanism works like an LLM panel: each model votes on sub-claims within the response, confidence is estimated by agreement density, and the final output surfaces high-confidence segments while flagging uncertain ones. It's designed to reduce hallucination rates on factual tasks, not to improve reasoning per se: the models in the ensemble aren't doing collaborative chain-of-thought; they're voting on outputs.

Sup AI was built by Ken Mueller (Stanford, CEO) and Scott Mueller (AI Research Scientist) and launched on Product Hunt today. Pricing starts with $10 in free credits and no auto-charge, though a credit card is required to start. The HLE benchmark claim is the headline and will face scrutiny: if verified, this is a meaningful research result; if cherry-picked, Sup AI is still a usable product with a differentiated architecture.
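To make the voting mechanism concrete, here is a minimal sketch of agreement-density voting. This is an illustration of the general technique, not Sup AI's actual implementation: it assumes each model's response has already been decomposed into normalized claim strings (a hard problem in practice), counts how many models assert each claim, and keeps claims whose agreement fraction clears a threshold while flagging the rest.

```python
from collections import Counter

def ensemble_answer(model_outputs, threshold=0.6):
    """Sketch of agreement-density voting across an ensemble.

    model_outputs: one list of claim strings per model (assumed
    already normalized so identical claims compare equal).
    Returns (confident_claims, flagged_claims).
    """
    n = len(model_outputs)
    votes = Counter()
    for claims in model_outputs:
        # set() ensures each model casts at most one vote per claim
        for claim in set(claims):
            votes[claim] += 1
    confident, flagged = [], []
    for claim, count in votes.items():
        # agreement density = fraction of models asserting the claim
        if count / n >= threshold:
            confident.append(claim)
        else:
            flagged.append(claim)
    return confident, flagged
```

With three models where two assert claim B and all three assert claim A, A and B survive a 0.6 threshold while a claim asserted by only one model is flagged as a likely hallucination.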

Panel Reviews

The Builder

Developer Perspective

Ship

The HLE claim needs independent verification, but the underlying ensemble approach is architecturally sound for factual Q&A tasks. Running 339 models is expensive — pricing will be the gating factor for production use. The $10 free credit is a fair trial.

The Skeptic

Reality Check

Skip

Extraordinary claims require extraordinary evidence. A 7.41 point jump on HLE via ensembling — without publishing methodology — smells like benchmark gaming. The latency of running 339 models in parallel is also a real concern for anything other than async research tasks.

The Futurist

Big Picture

Ship

Model ensembling is an underexplored direction in the race to reduce hallucination. If Sup AI's approach scales, it could be more durable than fine-tuning individual models — you get the wisdom of the crowd across model families, training data, and architectures simultaneously.

The Creator

Content & Design

Skip

For creative work, ensemble outputs tend to regress toward the mean — you get the most-agreed-upon version of something, which is usually the least interesting version. This is a tool for factual accuracy, not creativity. I'd stick with a single strong model for writing.

Community Sentiment

Overall: 340 mentions
62% positive · 25% neutral · 13% negative
Product Hunt: 60 mentions

HLE benchmark score legitimacy

Reddit: 80 mentions

Latency of 339 parallel model calls

Twitter/X: 200 mentions

Anti-hallucination ensemble claim