Sup AI
Confidence-weighted AI ensemble that topped Humanity's Last Exam
Sup AI uses a confidence-weighted ensemble of multiple AI models to answer hard questions. Each model rates its own confidence, and the system aggregates responses weighted by those ratings. It scored 52.15% on the Humanity's Last Exam benchmark, outperforming the individual models.
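Sup AI's exact aggregation method is not public; as a rough illustration of the general idea, a confidence-weighted vote can be sketched like this (the `aggregate` function and the response format are assumptions, not Sup AI's API):

```python
# Hypothetical sketch of confidence-weighted answer aggregation.
# Assumption: each model returns an (answer, confidence) pair,
# with confidence in [0, 1]. The ensemble sums confidence per
# distinct answer and returns the answer with the highest total.
from collections import defaultdict

def aggregate(responses):
    """responses: list of (answer, confidence) pairs from individual models."""
    scores = defaultdict(float)
    for answer, confidence in responses:
        scores[answer] += confidence
    # Pick the answer with the greatest total confidence mass.
    return max(scores, key=scores.get)

# Two models agree with moderate confidence; one disagrees with
# high confidence. The agreeing pair wins: 0.6 + 0.5 > 0.9.
print(aggregate([("A", 0.6), ("A", 0.5), ("B", 0.9)]))  # -> A
```

The point of weighting by self-reported confidence rather than taking a plain majority vote is that a single highly confident model can still override two hesitant ones when their combined confidence is lower.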
Panel Reviews
The Futurist
Big Picture
“Confidence-weighted ensembling is the quiet breakthrough everyone is sleeping on. Individual models plateau — but smart aggregation keeps pushing the frontier. Sup AI scoring 52% on Humanity's Last Exam when no single model breaks 40% proves the thesis.”
The Skeptic
Reality Check
“The benchmark result is legitimately impressive and the methodology is transparent. My concern is latency — querying multiple models and aggregating adds significant time. For research and high-stakes questions it is worth the wait. For everyday chat it is overkill.”
The Builder
Developer Perspective
“No API, no self-hosting option, and the ensemble approach means your per-query cost is 3-5x a single model call. The benchmark numbers are compelling but I cannot integrate this into a product. Ship an API and I will reconsider.”