GLM-5.1

#1 on SWE-Bench Pro — 744B MoE model that runs autonomously for 8 hours

GLM-5.1 is Z.AI's post-training upgrade of the 744B-parameter Mixture-of-Experts GLM-5 model, and it has just claimed the top spot on SWE-Bench Pro with a score of 58.4, beating GPT-5.4 (57.7), Claude Opus 4.6 (57.3), and Gemini 3.1 Pro (54.2). The model is built for long-horizon agentic tasks and can run autonomously for up to 8 hours across thousands of iterations on a single problem. Its agentic toolkit includes extended context retention, tool calling with recovery loops, and a reinforcement-trained "persistence" mode that keeps the model on task through failures and dead ends rather than surfacing errors to the user.

The model was trained entirely on Huawei Ascend 910B chips using the MindSpore framework: no US silicon, no CUDA. That geopolitical dimension is as significant as the technical one, because GLM-5.1 is direct evidence that US export controls on Nvidia hardware have not meaningfully slowed China's frontier model development.

The 8-hour autonomous execution window is also a step change from current agentic systems, which struggle past 20-30 minutes of coherent work. If the benchmark result holds up in real-world testing, it genuinely expands the class of problems AI agents can solve independently.
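
Z.AI has not published GLM-5.1's agent internals, so here is a minimal sketch, in Python, of the persistence pattern described above: a time-budgeted loop that feeds tool failures back into the transcript and retries or re-plans instead of raising them to the user. Every name and threshold in it (run_tool, next_action, MAX_CONSECUTIVE_FAILURES) is a hypothetical stand-in, not the model's actual machinery.

    import time
    import random

    # Hypothetical illustration only: GLM-5.1's real agent loop is not public.
    MAX_WALL_CLOCK_SECONDS = 8 * 60 * 60  # the claimed 8-hour autonomy window
    MAX_CONSECUTIVE_FAILURES = 5          # assumed cutoff before re-planning

    def run_tool(action: str) -> str:
        """Stand-in for a real tool call (shell, editor, test runner)."""
        if random.random() < 0.3:  # simulate flaky tools and dead ends
            raise RuntimeError(f"tool failed on: {action}")
        return f"ok: {action}"

    def next_action(history: list[str]) -> str | None:
        """Stand-in for the model proposing its next step from the transcript."""
        steps = ["reproduce bug", "patch file", "run tests"]
        done = sum(h.startswith("ok:") for h in history)
        return steps[done] if done < len(steps) else None  # None = task solved

    def persistent_agent_loop() -> list[str]:
        history: list[str] = []
        failures = 0
        deadline = time.monotonic() + MAX_WALL_CLOCK_SECONDS
        while time.monotonic() < deadline:
            action = next_action(history)
            if action is None:
                break  # the model judges the task complete
            try:
                history.append(run_tool(action))
                failures = 0
            except RuntimeError as err:
                # Recovery loop: record the failure in context and keep going
                # rather than surfacing the error to the user.
                failures += 1
                history.append(f"error: {err}")
                if failures >= MAX_CONSECUTIVE_FAILURES:
                    history.append("re-plan: abandoning current approach")
                    failures = 0
        return history

    if __name__ == "__main__":
        for entry in persistent_agent_loop():
            print(entry)

The design choice that matters is that errors become context for the next step rather than terminal states; that is what would let a loop like this grind through dead ends for hours.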

Panel Reviews

The Builder

Developer Perspective

Ship

If the 8-hour autonomous execution claim is real and not cherry-picked, this changes the calculus for using AI on genuinely hard engineering problems. SWE-Bench Pro #1 is also a credible metric — I want to test this on my own repos immediately.

The Skeptic

Reality Check

Skip

SWE-Bench benchmarks have historically shown poor correlation with real-world coding productivity, and the "8-hour autonomous" claim needs independent validation. Z.AI is also a relatively unknown quantity compared to Anthropic or Google; API reliability and pricing are completely unproven.

The Futurist

Big Picture

Ship

The strategic significance of a Chinese lab hitting #1 on the coding benchmark using zero US hardware cannot be overstated. The export control strategy is demonstrably not working as intended, and GLM-5.1 will accelerate the geopolitical AI arms race in ways that reshape the entire industry.

The Creator

Content & Design

Skip

For creative work, I need a model with strong multimodal capabilities and reliable API access — both unproven for GLM-5.1. The coding benchmark lead is impressive but not directly relevant to my workflows. I'll wait for independent reviews before switching.

Community Sentiment

Overall: 1,027 mentions
66% positive, 20% neutral, 14% negative
Hacker News: 267 mentions

Trained on Huawei chips with no US silicon — export controls aren't working

Reddit: 340 mentions

SWE-Bench Pro #1 is huge — need to test this on real repos

Twitter/X: 420 mentions

8-hour autonomous execution is the stat everyone is talking about