MDArena


Benchmark your CLAUDE.md files against real PRs to see if they actually help

MDArena is an open-source benchmarking tool that answers a question every Claude Code user eventually asks: do my CLAUDE.md context files actually improve agent performance, or am I just adding tokens? It mines merged PRs from your repository, strips or injects context files, runs your actual test suite, and measures success rates with statistical significance tests. The methodology mirrors SWE-bench: it uses `git archive` to create history-free checkpoints so agents can't peek at future commits, automatically detects test commands from CI/CD configs, and runs paired t-tests to determine whether differences are real or noise. The project was motivated by academic research showing that many CLAUDE.md files reduce agent success rates by 20% while consuming more tokens. For any team investing heavily in Claude Code infrastructure, MDArena provides empirical feedback that most developers currently lack. It's a small, focused tool that solves an annoying but real problem in the emerging AI coding workflow.
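To make the statistics step concrete: once each merged PR has been replayed under both conditions (with and without CLAUDE.md), a paired t-test on the per-PR differences tells you whether the gap is signal or noise. The sketch below is a minimal stdlib-only illustration of that test; the variable names, sample data, and function are hypothetical, not MDArena's actual code.

```python
import math
from statistics import mean, stdev

def paired_t_test(with_ctx, without_ctx):
    """Paired t-test over per-PR success rates for two conditions.

    Returns the t statistic and degrees of freedom; compare |t| against
    a t-distribution critical value (or compute a p-value) to decide
    whether the context file made a statistically significant difference.
    """
    assert len(with_ctx) == len(without_ctx), "pairing requires equal-length samples"
    diffs = [a - b for a, b in zip(with_ctx, without_ctx)]
    n = len(diffs)
    d_bar = mean(diffs)               # mean per-PR improvement
    sd = stdev(diffs)                 # sample std dev of the differences
    t = d_bar / (sd / math.sqrt(n))   # paired t statistic
    return t, n - 1

# Hypothetical per-PR success rates (illustrative only, not real data):
with_claude_md = [0.8, 0.7, 0.9, 0.6, 0.8, 0.7]
without_claude_md = [0.6, 0.7, 0.7, 0.5, 0.6, 0.6]
t_stat, dof = paired_t_test(with_claude_md, without_claude_md)
print(f"t = {t_stat:.2f} with {dof} degrees of freedom")
```

Pairing matters here: each PR is its own control, so per-PR difficulty cancels out and the test needs far fewer samples than comparing two independent success-rate averages would.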

Panel Reviews

The Builder


Developer Perspective

Ship

I've spent real time crafting CLAUDE.md files with no way to know if they help. A tool that uses my actual test suite against real PRs to measure context file effectiveness is exactly the feedback loop I've been missing. The `git archive` anti-cheat approach shows this was built by someone who's thought carefully about methodology.

The Skeptic


Reality Check

Skip

Benchmarking on merged PRs is circular: the agent is tested only on tasks humans already solved, which may not reflect the distribution of tasks you actually need it for. Statistical significance on your codebase's PR history also doesn't generalize — what works in one repo can vary wildly in another. Interesting research tool, limited practical signal.

The Futurist


Big Picture

Ship

Context engineering is becoming a real discipline as AI coding agents proliferate, and right now it's entirely vibes-based. MDArena represents the first step toward empirical context optimization — within two years, running something like this before shipping an agent configuration will be standard practice.

The Creator


Content & Design

Skip

The audience here is squarely developer teams with established test suites and PR histories — not a tool for creators or smaller codebases without CI/CD. The value proposition is real, but only lands for teams already deep in Claude Code infrastructure.

Community Sentiment

Overall: 470 mentions
68% positive · 23% neutral · 9% negative

Hacker News: 160 mentions

Context files hurting rather than helping agent performance

Reddit: 110 mentions

Scientific approach to CLAUDE.md optimization

Twitter/X: 200 mentions

Context engineering as an emerging discipline