Mercor Hit by Cyberattack via Compromised LiteLLM Package
AI recruiting startup Mercor confirmed a data breach after attackers exploited a compromised version of the open source LiteLLM package, a notable supply chain attack targeting the AI developer toolchain. An extortion group reportedly stole user data via the poisoned dependency.
AI recruiting startup Mercor has confirmed it was the victim of a cyberattack in which hackers exfiltrated data from its systems by exploiting a compromised version of LiteLLM, a popular open source library used to interface with large language model APIs. The attack is attributed to an extortion-focused hacking crew that leveraged a supply chain compromise: the malicious code was embedded in the dependency itself, not in Mercor's own codebase.
LiteLLM is widely used across the AI development ecosystem as a unified interface for calling models from OpenAI, Anthropic, Google, and others. A compromise at that layer is particularly dangerous because it sits upstream of many applications, meaning a single poisoned package can affect dozens or hundreds of downstream products and services simultaneously. The incident underscores how the rapid adoption of AI tooling has outpaced security scrutiny of the open source components powering it.
Mercor, which uses AI to match job candidates with employers and has processed significant volumes of applicant data, has not disclosed the full scope of what was stolen. The extortion crew's involvement suggests stolen data may be used as leverage, raising concerns about the sensitivity of the recruitment and personal information potentially exposed. The company says it is notifying affected users and working with security researchers to assess the damage.
This incident joins a growing list of supply chain attacks targeting developer infrastructure — from the SolarWinds breach to the xz Utils backdoor — but signals a new front specifically within the AI toolchain. As AI startups increasingly rely on shared open source middleware to accelerate development, the security posture of those dependencies becomes a critical and often underexamined attack surface.
Panel Takes
The Builder
Developer Perspective
“This is the xz Utils moment for the AI toolchain, and it was only a matter of time. LiteLLM is embedded in so many AI stacks that a compromise there is essentially a master key — if you're pulling it in without pinning versions or auditing your dependency tree, you're exposed. Every team shipping AI products needs to treat open source LLM middleware with the same scrutiny as any other critical infrastructure dependency.”
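The pinning and auditing the Builder describes can be sketched for a Python project. This is a minimal illustration, not a remediation guide: the version number below is illustrative rather than a known-safe release, and it assumes the widely used `pip-tools` (`pip-compile`) and `pip-audit` utilities for hash-locking and vulnerability scanning.

```shell
# Pin LiteLLM to an exact, reviewed version instead of a floating range.
# (1.44.0 is an illustrative version number, not a vetted release.)
printf 'litellm==1.44.0\n' > requirements.in

# With pip-tools installed, resolve the full dependency tree and record
# package hashes, so a tampered re-upload of the same version fails to install:
#   pip-compile --generate-hashes requirements.in -o requirements.txt
# Then scan the resolved tree against known-vulnerability databases:
#   pip-audit -r requirements.txt

# Confirm the constraint is an exact pin (== rather than >= or unconstrained):
grep -E '^litellm==' requirements.in
```

Hash-checking matters here because a supply chain attacker who republishes a package under the same version number produces a different artifact hash, which a `--require-hashes` install will reject.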
The Skeptic
Reality Check
“AI companies have been racing to ship products built on a pile of unvetted open source glue, and this is exactly the predictable result. Mercor is handling sensitive recruitment and personal data — this isn't a toy app — and yet a third-party package compromise was enough to bring the whole thing down. The AI industry's "move fast" culture has a serious security debt coming due, and users are the ones who pay it.”
The Futurist
Big Picture
“The attack surface for AI systems isn't just the models — it's the entire middleware stack that stitches them together, and that stack is largely open source and under-resourced. As AI becomes load-bearing infrastructure for hiring, healthcare, and finance, supply chain security needs to be a first-class concern at the regulatory and investment level, not an afterthought. Incidents like this will accelerate calls for software bills of materials (SBOMs) and mandatory dependency audits for AI products handling sensitive data.”
The Creator
Content & Design
“From a trust and brand perspective, this is brutal for Mercor — recruitment platforms live and die on the confidence that sensitive career data is handled with care. The story here isn't just a technical breach; it's a reminder that the invisible plumbing behind AI products has very real consequences for the humans whose data flows through it. Companies need to communicate their dependency risks as clearly as their feature sets.”