The Decoder · Policy · 2026-04-06

The New York Times Drops Freelancer After AI Tool Copied an Existing Book Review

A New York Times freelancer was dropped after their AI writing tool generated a book review that closely copied a previously published piece — apparently without the writer's knowledge. The incident highlights the growing gap between AI tools' tendency to reproduce training data and publishers' zero-tolerance policies for plagiarism.


The New York Times has dropped a freelance contributor after their AI-assisted writing tool copied substantial text from an existing book review, producing a submission that passed initial human review before the duplication was discovered. The writer, whose name has not been publicly released, said the output was generated by an AI writing assistant they used for drafts, and that they did not knowingly submit plagiarized work.

The incident adds to a growing body of cases where AI writing tools — particularly those fine-tuned on books, articles, and published criticism — regurgitate recognizable phrases, arguments, and sentences from their training data. Unlike factual hallucination (making things up), this type of failure is harder to detect: the text is accurate, stylistically plausible, and cohesive. It's plagiarism that reads like original work.

The Times has not publicly updated its AI policy in response to the incident, but the paper has previously stated that AI-generated content must be disclosed to editors and that factual verification remains the responsibility of the writer. In practice, this case reveals the limits of those policies: a writer using AI for drafts who doesn't know the tool copied from an existing source cannot disclose what they don't know.

For the freelance journalism community, the stakes are particularly high. Staff writers have institutional support for error correction; freelancers bear full liability for what they submit. Several other publications, including The Atlantic and The Guardian, have issued updated AI writing guidelines in recent weeks, with most landing on "disclose use and verify everything" — policies that are reasonable in principle but difficult to enforce when the copying isn't obvious.

The incident is likely to accelerate adoption of AI plagiarism detection tools specifically tuned for AI-generated content, a category that's distinct from traditional plagiarism detection. Tools like Copyleaks and Originality.ai are seeing record demand as publishers realize that checking for human plagiarism doesn't catch AI regurgitation.

Panel Takes

The Builder

Developer Perspective

This is a retrieval failure, not an alignment failure — the model is surfacing memorized training data without flagging it as such. The technical fix exists (output filtering, similarity checks) but it requires AI tool vendors to prioritize it. Expect lawsuits to accelerate that timeline.
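The similarity checks the Builder mentions can be as simple as comparing word n-grams between a model's output and a corpus of known published text. A minimal sketch (the function names, the 5-gram window, and the 0.3 threshold are illustrative assumptions, not any vendor's actual implementation):

```python
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Split text into a set of lowercase word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate: str, source: str, n: int = 5) -> float:
    """Fraction of the candidate's n-grams that also appear in the source.
    Long shared runs of 5+ words are rarely coincidental."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

def flag_output(generated: str, corpus: list[str], threshold: float = 0.3) -> bool:
    """Flag generated text that shares too many n-grams with any known source."""
    return any(overlap_score(generated, doc) >= threshold for doc in corpus)
```

A production system would compare against an index of millions of documents (via hashing or embeddings) rather than looping over raw strings, but the principle is the same: filter memorized passages before they reach the user.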

The Skeptic

Reality Check

The freelancer bears responsibility for what they submit regardless of what tool they used — that has always been true for research assistants and ghostwriters. The real story is that AI writing tools are being sold as productivity upgrades while quietly creating legal liability for the humans who use them.

The Futurist

Big Picture

This signals an inflection point for AI writing tools: the providers who ship plagiarism detection as a first-class feature will capture the professional market, and those who don't will be liability grenades for their users. The incumbents who move fastest here will own newsrooms.