Article
Fraud now arrives from every direction: from students buying essays to researchers buying papers. Fake studies are produced at an industrial scale. These so-called “paper mills” – companies that sell fabricated or manipulated papers – have grown quickly. A study in Proceedings of the National Academy of Sciences (PNAS) shows that papers linked to these mills have been doubling roughly every 1.5 years, far faster than the scientific literature as a whole. The same study finds that suspected paper-mill papers already exceed annual retractions and may soon outnumber the papers scientists flag as suspicious.
In short, the growth of fraudulent work is outpacing the systems meant to catch it.
Some investigations describe these mills as “cartel-like” operations that supply fabricated studies to major publishers’ journals. Wiley’s Hindawi division has retracted more than 11,300 papers linked to such mills. The damage was severe: Wiley closed the Hindawi brand, reported an $18 million revenue hit, and said the clean-up would cost tens of millions. Journals affiliated with Elsevier, SAGE and others have pulled hundreds more paper mill articles, underscoring how deeply these networks can penetrate the academic record.
India Research Watch, an academic integrity watchdog, says China, India, Pakistan and, more recently, Iraq are where it sees the most organised misconduct. The group’s founder, Achal Agrawal, says high retraction counts often reflect the pressure on academics to “publish or perish” and oversight that cannot keep up.
“Some universities have a high number of retractions, and it shows the scale of the misconduct,” he says. Some retractions are benign, the result of error rather than misconduct. But about 67 per cent involve wrongdoing, according to a study in PNAS.
Still, raw totals can mislead. China, for instance, has more than 3,000 universities – far more than Europe – so higher counts are inevitable. As Elena Denisova-Schmidt of the University of St. Gallen (HSG) notes, Chinese researchers are “highly conscious of how the country’s research quality is perceived internationally” and take allegations seriously.
Paper mills churn out material that often contains fabricated data, duplicated images and plagiarised content.
They can even manipulate peer review – the process by which journals send papers to independent experts for scrutiny before publication. They do this by supplying fake reviewer contacts, allowing them to file favourable reviews and push the paper through the system.
They also sell authorship slots, letting people add their names to work they did not do.
And the tools for fakery are easy to find: basic image-editing software makes it simple to alter or duplicate figures, helping weak or fabricated results look credible. But authorship itself can also be manipulated, even without paper mills. As Denisova-Schmidt notes, supervisors or senior colleagues are sometimes added as authors “with or without their knowledge,” while juniors may feel pressured to include powerful figures on papers. In other cases, she says, names are added merely as favours or out of deference, regardless of contribution.
Artificial intelligence has pushed this further, making the line between proper use and misconduct far less clear. That is especially evident in the classroom. “AI is the single biggest cause of change” in student behaviour, says Thomas Lancaster, an expert in academic integrity at Imperial College London. “The barrier between what’s acceptable and what isn’t is thinner than it’s ever been,” he adds, with some students breaking rules without realising it.
Contract cheating – where students pay others to complete their work – has grown more organised. Nick Watmough of the UK’s Quality Assurance Agency, the sector’s key integrity body, says there are now “reports of essay-mill activities being part of international criminal enterprises”. These groups target students through WhatsApp channels and use generative AI to produce assignments. Watmough says some students are then blackmailed after buying work from these operators.
Global rankings such as Times Higher Education and QS place heavy weight on research performance, including output, citations and research reputation – which can push universities to prioritise publishing more studies. “Universities care a lot about the rankings because they affect the number of students getting in, and the price they can charge. It’s their bottom line,” says Agrawal, adding this feeds directly into behaviour on the ground.
Denisova-Schmidt warns that this reliance on rankings can backfire, invoking Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.”
The result is rising output, uneven quality and a system in which publication counts and metrics such as the H-index matter a great deal in how researchers are judged. In that environment, misconduct becomes part of the everyday economics of publish-or-perish. “The root cause is actually the flawed incentive system … we need new metrics to take this into account,” Agrawal says.
But incentives are not the only factor.
A lack of staff time is also fuelling the rise in misconduct. “Staff themselves haven’t had the time to understand what teaching and assessment means in an AI-first world,” Lancaster says, leaving many unsure how to design assessments that are less vulnerable to misuse. Detection tools are unreliable, he notes, so universities struggle to prevent problems when they cannot confidently identify them.
Misconduct exposes universities to wider risks. Watmough warns that it threatens “the integrity of academic standards”. If it becomes common, he says, “the reputation of the value of learning and qualifications” is at stake – and employers may question what those qualifications really mean.
Inside universities, enforcement is uneven. Agrawal says the main problem is not detection but willingness to act. “Most universities profit from having a higher volume of publications, so if they clamp down on retractions, their researchers will publish less,” he explains.
Institutions can also be reluctant to acknowledge problems publicly, and when they do investigate, the outcomes are often uneven. “Most of the time it’s the students who get thrown under the bus,” says Agrawal – referring to junior researchers who often shoulder the blame while senior academics can face few consequences.
The risk grows when misconduct turns into misrepresentation. “The problem of fraud occurs much more when a student or graduate deliberately misrepresents their qualifications or abilities to attempt to get a job,” Lancaster says, and that can undermine employer trust.
Attempts to tackle the problem are taking shape, but progress remains uneven. Major publishers are tightening their screening of new submissions, using AI tools that can flag suspicious text or manipulated images before papers reach peer review. A small community of academic sleuths also plays a role, flagging suspect papers and spotting duplicated images or fabricated data.
What happens in the classroom also matters. Misconduct often falls when the work is meaningful and hard to outsource. “Assessment design, actively engaging students and setting assignments they feel are genuinely useful … that’s the single most effective way to reduce misconduct,” says Lancaster.
Agrawal says punitive measures alone will not fix the problem. “Retraction penalties … treat the symptom of the problem. The root cause is actually the flawed incentive system,” he says. He argues the system needs more meaningful measures of performance than publication and citation counts or the H-index; reducing the weight of self-citations in performance reviews would be a start.
A declaration last year from scientists and journal editors at the Royal Swedish Academy of Sciences made similar points. It urged universities to stop rewarding publication volume and focus instead on research quality. It also called for sanctions on institutions that drive output pressure or overlook misconduct, and for greater openness in how research is reviewed and published.
The direction of travel is clear: without changes to how research is assessed and rewarded, and how openly those decisions are made, attempts to strengthen integrity will struggle to gain traction.
Seb Murray is a journalist and consulting editor who writes for, among others, The Times, The Guardian, The Economist and The Financial Times.
Image: Adobe Stock / Michael Wolf

Book
This Handbook provides an overview of corruption within the context of higher education. Through a variety of international case studies, theoretical frameworks and methodologies, it examines the underlying issues involved in corruption as well as the damaging impact on scholarly cultures and the academic enterprise.

Article
In her article, Roohi Mariam Peter explains how commercial "paper mills" are driving a fast‑growing wave of scientific fraud by selling fabricated research papers to paying authors, undermining journals and peer review.