From the low-quality output of paper mills to increasingly convincing content generated by artificial intelligence, peer reviewers are being inundated with questionable research manuscripts. A growing number of AI tools can detect fraudulent elements in papers, but they can be expensive to use. Such tools are probably better deployed by journal publishers rather than individual reviewers, says Elisabeth Bik, a science-integrity consultant in San Francisco, California, especially because feeding unpublished content into AI tools can compromise confidentiality and is generally frowned on during peer review.
The good news is that recognizing a problematic manuscript is “way easier than you would believe”, says Reese Richardson, a metascientist at Northwestern University in Evanston, Illinois. But the work is time-consuming, especially for beginners, he says.
“We know the fraudsters are going to [commit] fraud,” Bik says. By teaching people how to spot bad papers, however, “we’re gonna make it a little bit harder for them”.
Whether you’re a peer reviewer, an aspiring science sleuth or just reviewing the literature for your own research, here are five strategies to identify potentially untrustworthy papers.
Check the references
Yagmur Ozturk begins vetting a paper by flipping straight to the references. “To me, that’s the most important part,” says Ozturk, a computer scientist and integrity sleuth at the European Research Council, who is based in Grenoble, France.
For example, if several references seem unrelated to the research in the article, it could indicate that the author is being paid to include those citations, or that a paper mill (a business that produces fake or low-quality research articles) is citing its own papers, says Ozturk. Or the citations could be fake — a consequence of fabricated responses, known as hallucinations, from large language models or attempts to evade plagiarism detectors. Ozturk recalls seeing the phrase ‘1 others’ tacked onto the end of an author list in one citation, possibly an AI approximation of the Latin et al. that is often used to shorten long author lists.

Yagmur Ozturk says the reference section of a paper can often reveal whether the article is a fake. Credit: Andrew Vuth
Problematic papers also tend to cite other problematic publications — ones that have been retracted or flagged on PubPeer (a website on which readers can discuss papers with suspected scientific integrity issues), says Richardson. And even when bad actors do cite reputable research, they will often misrepresent findings or link the citation to an irrelevant statement. “The only way to catch them is by looking,” Richardson says. “That’s the reason why we have these citations in papers, so that you can trace the provenance of claims.”
Although it might not be realistic to check every single reference, it’s good practice to verify some of the most important claims in a paper, he suggests.
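Spot-checking references is easier when you can list their DOIs and resolve each one at doi.org. Here is a minimal Python sketch of that first step; the regular expression is the pattern Crossref recommends for matching modern DOIs, and the reference text is invented for illustration:

```python
import re

# Crossref's recommended pattern for modern DOIs (matched case-insensitively).
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+", re.IGNORECASE)

def extract_dois(reference_text: str) -> list[str]:
    """Pull DOI-like strings out of a reference list, trimming trailing punctuation."""
    return [m.group(0).rstrip(".,;") for m in DOI_PATTERN.finditer(reference_text)]

# Hypothetical reference list, not real citations:
refs = """
1. Smith, J. et al. A study. J. Hypothetical Res. (2020). doi:10.1234/jhr.2020.001.
2. A reference with no DOI at all.
"""
print(extract_dois(refs))  # ['10.1234/jhr.2020.001']
```

A reference whose DOI does not resolve, or resolves to a paper with a different title, is worth a closer look.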
Luckily, there are tools that can speed up the process. The open-source reference manager Zotero has a plug-in that checks files for articles in the Retraction Watch database, and PubPeer offers a browser extension that alerts the user when any paper mentioned on a website has comments on PubPeer.
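The same check can be scripted. The sketch below assumes you have a set of retracted DOIs to compare against (the Retraction Watch data are distributed through Crossref); the DOIs shown are invented examples, not real retractions. Because DOIs are case-insensitive and often pasted with URL prefixes, the comparison normalizes both sides first:

```python
def normalize_doi(doi: str) -> str:
    """DOIs are case-insensitive; strip common URL prefixes and lowercase."""
    doi = doi.strip().lower()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return doi

def flag_retracted(reference_dois, retracted_dois):
    """Return the cited DOIs that appear in a set of retracted DOIs."""
    retracted = {normalize_doi(d) for d in retracted_dois}
    return [d for d in reference_dois if normalize_doi(d) in retracted]

# Hypothetical example data:
refs = ["https://doi.org/10.1234/abc.123", "10.5678/def.456"]
retractions = {"10.1234/ABC.123"}
print(flag_retracted(refs, retractions))  # ['https://doi.org/10.1234/abc.123']
```

A hit does not prove the citing paper is fraudulent, but a reference list leaning on retracted work is a strong signal to dig further.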
Check authors and affiliations
The authors and affiliations section of an article can also reveal when something is amiss.
“I have seen papers that pretended to be submitted by people from my alma mater,” says integrity sleuth Solal Pirelli, who graduated from the Swiss Federal Institute of Technology in Lausanne (EPFL), where he is now also a software engineer. After receiving a “dodgy-looking call for papers” from what seemed to be a predatory journal, he found an article authored by a researcher at EPFL on the journal’s website. But he could find no evidence that someone by that name had ever worked at the institute, he says, and the affiliated department doesn’t exist. The paper’s purported association with EPFL was probably an attempt to make the journal seem more reputable, Pirelli thinks.
Anna Abalkina, a social scientist at the Free University of Berlin, suggests that aspiring sleuths “use common sense” when checking international and interdisciplinary collaborations. If all the participants in a clinical study come from a single hospital but the authors are based in different countries, she asks, is that plausible? Do the authors’ departmental affiliations fit the topic of the article and the expertise of the other authors?
Another red flag is when none of the authors has published before, says Bik, which you can check by looking up their ORCID profiles (the digital identifier that links researchers to their work). Normally, a paper has a mix of junior and senior authors; one with only first-time authors could indicate that the authors were invented by a paper mill.
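A quick programmatic sanity check is also possible: ORCID iDs carry a check digit computed with the ISO 7064 MOD 11-2 algorithm that ORCID documents, so a mistyped or invented iD can sometimes be caught before you even open the profile. A minimal Python sketch (the first example is ORCID's published sample iD; the second is a mutated copy):

```python
def orcid_checksum_valid(orcid: str) -> bool:
    """Check an ORCID iD against its ISO 7064 MOD 11-2 check digit,
    the algorithm ORCID documents for the final character."""
    digits = orcid.replace("-", "").upper()
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:
        if not ch.isdigit():
            return False
        total = (total + int(ch)) * 2
    remainder = total % 11
    expected = (12 - remainder) % 11
    check = "X" if expected == 10 else str(expected)
    return digits[-1] == check

print(orcid_checksum_valid("0000-0002-1825-0097"))  # True (ORCID's sample iD)
print(orcid_checksum_valid("0000-0002-1825-0098"))  # False (corrupted copy)
```

A valid checksum only means the iD is well formed, of course; whether the profile behind it shows a plausible publication history still has to be judged by eye.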
Pay attention to the science
Although some fraudulent papers use falsified data or doctored images to sell the illusion of breakthrough discoveries, many do the opposite. Pirelli says that he’ll sometimes see papers that look like the science could be real, but it’s “just really, really boring stuff” that even resource-limited researchers are unlikely to do.
Paper-mill manuscripts tend to be formulaic, have few or no experimental controls and offer little in the way of new findings, says Richardson. For example, it’s common for them to mine large publicly available data sets to report (often erroneous) associations between two variables. Although such studies might be technically legitimate, says Pirelli, they’re often not worth publishing, because they make no substantial contribution to their field.

Solal Pirelli co-led a project that helps people to identify scientific-integrity issues.
And sometimes, the papers simply don’t make sense. Bik says she once read a published clinical study about prostate cancer in which half of the study participants were women. She instantly flagged it as coming from a paper mill — these organizations seem to power their apparently inexhaustible supply of phony manuscripts by plugging different combinations of molecules, pathways and diseases into article templates.
Richardson says a fellow sleuth has been tracking down published papers about mitochondrial stress in bacteria. The topic might not seem inherently problematic — until you remember that bacteria don’t have mitochondria.
Check for irregularities in the text
Jennifer Byrne, a publication-integrity researcher at the University of Sydney, Australia, says that she can get a good sense of the quality of the paper from just the abstract. “A nice, tightly written abstract is a pretty obvious thing,” she says. Abstracts that are “overblown, unclear, very long” or that describe findings that are implausible or uninteresting can signal a low-quality paper.