Summary: A new artificial intelligence system developed by computer scientists automatically screens open-access journals to identify potentially predatory publications. These journals often charge high fees for publishing without peer review, which undermines scientific credibility.
The AI analyzed more than 15,000 journals and flagged more than 1,000 as questionable, giving researchers a scalable way to detect risks. While the system is not perfect, it serves as an important first filter, with human experts making the final decision.
Key facts
- Predatory publications: Journals exploit researchers by charging fees without standard peer review.
- AI test: The system identified more than 1,000 suspect journals out of about 15,200 analyzed.
- Firewall for science: The tool helps maintain confidence in research by protecting it from inaccurate data.
Source: University of Colorado
A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically finds “questionable” scientific journals.
The study, published August 27 in the journal Science Advances, highlights a dangerous trend in the research community.
Daniel Acuña, the study’s lead author and associate professor in the Department of Computer Science, gets a reminder of this in his email inbox several times a week: These spam messages come from people posing as editors of scientific journals, usually journals Acuña has never heard of, offering to publish his articles for a hefty fee.
These publications are sometimes called “predatory journals.” They target scientists and convince them to pay hundreds or even thousands of dollars to publish their research without peer review.
“There’s a growing effort among scientists and organizations to review these journals,” Acuña said. “But it’s like playing whack-a-mole. You catch one, and then another pops up, usually from the same company. They just build a new website and come up with a new name.”
His group’s new AI tool automatically reviews scientific journals and evaluates their websites and other online data based on specific criteria: Do the journals’ editorial boards consist of renowned researchers? Do their websites have many grammatical errors?
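The article doesn't include the team's code, but criteria like these map naturally onto simple website features. Below is a minimal sketch in Python; the feature names and checks are illustrative assumptions, not the authors' actual implementation:

```python
import re

def extract_site_features(html_text: str) -> dict:
    """Toy feature extractor for a journal website.

    The checks loosely mirror the criteria described above (a named
    editorial board, a peer-review policy, writing quality); the
    specific features are illustrative only.
    """
    text = html_text.lower()
    return {
        # Does the site mention an editorial board at all?
        "mentions_editorial_board": int("editorial board" in text),
        # Does it describe a peer-review policy?
        "mentions_peer_review": int("peer review" in text),
        # Crude writing-quality proxy: doubled words like "the the".
        "repeated_word_count": len(re.findall(r"\b(\w+)\s+\1\b", text)),
        # Aggressive turnaround promises are a common red flag.
        "promises_fast_publication": int("fast publication" in text),
    }

if __name__ == "__main__":
    sample = "Fast publication guaranteed! Peer review in in 48 hours."
    print(extract_site_features(sample))
```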
Acuña emphasizes that the tool isn’t perfect. Ultimately, he believes, it’s human experts, not machines, who should make the final decision about a journal’s reputation.
But in an era when leading figures are questioning the legitimacy of science, it is more important than ever to curb the spread of questionable publications.
“In science, you don’t begin from nothing — you build on the work of others,” Acuña said. “If that foundation collapses, the entire structure is at risk.”
The shakedown
When scientists submit a new study to a prestigious journal, it usually goes through a process called peer review. Outside experts read the study and assess its quality, or at least that’s the goal.
More and more companies are sidestepping this process in order to make a profit. In 2009, Jeffrey Beall, a librarian at the University of Colorado Denver, coined the term “predatory journals” to describe these publications.
They often target researchers outside the United States and Europe, in countries such as China, India, and Iran, where scientific institutions are still young and researchers face greater pressure and incentives to publish.

“They say, ‘If you pay $500 or $1,000, we’ll review your work,'” Acuña said. “They don’t actually offer any services. They just take the PDF and put it on their website.”
Several groups have tried to curb the practice, including the non-profit Directory of Open Access Journals (DOAJ).
Since 2003, DOAJ volunteers have flagged thousands of journals as suspect based on six criteria. (For example, reputable journals often post detailed descriptions of their peer review policies on their websites.)
But keeping pace with the spread of these publications is a difficult task for humans. The team trained their system on data from the DOAJ, then asked the AI to review an online list of about 15,200 open-access journals. Of these, the AI initially flagged more than 1,400 as potentially problematic.
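The article doesn’t specify the model or features the team used; a minimal sketch of the train-then-screen workflow, with a stand-in classifier and synthetic feature vectors in place of real journal data, could look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in data: rows are journals, columns are features such as the
# website checks sketched earlier. In the real workflow, labels would
# come from DOAJ-style human review (1 = questionable).
X_train = rng.normal(size=(2000, 4))
y_train = rng.integers(0, 2, size=2000)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Screen a large open-access list and flag journals whose predicted
# probability of being questionable crosses a chosen threshold.
X_screen = rng.normal(size=(15200, 4))
scores = clf.predict_proba(X_screen)[:, 1]
flagged = np.flatnonzero(scores >= 0.5)
print(f"Flagged {flagged.size} of {X_screen.shape[0]} journals")
```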
Acuña and his colleagues asked human experts to review some of the suspect journals. The AI made mistakes: according to the experts, it flagged about 350 publications that were likely legitimate, and it missed more than 1,000 journals that the researchers deemed questionable.
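Taking the approximate counts above at face value, rough error rates can be backed out; the figures below are illustrative, not the paper’s reported metrics:

```python
# Approximate counts quoted in the article.
flagged = 1400           # journals the AI initially flagged
false_positives = 350    # flagged but judged likely legitimate
false_negatives = 1000   # questionable journals the AI missed

true_positives = flagged - false_positives
precision = true_positives / flagged
recall = true_positives / (true_positives + false_negatives)
print(f"precision ≈ {precision:.2f}, recall ≈ {recall:.2f}")
# precision ≈ 0.75, recall ≈ 0.51: usable as a first filter,
# but clearly not a substitute for expert review.
```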
“I think it should be used as a tool to pre-select a large number of journals,” he said. “But the final analysis should be done by human professionals.”
A firewall for science
Acuña added that the researchers didn’t want their system to become a “black box” like other AI platforms.
“With ChatGPT, for example, it’s often difficult to understand why it suggests something,” Acuña said. “We’ve tried to make ours as explainable as possible.”
For example, the team found that dubious journals published an unusually high number of articles. Their authors also tended to list more institutional affiliations than authors in reputable journals, and to cite their own research more often than other scientists.
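One of those signals, the self-citation rate, can be computed directly from publication metadata. This sketch assumes a simplified record format of my own (the paper’s actual pipeline is not described in the article):

```python
def self_citation_rate(papers: list[dict]) -> float:
    """Fraction of outgoing citations that point back to one of the
    citing paper's own authors. `papers` uses an assumed schema with
    'authors' and 'cited_authors' fields."""
    self_cites = total_cites = 0
    for paper in papers:
        authors = set(paper["authors"])
        for cited in paper["cited_authors"]:
            total_cites += 1
            if cited in authors:
                self_cites += 1
    return self_cites / total_cites if total_cites else 0.0

papers = [
    {"authors": ["A. Smith"], "cited_authors": ["A. Smith", "B. Lee"]},
    {"authors": ["C. Wu"], "cited_authors": ["A. Smith"]},
]
print(self_citation_rate(papers))  # 1 of 3 citations is a self-citation
```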
The new AI system is not publicly available, but the researchers hope to make it available to universities and publishers soon. Acuña sees the tool as a way for researchers to protect their field from poor data, what he calls a “firewall for science.”
“As a computer scientist, I often use the example of when a new smartphone comes out,” he said. “We know that the phone’s software will have bugs, and we hope that they will be fixed in the future. We should probably do the same with science.”
About this AI and science research news
Author: Daniel Strain
Source: University of Colorado
Contact: Daniel Strain – University of Colorado
Image: The image is credited to StackZone Neuro
Original Research: Open access.
“Estimating the predictability of questionable open-access journals” by Daniel Acuña et al. Science Advances
Abstract
Estimating the predictability of questionable open-access journals
Questionable journals pose a threat to the integrity of global research, but manual review can be slow and cumbersome.
In this article, we explore the potential of artificial intelligence (AI) to systematically identify such venues by analyzing website design, content, and publication metadata.
When evaluated against large, human-annotated datasets, our method is highly accurate and reveals previously overlooked indicators of journal legitimacy.
By adjusting the decision threshold, our method can prioritize either comprehensive detection or precise, low-noise identification.
At a balanced threshold, we identify over 1,000 suspect journals, which collectively publish millions of articles, receive millions of citations, receive funding from major institutions, and attract authors from developing countries.
Error analysis reveals challenges related to discontinued titles, book series misclassified as journals, and small publishers with limited online presence. These issues can be addressed by improving data quality.
Our results demonstrate that AI has the potential to perform scalable integrity checks. At the same time, they emphasize the need to combine automated classification with expert assessment.
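The threshold adjustment described in the abstract is standard precision-recall tuning. The sketch below uses synthetic scores, not the paper’s data, to show how raising the threshold trades comprehensive detection for low-noise flagging:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(1)
# Synthetic classifier scores: questionable journals (y = 1) tend to
# score higher than legitimate ones (y = 0).
y_true = np.concatenate([np.ones(1000), np.zeros(14000)])
scores = np.concatenate([rng.normal(0.7, 0.15, 1000),
                         rng.normal(0.3, 0.15, 14000)])

precision, recall, thresholds = precision_recall_curve(y_true, scores)

# A low threshold favors completeness (high recall); a high threshold
# favors low-noise flagging (high precision).
for t in (0.3, 0.5, 0.7):
    idx = np.searchsorted(thresholds, t)
    print(f"threshold {t}: precision={precision[idx]:.2f}, "
          f"recall={recall[idx]:.2f}")
```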

