Researchers have used AI to screen approximately 15,000 open-access journals and flagged more than 1,000 as potentially problematic, according to a study from the University of Colorado Boulder, published August 27 in Science Advances.
The tool spots red flags such as ultra-fast publication times, high self-citation rates, and opaque fees, and it even flagged titles owned by large, reputable publishers.
How the AI works when screening science journals
The system scans journal websites and published papers for patterns tied to dubious practices. It was trained in part on best-practice criteria from the Directory of Open Access Journals (DOAJ), the main watchdog and directory for trustworthy open-access journals.
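The study itself is the authority on the exact features and model used; as a rough illustration of the idea, though, one could score journals on the red flags mentioned above, treating DOAJ-listed titles as the trusted reference set. Everything in the sketch below – the feature names, the synthetic toy data, and the choice of a logistic-regression model – is an assumption for illustration, not the published pipeline.

```python
# Minimal sketch (illustrative only): train a simple classifier on
# journal-level features, using DOAJ-style "listed" journals as the
# trusted class. Features, data and model choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Hypothetical features per journal:
# [median days from submission to acceptance,
#  self-citation rate,
#  1 if fees are clearly stated on the site, else 0]
listed = np.column_stack([
    rng.normal(120, 30, n),        # typical, slower review times
    rng.uniform(0.05, 0.20, n),    # modest self-citation
    np.ones(n),                    # fees clearly stated
])
unlisted = np.column_stack([
    rng.normal(20, 10, n),         # ultra-fast acceptance
    rng.uniform(0.30, 0.70, n),    # heavy self-citation
    rng.integers(0, 2, n),         # fees often opaque
])

X = np.vstack([listed, unlisted])
y = np.array([0] * n + [1] * n)    # 1 = potentially problematic

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new journal; a human expert still makes the final call.
candidate = np.array([[14.0, 0.55, 0.0]])  # 2-week acceptance, 55% self-citation, opaque fees
print("flag probability:", model.predict_proba(candidate)[0, 1])
```

The real system reportedly also inspects website content and citation patterns at much larger scale, and, as its creators stress, its output only prioritises journals for human review rather than delivering a verdict.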
Daniel Acuña, co-author of the University of Colorado Boulder study, stresses that the tool is a prescreener, not judge and jury: “A human expert should be part of the vetting process” before any action is taken, as reported by Nature.
However, Jennifer Byrne, a cancer researcher at the University of Sydney, Australia, said, “there’s a whole group of problematic journals in plain sight that are functioning as supposedly respected journals that really don’t deserve that qualification.”
The University of Colorado Boulder team adds that it “tried to make [the AI] as interpretable as possible” and frames the system as a “firewall for science.”
What exactly is a “peer-reviewed” study?
The phrase “peer-reviewed study” got thrown around a lot during the Covid pandemic. Simply put, peer review is the evaluation of a manuscript by an author’s peers: independent experts who assess whether a study is sound before it is published. Major journals use it to protect reliability and reputation.
But peer review isn’t flawless. Different models (single-blind, double-blind, open) have pros and cons, and history shows it can miss errors or be gamed, which is precisely why detecting questionable journals matters.
Follow the money – who funds research?
A comprehensive scoping review in the American Journal of Public Health found, “Industry-sponsored studies tend to be biased in favor of the sponsor’s products.” It also concluded, “Corporate interests can drive research agendas away from questions that are the most relevant for public health.”
The same review documented common tactics across industries (tobacco, food, pharma): steer funding toward commercially useful topics, prioritise lines of inquiry that support legal/policy positions, and build credibility through publications and conferences.
How should you evaluate “scientific” claims?
- Check the journal – is it indexed by DOAJ?
- Does the site clearly describe peer-review policies, fees, and licences? (The AI flagged journals for exactly these gaps.)
- Look for the peer-review trail: Do editors name reviewers or publish reports (transparent/open review), or is the process opaque?
- Follow funding disclosures: Who paid? Are conflicts declared? Funding can shift research agendas and outcomes.
- Beware speed and spam: Ultra-fast acceptances and mass solicitation emails are red flags. (These checks are sketched as a short script after this list.)
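For readers who want to make this checklist systematic, it can be written down as a handful of explicit rules. The record fields and the “ultra-fast” threshold below are illustrative assumptions, not criteria taken from the study or from DOAJ.

```python
# Illustrative sketch only: the reader checklist above expressed as
# explicit red-flag rules. Field names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class JournalRecord:
    indexed_in_doaj: bool
    review_policy_published: bool    # peer-review process, fees, licences described
    reviewers_or_reports_public: bool
    funding_disclosed: bool
    days_to_acceptance: int
    sends_mass_solicitations: bool

def red_flags(j: JournalRecord) -> list[str]:
    flags = []
    if not j.indexed_in_doaj:
        flags.append("not indexed in DOAJ")
    if not j.review_policy_published:
        flags.append("peer-review policies, fees or licences not described")
    if not j.reviewers_or_reports_public:
        flags.append("opaque review process")
    if not j.funding_disclosed:
        flags.append("no funding or conflict-of-interest disclosure")
    if j.days_to_acceptance < 21:    # arbitrary 'ultra-fast' cut-off
        flags.append("ultra-fast acceptance")
    if j.sends_mass_solicitations:
        flags.append("mass solicitation emails")
    return flags

example = JournalRecord(False, False, False, True, 10, True)
print(red_flags(example))  # a long list means: read with extra care
```

A long list of flags does not prove a journal is predatory; as with the AI tool, it only signals that closer human scrutiny is warranted.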
AI can spotlight anomalies at scale, such as journals with odd citation patterns, suspicious turnaround times, and murky governance. If used well, it may become a powerful early-warning system. But even its creators insist on final human judgement. Tools don’t tell us what’s true; they help us decide what deserves our attention.