Publishing Strategy · 6 min read · Updated Jan 1, 2026

MDPI Journals in 2026: A Quality Assessment by Signal, Not Stereotype

The internet answer to 'Are MDPI journals good?' is usually tribal. The useful answer is more conditional. MDPI is a legitimate major publisher, but journal quality inside the portfolio is uneven enough that authors should assess titles one by one.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.


If you ask the internet whether MDPI journals are good, you usually get one of two answers.

One camp says MDPI is obviously legitimate because the journals are indexed, open access, and full of real papers by real researchers.

The other camp says MDPI is obviously suspect because the review cycles are fast, the special-issue machine is huge, and the publisher's growth has outrun trust.

Both camps are missing the same point.

MDPI is too large, too uneven, and too field-dependent to judge as one thing.

Short answer

The honest assessment in 2026 is this:

  • MDPI is a real scholarly publisher: true.
  • Many MDPI journals are legitimately indexed: true.
  • Some MDPI journals are actively respected in their fields: true.
  • The portfolio has reputation problems around speed and scale: also true.
  • You should judge MDPI titles one by one, not by the logo alone: definitely true.

So the answer is not "MDPI is good" or "MDPI is bad."

The answer is: MDPI contains journals that are acceptable, journals that are strong, and journals that carry enough reputation risk that careful authors should think twice.

What the publisher-level signals say

At the publisher level, MDPI has several legitimacy markers that are hard to dismiss.

MDPI's own public materials state that:

  • hundreds of its journals are indexed in DOAJ
  • more than 300 MDPI journals are indexed in Web of Science
  • many titles are also indexed in Scopus
  • the publisher follows COPE principles in its research and publication ethics framework

Those are not trivial signals. A fake or obviously predatory publisher does not usually maintain that scale of formal indexing presence.

MDPI's ethics page also states that editorial boards are independent, that manuscripts undergo ethics checks, and that the publisher follows COPE guidance on authorship, conflicts, misconduct, and peer-review integrity.

That means the serious critique of MDPI is usually not "this is a scam publisher." It is more often:

  • can quality control stay reliable at this scale?
  • do some titles move too fast?
  • are special issues being used too aggressively?
  • does the field I care about trust this journal?

Those are different questions, and they matter more.

What the journal-level signals say

The real assessment happens at the journal level.

For example, our current journal dataset tracks MDPI titles such as:

  • Molecules
  • Sensors
  • Nutrients
  • Applied Sciences
  • Remote Sensing
  • Materials

These titles do not all sit in the same reputational position.

Molecules

The journal page says Molecules is peer reviewed, open access, indexed in Scopus, SCIE, PubMed, MEDLINE, PMC, and other major databases. It also reports a 2024 impact factor of 4.6, a first decision around 15.1 days, and acceptance to publication in 2.6 days for papers published in the second half of 2025.

That combination creates both appeal and suspicion.

Appeal because:

  • it is clearly visible and indexed
  • chemistry authors know the journal exists
  • the workflow is fast

Suspicion because:

  • the timeline is so fast that some researchers immediately question review depth
  • the volume and speed fit the classic "scale first" critique of MDPI

Both reactions are understandable.

Why speed is the center of the MDPI debate

If there is one metric that drives MDPI reputation, it is speed.

Fast review is not automatically bad. Many researchers are tired of journals taking four months to assign an editor and nine months to send a trivial rejection.

But very fast review becomes a credibility problem when authors infer that:

  • reviewers did not have enough time
  • editor oversight was thin
  • the journal may be privileging throughput over filtration

MDPI journals often publish timing data proudly. That is transparent, but it also forces authors to ask whether the speed reflects operational efficiency or a lighter editorial bar.

The answer varies by title.

That is why you should not treat "fast" as either a positive or a negative by itself. Fast plus respected board plus solid published papers can be fine. Fast plus weak field reputation plus thin article quality is a red flag.

The strongest critique: unevenness, not fraud

The fairest serious criticism of MDPI in 2026 is unevenness.

A lot of researchers do not believe every MDPI journal is bad. They believe the portfolio is too variable to trust automatically.

That view is reinforced by:

  • the huge number of journals and special issues
  • inconsistent editorial experiences reported across fields
  • periodic reevaluation or scrutiny of particular titles by indexing services
  • the sense that a journal's strength may depend heavily on its editor-in-chief and board, not just the publisher

Retraction Watch's reporting on reevaluations and downgrades has amplified this perception. Even when a concern applies to a subset of journals rather than the whole publisher, it contributes to the broader sense that authors need to inspect titles carefully instead of assuming publisher-level safety.

That broader sense is reasonable.

A practical assessment framework

If you are considering an MDPI journal, do not ask "Is MDPI okay?" Ask these:

  • Is the specific journal indexed in the databases your field values? (Basic legitimacy and discoverability.)
  • Do senior people in your subfield publish there without embarrassment? (A real reputational signal.)
  • Is the editorial board credible and active? (A better predictor than the publisher brand alone.)
  • Do the latest papers look technically solid? (Article-level quality still matters.)
  • Does the review speed feel plausible for the field? (Too fast can be a warning sign.)
  • Has the title faced recent reevaluation, suppression, or major criticism? (Risk management.)

This framework is not anti-MDPI. It is just how careful researchers should evaluate any large publisher portfolio that varies widely across titles.
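The six questions above can also be treated as a quick self-check. The sketch below is a minimal, hypothetical illustration of that idea: the question keys, red-flag thresholds, and recommendation labels are illustrative choices, not an official rubric or a tool mentioned in this article.

```python
# Hypothetical sketch of the six-question framework as a checklist scorer.
# All names and thresholds here are illustrative assumptions.

FRAMEWORK_QUESTIONS = [
    "indexed_where_field_expects",
    "senior_people_publish_there",
    "editorial_board_credible",
    "recent_papers_look_solid",
    "review_speed_plausible",
    "no_recent_reevaluation_or_suppression",
]

def assess_journal(answers: dict) -> str:
    """Turn yes/no answers to the six questions into a rough recommendation."""
    missing = [q for q in FRAMEWORK_QUESTIONS if q not in answers]
    if missing:
        raise ValueError(f"unanswered questions: {missing}")
    # Each "no" answer counts as one red flag.
    red_flags = sum(1 for q in FRAMEWORK_QUESTIONS if not answers[q])
    if red_flags == 0:
        return "reasonable choice"
    if red_flags <= 2:
        return "proceed with caution"
    return "think twice"

# Example: a journal that clears every question except review speed.
answers = {q: True for q in FRAMEWORK_QUESTIONS}
answers["review_speed_plausible"] = False
print(assess_journal(answers))  # prints "proceed with caution"
```

The point of the sketch is the shape of the decision, not the numbers: no single "yes" clears a journal, and a couple of "no" answers should slow you down rather than stop you outright.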

When an MDPI journal can be a reasonable choice

An MDPI journal can be a reasonable choice when:

  • the specific title is indexed where your field expects it to be indexed
  • your audience actually reads it
  • the published papers look comparable to other mid-tier journals in the space
  • the APC is manageable or covered
  • you value speed and open access
  • the manuscript is strong but not clearly aimed at a more selective society or flagship field journal

For some technical, applied, or interdisciplinary work, this is a real use case. The paper may need visibility and speed more than maximum prestige.

When you should think twice

Think twice when:

  • the field treats the title as a volume-driven outlet rather than a trusted venue
  • the journal has a huge special-issue footprint and weak apparent curation
  • the recent issues look noisy in topic quality
  • the title's impact factor or indexing status is doing more of the reputational work than the papers themselves
  • you are early-career and the paper is strategically important for job, promotion, or grant review

That last point is not snobbery. It is risk management.

If a committee in your field is likely to read an MDPI title skeptically, it may not be the place to spend one of your strongest papers even if the journal is technically legitimate.

The role of special issues

This is where a lot of the discomfort sits.

Special issues are not inherently bad. Many good journals use them. But MDPI's scale of special-issue publishing has made some researchers worry that topic volume and guest-editor recruitment can outpace careful curation.

Once that happens, the journal's reputation stops depending only on peer review. It starts depending on whether the community believes the issue pipeline is being governed tightly enough.

That is one reason some MDPI titles can look strong on paper and still feel reputationally unstable in conversation.

What the positive case for MDPI gets right

The positive case is not empty marketing.

It is true that MDPI offers:

  • immediate open access
  • high discoverability
  • fast workflows
  • wide indexing across many titles
  • journals that in some fields are fully mainstream

It is also true that many researchers publish solid work in MDPI journals without harming their careers.

Blanket dismissal is lazy.

What the skeptical case gets right

The skeptical case also gets real things right.

It is reasonable to worry when:

  • publisher growth is very rapid
  • article volume is huge
  • reported review times are extremely short
  • quality appears inconsistent across titles
  • external scrutiny has touched some journals in the portfolio

Blanket endorsement is also lazy.

Bottom line

MDPI journals in 2026 should be evaluated like a portfolio with wide internal variance, not like a single journal and not like a single scandal.

The publisher is legitimate. Many titles are indexed and usable. Some titles are actively respected. But the combination of speed, scale, special-issue volume, and uneven community trust means authors should judge specific journals, not the logo alone.

The cleanest rule is this: if the only argument for the journal is that it is indexed, that is not enough. If the journal is indexed, field-trusted, and publishing good papers your audience reads, it may be a reasonable choice.

If you want title-level context rather than publisher-level context, start with the Molecules impact factor data, compare it against real acceptance rates, and run your manuscript through Manusights AI Review before choosing speed over fit.

References


  1. MDPI research and publication ethics
  2. MDPI DOAJ-indexed journals list
  3. Molecules journal page
  4. Over 300 MDPI journals indexed in Web of Science
  5. Retraction Watch on MDPI journal reevaluation at Scopus
  6. Retraction Watch on Finland publication-forum downgrades

