Is Your Paper Ready for Advanced Functional Materials? Function Over Novelty
Advanced Functional Materials prioritizes demonstrated function over pure novelty. Learn the acceptance rate, scope fit, and how AFM differs from Advanced Materials.
Senior Researcher, Oncology & Cell Biology
Author context
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
Advanced Materials and Advanced Functional Materials share a publisher, a color scheme, and half a name. They don't share an editorial philosophy. Understanding the gap between these two Wiley-VCH journals is the first thing you need to sort out before deciding where your manuscript belongs, because getting it wrong wastes months and burns editorial goodwill at both titles.
Advanced Materials (IF ~29.4) wants papers that push the boundaries of materials science broadly. It's asking: "Is this a new material, a new phenomenon, or a new design principle that changes how the field thinks?" Advanced Functional Materials (IF ~18.5) is asking a different question entirely: "Does this material do something useful, and have you proven it?" That distinction sounds subtle. It isn't. It's the difference between a paper about a novel perovskite lattice structure and a paper showing that perovskite performs specific work in a real device. AFM doesn't need you to rewrite materials science. It needs you to show your material working.
What AFM editors screen for at the desk
AFM accepts roughly 20-25% of submissions, desk-rejects 40-50%, and evaluates manuscripts primarily on whether the claimed function is demonstrated, not merely implied. The journal is published by Wiley-VCH, with an impact factor around 18.5 and typical review times of 4-8 weeks after passing the desk.
Here's what the handling editor is thinking during triage.
Is the function real or hypothetical? This is the single biggest filter. If your paper characterizes a new material and then says "this material could be useful for energy storage" in the conclusion, you haven't written an AFM paper. You've written a materials characterization study with speculative applications tacked on. Editors have seen that move thousands of times. They won't send it out. The function has to be demonstrated in the manuscript itself. A battery cycling test, a sensing calibration curve, a catalytic turnover measurement, a biocompatibility assay. Something has to happen besides characterization.
Is the performance meaningful? Demonstrating function isn't enough if the performance is mediocre. You don't need to break a world record, but your material needs to compete with existing solutions or offer a clear advantage in some dimension. A new photocatalytic material that degrades methylene blue at half the rate of commercial TiO2 isn't going to excite anyone. A photocatalytic system that works under visible light in real wastewater, even at moderate rates, tells a different story because it addresses a real limitation.
Is there something new about the material, not just the application? AFM isn't a device journal. It's a materials journal with a functional emphasis. If you've taken a well-known material off the shelf and put it in a new device configuration, that's device engineering. The material itself needs novelty. Maybe it's a new composition, a new morphology, a new surface chemistry, or a new synthesis route that enables properties you couldn't access before. But the material has to bring something to the table.
The desk rejection problem: why 40-50% never reach review
Nearly half of AFM submissions get returned without review. That's a lot of wasted effort, and the reasons are predictable enough that you can avoid most of them.
Characterization studies disguised as functional materials papers. This is the most common pattern. The manuscript has 8 figures of XRD, SEM, TEM, XPS, and Raman spectroscopy. The final figure shows one basic measurement vaguely related to a function. Editors recognize this structure instantly. It tells them the authors are fundamentally interested in the material itself, not what it does. There's nothing wrong with that interest, but the paper belongs in Chemistry of Materials or the Journal of Physical Chemistry C, not AFM.
"Me too" functional demonstrations. Your group made a variant of an MXene electrode and tested it in a supercapacitor. The capacitance is within 10% of what five other groups have already reported with similar materials. What's the story? Unless you can explain why your variant enables something the others can't, or reveals a structure-function relationship that wasn't understood before, this won't clear the desk. AFM isn't collecting data points. It wants insight.
Wrong Wiley journal. Wiley-VCH runs an entire family: Advanced Materials, Advanced Functional Materials, Advanced Energy Materials, Advanced Healthcare Materials, Small, and several others. Each has a distinct scope. A purely energy-focused paper might fit better in Advanced Energy Materials (IF ~24.4). Biomedical materials work might belong in Advanced Healthcare Materials. Editors at these journals communicate, and they can tell when you've submitted broadly across the family. Target precisely.
Pure simulation without experimental validation. DFT studies predicting that a hypothetical material would have interesting functional properties don't work at AFM unless paired with synthesis and measurement. Computational predictions belong in journals like npj Computational Materials or Physical Review Materials. AFM wants demonstrated, measured function.
How AFM compares to competing journals
This is where the decision gets practical. You've got a functional materials paper. Where should it go?
| Factor | AFM | Advanced Materials | ACS AMI | J. Mater. Chem. A | Small |
|---|---|---|---|---|---|
| Impact Factor (2024) | ~18.5 | ~29.4 | ~8.3 | ~10.7 | ~13.3 |
| Acceptance rate | ~20-25% | ~15-20% | ~25-30% | ~25-30% | ~25-30% |
| Publisher | Wiley-VCH | Wiley-VCH | ACS | RSC | Wiley-VCH |
| Scope | Functional materials | All materials | Applied materials | Energy materials | Micro/nano |
| Key criterion | Demonstrated function | Broad novelty | Application breadth | Energy relevance | Scale-specific |
AFM vs. Advanced Materials. If your paper introduces a design principle that materials scientists across multiple subfields would find interesting, try Advanced Materials first. If the work is excellent but lives within one functional domain, AFM is the right call. Here's my honest take: some of the best papers I've seen in AFM wouldn't have survived the Advanced Materials desk, not because they're weaker scientifically, but because their impact is deep rather than broad. That's exactly what AFM is for. Don't think of it as a consolation prize. It isn't one. An AFM paper with 200 citations matters more to your career than an Advanced Materials rejection letter.
AFM vs. ACS Applied Materials & Interfaces. ACS AMI is more permissive about scope and novelty. It'll publish solid application work that doesn't necessarily advance materials understanding, as long as the engineering is sound and the performance data is thorough. If your paper's main contribution is "we applied material X to problem Y and it worked well," ACS AMI is a natural home. If there's a materials insight behind the performance, AFM is worth the gamble. The impact factor gap (18.5 vs. 8.3) is substantial.
AFM vs. Journal of Materials Chemistry A. JMC-A from the RSC focuses on energy and sustainability applications. If your functional material targets energy storage, catalysis, or energy conversion specifically, there's real overlap with AFM. The deciding factor is usually the strength of the materials story versus the application story. JMC-A is more forgiving of papers where the device data carries the manuscript. AFM wants the material to be the protagonist.
AFM vs. Small. Small covers micro- and nanoscale science with a lower bar for novelty. It's a solid journal, but the editorial expectations are less demanding. If you aren't sure your functional demonstration is strong enough for AFM, Small is a reasonable alternative within the Wiley family. But don't self-reject prematurely. If your material shows genuine function and you can explain why it works, try AFM first.
What a strong AFM paper actually looks like
I've read enough AFM papers to identify the pattern that editors reward. It isn't complicated, but it's specific.
The material is new or newly understood. You've synthesized something with a novel structure, composition, or morphology. Or you've discovered a property in a known material that nobody reported before. Either way, the material itself has a story.
The function is measured, not inferred. You don't just show optical properties and suggest the material could be a sensor. You build the sensor, expose it to analytes, measure the response, and report detection limits. The function is real data, not a paragraph in the discussion section.
The structure-function relationship is explained. This is what separates an AFM paper from an ACS AMI paper. You don't just report that the material works. You explain why it works in terms of its structure, composition, or design. What about this material makes it good at this job? If you can't answer that question with data, your paper probably isn't ready for AFM.
The comparison is fair. You've benchmarked your material against existing solutions under comparable conditions. Cherry-picking comparison data from papers that used different testing protocols doesn't fool reviewers. They'll check.
The review process and what to expect
Once you're past the desk, here's the typical trajectory.
Weeks 1-2: Desk decision. A handling editor reads the abstract and scans the figures. They're looking for evidence of function, not just characterization. If the TOC graphic shows a material alongside a device or application, that's a good sign. If it shows only a crystal structure and some spectra, the editor's already skeptical.
Weeks 3-8: Peer review. Papers go to 2-3 reviewers. AFM reviewers tend to be thorough on the functional testing. They'll question whether your conditions are realistic, whether your stability data is sufficient, and whether your performance metrics match what you claim. The review period typically runs 4-8 weeks, though it can stretch longer if a reviewer is slow.
Revision. Most papers that survive review get a "revise and resubmit," not immediate acceptance. Expect requests for additional functional testing, long-term stability data, or more rigorous controls. You'll usually have 4-8 weeks to revise. Use the time wisely. Reviewers notice when revision responses are rushed.
Total timeline. From submission to acceptance, plan for 3-5 months. If you need a second round of review, add another 6-8 weeks.
Open access and costs
AFM offers both subscription and open access publication. The article processing charge for gold open access is approximately $5,500, which is steep but in line with other high-impact Wiley journals. If your funder mandates OA, budget for this early. Some institutions have Wiley agreements that cover or discount APCs, so it's worth checking with your library before you pay full price.
For authors who can't afford the APC, the subscription pathway remains available, and it won't meaningfully reduce your paper's visibility within the materials science community, since most research institutions hold Wiley subscriptions. If you're in a field where preprint sharing is common, consider posting to arXiv or ChemRxiv before submission. AFM allows this.
Strategic advice for targeting AFM
Lead with the function, not the synthesis. Your abstract should mention what the material does before explaining how you made it. Editors are scanning for function-first framing. If your abstract reads like a synthesis paper that happens to include some testing, you're framing it wrong for this journal.
Don't bury the application data in supplementary information. If the functional demonstration is the core of your AFM submission, it needs to be in the main text with full figures. Putting your best device data in the SI while filling the main text with characterization sends the wrong message about what you think the paper is about.
Invest in stability and cycling data. One thing that distinguishes a strong AFM paper from a borderline one is evidence that the function persists over time. A sensor that works once isn't useful. A photocatalytic material that deactivates after 10 cycles isn't interesting. Include long-term performance data wherever possible. It's often the difference between "accept" and "major revision."
Frame structure-function relationships explicitly. Don't make the reviewer connect the dots between your XRD data and your device performance. Draw the line yourself. "The preferential orientation along the (001) plane increases ion transport pathways, which explains the 40% improvement in rate capability compared to randomly oriented films." That sentence does more work than three figures of characterization.
Use a pre-submission review to check your framing. At a journal with 40-50% desk rejection, the way you present your work matters as much as the work itself. Running your manuscript through a pre-submission review can flag whether your functional demonstration reads as the paper's core story or as an afterthought. That distinction often determines whether you reach reviewers.
When AFM isn't the right target
There's no shame in recognizing that your paper fits better elsewhere. If your material is novel but you haven't tested its function yet, Chemistry of Materials or the Journal of Physical Chemistry C will evaluate it on characterization quality alone. If your paper is really about device optimization and the material is incidental, journals like ACS Energy Letters or Nano Energy might be better fits. If the work is outstanding and broadly impactful across materials science, try Advanced Materials. And if you're unsure whether the functional data is compelling enough, ACS AMI accepts strong application papers without demanding the same depth of materials insight that AFM requires.
The worst outcome isn't rejection. It's spending three months in review at the wrong journal when the right journal would have accepted you in half that time.
Sources
- Wiley-VCH Advanced Functional Materials Author Guidelines: https://onlinelibrary.wiley.com/journal/16163028
- Clarivate Journal Citation Reports (2024 JCR): https://jcr.clarivate.com
- Wiley-VCH Open Access Pricing: https://authorservices.wiley.com/open-research/open-access/journal-specific-costs.html