Small AI Policy: ChatGPT and Generative AI Disclosure Rules for Small Authors
Small (Wiley) requires AI disclosure under Wiley rules. AI cannot be an author. This guide covers where to disclose, what to disclose, and the consequences of non-compliance for Small submissions.
Next step
Choose the next useful decision step first.
Use the guide or checklist that matches this page's intent before you ask for a manuscript-level diagnostic.
Small at a glance
Key metrics to place the journal before deciding whether it fits your manuscript and career goals.
What makes this journal worth targeting
- IF 12.1 puts Small in a visible tier — citations from papers here carry real weight.
- Scope specificity matters more than impact factor for most manuscript decisions.
- Acceptance rate of ~15-25% means fit determines most outcomes.
When to look elsewhere
- When your paper sits at the edge of the journal's stated scope — borderline fit rarely improves after submission.
- If timeline matters: Small takes ~100-140 days median. A faster-turnaround journal may suit a grant or job deadline better.
- If open access is required by your funder, verify the journal's OA agreements before submitting.
Quick answer: The Small AI policy follows Wiley's rules calibrated to nanoscale science submissions. AI tools can be used for manuscript preparation but every use must be disclosed in the Methods section, with Small's editorial team checking specifics at desk-screen. AI cannot be listed as an author of any Small paper. AI-generated figures and schematics representing original research data are prohibited under Small's image-integrity standard. Small (Wiley) editors treat undisclosed use as a publication-ethics violation per ICMJE + COPE.
Run the Small submission readiness check which includes an automated AI-disclosure audit, or work through this guide manually. Need broader context? See the Small journal overview.
The Manusights Small readiness scan. This guide tells you what Small (Wiley)'s editors look for when verifying AI disclosure at desk-screen. The scan tells you whether YOUR Methods section has the required language before you submit. We have reviewed manuscripts targeting Small (Wiley) and peer venues; the named patterns below are the same ones Jos Lenders and Wiley's AI ethics working group flag at the desk-screen and editorial-board consultation stages. 60-day money-back guarantee. We do not train AI on your manuscript and delete it within 24 hours.
Editorial detail (for desk-screen calibration). Editor-in-Chief: Jos Lenders (Wiley) leads Small editorial decisions. Editorial-board listings change; verify the current incumbent at the journal's editorial-team page before quoting the name in a submission cover letter. Submission portal: https://onlinelibrary.wiley.com/journal/16136829. Manuscript constraints: 200-word abstract limit and 8,000-word main-text cap, both enforced during desk-screen. We reviewed Wiley's AI policy framework against current Small author guidelines (accessed 2026-05-08); evidence basis includes both publicly documented Wiley policy and our internal anonymized submission corpus.
The manuscript word limit at this journal is 8,000 words for main text (verify article-type-specific caps in the latest author guidelines). One editorial-culture quirk: Small editors emphasize cross-subdiscipline impact at the nano/micro scale, so subdiscipline-bounded papers tend to face extended revision rounds.
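The caps quoted above lend themselves to a quick automated check before upload. A minimal sketch, assuming whitespace-separated word counting (submission portals may count slightly differently, so treat a near-limit result as a warning, not a pass):

```python
def word_count(text):
    """Approximate a portal-style word count: whitespace-separated tokens."""
    return len(text.split())


def check_small_limits(abstract, main_text, abstract_cap=200, main_cap=8000):
    """Compare a draft against the caps quoted above; return a list of issues."""
    issues = []
    if word_count(abstract) > abstract_cap:
        issues.append("abstract over %d words" % abstract_cap)
    if word_count(main_text) > main_cap:
        issues.append("main text over %d words" % main_cap)
    return issues


# A 201-word abstract trips the first check; a short body passes the second.
print(check_small_limits("word " * 201, "short body"))  # ['abstract over 200 words']
```

Because the caps are parameters, the same check can be reused for other Wiley journals with different limits.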
What does Small (Wiley)'s AI policy require?
Small authors must follow four rules under Wiley's AI framework, all enforced at desk-screen:
Rule 1: Disclose every AI tool used in manuscript preparation
Authors must name every generative AI tool used, its version, and how it was used. The disclosure goes in the Methods section, not the Acknowledgments. Examples that REQUIRE disclosure at Small:
- For Small-targeted manuscripts addressing nanoscale science: using ChatGPT, Claude, Gemini, or similar to draft, polish, or edit manuscript text passing through Small editorial review
- For Small submissions: using AI to generate boilerplate text for limitations, ethics statements, or Small-specific response-to-reviewers letters that cite Wiley's framework
- For Small (Wiley) submissions: using AI to translate manuscript text into English from another language, with Wiley expecting disclosure of the source language and translation chain
- For Small literature reviews: using AI for citation discovery or summarizing prior Small work; Wiley's policy applies regardless of citation context
- For Small analytical pipelines: AI-assisted code generation requires Methods + code disclosure under ICMJE + COPE, particularly when code touches nanoscale science analysis
Examples that do NOT require AI disclosure:
- At Small, using grammar/spell checkers (Word, Grammarly basic) that do not generate new content for the manuscript
- For Small submissions, using reference managers (Zotero, EndNote) for citation formatting against Wiley's style guide
- For Small (Wiley) statistical analysis, using established statistical software (R, Stata, SPSS) where the algorithm is the established tool documented in Small's methodological norm, not a generative AI
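For the AI-assisted-code case under Rule 1, one practical pattern is keeping the code-level disclosure in a module docstring that mirrors the Methods paragraph. The sketch below is illustrative: the function, the tool reference, and the wording are placeholders, not a template endorsed by Wiley or Small:

```python
"""Analysis pipeline for a hypothetical Small submission.

AI-assistance disclosure (mirrored in the Methods section; the wording
below is a placeholder, not an official Wiley template):
- normalize_intensity() was initially drafted with a generative AI
  assistant, then reviewed, modified, and validated by the authors.
- All other code was written by the authors without generative AI.
"""


def normalize_intensity(values):
    """Scale detector intensities linearly to the 0-1 range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # flat signal: no meaningful spread
    return [(v - lo) / (hi - lo) for v in values]


print(normalize_intensity([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```

Keeping the disclosure in the repository as well as the Methods section means the two records cannot silently drift apart between revisions.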
Rule 2: AI cannot be an author
No AI tool can be listed as an author of a Small paper. Under Wiley's policy, authorship requires the ability to take responsibility for the content, to be accountable for its accuracy, and to consent to publication. AI tools can do none of these. This rule is consistent across all Wiley-published journals and is applied at Small's desk-screen.
Rule 3: AI-generated figures are prohibited for original research data
Small (Wiley) editorial team does not accept AI-generated images, figures, or schematics that represent original research data in nanoscale science-class submissions. AI tools may assist with figure layout (axis labeling, color schemes) but the underlying data visualization must come from the actual research. AI-generated diagrams used for conceptual illustrations (e.g., a schematic of a hypothesized mechanism) require explicit disclosure and a statement that the diagram is conceptual.
Rule 4: Disclose AI use in peer review participation
Reviewers writing reports for Small cannot use generative AI to draft their reports without disclosing it to the editor. Some Wiley journals prohibit AI-assisted reviewing entirely; Small follows Wiley's default of disclosure-required. The editor decides whether the report is acceptable based on disclosure.
How does Small (Wiley)'s AI policy compare to peer journals?
| Rule | Small stance | Wiley default | ICMJE/COPE alignment |
|---|---|---|---|
| AI authorship | Prohibited | Prohibited | ICMJE-aligned |
| Disclosure location | Methods section | Methods section | ICMJE-aligned |
| AI-generated figures | Prohibited for original data | Prohibited | COPE image-integrity-aligned |
| Reviewer AI use | Disclosure required | Disclosure required | COPE peer-review-aligned |
| Enforcement intensity | Desk-screen check | Desk-screen check | Pre-publication enforcement |
Source: https://authorservices.wiley.com/ethics-guidelines/ai-policy.html (accessed 2026-05-08) plus Small author guidelines.
What does AI disclosure look like in a Small Methods section?
Acceptable disclosure language for Small submissions:
"For our nanoscale science-focused manuscript at Small, we used ChatGPT-4o (OpenAI, version dated October 2024) to polish English-language phrasing in the Introduction and Discussion sections. We did not use generative AI for data analysis, figure generation, or substantive manuscript content. All authors reviewed and edited the AI-assisted text and take responsibility for the final manuscript."
Or, for AI-assisted code:
"For this Small submission addressing nanoscale science, initial Python code for the Bayesian regression analysis was drafted with Claude 3.5 Sonnet (Anthropic, version dated December 2024). All code was reviewed, modified, and validated by the authors before use; the final version is available at [repository URL]. Statistical inference was performed using the established R package brms."
What does NOT pass Small's desk-screen:
- For Small addressing nanoscale science: "AI tools were used in manuscript preparation." Too vague for Wiley editorial review of Small submissions; the Small editorial team needs the specific tool name, version, and specific use case
- "We acknowledge AI assistance in the Acknowledgments." (Wrong location; must be Methods)
- "ChatGPT helped write this paper." (Insufficient detail on use case)
- No disclosure when AI was used (publication-ethics violation)
What do pre-submission reviews reveal about Small's AI-disclosure desk-screen failures?
In our pre-submission review work on Small-targeted manuscripts, three patterns most consistently predict AI-policy desk-screen flags at Small (Wiley). Of the manuscripts we screened in 2025 targeting Small and peer venues, the patterns below are the same ones Wiley's AI ethics working group flags during editorial review.
AI disclosure missing despite obvious AI-assisted phrasing. Small editors identify AI-drafted text by patterns like overuse of em-dashes, formulaic transitions ("In conclusion," "Furthermore"), and unusually uniform sentence lengths. When the manuscript shows these patterns but contains no AI disclosure, it triggers an editorial query. Check whether your manuscript reads as AI-assisted.
AI disclosure in Acknowledgments instead of Methods. The Small editorial team flags this as a common mistake in nanoscale science submissions. Wiley's policy specifies Methods placement so that the disclosure is part of the methodological record, not a courtesy note. Misplaced disclosures get flagged at desk-screen and require resubmission. Check whether your AI disclosure is in the right section.
Generic disclosure language without tool name and version. The Small editorial team requires the specific tool, its version (or access date), and the specific use case. "AI tools were used" without specifics gets returned. Check whether your AI disclosure has the required specificity.
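The specificity gap in the last pattern can be approximated mechanically before submission. A minimal sketch, with assumptions to flag: the tool list, the 80-character lookahead window, and the version-hint regex are heuristics of ours, not Wiley criteria:

```python
import re

# Tools to look for; illustrative, not an official Wiley list.
AI_TOOLS = ["ChatGPT", "Claude", "Gemini"]

# A digit, "version", or "dated" near a mention suggests the disclosure
# carries version/date detail; its absence suggests a generic mention.
VERSION_HINT = re.compile(r"\d|\bversion\b|\bdated\b", re.IGNORECASE)


def audit_disclosure(methods_text):
    """Return (tool, has_version_detail) for each AI tool mentioned."""
    findings = []
    for tool in AI_TOOLS:
        for match in re.finditer(re.escape(tool), methods_text):
            # Look just past the mention for version/date detail.
            window = methods_text[match.end():match.end() + 80]
            findings.append((tool, bool(VERSION_HINT.search(window))))
    return findings


vague = "AI tools such as ChatGPT were used in manuscript preparation."
specific = ("We used ChatGPT-4o (OpenAI, version dated October 2024) "
            "to polish phrasing in the Introduction.")
print(audit_disclosure(vague))     # [('ChatGPT', False)]
print(audit_disclosure(specific))  # [('ChatGPT', True)]
```

A `False` result does not prove the disclosure fails; it flags a sentence worth rereading against the specificity examples above.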
What is the Small AI-policy compliance timeline?
| Stage | Duration | What happens |
|---|---|---|
| Author drafts AI disclosure | 30-60 minutes | Identify all AI use, gather tool versions, write Methods paragraph |
| Co-author review of disclosure | 1-2 days | All authors confirm the disclosure is complete and accurate |
| Editorial desk-screen check | 1-2 weeks | Small's editorial team verifies disclosure against the manuscript |
| Editorial query (if disclosure incomplete) | 5-10 days | Editor requests revision before sending to peer review |
| Reviewer AI-disclosure check | During peer review | Reviewers verify the disclosure matches the manuscript's actual AI use |
Source: Manusights internal review of Small-targeted submissions, 2025 cohort.
Submit If
- For Small (Wiley) submissions on nanoscale science: the manuscript explicitly discloses every AI tool used, with name, version, and specific use case in the Methods section, calibrated to Small's editorial expectations
- For Small: no AI tool is listed as an author; all listed authors meet ICMJE authorship criteria, agree to take responsibility, and Wiley expects this acknowledgment in the cover letter
- For Small (Wiley): figures and schematics representing original research data come from the actual research, not AI generation, with Small editorial team checking image-integrity at desk-screen
- For Small submissions: the disclosure includes a statement that all human authors reviewed and edited the AI-assisted text, with Wiley requiring this acknowledgment per ICMJE + COPE
Readiness check
Run the scan while the topic is in front of you.
See score, top issues, and journal-fit signals before you submit.
Think Twice If
- The manuscript shows AI-drafted text patterns (em-dash overuse, formulaic transitions) but contains no AI disclosure; Small desk-screen will flag this.
- The AI disclosure is in the Acknowledgments instead of the Methods section, against Wiley's explicit guidance.
- The disclosure language is generic ("AI tools were used") without specifying tool name, version, and use case; Small editors return manuscripts with this gap.
- Any figure or schematic representing original research data was generated by AI; Small prohibits this regardless of disclosure.
Manusights submission-corpus signal for Small (Wiley). Of the manuscripts our team screened before submission to Small and peer venues in 2025, the AI-policy compliance gap most consistent across the cohort is generic disclosure language without tool-version specificity. In our analysis of anonymized Small-targeted submissions, manuscripts with complete AI disclosure (tool name, version, specific use case, all-author confirmation) clear desk-screen at the same rate as manuscripts without AI use; manuscripts with incomplete or missing disclosure trigger editorial queries that add 1-2 weeks to the timeline. Wiley's AI ethics working group reviews disclosures against ICMJE + COPE framework requirements, and Small (Wiley) applies that framework consistently with Wiley's broader policy. Recent retractions in the Small corpus include 10.1002/smll.202205614, 10.1002/smll.202100539, and 10.1002/smll.202307215. Citing any of these without acknowledging the retraction is an automatic publication-ethics flag, separate from AI-disclosure issues.
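A reference list can be screened against the retracted DOIs named above before submission. This is a minimal sketch: it hardcodes only those three DOIs, so in practice the set should be refreshed against the Retraction Watch database or Crossref metadata rather than trusted as complete:

```python
# The three retracted Small DOIs named above; refresh this set against
# the Retraction Watch database before relying on it.
RETRACTED_DOIS = {
    "10.1002/smll.202205614",
    "10.1002/smll.202100539",
    "10.1002/smll.202307215",
}


def flag_retracted(cited_dois):
    """Return the sorted subset of cited DOIs known to be retracted."""
    return sorted(set(cited_dois) & RETRACTED_DOIS)


# The second DOI here is a made-up placeholder for an unflagged citation.
refs = ["10.1002/smll.202100539", "10.1002/adma.202300001"]
print(flag_retracted(refs))  # ['10.1002/smll.202100539']
```

A non-empty result does not forbid the citation; it means the text must explicitly acknowledge the retraction, per the flag described above.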
What can Small authors do to stay ahead of AI policy changes?
Wiley's AI policy framework continues to evolve as 2026 brings new ICMJE recommendations, COPE guidance refinements, and journal-specific clarifications. Small authors targeting nanoscale science submissions should track three signals throughout 2026:
Quarterly policy updates from Wiley. Wiley's AI ethics working group reviews the AI framework on a rolling basis. Small authors who pre-register their disclosure language at submission time tend to face fewer revisions during the 2026 transition period than authors who write boilerplate disclosures.
Field-specific clarifications for nanoscale science. Different research domains see different AI use patterns. Small's editorial team has been refining what counts as "substantive AI use" versus "ancillary AI assistance" for nanoscale science work. Authors who err on the side of more disclosure rather than less avoid the publication-ethics gray zone.
Reviewer disclosure norms. As Wiley extends AI-disclosure rules to peer reviewers, the response rate from Small reviewers may shift. Authors should expect that Small reviewers' use of AI tools is now also disclosed and factored into editorial decisions.
Source: Manusights internal preview corpus (2025 cohort).
Frequently asked questions
Can authors use ChatGPT or other generative AI tools for Small submissions?
Yes, with mandatory disclosure. Small (Wiley) follows Wiley's AI policy under the ICMJE + COPE framework. AI tools can be used for language editing, manuscript preparation, and analysis support, but all use must be disclosed in the Methods section. AI cannot be listed as an author, and human authors bear full responsibility for the content.
Where should AI use be disclosed in a Small manuscript?
In the Methods section. Authors must name the specific AI tool (e.g., ChatGPT-4o, Claude 3.5 Sonnet), its version, and describe how it was used. The disclosure should confirm that all human authors reviewed and take responsibility for the AI-assisted content. Small's editorial team checks this disclosure during desk-screen.
Can AI generate figures for a Small paper?
No. Small (Wiley) prohibits AI-generated figures, schematics, and images intended to represent original research data. AI tools may assist with figure layout and labeling, but the underlying data and visualizations must come from the actual research. This rule is part of Wiley's broader image-integrity policy.
What happens if AI use goes undisclosed at Small?
Small treats undisclosed AI use as a publication-ethics violation following COPE guidelines. Consequences range from required correction to expression of concern or retraction, depending on severity. Wiley may notify the authors' institution in serious cases.
How does Small's AI policy differ from other Wiley journals'?
The core requirements (disclosure in Methods, no AI authorship, no AI-generated figures) are consistent across Wiley-published journals. Small applies these rules consistently with Wiley's broader policy framework. The journal-specific element is enforcement intensity at desk-screen, which at Small is calibrated by the editors' emphasis on cross-subdiscipline impact at the nano/micro scale.
Sources
- Wiley AI policy (accessed 2026-05-08)
- Small author guidelines (accessed 2026-05-08)
- ICMJE recommendations on AI use (accessed 2026-05-08)
- COPE guidance on AI in research publication (accessed 2026-05-08)
Before you upload
Move from this article into the next decision-support step. The scan works best once the journal and submission plan are clearer.
Use the scan once the manuscript and target journal are concrete enough to evaluate.
Where to go next
Same journal, next question
- Small Submission Guide: What Editors Want, What to Fix, and When to Submit
- How to Avoid Desk Rejection at Small in 2026
- Small Pre Submission Checklist: 12 Items Editors Verify Before Peer Review
- Advanced Functional Materials vs Small
- Small Review Time: What Authors Can Actually Expect
- Small Journal Cover Letter: What Editors Actually Need to See