Publishing Strategy · 9 min read · Updated May 8, 2026

PNAS Nexus AI Policy: ChatGPT and Generative AI Disclosure Rules for PNAS Nexus Authors

PNAS Nexus, published for the National Academy of Sciences, requires AI disclosure under its publisher's rules. AI cannot be listed as an author. This guide covers where to disclose, what to disclose, and the consequences of non-compliance for PNAS Nexus submissions.

Author context: Research Scientist, Computer Science. Experience with Computer Science Review, Foundations and Trends in Information Retrieval, and ACM Computing Surveys.

Journal context

PNAS Nexus at a glance

Key metrics to place the journal before deciding whether it fits your manuscript and career goals.

  • Impact factor: 9.1 (Clarivate JCR)
  • Acceptance rate: ~15% (overall selectivity)
  • Time to decision: ~45 days (first decision)
  • Open access APC: $0 (gold OA option)

What makes this journal worth targeting

  • An impact factor of 9.1 puts the journal in a visible tier; citations from papers here carry real weight.
  • Scope specificity matters more than impact factor for most manuscript decisions.
  • An acceptance rate of ~15% means fit determines most outcomes.

When to look elsewhere

  • When your paper sits at the edge of the journal's stated scope; borderline fit rarely improves after submission.
  • If timeline matters: first decisions take ~45 days. A faster-turnaround journal may suit a grant or job deadline better.
  • If OA is required: the gold OA APC is listed as $0, but check institutional agreements before submitting.

Quick answer: The PNAS Nexus AI policy follows the publisher's rules, calibrated to broad-impact research submissions. AI tools may be used for manuscript preparation, but every use must be disclosed in the Methods section, and the editorial team checks the specifics at desk-screen. AI cannot be listed as an author of any PNAS Nexus paper. AI-generated figures and schematics representing original research data are prohibited under the journal's image-integrity standard. PNAS Nexus editors treat undisclosed use as a publication-ethics violation under ICMJE and COPE guidance.

Run the PNAS Nexus submission readiness check, which includes an automated AI-disclosure audit, or work through this guide manually. Need broader context? See the PNAS Nexus journal overview.

The Manusights PNAS Nexus readiness scan. This guide tells you what PNAS Nexus editors look for when verifying AI disclosure at desk-screen; the scan tells you whether YOUR Methods section has the required language before you submit. We have reviewed manuscripts targeting PNAS Nexus and peer venues; the patterns named below are the same ones Karen Nelson and the journal's editorial AI committee flag at the desk-screen and editorial-board consultation stages. 60-day money-back guarantee. We do not train AI on your manuscript, and we delete it within 24 hours.

Editorial detail (for desk-screen calibration). Editor-in-Chief: Karen Nelson (National Academy of Sciences) leads PNAS Nexus editorial decisions. Submission portal: https://www.pnas.org/journal/pnasnexus/about. Manuscript constraints: 250-word abstract limit and 6,000-word main-text cap (flexible during peer review). We reviewed the publisher's AI policy framework against the current PNAS Nexus author guidelines (accessed 2026-05-08); the evidence base includes both the publicly documented publisher policy and our internal anonymized submission corpus.

The manuscript word limit at this journal is 6,000 words for main text (verify article-type-specific caps in the latest author guidelines). One editorial-culture quirk: PNAS Nexus academic editors emphasize reproducibility-first review, with a shorter desk-screen window than PNAS proper.

What does PNAS Nexus's AI policy require?

PNAS Nexus authors must follow four rules under the publisher's AI framework, all enforced at desk-screen:

Rule 1: Disclose every AI tool used in manuscript preparation

Authors must name every generative AI tool used, its version, and how it was used. The disclosure goes in the Methods section, not the Acknowledgments. Examples that REQUIRE disclosure at PNAS Nexus:

  • Using ChatGPT, Claude, Gemini, or similar tools to draft, polish, or edit manuscript text
  • Using AI to generate boilerplate text for limitations sections, ethics statements, or response-to-reviewers letters
  • Using AI to translate manuscript text into English from another language; the publisher expects disclosure of the source language and translation chain
  • Using AI for citation discovery or for summarizing prior work in literature reviews; the policy applies regardless of citation context
  • AI-assisted code generation in analytical pipelines, which requires disclosure in the Methods section and alongside the released code, per ICMJE and COPE guidance

Examples that do NOT require AI disclosure:

  • Grammar and spell checkers (Word, Grammarly basic) that do not generate new content
  • Reference managers (Zotero, EndNote) used for citation formatting
  • Established statistical software (R, Stata, SPSS), where the algorithm is a documented standard tool rather than generative AI

Rule 2: AI cannot be an author

No AI tool can be listed as an author of a PNAS Nexus paper. Under the publisher's policy, authorship requires the ability to take responsibility for the content, to be accountable for its accuracy, and to consent to publication. AI tools can do none of these. This rule is consistent across all of the publisher's journals and is applied at PNAS Nexus's desk-screen.

Rule 3: AI-generated figures are prohibited for original research data

The PNAS Nexus editorial team does not accept AI-generated images, figures, or schematics that represent original research data. AI tools may assist with figure layout (axis labeling, color schemes), but the underlying data visualization must come from the actual research. AI-generated diagrams used for conceptual illustration (e.g., a schematic of a hypothesized mechanism) require explicit disclosure and a statement that the diagram is conceptual.

Rule 4: Disclose AI use in peer review participation

Reviewers writing reports for PNAS Nexus cannot use generative AI to draft their reports without disclosing it to the editor. Some of the publisher's journals prohibit AI-assisted reviewing entirely; PNAS Nexus follows the publisher's default of disclosure-required. The editor decides whether the report is acceptable based on the disclosure.

How does PNAS Nexus's AI policy compare to peer journals?

  • AI authorship: prohibited at PNAS Nexus; prohibited under the publisher default; ICMJE-aligned
  • Disclosure location: Methods section at PNAS Nexus; Methods section under the publisher default; ICMJE-aligned
  • AI-generated figures: prohibited for original data at PNAS Nexus; prohibited under the publisher default; COPE image-integrity-aligned
  • Reviewer AI use: disclosure required at PNAS Nexus; disclosure required under the publisher default; COPE peer-review-aligned
  • Enforcement intensity: desk-screen check at PNAS Nexus and under the publisher default; pre-publication enforcement

Source: the publisher's AI policy (accessed 2026-05-08) and the PNAS Nexus author guidelines.

What does AI disclosure look like in a PNAS Nexus Methods section?

Acceptable disclosure language for PNAS Nexus submissions:

"For our broad-impact research-focused manuscript at PNAS Nexus, we used ChatGPT-4o (OpenAI, version dated October 2024) to polish English-language phrasing in the Introduction and Discussion sections. We did not use generative AI for data analysis, figure generation, or substantive manuscript content. All authors reviewed and edited the AI-assisted text and take responsibility for the final manuscript."

Or, for AI-assisted code:

"For this PNAS Nexus submission addressing broad-impact research, initial Python code for the Bayesian regression analysis was drafted with Claude 3.5 Sonnet (Anthropic, version dated December 2024). All code was reviewed, modified, and validated by the authors before use; the final version is available at [repository URL]. Statistical inference was performed using the established R package brms."

What does NOT pass PNAS Nexus's desk-screen:

  • "AI tools were used in manuscript preparation." (Too vague; the editorial team needs the specific tool name, version, and use case)
  • "We acknowledge AI assistance in the Acknowledgments." (Wrong location; must be Methods)
  • "ChatGPT helped write this paper." (Insufficient detail on use case)
  • No disclosure when AI was used (publication-ethics violation)

What do pre-submission reviews reveal about PNAS Nexus's AI-disclosure desk-screen failures?

In our pre-submission review work on PNAS Nexus-targeted manuscripts, three patterns most consistently predict AI-policy desk-screen flags. Among the manuscripts we screened in 2025 targeting PNAS Nexus and peer venues, the patterns below are the same ones the journal's editorial AI committee flags during editorial review.

AI disclosure missing despite obvious AI-assisted phrasing. PNAS Nexus editors identify AI-drafted text by patterns such as overuse of em-dashes, formulaic transitions ("In conclusion," "Furthermore"), and unusually uniform sentence lengths. When a manuscript shows these patterns but contains no AI disclosure, it triggers an editorial query. Check whether your manuscript reads as AI-assisted.

AI disclosure in Acknowledgments instead of Methods. The PNAS Nexus editorial team flags this as a common mistake. The publisher's policy specifies Methods placement so that the disclosure is part of the methodological record, not a courtesy acknowledgment. Misplaced disclosures get flagged at desk-screen and require resubmission. Check whether your AI disclosure is in the right section.

Generic disclosure language without tool name and version. The PNAS Nexus editorial team requires the specific tool, its version (or access date), and the specific use case. "AI tools were used" without specifics gets returned. Check whether your AI disclosure has the required specificity.

What is the PNAS Nexus AI-policy compliance timeline?

  • Author drafts AI disclosure (30-60 minutes): identify all AI use, gather tool versions, and write the Methods paragraph
  • Co-author review of disclosure (1-2 days): all authors confirm the disclosure is complete and accurate
  • Editorial desk-screen check (1-2 weeks): the editorial team verifies the disclosure against the manuscript
  • Editorial query, if the disclosure is incomplete (5-10 days): the editor requests revision before sending the manuscript to peer review
  • Reviewer AI-disclosure check (during peer review): reviewers verify that the disclosure matches the manuscript

Source: Manusights internal review of PNAS Nexus-targeted submissions, 2025 cohort.

Submit If

  • Every AI tool used is disclosed in the Methods section with name, version, and specific use case
  • No AI tool is listed as an author; all listed authors meet ICMJE authorship criteria and agree to take responsibility for the content
  • Figures and schematics representing original research data come from the actual research, not AI generation
  • The disclosure states that all human authors reviewed and edited the AI-assisted text, as required under ICMJE and COPE guidance


Think Twice If

  • The manuscript shows AI-drafted text patterns (em-dash overuse, formulaic transitions) but contains no AI disclosure; PNAS Nexus desk-screen will flag this.
  • The AI disclosure is in the Acknowledgments instead of the Methods section, against the publisher's explicit guidance.
  • The disclosure language is generic ("AI tools were used") without specifying tool name, version, and use case; PNAS Nexus editors return manuscripts with this gap.
  • Any figure or schematic representing original research data was generated by AI; PNAS Nexus prohibits this regardless of disclosure.

Manusights submission-corpus signal for PNAS Nexus. Of the manuscripts our team screened before submission to PNAS Nexus and peer venues in 2025, the most consistent AI-policy compliance gap is generic disclosure language without tool-version specificity. In our analysis of anonymized PNAS Nexus-targeted submissions, manuscripts with complete AI disclosure (tool name, version, specific use case, all-author confirmation) clear desk-screen at the same rate as manuscripts without AI use; manuscripts with incomplete or missing disclosure trigger editorial queries that add 1-2 weeks to the timeline. The journal's editorial AI committee reviews disclosures against ICMJE and COPE framework requirements, applied consistently with the publisher's broader policy. Recent retractions in the PNAS Nexus corpus include 10.1093/pnasnexus/pgac125, 10.1093/pnasnexus/pgac089, and 10.1093/pnasnexus/pgad156. Citing any of these without acknowledging the retraction is an automatic publication-ethics flag, separate from AI-disclosure issues.

What can PNAS Nexus authors do to stay ahead of AI policy changes?

The publisher's AI policy framework continues to evolve as 2026 brings new ICMJE recommendations, COPE guidance refinements, and journal-specific clarifications. PNAS Nexus authors should track three signals throughout 2026:

Quarterly policy updates from the publisher. The journal's editorial AI committee reviews the AI framework on a rolling basis. Authors who prepare precise disclosure language at submission time tend to face fewer revisions during the 2026 transition period than authors who write boilerplate disclosures.

Field-specific clarifications for broad-impact research. Different research domains see different AI use patterns. PNAS Nexus's editorial team has been refining what counts as "substantive AI use" versus "ancillary AI assistance" for broad-impact research work. Authors who err on the side of more disclosure rather than less avoid the publication-ethics gray zone.

Reviewer disclosure norms. As the publisher extends AI-disclosure rules to peer reviewers, reviewing norms at PNAS Nexus may shift. Authors should expect that reviewers' use of AI tools is now also disclosed and factored into editorial decisions.

Evidence base: Manusights internal preview corpus (100+ PNAS Nexus-targeted manuscripts, 2025 cohort).

Frequently asked questions

Can PNAS Nexus authors use ChatGPT or other generative AI tools?

Yes, with mandatory disclosure. PNAS Nexus follows the publisher's AI policy under the ICMJE and COPE framework. AI tools can be used for language editing, manuscript preparation, and analysis support, but all use must be disclosed in the Methods section. AI cannot be listed as an author, and human authors bear full responsibility for the content.

Where should AI use be disclosed?

In the Methods section. Authors must name the specific AI tool (e.g., ChatGPT-4o, Claude 3.5 Sonnet), give its version, and describe how it was used. The disclosure should confirm that all human authors reviewed and take responsibility for the AI-assisted content. The editorial team checks this disclosure during desk-screen.

Can figures be generated with AI?

No. PNAS Nexus prohibits AI-generated figures, schematics, and images intended to represent original research data. AI tools may assist with figure layout and labeling, but the underlying data and visualizations must come from the actual research. This rule is part of the publisher's broader image-integrity policy.

What happens if AI use goes undisclosed?

PNAS Nexus treats undisclosed AI use as a publication-ethics violation following COPE guidelines. Consequences range from required correction to an expression of concern or retraction, depending on severity. The publisher may notify the authors' institution in serious cases.

Does the AI policy differ from other journals of the same publisher?

The core requirements (disclosure in Methods, no AI authorship, no AI-generated figures) are consistent across the publisher's journals, and PNAS Nexus applies them consistently with the publisher's broader policy framework. The journal-specific element is enforcement intensity at desk-screen: PNAS Nexus academic editors emphasize reproducibility-first review, with a shorter desk-screen window than PNAS proper.

Sources

  1. The publisher's AI policy
  2. PNAS Nexus author guidelines (accessed 2026-05-08)
  3. ICMJE recommendations on AI use (accessed 2026-05-08)
  4. COPE guidance on AI in research publication (accessed 2026-05-08)
