Neuron's AI Policy: Cell Press Rules for Neuroscience Authors
Neuron follows Cell Press AI rules requiring disclosure in STAR Methods, with guidance on separating computational neuroscience research tools from manuscript preparation AI use.
Neuron at a glance
Key metrics to place the journal before deciding whether it fits your manuscript and career goals.
What makes this journal worth targeting
- IF 15.0 puts Neuron in a visible tier — citations from papers here carry real weight.
- Scope specificity matters more than impact factor for most manuscript decisions.
- Acceptance rate of ~8% means fit determines most outcomes.
When to look elsewhere
- When your paper sits at the edge of the journal's stated scope — borderline fit rarely improves after submission.
- If timeline matters: Neuron takes ~4 days. A faster-turnaround journal may suit a grant or job deadline better.
- If OA is required: gold OA costs $10,400 USD. Check institutional agreements before submitting.
Quick answer: Neuroscience has been building and using AI models for decades, from neural network architectures inspired by the brain to deep learning tools for calcium imaging analysis. Neuron, the premier Cell Press neuroscience journal, publishes research where AI is both a tool and an inspiration.
Neuron AI Policy at a Glance
- AI authorship: Prohibited. AI tools cannot be listed as authors and cannot take accountability for the work.
- AI disclosure: Required. Disclose use of AI tools (e.g., ChatGPT, Claude, Gemini) in the Methods or Acknowledgments section.
- AI-generated images: Prohibited. AI-created figures, illustrations, or visualizations are not permitted in the manuscript.
- Copy editing: All AI use, including copy editing, must be disclosed.
The Cell Press AI policy
Neuron follows the Cell Press AI policy verbatim; the same rules apply to Cell, Cancer Cell, Immunity, Molecular Cell, and Cell Reports:
- AI can't be an author. Generative AI tools don't meet Cell Press authorship criteria: they can't design experiments, take accountability, or approve manuscripts.
- AI use must be disclosed in STAR Methods. Specifically under Method Details.
- AI-generated images are prohibited. No figures, graphical abstracts, or illustrations from generative AI tools.
- Authors are fully accountable. Every co-author takes responsibility for all content.
- All preparation phases count. AI use during any stage of writing requires disclosure.
Cell Press is part of Elsevier, so the policy layers with Elsevier's broader guidelines. But Cell Press's STAR Methods requirement is more specific than what general Elsevier journals mandate. If you've submitted to a non-Cell-Press Elsevier journal before, don't assume the formatting is the same; it isn't.
Neuroimaging data and privacy
fMRI, PET, EEG, and MEG data can contain identifying information, particularly structural MRI scans, which can be used to reconstruct facial features. Many neuroimaging datasets are governed by strict data use agreements.
Don't input patient or participant neuroimaging data into cloud-based AI tools. This applies even to preprocessed or "anonymized" data, because:
- Cloud AI tools may store inputs for model training
- Structural MRI data can be re-identified even after defacing
- fMRI activation maps combined with demographic data may identify individuals
- Institutional data governance policies typically restrict data processing to approved systems
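One practical safeguard is auditing your analysis repository for imports of cloud-AI client libraries before it ever touches participant data. Below is a minimal Python sketch, assuming a directory of Python analysis scripts; the `FORBIDDEN_MODULES` deny-list and the function name are illustrative, not a Cell Press requirement:

```python
import ast
from pathlib import Path

# Illustrative deny-list of cloud-AI client libraries; extend it to match
# your institution's data governance policy.
FORBIDDEN_MODULES = {"openai", "anthropic", "google.generativeai"}

def _is_forbidden(name):
    """True if a module name is, or lives under, a deny-listed package."""
    return any(name == m or name.startswith(m + ".") for m in FORBIDDEN_MODULES)

def find_cloud_ai_imports(script_dir):
    """Scan .py files under script_dir and report imports of cloud-AI SDKs."""
    hits = []
    for path in sorted(Path(script_dir).rglob("*.py")):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            hits.extend((str(path), n) for n in names if _is_forbidden(n))
    return hits
```

An empty result doesn't prove compliance (scripts can call HTTP APIs directly), but a non-empty one is a clear signal to review the pipeline before running it on participant data.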
If your Neuron paper involves human neuroimaging, the AI disclosure should explicitly confirm that no participant data was processed through external AI tools.
Computational neuroscience tools vs. writing AI
Neuron papers frequently use AI-based research tools:
Research tools (standard STAR Methods): DeepLabCut (pose estimation), Suite2p (calcium imaging analysis), CaImAn (calcium imaging), FreeSurfer (brain segmentation), fMRIPrep (preprocessing pipeline), ANTs (image registration)
AI writing/code tools (STAR Methods AI disclosure): ChatGPT for language editing, Copilot for writing analysis scripts, Claude for restructuring text
The first category belongs in your standard computational methods description. The second requires a separate AI manuscript preparation disclosure. Don't conflate them: Neuron's reviewers include computational neuroscientists who will notice if your disclosure is unclear about which AI did what.
Brain-computer interface and neural decoding papers
If your paper describes an AI model for neural decoding, brain-computer interface control, or neural signal processing, that's your research subject; it isn't covered by the manuscript preparation policy. Describe it fully in STAR Methods as methodology. Your ChatGPT or Copilot usage for writing goes in a separate disclosure.
This is especially important at Neuron because the journal publishes both the computational methods and their neuroscientific applications. A paper on a new decoder architecture needs two distinct sections: the decoder's technical description (research) and the writing tool disclosure (if applicable).
Writing the STAR Methods disclosure
For a systems neuroscience paper:
"Two-photon calcium imaging data was analyzed using Suite2p (Pachitariu et al., 2017) as described in STAR Methods: Calcium Imaging Analysis. During manuscript preparation, the authors used ChatGPT (GPT-4, OpenAI) to improve the clarity of the Discussion section. GitHub Copilot (Microsoft) assisted with writing Python scripts for the population decoding analysis. All code was validated against manually computed results on a subset of recording sessions. The authors take full responsibility for the published content."
For a human neuroimaging paper:
"fMRI data was preprocessed using fMRIPrep v23.1 and analyzed using FSL and SPM12 (see STAR Methods: fMRI Analysis). No participant data was processed through any cloud-based AI tool. During manuscript preparation, Claude (Claude 3.5, Anthropic) was used to improve the language of the Introduction. All AI-suggested edits were reviewed by the corresponding author (M.L.). The authors take full responsibility for the content."
For a molecular/cellular neuroscience paper:
"During preparation of this manuscript, the authors used ChatGPT (GPT-4, OpenAI) to edit the Results section for conciseness. All content was verified against the experimental data by the senior author (K.P.). The authors take full responsibility for the published content."
What requires disclosure at Neuron
| Use case | Disclosure required? | Neuroscience notes |
|---|---|---|
| Grammar/spell check | No | Standard tools exempt |
| ChatGPT for language editing | Yes | STAR Methods, Method Details |
| DeepLabCut for pose tracking | No (research tool) | Standard STAR Methods |
| Copilot for analysis scripts | Yes | Specify which analyses |
| AI for brain region illustrations | Prohibited if generative | Use BioRender or Allen Brain Atlas images |
| Suite2p for calcium imaging | No (research tool) | Standard STAR Methods |
| AI for statistical code | Yes | Confirm validation |
| AI for connectomics analysis code | Yes | Specify steps and validation |
| AI-generated neuron diagrams | Prohibited | Hand-drawn or standard illustration tools |
| AI for EEG/MEG processing scripts | Yes | Confirm against established pipelines |
The brain region illustration point matters. Neuron papers often include anatomical diagrams showing brain regions, circuit connectivity, or experimental targeting. These must be created with standard illustration tools (BioRender, Illustrator, Allen Brain Atlas templates) or derived from actual imaging data. They can't be generated by DALL-E or Midjourney, even if you plan to redraw them afterward. The generative origin disqualifies the image, no matter how heavily you edit it.
Consequences of non-disclosure
Cell Press follows standard COPE-guided enforcement:
During review: Request to add disclosure. Neuron's reviewers include computational experts who may flag AI-generated patterns in code or text.
After publication:
- Correction for minor language editing non-disclosure
- Expression of concern if AI affected data analysis or interpretation
- Retraction for fabricated data or false claims
The clinical neuroscience concern: Some Neuron papers have translational implications for neurological and psychiatric disorders. If AI tools influenced how clinical data was interpreted or how treatment-relevant findings were described, the consequences extend beyond publication ethics into clinical responsibility.
Community dynamics: Neuron publishes roughly 200-250 articles per year. The neuroscience community at this level is tightly connected; many authors serve as reviewers for each other. A publication ethics issue at Neuron circulates quickly through conference networks and departmental channels.
Timeline and policy stability
Cell Press formalized its AI policy in early 2023:
| Date | Development |
|---|---|
| January 2023 | Cell Press publishes editorial addressing AI tools and authorship |
| Early 2023 | Formal policy added to author guidelines across all Cell Press journals |
| Mid 2023 | Policy refined with clearer STAR Methods disclosure guidance |
| 2024 | Elsevier aligns company-wide policy; Cell Press policy stable |
| 2025–2026 | Enforcement integrated into editorial workflow |
The policy has been consistent since launch. Unlike AAAS (which initially banned AI text before reversing to a disclosure model), Cell Press went straight to disclosure from the start. For neuroscience labs with multi-year projects in the pipeline, this stability means the rules you're planning for today will almost certainly be the rules in effect when you submit.
Comparison with other neuroscience journals
| Feature | Neuron | Nature Neuroscience | Brain | Journal of Neuroscience | eLife (Neuroscience) |
|---|---|---|---|---|---|
| Publisher | Cell Press (Elsevier) | Springer Nature | Oxford UP | SfN (Oxford UP) | eLife Sciences |
| AI authorship | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Disclosure location | STAR Methods | Methods | Methods | Methods | Methods |
| AI image ban | Yes | Yes | Yes | Yes | Yes |
| Computational neuro content | Very common | Very common | Common | Common | Very common |
| Human data sensitivity | High | High | Very high | Moderate | High |
Nature Neuroscience uses free-form Methods (Springer Nature style); Neuron uses STAR Methods (Cell Press style). The substantive AI requirements are identical. If you're preparing a manuscript for both as backup options, the disclosure content is the same, only the formatting differs.
Brain (Oxford UP) has especially high sensitivity to patient data given its focus on clinical neurology. The Journal of Neuroscience (SfN) is the society journal with the broadest readership in the field; its AI policy follows Oxford UP guidelines.
How Cell Press's policy compares to the publisher-wide Elsevier stance
| Aspect | Elsevier (general) | Cell Press / Neuron (specific) |
|---|---|---|
| Policy text | Broad guidelines | More prescriptive |
| Disclosure location | Flexible | STAR Methods, Method Details |
| Example disclosure language | General | Provided in author guidelines |
| Editorial screening | Varies | Active at Cell Press |
| Scope | ~2,800 journals | 50+ Cell Press journals |
Practical advice for Neuron submissions
For electrophysiology and imaging papers:
- If AI helped write analysis code for spike sorting, calcium trace extraction, or connectivity analysis, disclose and validate
- Keep research tools (Kilosort, Suite2p, DeepLabCut) in standard STAR Methods
- AI-assisted code goes in a separate paragraph in Method Details
For human neuroimaging and clinical neuroscience:
- Never process participant data through cloud AI tools
- Explicitly state in your disclosure that patient data wasn't AI-processed
- Keep AI away from clinical interpretation sections
For computational neuroscience:
- If your paper develops an AI model, clearly separate the model description from any writing AI disclosure
- Deposit code in a public repository; Neuron expects code availability for computational papers
- AI-generated code should be independently validated
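Independent validation of AI-generated code can be as simple as re-implementing the computation with explicit loops and checking that the outputs agree. A minimal Python sketch on simulated spike counts; both function names are hypothetical, standing in for an AI-assisted routine and a hand-written reference:

```python
import numpy as np

def firing_rates_vectorized(spikes, bin_width):
    """Hypothetical AI-assisted version: fast vectorized firing rates.
    spikes has shape (trials, neurons, bins); rates are in spikes/s."""
    return spikes.sum(axis=2) / (spikes.shape[2] * bin_width)

def firing_rates_manual(spikes, bin_width):
    """Hand-written reference: explicit loops, slow but easy to audit."""
    n_trials, n_neurons, n_bins = spikes.shape
    rates = np.zeros((n_trials, n_neurons))
    for t in range(n_trials):
        for n in range(n_neurons):
            rates[t, n] = spikes[t, n, :].sum() / (n_bins * bin_width)
    return rates

# Validate the AI-assisted code against the reference on simulated data.
rng = np.random.default_rng(0)
spikes = rng.poisson(0.1, size=(5, 8, 100))  # 5 trials, 8 neurons, 100 bins
assert np.allclose(firing_rates_vectorized(spikes, 0.01),
                   firing_rates_manual(spikes, 0.01))
```

Running the comparison on a subset of real recording sessions, and saying so in the disclosure, mirrors the example STAR Methods language above.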
For behavioral neuroscience:
- If AI helped with behavioral analysis code (video tracking, classification), disclose and describe validation
- Behavioral scoring criteria should be human-defined, even if the scoring is automated
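"Human-defined criteria, automated scoring" can mean committing the criteria as named constants that the classifier merely applies. A minimal Python sketch for freezing detection from a speed trace; the threshold values and names are illustrative, not field standards:

```python
import numpy as np

# Human-defined scoring criteria, documented before automation.
# Illustrative values only; set and justify your own in STAR Methods.
FREEZING_MAX_SPEED_CM_S = 0.5
FREEZING_MIN_DURATION_S = 1.0

def score_freezing(speed, fps):
    """Label each frame as freezing (True) when speed stays below the
    human-defined threshold for at least the minimum duration."""
    below = speed < FREEZING_MAX_SPEED_CM_S
    min_frames = int(FREEZING_MIN_DURATION_S * fps)
    labels = np.zeros_like(below)  # bool array, all False
    run_start = None
    for i, b in enumerate(below):
        if b and run_start is None:
            run_start = i                      # a slow run begins
        elif not b and run_start is not None:
            if i - run_start >= min_frames:    # long enough to count
                labels[run_start:i] = True
            run_start = None
    if run_start is not None and len(below) - run_start >= min_frames:
        labels[run_start:] = True              # run extends to the end
    return labels
```

Because the thresholds live in code as named constants, the automated scoring is fully reproducible while the criteria themselves remain a human decision.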
Before submission checklist:
- [ ] AI disclosure in STAR Methods → Method Details
- [ ] Research tools in standard STAR Methods
- [ ] Writing/code tools in separate AI disclosure
- [ ] No participant data processed through cloud AI
- [ ] No generative AI images or brain diagrams
- [ ] All analysis code validated and deposited
- [ ] All co-authors reviewed the disclosure
A submission readiness check can help verify that your manuscript meets Cell Press standards before you submit.
What should you do about Neuron's AI policy?
Comply proactively if:
- You used any AI tool (ChatGPT, Grammarly, Copilot) during manuscript preparation
- The journal requires AI use disclosure in the methods or acknowledgments
- Your institution has its own AI use policy that may be stricter
Less concerned if:
- You used AI only for grammar/spell checking (most journals exempt this)
- The journal does not have a formal AI policy yet
- Your use was limited to literature search or reference management
Frequently asked questions

Can authors use AI tools such as ChatGPT for a Neuron manuscript?
Yes, with mandatory disclosure in STAR Methods. Neuron follows the Cell Press AI policy: AI tools can be used for language editing and preparation, but all use must be disclosed under Method Details. AI can't be listed as an author.

Does Neuron have its own AI policy, separate from other Cell Press journals?
No. Neuron follows the identical Cell Press policy as Cell, Cancer Cell, Molecular Cell, Immunity, and all other Cell Press journals. The publisher sets the policy centrally.

Where does the AI disclosure go?
In the STAR Methods section under Method Details. This is the same structured format used across all Cell Press journals.

How are AI-based research tools handled compared with writing AI?
AI-based neuroimaging tools (FreeSurfer's deep learning components, DeepLabCut, etc.) are research methods described in standard STAR Methods. If ChatGPT or Copilot helped write analysis scripts, that's a separate manuscript preparation disclosure. Never input patient neuroimaging data into cloud-based AI tools.

What happens if AI use isn't disclosed?
Cell Press follows COPE guidelines. During review, disclosure must be added. After publication, consequences range from correction to retraction. For neuroscience papers with clinical applications, undisclosed AI in data interpretation could trigger additional institutional and regulatory review.