Neuron's AI Policy: Cell Press Rules for Neuroscience Authors
Neuron follows Cell Press AI rules requiring disclosure in STAR Methods, with guidance on separating computational neuroscience research tools from manuscript preparation AI use.
Senior Researcher, Oncology & Cell Biology
Author context
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
Next step
Choose the next useful decision step first.
Use the guide or checklist that matches this page's intent before you ask for a manuscript-level diagnostic.
Neuroscience has been building and using AI models for decades, from neural network architectures inspired by the brain to deep learning tools for calcium imaging analysis. Neuron, the premier Cell Press neuroscience journal, publishes research where AI is both a tool and an inspiration. But the journal's AI policy doesn't address your connectomics pipeline or your DeepLabCut pose estimation system. It addresses whether you used ChatGPT to rewrite your Discussion. Two different things, but the distinction matters for your STAR Methods section.
The Cell Press AI policy
Neuron follows the Cell Press AI policy identically. Same rules as Cell, Cancer Cell, Immunity, Molecular Cell, and Cell Reports:
- AI can't be an author. Generative AI tools don't meet Cell Press authorship criteria: they can't design experiments, take accountability, or approve manuscripts.
- AI use must be disclosed in STAR Methods. Specifically under Method Details.
- AI-generated images are prohibited. No figures, graphical abstracts, or illustrations from generative AI tools.
- Authors are fully accountable. Every co-author takes responsibility for all content.
- All preparation phases count. AI use during any stage of writing requires disclosure.
Cell Press is part of Elsevier, so the policy layers with Elsevier's broader guidelines. But Cell Press's STAR Methods requirement is more specific than what general Elsevier journals mandate. If you've submitted to a non-Cell-Press Elsevier journal before, don't assume the formatting is the same; it isn't.
Neuroscience-specific AI considerations
Neuroimaging data and privacy
fMRI, PET, EEG, and MEG data can contain identifying information, particularly structural MRI scans, which can be used to reconstruct facial features. Many neuroimaging datasets are governed by strict data use agreements.
Don't input patient or participant neuroimaging data into cloud-based AI tools. This applies even to preprocessed or "anonymized" data, because:
- Cloud AI tools may store inputs for model training
- Structural MRI data can be re-identified even after defacing
- fMRI activation maps combined with demographic data may identify individuals
- Institutional data governance policies typically restrict data processing to approved systems
If your Neuron paper involves human neuroimaging, the AI disclosure should explicitly confirm that no participant data was processed through external AI tools.
Computational neuroscience tools vs. writing AI
Neuron papers frequently use AI-based research tools:
Research tools (standard STAR Methods): DeepLabCut (pose estimation), Suite2p (calcium imaging analysis), CaImAn (calcium imaging), FreeSurfer (brain segmentation), fMRIPrep (preprocessing pipeline), ANTs (image registration)
AI writing/code tools (STAR Methods AI disclosure): ChatGPT for language editing, Copilot for writing analysis scripts, Claude for restructuring text
The first category belongs in your standard computational methods description. The second requires a separate AI manuscript preparation disclosure. Don't conflate them: Neuron's reviewers include computational neuroscientists who will notice if your disclosure is unclear about which AI did what.
Brain-computer interface and neural decoding papers
If your paper describes an AI model for neural decoding, brain-computer interface control, or neural signal processing, that's your research subject; it isn't covered by the manuscript preparation policy. Describe it fully in STAR Methods as methodology. Your ChatGPT or Copilot usage for writing goes in a separate disclosure.
This is especially important at Neuron because the journal publishes both the computational methods and their neuroscientific applications. A paper on a new decoder architecture needs two distinct sections: the decoder's technical description (research) and the writing tool disclosure (if applicable).
Writing the STAR Methods disclosure
For a systems neuroscience paper:
"Two-photon calcium imaging data was analyzed using Suite2p (Pachitariu et al., 2017) as described in STAR Methods: Calcium Imaging Analysis. During manuscript preparation, the authors used ChatGPT (GPT-4, OpenAI) to improve the clarity of the Discussion section. GitHub Copilot (Microsoft) assisted with writing Python scripts for the population decoding analysis. All code was validated against manually computed results on a subset of recording sessions. The authors take full responsibility for the published content."
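The validation step mentioned in that disclosure can be very lightweight: recompute the headline metric with an independent, hand-written routine on a held-out subset and confirm the two implementations agree. A minimal sketch, assuming hypothetical session data and function names (nothing here is a real Suite2p or Copilot API):

```python
# Sketch: validate an AI-assisted analysis routine against manually
# computed results on a subset of sessions. All data is illustrative.

def accuracy_ai_assisted(predicted, actual):
    """Stand-in for the Copilot-assisted implementation (vectorized style)."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

def accuracy_manual(predicted, actual):
    """Independent, hand-written reference implementation (explicit loop)."""
    correct = 0
    for i in range(len(actual)):
        if predicted[i] == actual[i]:
            correct += 1
    return correct / len(actual)

# Hypothetical decoded vs. true stimulus labels for three sessions
sessions = {
    "sess01": (["A", "B", "A", "A"], ["A", "B", "B", "A"]),
    "sess02": (["B", "B", "A", "B"], ["B", "B", "A", "B"]),
    "sess03": (["A", "A", "B", "B"], ["A", "B", "B", "B"]),
}

for name, (pred, true) in sessions.items():
    assert abs(accuracy_ai_assisted(pred, true)
               - accuracy_manual(pred, true)) < 1e-12, \
        f"{name}: AI-assisted and manual results disagree"
```

Keeping a small script like this in the repository also gives you something concrete to point to if a reviewer asks how the validation was done.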
For a human neuroimaging paper:
"fMRI data was preprocessed using fMRIPrep v23.1 and analyzed using FSL and SPM12 (see STAR Methods: fMRI Analysis). No participant data was processed through any cloud-based AI tool. During manuscript preparation, Claude (Claude 3.5, Anthropic) was used to improve the language of the Introduction. All AI-suggested edits were reviewed by the corresponding author (M.L.). The authors take full responsibility for the content."
For a molecular/cellular neuroscience paper:
"During preparation of this manuscript, the authors used ChatGPT (GPT-4, OpenAI) to edit the Results section for conciseness. All content was verified against the experimental data by the senior author (K.P.). The authors take full responsibility for the published content."
What requires disclosure at Neuron
| Use case | Disclosure required? | Neuroscience notes |
|---|---|---|
| Grammar/spell check | No | Standard tools exempt |
| ChatGPT for language editing | Yes | STAR Methods, Method Details |
| DeepLabCut for pose tracking | No (research tool) | Standard STAR Methods |
| Copilot for analysis scripts | Yes | Specify which analyses |
| AI for brain region illustrations | Prohibited if generative | Use BioRender or Allen Brain Atlas images |
| Suite2p for calcium imaging | No (research tool) | Standard STAR Methods |
| AI for statistical code | Yes | Confirm validation |
| AI for connectomics analysis code | Yes | Specify steps and validation |
| AI-generated neuron diagrams | Prohibited | Hand-drawn or standard illustration tools |
| AI for EEG/MEG processing scripts | Yes | Confirm against established pipelines |
The brain region illustration point matters. Neuron papers often include anatomical diagrams showing brain regions, circuit connectivity, or experimental targeting. These must be created with standard illustration tools (BioRender, Illustrator, Allen Brain Atlas templates) or derived from actual imaging data. They can't be generated by DALL-E or Midjourney, even if you plan to redraw them: the generative origin disqualifies the image, no matter how much you edit it afterward.
Consequences of non-disclosure
Cell Press follows standard COPE-guided enforcement:
During review: Request to add disclosure. Neuron's reviewers include computational experts who may flag AI-generated patterns in code or text.
After publication:
- Correction for minor language editing non-disclosure
- Expression of concern if AI affected data analysis or interpretation
- Retraction for fabricated data or false claims
The clinical neuroscience concern: Some Neuron papers have translational implications for neurological and psychiatric disorders. If AI tools influenced how clinical data was interpreted or how treatment-relevant findings were described, the consequences extend beyond publication ethics into clinical responsibility.
Community dynamics: Neuron publishes roughly 200-250 articles per year. The neuroscience community at this level is tightly connected; many authors serve as reviewers for each other. A publication ethics issue at Neuron circulates quickly through conference networks and departmental channels.
Timeline and policy stability
Cell Press formalized its AI policy in early 2023:
| Date | Development |
|---|---|
| January 2023 | Cell Press publishes editorial addressing AI tools and authorship |
| Early 2023 | Formal policy added to author guidelines across all Cell Press journals |
| Mid 2023 | Policy refined with clearer STAR Methods disclosure guidance |
| 2024 | Elsevier aligns company-wide policy; Cell Press policy stable |
| 2025–2026 | Enforcement integrated into editorial workflow |
The policy has been consistent since launch. Unlike AAAS (which initially banned AI text before reversing to a disclosure model), Cell Press went straight to disclosure from the start. For neuroscience labs with multi-year projects in the pipeline, this stability means the rules you're planning for today will almost certainly be the rules in effect when you submit.
Comparison with other neuroscience journals
| Feature | Neuron | Nature Neuroscience | Brain | Journal of Neuroscience | eLife (Neuroscience) |
|---|---|---|---|---|---|
| Publisher | Cell Press (Elsevier) | Springer Nature | Oxford UP | SfN (Oxford UP) | eLife Sciences |
| AI authorship | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Disclosure location | STAR Methods | Methods | Methods | Methods | Methods |
| AI image ban | Yes | Yes | Yes | Yes | Yes |
| Computational neuro content | Very common | Very common | Common | Common | Very common |
| Human data sensitivity | High | High | Very high | Moderate | High |
Nature Neuroscience uses free-form Methods (Springer Nature style); Neuron uses STAR Methods (Cell Press style). The substantive AI requirements are identical. If you're preparing a manuscript for both as backup options, the disclosure content is the same; only the formatting differs.
Brain (Oxford UP) has especially high sensitivity to patient data given its focus on clinical neurology. The Journal of Neuroscience (SfN) is the society journal with the broadest readership in the field; its AI policy follows Oxford UP guidelines.
How Cell Press's policy compares to the publisher-wide Elsevier stance
| Aspect | Elsevier (general) | Cell Press / Neuron (specific) |
|---|---|---|
| Policy text | Broad guidelines | More prescriptive |
| Disclosure location | Flexible | STAR Methods, Method Details |
| Example disclosure language | General | Provided in author guidelines |
| Editorial screening | Varies | Active at Cell Press |
| Scope | ~2,800 journals | 50+ Cell Press journals |
Practical advice for Neuron submissions
For electrophysiology and imaging papers:
- If AI helped write analysis code for spike sorting, calcium trace extraction, or connectivity analysis, disclose and validate
- Keep research tools (Kilosort, Suite2p, DeepLabCut) in standard STAR Methods
- AI-assisted code goes in a separate paragraph in Method Details
For human neuroimaging and clinical neuroscience:
- Never process participant data through cloud AI tools
- Explicitly state in your disclosure that patient data wasn't AI-processed
- Keep AI away from clinical interpretation sections
For computational neuroscience:
- If your paper develops an AI model, clearly separate the model description from any writing AI disclosure
- Deposit code in a public repository; Neuron expects code availability for computational papers
- AI-generated code should be independently validated
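Independent validation often means recomputing the same statistic with a second, deliberately naive implementation and asserting agreement. A sketch under the assumption that an AI assistant produced a Pearson correlation routine (both function names and the data are hypothetical):

```python
import math

def pearson_ai(x, y):
    """Stand-in for an AI-generated correlation routine (deviation form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def pearson_reference(x, y):
    """Independent check via the textbook raw-sum formula."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    num = n * sxy - sx * sy
    den = math.sqrt(n * sxx - sx ** 2) * math.sqrt(n * syy - sy ** 2)
    return num / den

# Hypothetical firing rates vs. stimulus intensities
firing_rate = [2.0, 3.5, 4.1, 5.0, 6.2]
stim_intensity = [1.0, 2.0, 3.0, 4.0, 5.0]

assert abs(pearson_ai(firing_rate, stim_intensity)
           - pearson_reference(firing_rate, stim_intensity)) < 1e-9
```

Because the two formulas are algebraically equivalent but implemented differently, agreement is meaningful evidence that the AI-generated version is correct.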
For behavioral neuroscience:
- If AI helped with behavioral analysis code (video tracking, classification), disclose and describe validation
- Behavioral scoring criteria should be human-defined, even if the scoring is automated
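For behavioral work, "describe validation" usually means reporting agreement between the automated pipeline and a human-scored subset, against an acceptance threshold that humans defined in advance. A minimal sketch with illustrative labels and a hypothetical threshold:

```python
# Sketch: compare automated behavior labels (from AI-assisted tracking
# code) against a human-scored validation subset. Data is illustrative.

human_scored = ["groom", "rear", "groom", "walk", "groom", "walk"]
automated    = ["groom", "rear", "walk",  "walk", "groom", "walk"]

agreement = sum(h == a for h, a in zip(human_scored, automated)) / len(human_scored)
print(f"Percent agreement on validation subset: {agreement:.1%}")

# Human-defined acceptance criterion, fixed before validation was run
MIN_AGREEMENT = 0.80
assert agreement >= MIN_AGREEMENT, "Automated scoring fails human-agreement check"
```

Reporting the subset size and the agreement figure in STAR Methods turns the "disclose and validate" bullet into a checkable claim.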
Before submission checklist:
- [ ] AI disclosure in STAR Methods → Method Details
- [ ] Research tools in standard STAR Methods
- [ ] Writing/code tools in separate AI disclosure
- [ ] No participant data processed through cloud AI
- [ ] No generative AI images or brain diagrams
- [ ] All analysis code validated and deposited
- [ ] All co-authors reviewed the disclosure
A free manuscript assessment can help verify that your Neuron submission meets Cell Press standards before you submit.
Reference library
Use the core publishing datasets alongside this guide
This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: how selective journals are, how long review takes, and what the submission requirements look like across journals.
Dataset / reference guide
Peer Review Timelines by Journal
Reference-grade journal timeline data that authors, labs, and writing centers can cite when discussing realistic review timing.
Dataset / benchmark
Biomedical Journal Acceptance Rates
A field-organized acceptance-rate guide that works as a neutral benchmark when authors are deciding how selective to target.
Reference table
Journal Submission Specs
A high-utility submission table covering word limits, figure caps, reference limits, and formatting expectations.
Before you upload
Move from this article into the next decision-support step; use the scan once the manuscript and target journal are concrete enough to evaluate.