Manuscript Preparation · 11 min read · Updated Apr 27, 2026

Pre-Submission Review for Human-Computer Interaction Papers

HCI papers need pre-submission review that checks contribution type, study design, ethics, interaction evidence, and venue fit.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.

Readiness scan

Find out if this manuscript is ready to submit.

Run the Free Readiness Scan before you submit. Catch the issues editors reject on first read.

Check my manuscript
Anthropic Privacy Partner. Zero-retention manuscript processing.
See sample report
Or find your best-fit journal
Working map

How to use this page well

These pages work best when they behave like tools, not essays. Use the quick structure first, then apply it to the exact journal and manuscript situation.

Use this page for: getting the structure, tone, and decision logic right before you send anything out.

Most important move: make the reviewer-facing or editor-facing ask obvious early rather than burying it in prose.

Common mistake: turning a practical page into a long explanation instead of a working template or checklist.

Next step: use the page as a tool, then adjust it to the exact manuscript and journal situation.

Quick answer: Pre-submission review for human-computer interaction papers should test whether the contribution type, participant evidence, study design, interaction claim, ethics, accessibility, transparency, and venue fit support the manuscript's claim. HCI reviewers often reject papers that have a usable system, thoughtful design, or interesting field study but do not make the human interaction contribution clear enough for the target subcommunity.

If you need a manuscript-specific readiness diagnosis, start with the AI manuscript review. If the paper is mainly about model performance, see pre-submission review for machine learning.

Method note: this page uses ACM CHI submission and reviewing guidance, ACM reproducibility guidance, CHI transparency and ethics materials, and Manusights HCI review patterns reviewed in April 2026.

What This Page Owns

This page owns HCI-specific pre-submission review. It applies to manuscripts about user studies, interaction techniques, design research, accessibility, CSCW, ubiquitous computing, human-AI interaction, visualization interaction, social computing, digital health interfaces, educational technology, usability, participatory design, and systems evaluated through human use.

HCI manuscript needs interaction and study critique: this page
Model or benchmark performance dominates: machine learning review
Broad AI governance dominates: artificial intelligence review
Education intervention dominates: education research review
Statistics-only issue: statistical review

The boundary is interaction. The manuscript should show how people use, experience, interpret, adapt, resist, or are affected by a system, design, tool, interface, or socio-technical arrangement.

What HCI Reviewers Check First

HCI reviewers often ask:

  • what contribution type is this: empirical, system, design, theory, method, dataset, or critique?
  • is the subcommittee or venue the right home?
  • are participants, recruitment, context, and consent described well enough?
  • does the method match the interaction claim?
  • are qualitative, quantitative, design, and artifact evidence integrated rather than stacked?
  • are ethics, privacy, anonymity, and potential harms addressed?
  • are accessibility and inclusion considered where relevant?
  • are materials, code, instruments, or supplementary artifacts available enough for transparency?
  • do design implications follow from the evidence?

The manuscript has to tell reviewers how to evaluate it.

In Our Pre-Submission Review Work

In our pre-submission review work, HCI papers most often fail when authors make reviewers guess the paper's contribution category.

Contribution blur: the paper reads like a system, study, and design essay at once, without naming the main contribution.

Participant-context gap: recruitment, setting, demographics, expertise, compensation, accessibility, or consent details are too thin for interpretation.

Interaction evidence mismatch: the paper claims changed use, experience, trust, usability, or collaboration from evidence that only shows feasibility.

Design implication overreach: recommendations sound broader than the study population or design context supports.

Ethics thinness: privacy, sensitive data, workplace power, clinical settings, minors, marginalized groups, or AI-mediated interaction risks are handled too late.

A useful review should identify the first HCI-specific reason a reviewer would downgrade the work.

Public Field Signals

CHI author guidance emphasizes subcommittee selection, transparency, ethics, accessibility, and supplementary material where appropriate. CHI reviewing guidance asks reviewers to evaluate work using the criteria for the submission type and to consider transparency in different contribution categories. ACM review training also asks reviewers to notice whether methods are detailed enough to evaluate reproducibility and whether ethical issues are present.

Those signals mean HCI readiness is not just "did we run a user study?" It is whether the study, design, artifact, and contribution type align.

HCI Review Matrix

Contribution type (empirical, system, design, theory, method, critique). Early failure signal: reviewer cannot tell how to judge it.

Participants (recruitment, context, consent, demographics, expertise). Early failure signal: sample meaning is unclear.

Interaction evidence (use, experience, collaboration, accessibility, trust). Early failure signal: claim exceeds evidence.

Methods (qualitative, quantitative, mixed, design, deployment). Early failure signal: method does not answer question.

Ethics (privacy, consent, harms, power, sensitive data). Early failure signal: ethics appears as a formality.

Transparency (instruments, code, artifacts, materials, analysis). Early failure signal: evidence cannot be audited.

Venue fit (CHI, CSCW, UIST, DIS, TOCHI, IMWUT, domain venue). Early failure signal: subcommunity mismatch.

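The matrix above is ordered, and the page's core advice is that a useful review names the first reason a reviewer would downgrade the work. As a purely illustrative sketch (the layer names and failure signals come from the matrix; the pass/fail inputs are hypothetical placeholders an author would judge for themselves), the idea can be expressed as an ordered checklist:

```python
# Illustrative sketch: the HCI review matrix as an ordered checklist.
# Layers and failure signals mirror the matrix above; which layers
# "pass" is a hypothetical author-supplied judgment, not computed here.

REVIEW_LAYERS = [
    ("Contribution type", "Reviewer cannot tell how to judge it"),
    ("Participants", "Sample meaning is unclear"),
    ("Interaction evidence", "Claim exceeds evidence"),
    ("Methods", "Method does not answer question"),
    ("Ethics", "Ethics appears as a formality"),
    ("Transparency", "Evidence cannot be audited"),
    ("Venue fit", "Subcommunity mismatch"),
]

def first_blocking_issue(passed):
    """Return the first failing layer and its signal, mirroring the idea
    that a useful review names the first reason for a downgrade."""
    for layer, signal in REVIEW_LAYERS:
        if not passed.get(layer, False):
            return f"{layer}: {signal}"
    return None

# Example: everything is ready except the ethics section.
status = {layer: True for layer, _ in REVIEW_LAYERS}
status["Ethics"] = False
print(first_blocking_issue(status))  # Ethics: Ethics appears as a formality
```

The ordering matters: a review that flags transparency gaps while the contribution type is still ambiguous is solving the wrong problem first.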
This matrix keeps the page distinct from AI and machine-learning review.

What To Send

Send the manuscript, target venue or subcommittee, study protocol, recruitment materials, consent language, instruments, interview guide, codebook, system screenshots, artifact or demo access if available, analysis plan, ethics approval or exemption notes, supplementary materials, and prior reviewer comments if available.

If the work involves AI, sensitive data, children, patients, employees, disabled users, or marginalized communities, include the risk and consent context.

What A Useful Review Should Deliver

A useful HCI pre-submission review should include:

  • HCI contribution verdict
  • venue and subcommittee fit critique
  • participant and study-design review
  • interaction-evidence check
  • ethics, privacy, and accessibility review
  • transparency and artifact-readiness note
  • submit, revise, retarget, or diagnose deeper call

The review should not only say "add more user evidence." It should identify which interaction claim needs support.

Common Fixes Before Submission

Before submission, authors often need to:

  • name the contribution type in the introduction
  • align the title and abstract with the target subcommunity
  • add participant and setting detail
  • separate feasibility, usability, experience, and impact claims
  • make ethics and privacy reasoning explicit
  • move design implications closer to evidence
  • include instruments, codebook, artifact notes, or supplementary materials
  • retarget from CHI to CSCW, DIS, UIST, IMWUT, TOCHI, or a domain venue

These fixes help reviewers evaluate the paper on the right terms.

Reviewer Lens By Paper Type

  • A system paper needs interaction novelty, a use case, evaluation, and comparison to existing tools.
  • A design paper needs design rationale, iteration, evidence, and contribution beyond the artifact.
  • An empirical paper needs sampling, method fit, analysis transparency, and interpretation restraint.
  • A human-AI paper needs user task, model role, trust, failure, and harm context.
  • An accessibility paper needs community relevance, inclusive methods, and careful generalization.
  • A CSCW paper needs collaboration, power, organization, and socio-technical context.

The AI manuscript review can flag whether the blocking risk is contribution type, study design, ethics, interaction evidence, or venue fit.

How To Avoid Cannibalizing AI Or ML Pages

Use this page when the manuscript's submission risk depends on human use, participant evidence, interaction design, usability, qualitative or mixed-methods rigor, ethics, or HCI venue fit. Use ML review when the model or benchmark is the primary contribution. Use AI review when governance, system policy, or broad AI framing dominates.

That distinction keeps the page focused on the HCI buyer's actual problem.

What Not To Submit Yet

Do not submit an HCI paper if the contribution type is still ambiguous. Reviewers can be generous across methods, but they need to know whether the manuscript is asking to be judged as a system, empirical study, design contribution, theory paper, method paper, or critique.

Also pause if the participant story is thin. HCI claims often depend on who used the system, under what conditions, with what power dynamics, and with what risks. Those details should be visible before the results section.

For human-AI papers, pause again if the manuscript treats model performance and user experience as interchangeable. A model can perform well while creating poor interaction, misplaced trust, new workload, or harm.

For prototype papers, pause if the artifact cannot be inspected well enough for reviewers to understand the interaction. Screenshots, task flows, prompts, study materials, and failure cases often matter as much as the reported effect size.

Submit If / Think Twice If

Submit if:

  • contribution type and venue fit are clear
  • participant context supports interpretation
  • ethics and privacy are explicit
  • interaction claims match evidence
  • materials or artifacts are transparent enough
  • design implications are proportionate

Think twice if:

  • the paper could be reviewed under three incompatible contribution types
  • ethics are handled only in a sentence
  • the system is evaluated only for feasibility
  • broad design claims come from narrow evidence

Readiness check

Run the scan to see how your manuscript scores on these criteria.

See score, top issues, and what to fix before you submit.


Bottom Line

Pre-submission review for HCI papers should protect the link between human interaction evidence and the HCI claim. The manuscript needs contribution clarity, participant context, ethical transparency, interaction evidence, and a venue target that fits the subcommunity.

Use the AI manuscript review if you need a fast readiness diagnosis before submitting an HCI paper.

  • https://chi2024.acm.org/for-authors/papers/
  • https://chi2022.acm.org/for-authors/presenting/papers/guide-to-a-successful-submission/
  • https://reviewers.acm.org/training-course/review-criteria
  • https://www.acm.org/publications/reproducibility

Frequently asked questions

What is a pre-submission review for an HCI paper?

It is a field-specific review that checks whether an HCI manuscript is ready for CHI-style or journal submission, including contribution type, study design, participant evidence, ethics, interaction claims, transparency, and venue fit.

What do HCI reviewers most often attack?

They often attack unclear contribution type, weak participant recruitment, thin interaction evidence, missing ethics detail, overclaimed design implications, poor qualitative rigor, and mismatch between systems, empirical, design, accessibility, or theory subcommunities.

How does HCI review differ from AI or ML review?

AI and ML review focus on model contribution, benchmark evidence, and reproducibility. HCI review focuses on human use, interaction, participant context, design rationale, ethics, accessibility, qualitative or mixed-methods rigor, and contribution type.

When should I use this review?

Use it before submitting CHI, CSCW, UIST, TOCHI, DIS, IMWUT, or applied HCI papers where contribution framing, study design, ethics, and interaction evidence could decide review.

Final step

Run the Free Readiness Scan. See score, top issues, and journal-fit signals before you submit.
