Precision in Methods and Limitations: Phrases for Sensitivity Analyses and Robustness Checks
Worried that reviewers will question whether your results hold up under alternative analytic choices? By the end of this short lesson, you’ll be able to write concise, journal-aligned phrases that describe common sensitivity analyses and robustness checks and explain their implications for interpretation. You’ll get a clear map of where to place each check (Methods vs. Limitations), compact template sentences you can drop into manuscripts, real-world examples, and short exercises to test your reporting and editing skills.
Step 1 — Map the scope: what sensitivity analyses and robustness checks are, why they matter, and where to place them
Sensitivity analyses and robustness checks are systematic variations of your primary analytic approach designed to test whether your main conclusions are stable under reasonable alternative assumptions, data-handling decisions, or modeling choices. In clinical research, these checks serve three central purposes: (1) they assess the credibility of causal claims or effect estimates by exposing potential sources of bias, (2) they document the extent to which results depend on arbitrary or uncertain analytic choices, and (3) they provide transparent evidence that strengthens readers’ confidence (or warns readers of fragility) when interpreting the primary findings. Clear, discipline-specific phrasing for these elements is expected by journals and by editorial guidelines such as ICMJE and CONSORT; this is why learning concise, journal-aligned language matters as much as choosing appropriate analyses.
Common categories of sensitivity analyses and robustness checks and where to describe them
- Model specification (Methods): Tests that explore alternative sets of covariates, functional forms (e.g., linear vs. spline), interaction terms, or link functions. These belong in the Methods because they are analytic choices made a priori or prespecified in a protocol. Describe exactly which alternative models were tested and why.
- Missing data handling (Methods): Approaches such as complete-case analysis, single imputation, multiple imputation, and pattern-mixture models. State which methods were primary and which were secondary checks, and give assumptions (e.g., missing at random) that motivate each approach.
- Propensity score approaches and causal balancing (Methods): Alternate implementations—matching, stratification, inverse-probability-of-treatment weighting (IPTW), and doubly robust estimators—should be listed in Methods with their balance diagnostics. Propensity methods often replace or complement multivariable adjustment, and the phrasing should indicate whether they are primary or sensitivity approaches.
- Outcome definition and measurement (Methods and Limitations): Alternate outcome windows, composite vs. individual endpoints, or different adjudication rules. Put the specification of alternate outcomes in Methods but note in Limitations when outcome redefinitions materially affect conclusions.
- Inclusion/exclusion criteria and analytic sample (Methods and Limitations): Sensitivity checks that modify eligibility (e.g., excluding early events, restricting to high-adherence subgroups) should be planned and reported in Methods. If changes materially alter results, describe interpretive caveats in Limitations.
- Noninferiority/superiority margins and trial assumptions (Methods): Varying the noninferiority margin or assumptions about event rates belongs in Methods; present how conclusions depend on these choices in Results and Limitations.
- Adherence analyses: per-protocol vs. intent-to-treat (Methods and Limitations): Describe both approaches in Methods and interpret divergence in Limitations, since per-protocol analyses may be susceptible to postrandomization selection bias.
For every category, insert short, explicit language in Methods that defines the alternative approach, the rationale for its use, and any a priori specification. Reserve Limitations for transparent discussion of what divergent sensitivity results mean for confidence in the study conclusions and for future research directions. A minimal code sketch of the first category (alternative covariate adjustment) follows, as a concrete anchor for the phrasing.
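To ground that first category, here is a minimal sketch of a model-specification check in Python. The dataset is synthetic and every variable name (event, treatment, comorbidity_index, prior_meds) is a hypothetical placeholder; the point is the pattern of fitting the primary and alternative models and comparing the treatment estimate, not the specific numbers.

```python
# Minimal sketch: model-specification sensitivity check on synthetic data.
# All variable names are hypothetical placeholders; swap in your own
# analysis dataset, outcome, and covariate sets.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "age": rng.normal(60, 10, n),
    "sex": rng.integers(0, 2, n),
    "comorbidity_index": rng.poisson(2, n),
    "prior_meds": rng.integers(0, 2, n),
})
logit_p = -1.5 + 0.4 * df["treatment"] + 0.02 * (df["age"] - 60)
df["event"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

specs = {
    "primary": "event ~ treatment + age + sex",
    "sensitivity": "event ~ treatment + age + sex + comorbidity_index + prior_meds",
}
for label, formula in specs.items():
    fit = smf.logit(formula, data=df).fit(disp=False)
    or_ = np.exp(fit.params["treatment"])
    lo, hi = np.exp(fit.conf_int().loc["treatment"])
    print(f"{label}: treatment OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The two printed odds ratios and intervals are exactly the quantities the Results conventions in Step 3 ask you to report side by side.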
Step 2 — Phrase templates by type
Below are compact, journal-aligned sentence templates for the most common types of sensitivity and robustness checks. Each template is followed by a brief annotation on when to use it and how to report results succinctly.
- Alternative covariate adjustment
  - Template: “As a sensitivity analysis, we repeated the primary model with [alternative covariate set], including [list key covariates], to assess potential residual confounding.”
  - When to use: Use when reviewers or readers may question covariate selection or when inclusion of additional covariates could change effect estimates. Report whether effect estimates and confidence intervals changed materially (e.g., “estimates were unchanged” or “point estimate attenuated from X to Y”).
- Complete-case vs. multiple imputation
  - Template: “To evaluate the impact of missing data, we conducted complete-case analyses and multiple imputation under the missing-at-random assumption; results were compared to the primary analysis.”
  - When to use: Use when missingness is nontrivial. State the imputation method, number of datasets, and variables included in the imputation model. In Results, state whether conclusions were consistent. A runnable sketch of this check follows this list.
- Propensity-score matching/weighting
  - Template: “As a robustness check, we estimated propensity scores using [variables used], then applied [matching/IPTW/stratification]; balance was assessed with standardized differences.”
  - When to use: Use when confounding by indication is a concern; report balance diagnostics and whether effect estimates changed. A weighting sketch with balance diagnostics also follows this list.
- Per-protocol vs. intent-to-treat (ITT)
  - Template: “In addition to the primary ITT analysis, we performed a per-protocol analysis excluding participants who [nonadherence rule]; differences between ITT and per-protocol results were examined.”
  - When to use: Use for trials and interventions with adherence issues. In Limitations explain why per-protocol estimates may be biased.
- Alternative outcome definitions
  - Template: “We tested alternative outcome definitions, including [narrow/broad composite definitions or different time windows], to assess robustness to outcome misclassification.”
  - When to use: Use when outcome measurement is uncertain or variable; report whether the pattern of results was consistent across definitions.
- Varying noninferiority margins
  - Template: “For sensitivity, we examined robustness across noninferiority margins of [values]; conclusions were considered in light of margin uncertainty.”
  - When to use: Use in noninferiority trials; present how inference changes with margin choice.
- Excluding early events or high-risk subgroups
  - Template: “We repeated analyses excluding events within [time window] (or excluding participants with [characteristic]) to test whether early/at-risk events drove the primary finding.”
  - When to use: Use to probe reverse causation or selection effects.
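As promised in the complete-case vs. multiple-imputation template, here is a minimal sketch of that comparison using statsmodels’ chained-equations imputer with manual pooling by Rubin’s rules. The data, the roughly 20% missingness, and m = 20 are illustrative assumptions only.

```python
# Minimal sketch: complete-case vs. multiple imputation (chained equations),
# with estimates pooled by Rubin's rules. Data and missingness are synthetic;
# in practice, report m, the imputation model, and the MAR assumption.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.imputation.mice import MICEData

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n).astype(float),
    "age": rng.normal(60, 10, n),
})
logit_p = -1.0 + 0.5 * df["treatment"] + 0.03 * (df["age"] - 60)
df["event"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p))).astype(float)
df.loc[rng.random(n) < 0.2, "age"] = np.nan  # ~20% missing covariate

formula = "event ~ treatment + age"
cc = smf.logit(formula, data=df.dropna()).fit(disp=False)  # complete case

m = 20
imp = MICEData(df)
est, var = [], []
for _ in range(m):
    imp.update_all()  # one more round of chained-equation draws
    fit = smf.logit(formula, data=imp.data).fit(disp=False)
    est.append(fit.params["treatment"])
    var.append(fit.bse["treatment"] ** 2)

qbar = np.mean(est)  # pooled point estimate
total_var = np.mean(var) + (1 + 1 / m) * np.var(est, ddof=1)  # Rubin's rules
print(f"complete-case log-OR {cc.params['treatment']:.3f}, "
      f"pooled MI log-OR {qbar:.3f} (SE {np.sqrt(total_var):.3f})")
```

In Results, you would then state whether the complete-case and pooled estimates agree, using the quantitative phrasing described in Step 3.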
For all templates, incorporate the SEO phrase naturally when possible: e.g., “We report these phrases for sensitivity analyses and robustness checks in the Methods section to ensure transparency and reproducibility.” Such a sentence can appear in Methods or a supplemental methods paragraph without compromising journal tone.
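The propensity-score template can likewise be backed by a short, reproducible analysis. This sketch, again on synthetic data with hypothetical confounders (age, sex, severity), estimates scores with scikit-learn, forms IPTW weights, and reports the standardized differences that the template’s phrasing promises.

```python
# Minimal sketch: propensity scores + IPTW with standardized-difference
# balance diagnostics. Confounder names are hypothetical; in a manuscript,
# list the variables, the weighting scheme, and the balance threshold
# (often |std. diff| < 0.1) in Methods.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "sex": rng.integers(0, 2, n),
    "severity": rng.normal(0, 1, n),
})
p_treat = 1 / (1 + np.exp(-(0.03 * (df["age"] - 60) + 0.6 * df["severity"])))
df["treatment"] = rng.binomial(1, p_treat)

covs = ["age", "sex", "severity"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covs], df["treatment"])
ps = ps_model.predict_proba(df[covs])[:, 1]
w = np.where(df["treatment"] == 1, 1 / ps, 1 / (1 - ps))  # IPTW weights

def std_diff(x, treated, weights):
    """Weighted standardized mean difference between treated and control."""
    m1 = np.average(x[treated], weights=weights[treated])
    m0 = np.average(x[~treated], weights=weights[~treated])
    v1 = np.average((x[treated] - m1) ** 2, weights=weights[treated])
    v0 = np.average((x[~treated] - m0) ** 2, weights=weights[~treated])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2)

treated = df["treatment"].to_numpy().astype(bool)
for c in covs:
    x = df[c].to_numpy().astype(float)
    before = std_diff(x, treated, np.ones(n))
    after = std_diff(x, treated, w)
    print(f"{c}: std. diff {before:.3f} -> {after:.3f} after weighting")
```

The before/after standardized differences are the balance diagnostics the Methods template commits you to reporting.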
Step 3 — Reporting conventions and interpretation
Methods: specify, justify, and predefine
In Methods, explicitly state which sensitivity analyses and robustness checks were planned and why. Use brief, firm phrasing that cites the protocol or statistical analysis plan if relevant: “Sensitivity analyses were prespecified in the protocol.” For each check, give enough technical detail that a knowledgeable reader can reproduce the approach: the covariates added or removed, the imputation algorithm and variables, the caliper for propensity-score matching, or the adherence threshold used for per-protocol analyses. Journals expect this level of detail; when space is limited, place extended technical detail in an appendix or online supplement and cite it in the Methods.
Results: be concise and quantitative
When reporting the results of sensitivity analyses in Results, state the direction and magnitude of any change and whether the inference (statistical significance or clinical interpretation) changed. Use concise constructions: “Results were consistent with the primary analysis (adjusted hazard ratio 0.85, 95% CI 0.70–1.03 vs. primary HR 0.83, 0.69–1.00).” If results do not materially change, a short declarative sentence is appropriate: “Sensitivity analyses yielded similar estimates.” If they diverge, report the alternative estimates and quantify the difference.
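If a manuscript contains several such comparisons, a small helper can keep the phrasing uniform. This is a convenience sketch only; the sentence it emits mirrors the example above, and the numbers passed in are placeholders.

```python
# Convenience sketch: format a consistent robustness-check sentence from a
# sensitivity estimate and the primary estimate (all values are placeholders).
def robustness_sentence(measure: str, est: float, lo: float, hi: float,
                        p_est: float, p_lo: float, p_hi: float) -> str:
    """Build the comparative Results sentence recommended above; edit the
    wording to match your target journal's style."""
    return (f"Results were consistent with the primary analysis "
            f"({measure} {est:.2f}, 95% CI {lo:.2f}–{hi:.2f} "
            f"vs. primary {p_est:.2f}, {p_lo:.2f}–{p_hi:.2f}).")

print(robustness_sentence("adjusted hazard ratio",
                          0.85, 0.70, 1.03, 0.83, 0.69, 1.00))
```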
Limitations: explain divergence and implications
When sensitivity analyses diverge from primary findings, Limitations must explain plausible reasons and the implications for interpretation. Use careful, balanced language: avoid exaggerating certainty or dismissing inconsistency. Phrases such as “These findings should be interpreted cautiously” or “Divergence suggests sensitivity to [specific assumption]” are appropriate. Provide guidance for readers about the strength of evidence: is the main conclusion weakened, overturned, or merely questioned? State how future research could address the uncertainty (e.g., larger samples, better measurement, randomized confirmation).
Justifying the analysis plan and aligning with ICMJE/CONSORT
Concise justification phrases improve credibility: “These analyses were prespecified in the protocol to address [specific concern]” or “Post-hoc sensitivity analyses were exploratory and are described as such.” Reference CONSORT or ICMJE expectations when appropriate: “Following ICMJE recommendations, we report sensitivity analyses to assess robustness of primary findings.” Such statements reassure editors and readers that the reporting follows community standards.
Embedding the SEO phrase
Integrate the primary SEO phrase—phrases for sensitivity analyses and robustness checks—sparingly and naturally. Place it where it adds clarity: for example, “The Methods include phrases for sensitivity analyses and robustness checks that specify imputation methods and matching algorithms.” This keeps discoverability while preserving journal tone.
Step 4 — Micro-editing checklist and three brief before/after rewrites
Micro-editing checklist
- Be explicit: name the alternative method, rule, or window you used.
- Be reproducible: give enough technical detail or point to a supplement.
- Be transparent about pre-specification: label analyses as prespecified or exploratory.
- Report effect direction and magnitude, not just significance.
- When divergent, explain plausible mechanisms and implications for conclusions.
- Use journal-aligned phrasing and follow ICMJE/CONSORT guidance.
- Include the keyword naturally if it aids clarity and discoverability.
Before/after rewrite principles
- Vague: “We did some sensitivity analyses.”
  - Improved: Replace with a precise sentence that names the checks, the rationale, and the outcome, and if space permits, points to supplementary technical detail.
- Vague: “Results were similar after adjusting for missing data.”
  - Improved: State the specific methods used (e.g., multiple imputation, number of imputations), present the comparative estimates, and indicate whether interpretation changed.
- Vague: “We ran propensity-score analyses.”
  - Improved: Specify variables used to estimate the score, the method (matching/weighting), the balance metric, and whether effect estimates were consistent.
Final note
Precise, journal-aligned wording for sensitivity analyses and robustness checks strengthens manuscripts by making analytic decisions transparent, reproducible, and interpretable. By mapping common checks to Methods and Limitations, using the template phrases above, following clear reporting conventions, and applying the micro-editing checklist, authors can craft concise, ICMJE-consistent language that both communicates rigor and incorporates the keyword phrases for sensitivity analyses and robustness checks without undermining scholarly tone.
Key Takeaways
- State and predefine sensitivity analyses and robustness checks in Methods with enough technical detail (e.g., covariates, imputation algorithm and m, matching caliper) so others can reproduce the work.
- Use concise, quantitative reporting in Results: give alternative estimates, direction and magnitude of change, and whether inference changed (not just “similar”).
- When sensitivity results diverge, explain plausible reasons and implications in Limitations, and label analyses as prespecified or exploratory to guide interpretation.
- Use brief, journal-aligned template phrases in Methods to name the alternative approach and rationale, and place extended technical parameters in a supplement if space is limited.
Example Sentences
- As a sensitivity analysis, we repeated the primary regression with an expanded covariate set including baseline comorbidity index and prior medication use to assess potential residual confounding.
- To evaluate the impact of missing data, we performed complete-case analyses and multiple imputation (m = 20) under the missing-at-random assumption and compared effect estimates to the primary model.
- As a robustness check, we estimated propensity scores using age, sex, and disease severity, then applied inverse-probability-of-treatment weighting and inspected standardized differences for balance.
- In addition to the primary ITT analysis, we conducted a per-protocol analysis excluding participants with less than 80% adherence; divergence between ITT and per-protocol results was examined in the Limitations.
- We tested alternative outcome definitions—narrow 30-day readmission and a broader 90-day composite endpoint—to assess robustness to outcome misclassification.
Example Dialogue
Alex: The reviewers asked for sensitivity analyses, so I planned to rerun the model with additional covariates and a propensity-score weighting approach.
Ben: Good—did you prespecify those checks in the protocol or label them as post-hoc?
Alex: They were prespecified in the statistical analysis plan; we’ll report the details in Methods and put technical parameters in the supplement.
Ben: And if the sensitivity analyses change the estimate materially, will you discuss that in Limitations?
Alex: Absolutely—we’ll quantify the difference and explain whether divergence suggests sensitivity to specific assumptions, like unmeasured confounding.
Exercises
Multiple Choice
1. Which section of a clinical research manuscript is the most appropriate place to pre-specify the imputation algorithm, number of imputations, and variables included in the imputation model for a missing-data sensitivity analysis?
- Introduction
- Methods
- Results
- Discussion/Limitations
Show Answer & Explanation
Correct Answer: Methods
Explanation: Technical details of planned analytic choices (like imputation methods and parameters) should be specified in Methods so that readers can reproduce the approach. Limitations may discuss consequences, but the specification belongs in Methods.
2. When reporting a robustness check using propensity-score weighting, which of the following is most appropriate to omit from the Methods?
- Variables used to estimate the propensity score
- The caliper or algorithm used for matching
- Balance diagnostics (e.g., standardized differences)
- A narrative claim that results were ‘similar’ without numbers
Show Answer & Explanation
Correct Answer: A narrative claim that results were ‘similar’ without numbers
Explanation: Journals expect concise quantitative reporting. Saying results were ‘similar’ without providing estimates or diagnostics is insufficient. The other items are specific methodological details that should be included in Methods or supplement.
Fill in the Blanks
In Methods, sensitivity analyses were prespecified in the protocol to address potential ___ such as unmeasured confounding or informative missingness.
Show Answer & Explanation
Correct Answer: biases
Explanation: Sensitivity analyses are used to test for sources of bias (e.g., unmeasured confounding, informative missingness). 'Biases' succinctly names the threats these analyses probe.
When sensitivity analyses yield estimates that diverge from the primary results, describe the difference in Results and explain implications in the ___ section.
Show Answer & Explanation
Correct Answer: Limitations
Explanation: Divergent sensitivity results should be interpreted and their implications discussed in the Limitations section, per the guidance to explain plausible mechanisms and consequences for confidence in conclusions.
Error Correction
Incorrect: We ran several sensitivity analyses and put all the technical parameters in the Discussion so readers can reproduce the methods.
Show Correction & Explanation
Correct Sentence: We ran several sensitivity analyses and provided technical parameters in the Methods (or supplement) so readers can reproduce the methods.
Explanation: Technical parameters belong in Methods or a supplement for reproducibility. The Discussion is for interpretation; placing methods there hinders reproducibility and violates reporting conventions.
Incorrect: Because results were similar after multiple imputation, it is unnecessary to report the imputation algorithm or number of datasets.
Show Correction & Explanation
Correct Sentence: Even though results were similar after multiple imputation, we report the imputation algorithm and number of datasets to ensure reproducibility.
Explanation: Regardless of whether sensitivity analyses change conclusions, authors should report technical details (algorithm, number of imputations) so others can reproduce the approach; omitting them contradicts the micro-editing checklist.