Written by Susan Miller

Making the Case to Payers: Budget impact model language examples that strengthen RWE value stories

Struggling to write budget impact language that payers trust—without drifting into cost‑effectiveness claims? In this lesson, you’ll learn to craft precise, payer‑centered budget impact model (BIM) narratives that separate PMPM affordability from ICER/QALY value and spotlight the real drivers: eligibility, utilization, price, and time horizon. You’ll find clear, step‑wise guidance, model‑neutral phrasing templates, real‑world examples, and targeted exercises to test compliance and nuance. The tone is calibrated and audit‑ready—equipping you to produce defensible RWE‑anchored value stories under strict guardrails.

Step 1 — Frame the purpose and guardrails

A budget impact model (BIM) answers a simple, payer‑centered question: “If we add this intervention to our formulary, how will total spending change over the next few years, given our enrolled population and current patterns of care?” Its scope is financial and operational. It quantifies near‑term cash flow to the payer or budget holder. This is different from cost‑effectiveness analysis, which addresses whether the health benefits are worth the costs, typically by calculating an incremental cost‑effectiveness ratio (ICER) using outcomes measured in quality‑adjusted life years (QALYs) or other utility‑based metrics. In short:

  • BIM = budget impact over a defined time horizon with real‑world utilization patterns; payer affordability lens.
  • Cost‑effectiveness = value for money over a patient’s lifetime or long horizon; societal or health system efficiency lens.

Recognizing the difference protects your narrative from conflation. A BIM does not claim that a therapy is “cost‑effective” or “good value” because the model does not produce QALYs or ICERs. Instead, it presents how total costs might increase or decrease when market share, eligible population size, and unit prices shift. The language of a BIM must therefore be precise, time‑bound, and payer‑specific, avoiding implications that the results generalize to long‑term welfare, patient utility, or system efficiency.
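The PMPM convention that anchors this affordability lens is simple arithmetic: a total budget change spread across all enrolled members, per month. A minimal sketch, using purely hypothetical figures:

```python
# Minimal PMPM sketch -- all inputs are hypothetical illustrations,
# not figures from any actual model.

def pmpm(total_annual_change: float, members: int) -> float:
    """Per-member-per-month impact: annual change spread over all members, monthly."""
    return total_annual_change / (members * 12)

# Example: a $1.5M annual increase for a 900,000-member plan
impact = pmpm(1_500_000, 900_000)
print(f"PMPM change: ${impact:.2f}")  # prints "PMPM change: $0.14"
```

Note that PMPM divides by all plan members, not just treated patients, which is what makes it comparable across plans of different sizes.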

Compliance guardrails flow from this purpose:

  • Use neutral, descriptive phrasing that isolates drivers (utilization, price, eligible population, time horizon). Avoid promotional adjectives such as “breakthrough savings,” “dominant budget win,” or “game‑changing impact.”
  • Report inputs, assumptions, and data sources transparently, indicating ranges and uncertainty. Avoid definitive claims where the analysis uses modeled or proxy data.
  • Maintain separation between budget impact findings and cost‑effectiveness claims. If you reference ICER or QALY evidence, do so as contextual information and not as a conclusion of the BIM itself.
  • Align time horizon and perspective with payer decision needs (e.g., 1–3 years for private plans, up to 5 years for public programs), and state these choices explicitly.

By adopting these guardrails, you create a narrative that payers recognize as credible and actionable. You also demonstrate awareness that real‑world evidence (RWE) is often heterogeneous. Because RWE inputs drive BIM results, your language should emphasize scenario‑based interpretation rather than single‑point conclusions.

Step 2 — Deconstruct the anatomy of budget impact text

Clear budget impact writing follows the anatomy of the model itself. The most effective text identifies the inputs and outputs upfront, explains the assumptions behind them, and adheres to reporting conventions that allow a payer to replicate or stress‑test the findings.

Essential inputs typically include:

  • Eligible population: Defined by epidemiology, diagnosis codes, clinical criteria, and plan enrollment. Specify how you moved from national prevalence to plan‑level numbers (e.g., age distribution, coverage type, clinical eligibility). State inclusion and exclusion rules.
  • Utilization and uptake: Baseline treatment distribution and projected market share for the new intervention. Describe switching assumptions, initiation rates, adherence or persistence, and any step‑therapy or prior authorization filters that shape real‑world use.
  • Costs and prices: Unit acquisition costs, administration costs, monitoring, and management of adverse events. Indicate the price sources (e.g., net of rebates if modeled, or list price if net unknown) and justify the choice. Clarify whether medical costs (e.g., hospitalizations, ER visits) are included and how they are sourced from RWE.
  • Time horizon: The period over which budget impact is calculated, usually 1, 3, or 5 years. Explain why that window is relevant to the payer and how it interacts with uptake and discontinuation dynamics.
  • Perspective and scope: Explicitly name the budget holder (e.g., commercial plan, Medicaid program, integrated delivery network) and the cost categories included or excluded.
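The eligible-population step above (moving from national prevalence to plan-level numbers) is typically a chain of multiplicative filters. A sketch, where every rate is a hypothetical placeholder rather than a sourced value:

```python
# Hypothetical eligibility funnel: plan enrollment down to clinically
# eligible patients. All rates below are illustrative placeholders.

def eligible_patients(plan_members: int,
                      prevalence: float,            # share of members with the condition
                      diagnosis_rate: float,        # share actually diagnosed/coded
                      clinical_eligibility: float   # share meeting clinical criteria
                      ) -> int:
    """Apply the funnel multiplicatively and return the plan-level count."""
    diagnosed = plan_members * prevalence * diagnosis_rate
    eligible = diagnosed * clinical_eligibility
    return round(eligible)

n = eligible_patients(plan_members=1_000_000,
                      prevalence=0.02,
                      diagnosis_rate=0.75,
                      clinical_eligibility=0.60)
print(n)  # prints 9000
```

Writing the funnel out this way mirrors the disclosure the narrative should provide: each rate is a stated, adjustable assumption a payer can replace with its own data.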

Key outputs and reporting conventions:

  • Total and per‑member per‑month (PMPM) budget impact: Report absolute changes and PMPM to normalize across plan sizes. PMPM is often more interpretable for budget committees.
  • Breakdown by cost component: Acquisition versus medical offsets, administration, monitoring, and adverse event management. Payers need to see where changes occur.
  • Distribution over time: Year‑by‑year results reflecting uptake and any dynamic changes in utilization or adherence.
  • Subgroup or scenario results: Where relevant, show how budget impact varies by eligible population segment or plan type, but avoid over‑granularity that implies unsupported precision.
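The reporting conventions above can be illustrated with a toy year-by-year calculation splitting net impact into acquisition costs and medical offsets. Every input here is a hypothetical placeholder; a real model would also layer in discontinuation, administration, and monitoring costs:

```python
# Toy year-by-year BIM output: total and PMPM change, split into
# acquisition cost and medical offset. All inputs are hypothetical.

MEMBERS = 1_000_000
ELIGIBLE = 900                        # plan-level eligible patients
UPTAKE = [0.10, 0.25, 0.35]           # market-share trajectory, years 1-3
ACQ_COST = 20_000                     # annual acquisition cost per treated patient
OFFSET = 3_500                        # annual avoided medical cost per treated patient

for year, share in enumerate(UPTAKE, start=1):
    patients = ELIGIBLE * share
    acquisition = patients * ACQ_COST
    offsets = patients * OFFSET
    net = acquisition - offsets
    pmpm = net / (MEMBERS * 12)
    print(f"Year {year}: net ${net:,.0f}  PMPM ${pmpm:.2f}")
```

The year-by-year loop is the point: because uptake changes over the horizon, a single aggregate number would hide exactly the dynamics payers ask about.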

Model‑neutral phrasing keeps the narrative transparent and non‑promotional. Useful patterns include:

  • “From the [payer] perspective over [time horizon], the introduction of [intervention] is associated with an estimated change in total plan costs of [range], corresponding to [range] PMPM.”
  • “Results were sensitive to assumptions about [utilization driver], [eligible population], and [net price].”
  • “Medical cost offsets were estimated using observed rates from [RWE source] and applied uniformly to both comparators and [intervention], with scenario variation explored.”

Common pitfalls to avoid:

  • Conflating budget impact with cost‑effectiveness: Do not describe a therapy as “cost‑effective” within BIM text or imply that reduced PMPM equals economic efficiency.
  • Over‑precision: Avoid excessive decimal places or overly narrow ranges when input data are variable. Round to a degree that matches data quality and decision needs.
  • Unstated net price assumptions: If you cannot model net price, say so and provide scenarios at different discount levels. Do not imply access to confidential rebate information.
  • Implied clinical superiority: Budget impact may include differences in medical resource use, but the narrative should not make clinical claims beyond what is supported by referenced evidence.

By carefully structuring inputs and outputs and using neutral, payer‑aligned language, you enable readers to evaluate transferability to their own context and to adjust assumptions to reflect their plan’s realities.

Step 3 — Embed cross‑references to ICER/QALY and sensitivity analysis correctly

Many payers consider cost‑effectiveness alongside budget impact, but they are distinct lenses. Your BIM narrative should acknowledge relevant cost‑effectiveness evidence without importing its endpoints into budget claims. The goal is to signal the boundary between frameworks explicitly.

Appropriate ways to reference cost‑effectiveness evidence:

  • Acknowledge existence and scope: “Independent cost‑effectiveness analyses have reported ICERs relative to [comparator] using QALY‑based outcomes over a [lifetime/long‑term] horizon.” This situates readers without implying endorsement or merging findings.
  • Indicate methodological complementarity: “While ICER/QALY outcomes inform value for money, the present BIM evaluates near‑term budget impact under plan‑level utilization and price assumptions.” This establishes complementary roles.
  • Avoid endpoint conflation: Do not combine QALYs with PMPM in the same sentence as if they are commensurate. Keep financial outcomes (PMPM, total budget change) separate from utility‑based outcomes (QALYs, net health benefit).

Utilities and RWE in BIMs:

  • Utilities per se are not inputs in a standard BIM. If you mention utilities, make it clear they pertain to cost‑effectiveness, not to the budget impact calculation. If health state differences influence medical resource use, describe the resource use differential and its data source rather than introducing utilities.
  • If RWE shows differential hospitalization or outpatient utilization, incorporate those rates as medical costs in the BIM, citing the observational sources and addressing potential confounding through sensitivity analysis or ranges.

Sensitivity and uncertainty communication:

  • One‑way sensitivity analysis: Vary key drivers such as eligible population size, uptake trajectory, discontinuation rates, and net price. Describe which parameters most influence PMPM and why.
  • Scenario analysis: Present alternative coverage policies (e.g., prior authorization, step therapy), alternative adherence patterns, or different net discount assumptions. Clearly label each scenario without implying that any scenario is preferred or likely.
  • Probabilistic analysis (if included): If you conduct probabilistic sensitivity analysis, explain the distributional assumptions briefly and report the probability that budget impact falls below specified PMPM thresholds. Keep language descriptive and avoid prescriptive conclusions.
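The one-way and probabilistic approaches described above can be sketched in a few lines. All parameter ranges and the $0.40 PMPM tolerance below are purely illustrative assumptions:

```python
# Sketch of one-way sensitivity analysis and a simple probabilistic check.
# All parameter values and ranges are illustrative, not sourced inputs.
import random

MEMBERS = 1_000_000

def pmpm(eligible, uptake, net_cost, offset):
    """PMPM change for a given eligible count, uptake share, and per-patient costs."""
    patients = eligible * uptake
    return patients * (net_cost - offset) / (MEMBERS * 12)

base = dict(eligible=900, uptake=0.25, net_cost=20_000, offset=3_500)

# One-way sensitivity: vary each driver across a low/high range, others at base.
ranges = {"eligible": (600, 1_200),
          "uptake": (0.15, 0.35),
          "net_cost": (14_000, 22_000)}
for param, (lo, hi) in ranges.items():
    low = pmpm(**{**base, param: lo})
    high = pmpm(**{**base, param: hi})
    print(f"{param}: PMPM ${min(low, high):.2f}-${max(low, high):.2f}")

# Probabilistic: probability that PMPM stays below a hypothetical $0.40 tolerance.
random.seed(0)
draws = [pmpm(random.uniform(600, 1_200), random.uniform(0.15, 0.35),
              random.uniform(14_000, 22_000), base["offset"])
         for _ in range(10_000)]
prob = sum(d < 0.40 for d in draws) / len(draws)
print(f"P(PMPM < $0.40) = {prob:.0%}")
```

The one-way output shows which driver widens the PMPM band most, which is exactly what the narrative should name; the probabilistic result maps naturally onto descriptive language such as "the probability that PMPM remains below [threshold] was [percentage]."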

Payer‑relevant outcomes should remain central: emphasize PMPM, annual total cost change, and cost component shifts. If you discuss economic thresholds, make sure they are budget thresholds (e.g., internal PMPM tolerances), not willingness‑to‑pay thresholds for QALYs. This preserves the conceptual boundary and prevents readers from inferring unintended cost‑effectiveness claims.

Step 4 — Assemble a mini‑value narrative and checklist

Below is a concise, end‑to‑end structure you can adapt to common manuscript and HTA dossier sections, ensuring clarity and compliance while highlighting the real‑world budgeting lens.

Methods section focus:

  • Begin with perspective and time horizon: “This analysis evaluated the budget impact of adding [intervention] to the [payer] formulary over [1–5] years.”
  • Define population and comparators: “The eligible population was derived from [epidemiology/claims] and restricted to patients meeting [clinical criteria]. Comparators reflected current standard of care based on [guidelines/claims distribution].”
  • Describe utilization and uptake: “Projected uptake of [intervention] and changes in market share were estimated using [analog class adoption/RWE], with discontinuation and adherence rates applied uniformly unless specified in scenarios.”
  • Detail costs and medical resource use: “Drug acquisition costs were sourced from [price compendium] with [assumptions] about net discounts. Administration, monitoring, and adverse event management costs were included. Medical resource use rates (e.g., hospitalizations, outpatient visits) were obtained from [RWE source].”
  • Outline analyses: “Base case results are reported as total and PMPM changes. One‑way and scenario analyses evaluated uncertainty in population size, uptake, and net price. Probabilistic analysis was conducted where data supported distributional assumptions.”

Results section focus:

  • Present absolute and normalized changes: “In the base case, total plan spending changed by [range] in Year 1, corresponding to [range] PMPM.”
  • Break down components: “Changes were primarily driven by [acquisition costs], partially offset by [medical cost reductions], with smaller effects from [administration/monitoring].”
  • Show time evolution: “Budget impact increased in Year [X] due to uptake, stabilizing by Year [Y] as persistence and switching equilibrated.”

Sensitivity section focus:

  • Identify key drivers: “PMPM results were most sensitive to the eligible population estimate, net price assumptions, and uptake trajectory.”
  • Summarize uncertainty: “Across plausible ranges, PMPM varied between [low–high]. Under alternative coverage scenarios, PMPM remained within [band], indicating results are driven primarily by [driver].”
  • If probabilistic analysis is used: “The probability that PMPM remains below [payer‑relevant threshold] was [percentage].”

Limitations section focus:

  • Data and structural limits: “Claims‑based rates may reflect residual confounding. Net prices were modeled as scenarios rather than observed confidential discounts. The model did not include indirect costs or patient out‑of‑pocket expenditures.”
  • Transferability: “Results reflect [payer] perspective and may differ for plans with alternative demographic profiles or management policies.”
  • Time horizon: “Short‑term analysis may not capture longer‑term shifts in disease management or competitive dynamics.”

Value statement for payers:

  • Synthesize with restraint: “From a [payer] perspective over [time horizon], adding [intervention] is associated with a [directional] change in PMPM, primarily driven by [driver], with partial offsets from [medical cost changes]. Plan‑level results will vary with eligibility, uptake, and net pricing. These estimates support budgeting and inform coverage planning.”

Quick compliance checklist:

  • Perspective and time horizon stated clearly and aligned with payer needs.
  • Eligible population and data sources described with inclusion/exclusion criteria.
  • Utilization, uptake, and discontinuation assumptions disclosed and justified.
  • Price sources and net price assumptions transparently stated; scenarios used where net unknown.
  • Outcomes reported as total and PMPM, with breakdown by cost component and by year.
  • Sensitivity and scenario analyses presented; major drivers identified.
  • No conflation of PMPM with QALYs; ICER/QALY references, if any, contextual and separate.
  • Language neutral and non‑promotional; uncertainties acknowledged.
  • Limitations explicit, including transferability and data constraints.

When executed with these elements, your budget impact narrative does three things well. First, it answers the payer’s operational question about near‑term affordability using concrete, plan‑relevant measures. Second, it honors methodological boundaries by keeping ICER/QALY concepts in their own lane while still acknowledging their relevance to broader value discussions. Third, it uses compliant, transparent phrasing that invites payers to substitute their own inputs and reproduce results, strengthening credibility. The overall effect is a coherent RWE‑anchored value story that does not oversell, does not conflate endpoints, and equips decision makers with the exact levers—utilization, price, eligible population, and time horizon—they need to build and stress‑test their budgets.

Key Takeaways

  • A Budget Impact Model (BIM) reports near-term affordability for a specific payer (total and PMPM changes over 1–5 years) and must not be conflated with cost‑effectiveness metrics like QALYs or ICERs.
  • State perspective, time horizon, eligible population, utilization/uptake assumptions, and cost inputs transparently; report results by year with breakdowns (acquisition, medical offsets, administration, monitoring).
  • Use neutral, payer‑aligned language, disclose data sources and uncertainty, avoid over‑precision and promotional claims, and clearly separate any ICER/QALY context from BIM outcomes.
  • Conduct and report sensitivity/scenario analyses (e.g., eligible population, uptake, discontinuation, net price, coverage policies) to show how key drivers influence PMPM and total budget results.

Example Sentences

  • From a commercial payer perspective over a 3‑year horizon, adding the therapy is associated with an estimated change of $0.12–$0.18 PMPM, primarily driven by uptake and net price assumptions.
  • The eligible population was derived from ICD‑10 claims and restricted by prior authorization criteria, with inclusion and exclusion rules stated explicitly.
  • Results were most sensitive to the projected market share in Year 2, the discontinuation rate after six months, and the assumed net discount range of 15%–35%.
  • Medical cost offsets were estimated using hospitalization and ER visit rates from a real‑world evidence database and applied consistently across comparators, with scenario variation explored.
  • This BIM reports total and PMPM changes by year and does not make cost‑effectiveness claims or combine QALYs with financial outcomes.

Example Dialogue

Alex: We need language for the budget impact section—what’s the safest way to frame the results?

Ben: Start with perspective and time horizon: “From the plan perspective over three years, the intervention changes total costs by $1.2–$1.8 million, or $0.14 PMPM.”

Alex: Should we mention the ICER we saw in that independent review?

Ben: Yes, but keep it separate: note that cost‑effectiveness evidence exists using QALYs over a long horizon, while our BIM focuses on near‑term affordability.

Alex: Got it. I’ll also call out key drivers—eligible population size, uptake trajectory, and net price—and show year‑by‑year results.

Ben: Perfect, and add a scenario with prior authorization to show how utilization management shifts the PMPM band.

Exercises

Multiple Choice

1. Which statement correctly distinguishes a budget impact model (BIM) from cost‑effectiveness analysis?

  • A BIM evaluates value for money using QALYs over a lifetime horizon.
  • A BIM reports short‑term affordability metrics (e.g., PMPM) for a specific payer, without producing ICERs.
  • Cost‑effectiveness focuses on near‑term cash flow to the payer and uses PMPM.
  • Both BIM and cost‑effectiveness use the same outcomes and time horizons.
Show Answer & Explanation

Correct Answer: A BIM reports short‑term affordability metrics (e.g., PMPM) for a specific payer, without producing ICERs.

Explanation: BIMs quantify near‑term budget changes (often PMPM) for a payer and do not produce QALYs or ICERs; cost‑effectiveness evaluates value for money over longer horizons using QALYs and ICERs.

2. Which phrasing best follows compliance guardrails for BIM reporting?

  • “The therapy delivers breakthrough savings and is cost‑effective with a $0.10 PMPM.”
  • “From the Medicaid perspective over 3 years, adding the therapy is associated with a $0.09–$0.15 PMPM change; results were sensitive to uptake and net price.”
  • “PMPM decreased by $0.12, proving superior clinical outcomes and societal efficiency.”
  • “The model uses confidential net rebates known to our team to ensure precision.”
Show Answer & Explanation

Correct Answer: “From the Medicaid perspective over 3 years, adding the therapy is associated with a $0.09–$0.15 PMPM change; results were sensitive to uptake and net price.”

Explanation: Neutral, payer‑specific, time‑bound language is required; it reports PMPM with key drivers and avoids promotional claims or cost‑effectiveness conflation.

Fill in the Blanks

From a commercial plan perspective over a ___‑year horizon, total costs changed by $1.0–$1.6 million, corresponding to $0.11–$0.17 PMPM.

Show Answer & Explanation

Correct Answer: 3

Explanation: Common BIM horizons are 1, 3, or 5 years; the examples emphasize payer‑aligned 3‑year reporting with PMPM normalization.

Medical cost offsets were estimated using hospitalization rates from an ___ source and applied consistently across comparators, with scenario variation explored.

Show Answer & Explanation

Correct Answer: RWE

Explanation: BIMs use real‑world evidence (RWE) for medical resource use and apply those rates transparently, often testing scenarios for uncertainty.

Error Correction

Incorrect: Our BIM proves the therapy is cost‑effective because PMPM decreases by $0.10.

Show Correction & Explanation

Correct Sentence: Our BIM shows a $0.10 PMPM decrease; cost‑effectiveness was not assessed in this analysis.

Explanation: Do not conflate BIM outcomes (PMPM) with cost‑effectiveness claims; QALYs/ICERs are separate and not produced by a BIM.

Incorrect: We modeled precise net prices using undisclosed rebate data and therefore report results to three decimal places.

Show Correction & Explanation

Correct Sentence: We modeled list prices with discount scenarios due to unknown net prices and rounded results appropriately to reflect input uncertainty.

Explanation: Guardrails require transparency about price assumptions, avoidance of implying access to confidential rebates, and avoiding over‑precision given uncertain inputs.