Written by Susan Miller

Regulatory English for AI/ML SaMD: Writing EU AI Act Alignment Statements for Clinical and Performance Files

Struggling to translate the EU AI Act into clean, audit-ready language for your CER/PER? In this lesson, you’ll learn to draft a concise, evidence-anchored alignment statement for AI/ML SaMD that maps risk, data governance, oversight, performance, transparency, and change control to your MDR/IVDR files—harmonized with US submissions. Expect precise guidance, a seven-block template, calibrated examples, and short exercises to validate tone, placement, and traceability. You’ll leave with a reusable, regulator-calibrated scaffold that reduces queries and speeds reviews.

1) Situate and define

An EU AI Act alignment statement is a short, evidence-anchored section inside your clinical evaluation (MDR Annex XIV) or performance evaluation (IVDR Annex XIII) documentation that demonstrates how your AI/ML Software as a Medical Device (SaMD) meets the intent and requirements of the EU AI Act as they relate to safety, performance, and governance. It does not replace your MDR/IVDR evidence; instead, it synthesizes and cross-references it, showing auditors where each AI Act theme is addressed by the materials already present in your technical documentation, clinical/performance files, post-market surveillance plan, and risk management file.

In practice, the alignment statement typically appears in the following locations:

  • Within the Clinical Evaluation Report (CER) under “Regulatory Context and Applicable Requirements,” or in an annex dedicated to “AI Governance and Compliance Alignment.”
  • Within the Performance Evaluation Report (PER) for IVDR devices in a parallel section, cross-linking to analytical performance, clinical performance, scientific validity, and data governance content.
  • In the technical documentation summary (Annex II/III) as a pointer section that references: risk management (ISO 14971), software lifecycle (IEC 62304), usability/human factors (IEC 62366-1), cybersecurity (IEC 81001-5-1), data protection (GDPR), bias/representativeness analyses, and post-market learning controls.

Your goal is to provide a concise but complete spine that connects the AI Act’s central themes—risk classification, data governance, transparency, human oversight, robustness, performance/monitoring, and lifecycle change control—with the MDR/IVDR evidence you already maintain. The tone is neutral and evidence-led: claims must be measurable, sourced, and traceable to artifacts (plans, protocols, reports, and procedures). Because the EU AI Act prioritizes high-risk AI systems in health, reviewers expect to see explicit recognition of risk status and methodical controls throughout the device lifecycle. The alignment statement therefore becomes a signpost: it points to—and does not duplicate—the verifiable details that sit in the CER/PER and technical file.

In addition, many manufacturers submit to both EU and US pathways. The alignment statement should harmonize with US submissions (e.g., 510(k), De Novo, PMA) by using consistent definitions, evidence points, and risk language. It must not contradict US claims or performance endpoints; rather, it should present the same facts in an EU style: precise, conservative, and grounded in primary documents and standards.

2) Decompose the statement: seven component blocks

Your alignment statement is built from seven blocks. Each block should include brief EU-style sentence frames that point to specific sections, procedures, and results. Keep the language measurable and anchored in sources.

A. Scope identification

Purpose: Define the device, its intended purpose and clinical context, the AI/ML functionality, and the lifecycle boundary of claims.

EU-style sentence frames:

  • “This alignment statement applies to [device name, version], a [SaMD type] intended for [intended purpose] in [target population/clinical setting].”
  • “The AI component comprises [algorithm type/architecture] supporting [clinical task], used under the conditions specified in [IFU/release notes].”
  • “The scope excludes [non-claimed features], which are not part of the intended purpose and are not relied upon for clinical decision-making.”

Evidence anchors:

  • IFU/labeling; intended purpose statement; software release identification; version control records.

B. Risk class mapping

Purpose: Map the device to MDR/IVDR classification and state the AI Act risk posture, with links to risk management.

EU-style sentence frames:

  • “The device is classified under [MDR/IVDR rule], with conformity assessment via [Notified Body route].”
  • “Under the EU AI Act, the AI functionality is treated as high-risk due to its medical intended purpose; controls are implemented as listed in Section [x].”
  • “Risk control measures and residual risk evaluations are documented in [Risk Management File reference], aligned with ISO 14971 and linked to post-market surveillance triggers.”

Evidence anchors:

  • Classification rationale; NB certificate scope; risk management plan/report; hazard analyses; benefit-risk determination.

C. Data governance and representativeness

Purpose: Demonstrate data quality, relevance, representativeness, and protection. Show bias mitigation and traceability from data source to model.

EU-style sentence frames:

  • “Training/validation/test datasets are described in [Data Management Plan/Model Card], including provenance, inclusion/exclusion, and de-identification consistent with GDPR.”
  • “Representativeness across [age/sex/clinical subgroups/site geography/device acquisition parameters] is quantified; subgroup performance is reported in [PER/CER sections].”
  • “Bias risks were identified and mitigated via [sampling strategies, reweighting, augmentation, threshold calibration], with residual bias monitored per [PMS/PMCF plan].”

Evidence anchors:

  • Data management plan; data dictionary; dataset lineage logs; GDPR DPIA/records of processing; subgroup analyses; statistical plans and reports.
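The representativeness claims above lend themselves to a simple automated check during drafting. As a minimal sketch (the record fields, subgroup labels, and the 25% floor are illustrative assumptions, not AI Act requirements), a script can tally subgroup shares and flag any group below a prespecified minimum:

```python
from collections import Counter

def subgroup_shares(records, key):
    """Tally subgroup counts for one stratification key and return proportions."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(shares, min_share):
    """Return subgroups whose share falls below the prespecified floor."""
    return sorted(g for g, s in shares.items() if s < min_share)

# Illustrative dataset rows (field names are assumptions for this sketch).
dataset = [
    {"sex": "F", "age_band": "18-44"},
    {"sex": "F", "age_band": "45-64"},
    {"sex": "M", "age_band": "45-64"},
    {"sex": "M", "age_band": "65+"},
    {"sex": "F", "age_band": "65+"},
    {"sex": "M", "age_band": "45-64"},
    {"sex": "F", "age_band": "45-64"},
    {"sex": "M", "age_band": "18-44"},
    {"sex": "F", "age_band": "65+"},
    {"sex": "M", "age_band": "65+"},
]

shares = subgroup_shares(dataset, "age_band")
print(flag_underrepresented(shares, min_share=0.25))  # age bands below a 25% floor
```

The flagged subgroups would then feed the limitation statement and the PMCF plan rather than being silently omitted.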

D. Human oversight

Purpose: Explain the human-in-the-loop controls, operator qualifications, and usability safeguards that prevent over-reliance and support safe use.

EU-style sentence frames:

  • “The device supports clinician oversight; outputs are advisory and require confirmation per IFU Section [x].”
  • “User training and competency requirements are specified in [training materials/user manuals]; usability engineering and use-error mitigations follow IEC 62366-1 analyses.”
  • “Override, rejection, and reporting mechanisms are available; escalation pathways are described in [clinical workflow documentation].”

Evidence anchors:

  • IFU; user training curricula; usability/human factors reports; clinical workflow maps; risk controls addressing use error and automation bias.

E. Performance and monitoring

Purpose: State performance claims, verification/validation coverage, generalizability, and on-market monitoring of degradation.

EU-style sentence frames:

  • “Analytical and clinical performance endpoints, acceptance criteria, and statistical power are prespecified in [protocol/statistical analysis plan].”
  • “Performance is reported for overall and predefined subgroups; external validation results are documented in [CER/PER Annex].”
  • “On-market model monitoring tracks drift, calibration, false positive/negative rates, and alert volumes per [PMS plan]; triggers for corrective action are defined in [QMS procedure].”

Evidence anchors:

  • V&V reports; external validation datasets; calibration and drift analyses; PMS metrics and thresholds; complaint/vigilance procedures.
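The on-market monitoring triggers described above can be expressed as explicit threshold checks. A minimal sketch follows; the metric names and limits are hypothetical placeholders for values a PMS plan would prespecify with statistical rationale:

```python
# Prespecified acceptance limits (hypothetical values for illustration only).
THRESHOLDS = {
    "sensitivity": ("min", 0.88),   # must stay at or above
    "specificity": ("min", 0.84),
    "alert_rate":  ("max", 0.15),   # must stay at or below
}

def monitoring_signals(observed):
    """Compare observed on-market metrics to limits; return breached metrics."""
    breaches = []
    for metric, (kind, limit) in THRESHOLDS.items():
        value = observed.get(metric)
        if value is None:
            continue
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            breaches.append((metric, value, limit))
    return breaches

# Example monitoring snapshot: sensitivity has drifted below its floor.
snapshot = {"sensitivity": 0.86, "specificity": 0.87, "alert_rate": 0.12}
for metric, value, limit in monitoring_signals(snapshot):
    print(f"CAPA trigger: {metric}={value} breaches limit {limit}")
```

Each breach would map to the documented corrective-action pathway named in the QMS procedure, so the alignment statement's "triggers for corrective action" claim stays verifiable.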

F. Transparency

Purpose: Clarify what information is communicated to users, patients, and regulators to enable informed use and oversight.

EU-style sentence frames:

  • “The device provides user-facing information on intended purpose, performance limits, known risks, and required context of use in [IFU/quick reference].”
  • “Model behavior explanations are offered at the level appropriate for clinical users [e.g., confidence scores, salient indicators], with limitations and contraindications documented.”
  • “Regulatory transparency is supported by traceable versioning, release notes, and change logs maintained in [configuration management system].”

Evidence anchors:

  • IFU and labeling; model cards or equivalent; release notes; configuration and change records; transparency statements for marketing materials.

G. Lifecycle change control (post-market learning)

Purpose: Explain how the ML model is maintained, updated, or retrained under controlled processes that preserve safety and performance.

EU-style sentence frames:

  • “Changes are governed by [change management SOP], including impact assessment on clinical performance, cybersecurity, and labeling.”
  • “For learning-enabled updates, pre-specified change control plans define permissible modifications, validation requirements, and review gates.”
  • “Post-market learning signals (real-world data, complaints, CAPA, field feedback) inform periodic model review; revalidation is documented in [PMS/PMCF reports].”

Evidence anchors:

  • Software lifecycle documentation (IEC 62304); algorithm change protocol; ACP/ML change plans; CAPA records; PMCF plans and reports; cybersecurity updates.
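The change-governance logic above can be made explicit as a decision gate. The following is a minimal sketch; the criteria and gate names are illustrative assumptions, standing in for rules a real change management SOP or algorithm change protocol would define under document control:

```python
def required_gates(change):
    """Map change attributes to required review gates (illustrative criteria)."""
    gates = ["impact_assessment"]          # every change gets an impact assessment
    if change.get("affects_model_weights"):
        gates.append("analytical_revalidation")
    if change.get("affects_intended_purpose") or change.get("affects_claims"):
        gates += ["clinical_revalidation", "notified_body_notification"]
    if change.get("affects_labeling"):
        gates.append("labeling_review")
    return gates

# A retraining that touches weights and labeling, but not the intended purpose.
retraining = {"affects_model_weights": True, "affects_labeling": True}
print(required_gates(retraining))
```

Encoding the criteria this way keeps minor-versus-major classification auditable and consistent with the pre-specified change control plan.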

3) Draft and calibrate

To make drafting consistent and efficient, use a compact micro-template, followed by a model paragraph style and a checklist to harmonize with MDR/IVDR and US submissions. The goal is to be concise yet precise, pointing to evidence rather than repeating it.

Micro-template (headings and indicative sentence frames):

  • Scope and intended purpose: Identify device, version, intended purpose, and clinical context. Exclude non-claimed features. Reference IFU and release identifiers.
  • Risk mapping: State MDR/IVDR classification and AI Act risk posture. Link to risk management plan/report and benefit-risk conclusions.
  • Data governance and representativeness: Summarize dataset provenance, quality controls, subgroup representativeness, GDPR safeguards, and bias mitigation. Point to plans/reports.
  • Human oversight: Describe advisory nature, required user qualifications, usability controls, override and escalation mechanisms.
  • Performance and monitoring: Cite endpoints, external validation, subgroup performance, and on-market monitoring metrics and triggers.
  • Transparency: List user-facing disclosures, limitations, and versioning/change documentation.
  • Lifecycle change control: Outline change governance, pre-specified update controls, and PMCF/PMS integration for continuous learning.
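For drafting support, the seven blocks above can be held as a simple checklist structure so that missing evidence anchors are caught before review. A minimal sketch, assuming illustrative anchor names (real files would carry controlled document IDs):

```python
# Seven-block scaffold; each block lists the evidence anchors it must cite.
ALIGNMENT_BLOCKS = {
    "scope":           ["IFU", "release identifier"],
    "risk_mapping":    ["classification rationale", "risk management report"],
    "data_governance": ["data management plan", "subgroup analyses"],
    "human_oversight": ["IFU", "usability report"],
    "performance":     ["SAP", "external validation report", "PMS plan"],
    "transparency":    ["IFU", "release notes"],
    "change_control":  ["change management SOP", "PMCF plan"],
}

def missing_anchors(draft_citations):
    """Return blocks whose required anchors are not yet cited in the draft."""
    gaps = {}
    for block, required in ALIGNMENT_BLOCKS.items():
        cited = set(draft_citations.get(block, []))
        absent = [a for a in required if a not in cited]
        if absent:
            gaps[block] = absent
    return gaps

draft = {"scope": ["IFU", "release identifier"], "performance": ["SAP"]}
print(sorted(missing_anchors(draft)))  # blocks still needing anchors
```

Running such a check at each draft revision keeps the "point to evidence rather than repeat it" discipline measurable.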

Model paragraph style guidance:

  • Write in neutral, evidence-led EU English. Avoid promotional language. Use measurable claims (“external AUC prespecified; achieved 0.91 [95% CI...]” rather than “excellent performance”).
  • Cross-reference exact file locations (section numbers, annexes, SOP IDs, version numbers). Each claim should have a traceable citation.
  • Maintain consistency with US submissions: match endpoints, definitions, population descriptions, and known limitations. When naming standards, use the same editions and cite them consistently.

Calibration checklist for tone and claims:

  • Are all claims measurable and linked to a protocol or report? If not, remove or qualify.
  • Do the endpoints and populations match those in CER/PER and US submissions? If inconsistent, reconcile or explain rationale.
  • Are subgroup and representativeness statements backed by analyses? If absent, indicate planned evidence rather than asserting results.
  • Does the oversight description align with IFU and usability reports? Ensure consistency in user roles and required competencies.
  • Are change controls described in the same terms as your software lifecycle and ACP? Harmonize vocabulary and identifiers.

4) Validate and avoid pitfalls

Validation means demonstrating traceability and readiness for audit. Build a clear mapping between each AI Act theme and the location of supporting evidence. This mapping allows reviewers to navigate quickly and verify that claims are grounded.

Traceability mapping approach:

  • Create a simple table or matrix (internally) that lists each alignment block, the controlling requirement (AI Act theme, MDR/IVDR clause, harmonized standard), and the exact file references (document IDs, sections, versions). Maintain it under document control; summarize its structure in the alignment statement and offer the full mapping upon request.
  • Ensure version coherence: the device version, dataset versions, model hash/checksum, and IFU version should align across CER/PER, technical documentation, and release notes. Any mismatch is a common trigger for questions.
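The version-coherence check above is easy to automate across the document set. A minimal sketch, assuming illustrative document names and version fields:

```python
def version_mismatches(records):
    """records: {document: {field: value}}; flag fields whose values disagree."""
    fields = {}
    for doc, meta in records.items():
        for field, value in meta.items():
            fields.setdefault(field, {}).setdefault(value, []).append(doc)
    return {f: vals for f, vals in fields.items() if len(vals) > 1}

files = {
    "CER":           {"device_version": "2.3", "ifu_version": "G"},
    "release_notes": {"device_version": "2.3", "model_hash": "a41f"},
    "IFU":           {"ifu_version": "F"},   # stale IFU revision: a common audit trigger
}
for field, values in version_mismatches(files).items():
    print(f"Mismatch in {field}: {values}")
```

Any flagged field would be reconciled before release, closing off the mismatch questions reviewers commonly raise.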

Typical reviewer objections and how to preempt them:

  • “Insufficient evidence of representativeness”: Preempt with clear subgroup definitions, counts, and confidence intervals; reference the analysis plan and results. If data are limited, state the limitation and the PMCF study that will close the gap, with timelines.
  • “Unclear human oversight and accountability”: Preempt by naming user roles, decision authority, escalation paths, and documentation that demonstrates training and competence verification. Align wording with IFU and clinical workflow.
  • “Performance not generalizable”: Preempt by demonstrating external validation across sites or devices and by explaining how drift monitoring will detect and correct performance degradation.
  • “Opaque lifecycle control for model updates”: Preempt with a defined change protocol, criteria for minor vs. major changes, revalidation obligations, and communication plans to users and regulators.
  • “Promotional or absolute claims”: Preempt by using conservative, evidence-led language, including uncertainty reporting and explicitly stated limitations.

Red-flag phrasing to remove or replace:

  • Absolute or unqualified assurances (e.g., “ensures safety,” “guarantees accuracy,” “bias-free”). Replace with conditional, evidence-based phrasing: “Risk is reduced via [control]; residual risk is monitored through [metric].”
  • Vague claims (e.g., “trained on diverse data”). Replace with quantifiable statements: “Training data include [n] patients across [m] sites with subgroup counts reported in [Annex x].”
  • Unsupported generalization (e.g., “works across all scanners/centers”). Replace with scoped claims: “Validated on [list/characteristics]; applicability outside these conditions has not been established and will be evaluated in [PMCF plan].”
  • Misalignment with US submissions (e.g., differing primary endpoints or contradictory performance). Replace with harmonized endpoints or provide explicit rationale for regional differences.
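The red-flag list above can double as a lightweight lint pass before release. As a minimal sketch (the phrase list is illustrative and would be maintained under the authoring SOP, not hard-coded):

```python
# Illustrative red-flag phrases drawn from the list above.
RED_FLAGS = [
    "ensures safety", "guarantees", "bias-free",
    "all scanners", "all centers", "works across all",
]

def flag_phrases(text):
    """Return red-flag phrases found in the draft (case-insensitive)."""
    lower = text.lower()
    return [p for p in RED_FLAGS if p in lower]

draft = "The device guarantees accuracy and is bias-free across all scanners."
print(flag_phrases(draft))
```

A hit does not automatically mean the sentence is wrong, but it forces the author to replace or qualify the claim with evidence-led phrasing.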

Final readiness checks before release:

  • Cross-document consistency: Verify that intended purpose, indications, contraindications, user profile, and operating conditions match across IFU, CER/PER, technical file, and regulatory submissions.
  • Evidence freshness: Confirm that referenced reports are finalized, approved, and under document control; draft or outdated versions undermine credibility.
  • Metric continuity: Ensure that the same statistical definitions and thresholds are used across documents (e.g., sensitivity/specificity definitions, prevalence assumptions, calibration metrics), and that confidence intervals and uncertainty are reported consistently.
  • Post-market linkages: Confirm that PMS/PMCF plans specify the exact monitoring metrics named in the alignment statement and that signal thresholds lead to documented CAPA or change control actions.

By assembling your alignment statement with these seven blocks and validation steps, you create a reusable scaffold that fits naturally into MDR/IVDR clinical and performance files. You help reviewers see, at a glance, how the AI Act’s expectations are operationalized through your existing quality system, risk management, evidence generation, and post-market processes. The language remains neutral and audit-ready: specific, measurable, and cross-referenced. This approach reduces review friction, supports harmonization with US submissions, and sets a firm foundation for safe, transparent, and well-governed AI/ML SaMD across its lifecycle.

  • An EU AI Act alignment statement is a concise, evidence-anchored signpost that cross-references existing MDR/IVDR documentation; it does not replace CER/PER or technical files.
  • Structure the statement around seven blocks: scope, risk class mapping, data governance/representativeness, human oversight, performance/monitoring, transparency, and lifecycle change control.
  • Use neutral, measurable, and traceable claims that align with MDR/IVDR and US submissions; cite exact documents, sections, and standards consistently.
  • Demonstrate audit-ready traceability and version coherence, and preempt common objections with subgroup evidence, clear oversight, external validation, defined update controls, and PMS/PMCF linkages.

Example Sentences

  • This alignment statement applies to CardioScan AI v2.3, a clinical decision support SaMD intended to prioritize suspected atrial fibrillation in adult emergency department patients.
  • Under the EU AI Act, the AI functionality is treated as high-risk due to its medical intended purpose; risk controls and residual risk evaluations are documented in RMF-012, Sections 3–6, aligned with ISO 14971.
  • Training, validation, and test datasets are detailed in DMP-CardioScan-05, with subgroup representativeness by age, sex, and site geography reported in CER Annex G and safeguarded per GDPR DPIA-2024-17.
  • Analytical and clinical performance endpoints and acceptance criteria were prespecified in SAP-CER-08; external validation achieved sensitivity 0.91 (95% CI 0.88–0.94) and specificity 0.87 (0.84–0.90) across three EU sites.
  • Changes to the model are governed by SOP-ACP-001 with predefined update gates; on-market monitoring tracks drift and alert volumes per PMS-Plan-ED-04, with CAPA triggers defined in QMS-PRO-021.

Example Dialogue

A: The notified body asked where the CER addresses the EU AI Act. Do we need to rewrite the file?
B: No. We add an alignment statement under “Regulatory Context and Applicable Requirements” that cross-references the evidence we already hold.
A: Can I write that the model is bias-free?
B: Avoid absolutes. Say subgroup representativeness is quantified in the CER annex and residual bias is monitored per the PMS plan.
A: And model updates?
B: Point to the change management SOP and the pre-specified change control plan, and keep the endpoints consistent with the US submission.

Exercises

Multiple Choice

1. Which sentence best reflects the required tone and structure for an EU AI Act alignment statement?

  • Our AI guarantees accurate results across all hospitals worldwide.
  • The model performs excellently and revolutionizes care for everyone.
  • External validation achieved sensitivity 0.91 (95% CI 0.88–0.94) as prespecified in SAP-CER-08; related drift monitoring metrics are defined in PMS-Plan-ED-04.
  • The device is amazing and totally safe, as shown by our users’ feedback.
Show Answer & Explanation

Correct Answer: External validation achieved sensitivity 0.91 (95% CI 0.88–0.94) as prespecified in SAP-CER-08; related drift monitoring metrics are defined in PMS-Plan-ED-04.

Explanation: The alignment statement must be neutral, measurable, and traceable to protocols and reports. The chosen option cites prespecified endpoints and file IDs, matching the guidance to avoid promotional or absolute claims.

2. Where is the alignment statement most appropriately placed for an IVDR device?

  • In a marketing brochure as a promotional highlight.
  • Only in the risk management file, replacing the PER.
  • Within the Performance Evaluation Report (PER) section that cross-links to analytical/clinical performance, scientific validity, and data governance.
  • Hidden in source code comments for developers.
Show Answer & Explanation

Correct Answer: Within the Performance Evaluation Report (PER) section that cross-links to analytical/clinical performance, scientific validity, and data governance.

Explanation: For IVDR devices, the statement typically appears in the PER, cross-referencing relevant evidence (analytical/clinical performance, scientific validity, data governance), as described in the lesson.

Fill in the Blanks

Under the EU AI Act, the AI functionality is treated as ___-risk due to its medical intended purpose; controls are implemented as listed in Section [x].

Show Answer & Explanation

Correct Answer: high

Explanation: Healthcare AI systems are typically high-risk under the EU AI Act; the statement should explicitly recognize this risk posture and link to implemented controls.

Training, validation, and test datasets are described in the ___, including provenance, inclusion/exclusion, and de-identification consistent with GDPR.

Show Answer & Explanation

Correct Answer: Data Management Plan (DMP)

Explanation: Data governance details should be anchored in a Data Management Plan (or model card), documenting provenance, inclusion/exclusion, and GDPR safeguards.

Error Correction

Incorrect: The alignment statement replaces MDR/IVDR evidence by summarizing key points from our CER/PER.

Show Correction & Explanation

Correct Sentence: The alignment statement does not replace MDR/IVDR evidence; it synthesizes and cross-references existing CER/PER and technical documentation.

Explanation: Per the guidance, the statement is a signpost that points to existing evidence, not a substitute for MDR/IVDR documentation.

Incorrect: We ensure safety and are bias-free, with results working across all scanners and centers.

Show Correction & Explanation

Correct Sentence: Risk is reduced via defined controls and residual bias is monitored; validation has been performed on specified sites/devices, with applicability beyond these conditions addressed in the PMCF plan.

Explanation: Absolute, unsupported claims must be avoided. Replace with qualified, evidence-led phrasing and scoped validation claims, referencing PMCF for gaps.