Written by Susan Miller

Precision English for Security Docs: Crafting Neutral Threat Modeling Language for RFCs

Struggling to turn security concerns into RFC-ready text that wins fast consensus? This lesson gives you a precise, reusable approach for crafting neutral threat modeling language—fact-anchored, scope-clear, evidence-traceable, and RFC 2119/8174 compliant. You’ll get a compact template, high-signal transformations of biased/vague phrasing, real-world examples, and targeted exercises to validate mastery. Finish with a rubric-driven workflow you can apply immediately to produce auditable, testable requirements with measurable outcomes.

1) What “neutral threat modeling language for RFCs” means—and why it matters

Neutral threat modeling language is a disciplined way of describing security risks that prioritizes verifiable facts, defined scope, and traceable evidence over blame, emotion, or speculation. In the context of RFCs (Requests for Comments), neutrality is not merely stylistic. It supports consensus-building, repeatable review, and interoperability. Drafts that avoid emotionally loaded descriptors and unsupported claims are easier to audit, easier to implement, and less likely to create ambiguity or friction across teams and organizations.

Several traits define this style:

  • Fact primacy: Statements are grounded in observable behavior (e.g., system states, protocol flows, logs, formal diagrams). Assertions that cannot be tied to an artifact are clearly labeled as assumptions.
  • Scope clarity: The system boundaries, assets, and roles are introduced before risk statements. Each claim is mapped to a precise component or interface rather than “the system in general.”
  • Evidence orientation: Each claim has a pointer to supporting material (e.g., control documentation, telemetry, standards). Where evidence is absent, uncertainty is stated explicitly and bounded.
  • Bias-free phrasing: The language avoids pejoratives and person-blame. Instead of labeling users as “careless,” the text describes actors by authentication state, authorization level, location, and capabilities.
  • Controlled modality: Requirement levels use well-defined RFC terms (MUST/SHALL, SHOULD, MAY) as specified by RFC 2119 and 8174. These terms convey implementer obligations without rhetorical inflation.
  • Separation of likelihood and impact: Risk is articulated as two distinct dimensions, with assumptions documented. The language avoids certainty claims and avoids mixing impact magnitude with probability.
  • Mitigation measurability: Proposed controls are described in testable, observable terms. Verification methods, metrics, and control mappings are indicated.
  • Drafting discipline: Concise sentences, clear definitions, and consistent terminology reduce interpretive variance. Each term is introduced once and used consistently.

This style directly supports auditability and cross-team collaboration. Neutral phrasing reduces contention, keeps stakeholder focus on system behavior, and shortens review cycles because less time is spent debating wording and more time is spent validating evidence. In practice, neutrality increases confidence that security claims are reproducible and that mitigations can be verified independently.

2) A reusable, RFC-aligned micro-template

To produce consistent, neutral threat modeling text, use a micro-template that enforces a predictable flow and makes evidence and obligations explicit. The structure below aligns with the typical needs of RFC readers and reviewers:

  • Asset: Identify the protected resource or property. Define it in operational terms (e.g., data type, service function, cryptographic key, configuration state). Include scope boundaries.
  • Entry Point: Specify the interface, protocol, or control plane path through which interaction occurs. Indicate trust boundaries and preconditions (authentication, authorization, network location, key material).
  • Threat Actor: Describe the actor using capability-based, non-accusatory terms (e.g., unauthenticated external actor, authenticated tenant user with read privileges, on-path network observer). Avoid motive assumptions unless required by the model.
  • Attack Path: Provide a concise, stepwise description of the sequence by which the actor could achieve an objective. Keep it mechanism-focused and minimally inferential.
  • Impact: Articulate the security property affected (confidentiality, integrity, availability, non-repudiation, privacy). Quantify scope where feasible (e.g., affected tenants, data classes, time window). Separate impact from likelihood.
  • Mitigations: State controls using normative language with testable criteria. Identify detection and prevention measures, and specify how to verify deployment and effectiveness.
  • Assumptions: Declare the environmental or architectural assumptions that bound the analysis (e.g., key management trust model, time synchronization, certificate validation).
  • Residual Risk: After applying mitigations, describe what risk remains, the rationale, and monitoring or contingency measures.
  • References: List evidence and standards: architecture diagrams, log schemas, control frameworks, and RFCs. Include version or date where relevant.

By following this micro-template, each threat statement becomes a compact, auditable unit. Reviewers can trace each claim to artifacts and standards, verify mitigations against control catalogs, and validate terminology. The template also supports future updates because each element is independently maintainable: assets can evolve, mitigations can change, and references can be revised without rewriting the entire document.
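As a drafting aid, the micro-template can be sketched as a structured record. The sketch below is illustrative only: the field names and the completeness check are one possible encoding, not part of any RFC convention.

```python
from dataclasses import dataclass, field

# Illustrative encoding of the micro-template as a structured record.
# Field names are hypothetical; adapt them to your own document schema.
@dataclass
class ThreatStatement:
    asset: str                  # protected resource, defined in operational terms
    entry_point: str            # interface or protocol path, with trust boundary
    threat_actor: str           # capability-based, non-accusatory description
    attack_path: str            # concise, mechanism-focused sequence
    impact: str                 # affected security property, with scope
    mitigations: list[str] = field(default_factory=list)  # normative, testable controls
    assumptions: list[str] = field(default_factory=list)  # bounding assumptions
    residual_risk: str = ""                               # post-mitigation exposure
    references: list[str] = field(default_factory=list)   # evidence artifacts, standards

    def is_complete(self) -> bool:
        """A statement is review-ready only when every element is populated."""
        return all([self.asset, self.entry_point, self.threat_actor,
                    self.attack_path, self.impact, self.mitigations,
                    self.assumptions, self.residual_risk, self.references])
```

Because each field is independently maintainable, a mitigation or reference can be revised without touching the rest of the record, mirroring the maintainability point above.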

3) Transforming biased or vague statements into neutral, compliant phrasing

Neutrality requires deliberate control of wording, particularly around blame, modality, and uncertainty. The following principles guide the transformation from biased/vague phrasing to standards-aligned, testable language:

  • Replace agent-blame with role/capability descriptors: Avoid terms implying fault or intent (e.g., “careless,” “malicious by nature”). Instead, classify actors by authentication state, privilege, network position, and tooling capability. This keeps analysis focused on system exposure rather than human characterizations.
  • Bound uncertainty explicitly: Vague qualifiers (“likely,” “probably,” “might be easy”) are replaced with bounded statements that state what is unknown and what evidence is absent. When confidence is low, declare the assumption and its effect on the assessment.
  • Apply RFC normative terms precisely: Use MUST/SHALL only when the behavior is an absolute requirement for conformance. Use SHOULD when exceptions may exist under documented conditions. Use MAY for optional behavior. Avoid stacking qualifiers (“strongly SHOULD”) or mixing terms within a single requirement.
  • Anchor claims to artifacts: Convert generalities into specific, traceable references. For example, instead of “logs will capture attempts,” specify the log schema, the event IDs, and sampling rules. Link to diagrams and controls by identifier.
  • Isolate likelihood from impact: Express them separately. For example, “If exploited, the impact is loss of confidentiality of X,” and separately, “Under current assumptions, the likelihood is assessed as [low/medium/high], pending telemetry from Y.” Do not conflate severity with probability.
  • State mitigations as verifiable controls: Move from intent (“ensure it is secure”) to measurable configuration or behavior (“MUST enforce TLS 1.3 with AEAD cipher suites; certificate validation MUST include revocation checking under policy P”). Define verification methods.
  • Use definition-first terms consistently: Introduce specialized terms before use. Use the same label across the document to avoid semantic drift. Where terms overlap with external standards, cite the source definition to prevent divergence.

This disciplined transformation removes ambiguity and makes each statement actionable. It also accelerates consensus: reviewers can interrogate assumptions, validate evidence, and argue effectively about mitigation choices without getting entangled in language disputes.
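Several of these transformations can be partially automated as an editing pass. The sketch below is a hypothetical lint over requirement sentences; the word lists and finding labels are rough heuristics to seed a review, not an exhaustive or authoritative rule set.

```python
import re

# Hypothetical lint sketch for the transformation principles above.
# Word lists are illustrative heuristics, not an exhaustive catalog.
STACKED = re.compile(r"\b(strongly|really|very)\s+(MUST|SHALL|SHOULD|MAY)\b")
VAGUE = re.compile(r"\b(probably|likely|might be easy|obviously|careless)\b",
                   re.IGNORECASE)
MIXED = re.compile(r"\b(MUST|SHALL|SHOULD|MAY)\b.*\b(MUST|SHALL|SHOULD|MAY)\b")

def lint_requirement(sentence: str) -> list[str]:
    """Return a list of neutrality findings for one requirement sentence."""
    findings = []
    if STACKED.search(sentence):
        findings.append("stacked qualifier on a normative keyword")
    if VAGUE.search(sentence):
        findings.append("vague or blame-laden qualifier")
    if MIXED.search(sentence):
        findings.append("multiple normative keywords in one requirement")
    return findings
```

A pass like this only flags candidates; the substantive judgments (whether a claim is anchored to an artifact, whether an assumption is bounded) remain human review work.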

4) Mini end-to-end drafting process with rubric and checks

To produce RFC-ready threat modeling text, follow a stepwise drafting process that operationalizes the micro-template and embeds quality checks:

  • Step 1: Establish the glossary and scope. Define assets, actors, and trust boundaries before drafting threats. Create a short glossary for recurrent terms. Ensure that each term maps to architecture diagrams or system components. This prevents later inconsistencies and clarifies the unit of analysis.

  • Step 2: Collect evidence artifacts. Gather architecture diagrams, interface specifications, sequence charts, logging schemas, control configurations, and relevant standards. Assign each artifact an identifier. Note any gaps—these become declared uncertainties.

  • Step 3: Draft threat statements using the micro-template. For each asset and entry point, write one concise threat statement that includes the actor, attack path, impact, mitigations, assumptions, residual risk, and references. Keep sentences concise and avoid multi-idea clauses that can cause ambiguity.

  • Step 4: Apply modality and mitigation verification. Review each mitigation: if it is necessary for conformance, use MUST/SHALL; if it is a recommended default with documented exceptions, use SHOULD; if optional, use MAY. For each mitigation, define a verification method (e.g., configuration audit, test harness, synthetic transaction, log pattern) and link to measurement criteria.

  • Step 5: Separate and document risk dimensions. For each threat, articulate impact separately from likelihood. Include bounding assumptions and any available telemetry. Avoid certainty language; prefer “assessed as” or “based on observed X in Y.” Document residual risk explicitly.

  • Step 6: Perform bias and clarity edits. Replace any person-blame or intent assumptions with capability descriptors. Remove vague qualifiers unless they are tied to bounded uncertainty. Ensure consistent terminology and definition-first usage.

  • Step 7: Cross-reference for traceability. Insert references to RFC 2119/8174 for normative terms, to specific controls (e.g., least privilege, encryption in transit/at rest, audit logging), and to external frameworks where relevant. Check that each claim has at least one traceable reference.

  • Step 8: Final conformance pass. Validate that each requirement is testable, each mitigation is measurable, and each assumption is explicitly documented. Confirm that sentence structure is concise and that terms are used consistently throughout the draft.
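As one concrete illustration of the verification methods in Step 4, a mitigation such as “clients MUST enforce TLS 1.3” can be paired with a configuration-audit check. The sketch below uses Python's standard ssl module as a stand-in for whatever stack the requirement actually targets; the function name is hypothetical.

```python
import ssl

# Sketch of a Step 4 verification method: a configuration audit for a
# "clients MUST enforce TLS 1.3" requirement. The audit function name is
# hypothetical; the check itself uses the standard-library ssl module.
def audit_tls13_enforced(ctx: ssl.SSLContext) -> bool:
    """Return True when the context refuses any protocol below TLS 1.3."""
    return ctx.minimum_version >= ssl.TLSVersion.TLSv1_3

# A conforming client configuration sets the floor explicitly.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
```

The point is the shape of the check, not this particular API: each normative mitigation gets a concrete, repeatable pass/fail procedure that a reviewer can rerun.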

To make this process objective, apply a simple rubric during review:

  • Neutrality and bias-free language: No pejoratives; actors are described by capability and context. Pass/fail with examples noted.
  • Structure adherence: Each threat statement contains asset, entry point, actor, path, impact, mitigations, assumptions, residual risk, and references. Score for completeness.
  • Normative accuracy: MUST/SHALL/SHOULD/MAY are used in accordance with RFC 2119/8174, without mixed or stacked modalities. Non-conforming usage is flagged.
  • Evidence sufficiency: Every claim is tied to at least one artifact or standard. Missing references are recorded as gaps.
  • Mitigation verifiability: Controls include clear verification methods and metrics. Mitigations that cannot be tested or measured are revised or removed.
  • Risk separation: Impact and likelihood are articulated separately with documented assumptions. Conflation results in revision.
  • Terminology consistency: Terms introduced in the glossary are used consistently; no synonyms are introduced later. Deviations are corrected.
  • Conciseness and clarity: Sentences are direct and free of unnecessary qualifiers. Long, compound sentences are split for readability.
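Some of these rubric items can be mechanized. The sketch below treats a threat statement as a plain dictionary with illustrative field names and covers only the mechanically checkable criteria; neutrality, conciseness, and terminology consistency remain human judgments.

```python
# Hypothetical sketch: the mechanically checkable rubric items as pass/fail
# checks over one threat statement, represented as a plain dict. Field names
# mirror the micro-template but are illustrative, not a fixed schema.
REQUIRED_FIELDS = ("asset", "entry_point", "actor", "attack_path", "impact",
                   "mitigations", "assumptions", "residual_risk", "references")

def apply_rubric(stmt: dict) -> dict[str, bool]:
    """Return per-criterion pass/fail results for the structural rubric items."""
    return {
        # Structure adherence: every template element is populated.
        "structure_adherence": all(stmt.get(f) for f in REQUIRED_FIELDS),
        # Evidence sufficiency: at least one traceable reference exists.
        "evidence_sufficiency": bool(stmt.get("references")),
        # Risk separation: impact and likelihood are recorded independently.
        "risk_separation": bool(stmt.get("impact")) and bool(stmt.get("likelihood")),
    }
```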

Finally, embed practical checks into your workflow:

  • Checklist for each paragraph: Does it contain one idea? Does it use defined terms? Is any claim unreferenced? Is the modality correct?
  • Telemetry linkage: For any likelihood assessment, is there a data source specified (logs, metrics, incident history)? If not, is the uncertainty stated?
  • Control mapping: For each mitigation, is there an explicit mapping to a control framework concept (e.g., least privilege, encryption, audit logging), and is the mapping relevant to the stated impact?
  • Residual risk statement: After mitigations, is the remaining exposure clear, and is there monitoring or contingency noted?

Adhering to these steps and checks turns threat modeling into a repeatable, auditable drafting activity suitable for RFC publication. The outcome is language that can be implemented and reviewed reliably: requirements are normative and testable, claims are evidence-based, risks are transparent and bounded, and mitigations are traceable to recognized control practices. Over time, this approach builds institutional muscle memory; reviewers know what to expect, implementers know how to comply, and the organization gains a durable foundation for iterative improvement and external scrutiny.

Key Takeaways

  • Use neutral, evidence-based language: ground claims in artifacts, define scope and terms first, avoid blame, and state uncertainties explicitly.
  • Separate risk dimensions: describe impact and likelihood independently with documented assumptions and telemetry where available.
  • Apply RFC modality precisely: use MUST/SHALL, SHOULD, and MAY per RFC 2119/8174, with testable, measurable mitigations and clear verification methods.
  • Follow the micro-template and process: include asset, entry point, actor, attack path, impact, mitigations, assumptions, residual risk, and references; ensure consistency, traceability, and conciseness.

Example Sentences

  • The claim about token leakage is scoped to the API gateway and is supported by log artifact A-12.
  • An unauthenticated external actor can send malformed handshake messages through Entry Point EP-3; impact and likelihood are assessed separately.
  • Certificate validation MUST include revocation checking as defined in Policy P-7, with verification via test case TC-21.
  • Assumption A-2 states that time synchronization is within ±1 second; uncertainty increases if NTP drift exceeds this bound.
  • Residual risk remains for on-path observers despite TLS 1.3, limited to metadata exposure over a 24-hour window.

Example Dialogue

Alex: Your draft says the service is probably insecure; can you anchor that to an artifact?

Ben: You're right; I'll replace that with, “Under Assumption A-1, an authenticated tenant with read privileges can access EP-2; logs L-9 confirm three such requests.”

Alex: Good. Separate impact from likelihood and use RFC terms—what MUST the implementer do?

Ben: The client MUST enforce TLS 1.3 and SHOULD rotate API keys every 90 days, verifiable via control check CC-5.

Alex: Add the residual risk statement so reviewers see what remains after those controls.

Ben: Noted: residual risk is limited to rate-limited metadata enumeration by external actors; monitoring via metric M-4 applies.

Exercises

Multiple Choice

1. Which sentence best demonstrates bias-free phrasing aligned with the lesson?

  • Careless users keep leaking keys, so we must lock things down.
  • An unauthenticated external actor can attempt key enumeration via Entry Point EP-1; impact and likelihood are assessed separately.
  • Bad actors will definitely steal tokens unless we fix this immediately.
  • The API team SHOULD really try harder to stop obvious attacks.
Show Answer & Explanation

Correct Answer: An unauthenticated external actor can attempt key enumeration via Entry Point EP-1; impact and likelihood are assessed separately.

Explanation: It describes the actor by capability and authentication state, reserves normative keywords for implementer obligations, and separates impact from likelihood without blame or certainty claims.

2. Which requirement uses RFC 2119/8174 terms correctly and is testable?

  • Clients strongly SHOULD MUST use encryption.
  • The service should probably be secure if possible.
  • Clients MUST enforce TLS 1.3 with AEAD cipher suites; verification via test case TC-9.
  • We MAY or MAY NOT log stuff depending on how we feel.
Show Answer & Explanation

Correct Answer: Clients MUST enforce TLS 1.3 with AEAD cipher suites; verification via test case TC-9.

Explanation: It uses MUST correctly for a conformance requirement and specifies a testable verification method, aligning with mitigation measurability.

Fill in the Blanks

Each claim in a neutral RFC threat statement should be anchored to an artifact; if evidence is absent, the uncertainty must be ___ and bounded.

Show Answer & Explanation

Correct Answer: stated

Explanation: The lesson requires uncertainty to be explicitly stated and bounded when evidence is missing.

Risk must be expressed along two distinct dimensions: ___ and impact, without conflating the two.

Show Answer & Explanation

Correct Answer: likelihood

Explanation: The lesson mandates separation of likelihood and impact as distinct dimensions.

Error Correction

Incorrect: The system is probably insecure, and users are careless, so we MUST fix everything immediately.

Show Correction & Explanation

Correct Sentence: Under Assumption A-1, an unauthenticated external actor can reach Entry Point EP-2. Impact and likelihood are assessed separately. Implementers SHOULD apply control C-5; verification via test case TC-12.

Explanation: Removes blame and vague certainty, separates impact from likelihood, and uses precise RFC modalities with a verifiable control.

Incorrect: Logs will capture attempts.

Show Correction & Explanation

Correct Sentence: Audit logs L-7 MUST record authentication failures with event ID EV-14 at 100% sampling; verification via configuration check CC-3.

Explanation: Anchors the claim to specific artifacts and uses normative, testable language per evidence orientation and mitigation measurability.