Strategic English for Unexpected Results: How to Articulate Them in Patent Writing
Struggling to turn “surprising data” into persuasive patent language that stands up in US and EP prosecution? This lesson gives you a repeatable, attorney‑vetted method to frame unexpected results using Context → Contrast → Causation, backed by disciplined evidence, calibrated hedging, and clear causal links. You’ll see concise explanations, real‑world model sentences, and targeted exercises (MCQs, fill‑in‑the‑blank, error correction) so you can write claims and responses that signal non‑obviousness without hype. Finish with a toolbox you can apply immediately—precise, defensible English that converts experiments into strategic advantage.
Step 1: Define and diagnose “unexpected results” in the patent context
In patent practice, an “unexpected result” is not merely a good or impressive outcome. It is a result that a person skilled in the art would not reasonably predict from the teachings of the closest prior art and the general knowledge in the field. In the United States, after KSR v. Teleflex, obviousness analysis is flexible and holistic. An invention may be considered obvious if there was a reason to combine known elements with predictable results. Therefore, to counter an obviousness rejection, it helps to show that your claimed features produce more than a predictable improvement. Unexpected results fit into this analysis as a type of secondary consideration (also called objective indicia) that can tip the scale toward non-obviousness when they are credibly linked to the claim features.
In the European system, inventive step often follows the problem–solution approach. The examiner identifies the closest prior art, defines the objective technical problem, and asks whether the claimed solution would have been obvious to the skilled person. Unexpected effects (or “surprising effects”) are directly relevant to demonstrate that the claimed distinguishing features solve the problem in a way that the prior art would not lead the skilled person to expect. In both systems, the law asks for more than a claim of surprise; it asks for evidence that the deviation from expectation is robust, attributable to the features of the claim, and meaningful for the technical field.
The types of admissible evidence typically include well-documented experimental data, side-by-side comparisons with the closest prior art, and reproducible tests using established methods. Declarations or affidavits in the US, and experimental reports in both jurisdictions, can support the record. The key is to ensure that the evidence is commensurate in scope with the claims: the data must correspond to the features as claimed, not to a narrow, unclaimed subset unless you explain why the subset is representative. In Europe, likewise, submissions should be aligned with the objective technical problem and tied to the distinguishing technical features.
Because examiners are sensitive to overstatement, it is crucial to diagnose the language for hype. Avoid words such as “revolutionary,” “miraculous,” or “unprecedented” unless they are tied to quantitative anchors and a credible comparator. Replace slogans with calibrated statements that reference measurable parameters, baselines, and mechanisms. Ask yourself: Would the skilled person, reading the closest prior art, have predicted this degree, direction, or type of effect? If not, why not? Is the deviation statistically credible and replicable? Finally, remember that a mere change in degree (e.g., slightly better performance) rarely counts as unexpected, unless the field teaches a plateau or teaches away from the effect, or the improvement is disproportionate to routine optimization.
Diagnostic checks to avoid hype include:
- Does the claim identify the feature that produces the effect, and does the evidence isolate that feature? If not, the causal story is weak.
- Is the comparator the true closest prior art, not a convenient but weaker baseline? If the comparator is not the closest, the argument loses weight.
- Are your statements conclusory (e.g., “far better,” “surprising”) without data anchors (numbers, ranges, p-values, error bars, or confidence intervals)? If yes, refine the language.
- Do you rely on a single data point without replication or controls? If so, you risk challenges on reproducibility or selection bias.
- Is the effect commensurate with claim breadth? If the claim is broad but the data narrow, add reasoning (structure–function rationale) to bridge the gap.
Step 2: Introduce the articulation template (Context → Contrast → Causation)
To express unexpected results clearly and persuasively, use a three-part articulation template that guides the reader through legal and scientific logic.
1) Context: Identify the closest prior art and the skilled person’s expectation. In this segment, state the relevant baseline. Specify what the art teaches regarding methods, materials, parameter ranges, or mechanisms. Use neutral, factual language to establish predictability or lack thereof. Preferred verbs and phrases include: “discloses,” “suggests,” “teaches,” “predicts,” “would be expected to,” and “ordinarily yields.” The goal is to anchor the reader in the known landscape so the deviation is meaningful.
2) Contrast: Quantify the measured deviation from that baseline. This is where you present the delta. Use numerical measures (e.g., percentage change, fold increase, threshold crossing) and point out directionality (e.g., lower, higher, faster) and magnitude (e.g., by 35%, by a factor of 3). Verbs such as “exceeds,” “reduces,” “enhances,” “shifts,” and “surpasses” help you articulate the difference precisely. Connect the data to the specific claim feature (material, configuration, dosage, algorithmic parameter, etc.). Maintain clarity by identifying the comparator explicitly and keeping units consistent.
3) Causation/Mechanism: Explain why the claimed features lead to the result. This is not a demand for absolute proof of mechanism but for a plausible technical rationale that ties claim elements to the observed effect. Useful phrases include: “attributable to,” “arises from,” “is enabled by,” “is mediated by,” and “is linked to.” In both US and EP contexts, a credible linkage increases the probative value of the unexpected result. It also helps defend against “obvious to try” and “routine optimization” arguments by showing that the effect flows from a non-trivial interplay of features rather than from simply turning a standard knob.
Model sentence structure following the template:
- Context: “The closest prior art teaches [baseline method/parameter], which yields [typical outcome] and would lead the skilled person to expect [predictable range or behavior].”
- Contrast: “In contrast, the claimed configuration [identify feature] achieves [quantified result], representing [specific delta] relative to the prior art baseline.”
- Causation: “This deviation is attributable to [identified feature/mechanism], which [brief technical rationale linking feature to effect].”
This structure avoids conclusory statements by embedding comparators and data within a concise rhetorical flow. It also aligns naturally with the EP problem–solution approach, where the “Context” sets the closest prior art and the objective problem, the “Contrast” presents the unexpected effect of the claimed solution, and the “Causation” ties the effect to the distinguishing features.
Step 3: Embed evidence and language discipline
Unexpected results must be communicated with both scientific caution and legal precision. Three practices support this discipline: controlled evidence, calibrated hedging, and careful causal signaling.
First, controlled evidence. When you report data, specify the experimental conditions concisely and indicate controls. If the claim concerns a composition, show head-to-head comparisons against the closest formulation under identical conditions. If the claim concerns a process parameter, show gradients or dose–response patterns and identify statistically meaningful shifts. Mention replication (“n” size), dispersion (standard deviation or confidence intervals), and any blinding or randomization if appropriate for the field. Even brief cues such as “triplicate runs,” “95% CI,” or “p < 0.05, two-tailed” convey reliability without overburdening the text. Ensure measurement methods are standard or, if novel, described sufficiently to enable verification.
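To make those cues concrete, the sketch below shows one way the figures behind phrases such as “triplicate runs,” “95% CI,” and “p < 0.05, two-tailed” might be computed from raw replicate measurements before they are written into a specification or declaration. It is an illustration only: the conversion values and variable names are hypothetical placeholders, not data from this lesson, and the use of NumPy and SciPy is simply one convenient choice.

```python
# Minimal illustrative sketch: deriving mean, 95% CI, p-value, and percentage
# change from hypothetical replicate data (triplicate runs, matched conditions).
import numpy as np
from scipy import stats

baseline = np.array([61.2, 63.5, 60.8])  # closest-prior-art comparator (hypothetical values)
claimed = np.array([90.4, 91.7, 90.9])   # claimed configuration, otherwise identical conditions

def mean_ci(x, confidence=0.95):
    """Mean and two-sided t-based confidence interval for one sample."""
    m = x.mean()
    lo, hi = stats.t.interval(confidence, df=len(x) - 1, loc=m, scale=stats.sem(x))
    return m, lo, hi

b_mean, b_lo, b_hi = mean_ci(baseline)
c_mean, c_lo, c_hi = mean_ci(claimed)

# Two-tailed Welch's t-test: does the delta exceed run-to-run noise?
t_stat, p_value = stats.ttest_ind(claimed, baseline, equal_var=False)

# Direction and magnitude of the delta relative to the baseline.
pct_change = 100 * (c_mean - b_mean) / b_mean

print(f"Baseline: {b_mean:.1f}% (95% CI {b_lo:.1f}-{b_hi:.1f}%), n={len(baseline)}")
print(f"Claimed:  {c_mean:.1f}% (95% CI {c_lo:.1f}-{c_hi:.1f}%), n={len(claimed)}")
print(f"Delta: +{pct_change:.0f}% relative to baseline; p = {p_value:.4f}, two-tailed")
```

Welch’s test is used here because it does not assume equal variances between the test and control runs; whichever method you use, name it in the record so the examiner can verify the stated p-value and confidence interval.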
Second, calibrated hedging. Hedging is not weakness; it is scientific integrity and a credibility enhancer. Use phrases like “indicates,” “is consistent with,” “suggests,” and “supports the conclusion that” to avoid overstating causality. Combine hedging with specificity: “The data indicate a 42–48% reduction (n=12) under the stated conditions.” Avoid absolute claims such as “proves” or “always,” unless the evidence truly warrants them and the claim scope is narrow. Hedging should not dilute the delta; it should reflect uncertainty boundaries while preserving the message: the effect is real, reproducible, and linked to the claim feature.
Third, careful causal signaling. Examiners often push back on causality, especially when multiple variables change. Isolate variables in your description: highlight that the only differing feature between test and control is the claimed element. Use connective phrases that respect causality without overclaiming, such as “arises when,” “under otherwise identical conditions,” or “when [feature] is present while other parameters are held constant.” Where complete isolation is difficult, provide a mechanism-informed rationale and cross-reference converging lines of evidence (e.g., complementary assays or orthogonal measurements). This layered approach reduces the risk of being dismissed as routine optimization.
Language tools support these practices:
- Verbs for calibrated claims: “achieves,” “yields,” “confers,” “reduces,” “enhances,” “maintains,” “stabilizes,” “accelerates,” “enables.”
- Comparators and anchors: “relative to,” “compared with,” “against,” “baseline,” “control,” “closest prior art,” “delta,” “fold-change,” “percentage change,” “threshold,” “upper/lower bound.”
- Scope and qualifiers: “in at least,” “within,” “under the tested conditions,” “for embodiments wherein,” “in representative implementations.” These manage breadth without overcommitting.
- Legal connective phrases: “contrary to the expectation under [reference],” “not suggested by,” “not predictable in view of,” “teaches away from,” “inconsistent with the conventional trend,” “beyond routine optimization.” These help translate technical effects into legal reasoning.
When integrating statistics, be concise and transparent. State the primary metric, the direction and magnitude of change, and the confidence or variability. If data are noisy, acknowledge it and explain how the effect persists across replicates or conditions. This transparency strengthens credibility and aligns with both US and EP expectations for objective, verifiable assertions.
Step 4: Apply to prosecution scenarios
In the specification, embed the Context → Contrast → Causation structure in the technical description and results sections. Present the closest prior art as part of the background, carefully distinguishing it from your claimed features. In the detailed description, place side-by-side data that explicitly compares your embodiments to the closest art under matched conditions. Use headings that guide the reader: “Baseline Performance Relative to [Closest Prior Art],” “Observed Deviation Under Controlled Conditions,” and “Mechanistic Rationale for the Claimed Effect.” Such headings prime the legal relevance of the data without sounding argumentative. Ensure that claim language maps to the tested features so the evidence is commensurate with scope. If the claim is broad, include representative embodiments across the breadth and provide a unifying rationale explaining why the mechanism applies generally. This alignment prepares the ground for later reliance on the data during prosecution.
For US Office Action responses addressing KSR-style obviousness, start by articulating the examiner’s combination and the presumed predictable result. Then pivot to your unexpected results using the template. Identify the closest prior art baseline selected by the examiner; present your quantified delta; and link the delta to the specific claim feature. Use language that counters “predictable results” explicitly: “The observed effect is not a predictable consequence of combining [A] and [B] because [mechanistic rationale] and because the prior art reports [trend/plateau/contrary teaching].” Address “routine optimization” by showing disproportionate results or non-linear behavior that would not emerge from simple parameter tuning. Emphasize the presence of a threshold, inflection point, or synergy that the skilled person would not anticipate. If the examiner alleges “obvious to try,” respond that the field presented numerous alternatives without a reasonable expectation of success, and that your data show a specific, robust effect tied to the claimed feature, not to arbitrary exploration.
When writing for EP, align with the problem–solution framework. State the closest prior art and the objective technical problem in neutral terms. Present the surprising effect as the solution enabled by the distinguishing features. Tie the effect explicitly to the problem, and avoid overbroad claims if your evidence supports a narrower technical teaching. Use phrases like “The distinguishing feature [X] solves the objective technical problem by producing [effect], which departs from the expectation created by [closest prior art].” If necessary, reformulate the problem to reflect the demonstrated effect, but ensure the reformulation is not tainted by hindsight. Reinforce that the prior art neither suggests the claimed feature as a solution nor provides a hint toward the observed direction or magnitude of the effect.
In both jurisdictions, tighten your rhetoric with disciplined connectors:
- To introduce the baseline: “The closest prior art discloses…,” “Under conventional conditions…,” “The skilled person would expect…”
- To present the delta: “In contrast…,” “Under otherwise identical conditions…,” “Relative to the baseline, the claimed feature yields…”
- To link cause and effect: “This deviation is attributable to…,” “This effect arises when…,” “The feature mediates…”
- To preempt counterarguments: “Not derivable from routine optimization because…,” “Contrary to [reference] which reports…,” “Inconsistent with the linear trend observed in…,” “Not suggested by the combination of [A] and [B] because…”
Finally, document structure matters. Place your strongest unexpected-result evidence where it is discoverable and citable: in the specification’s results, with clear tables or figures and textual anchors that explain conditions and comparators; in the claims, ensure that the features responsible for the effect are captured; and in prosecution submissions, cross-reference specific paragraphs and figure labels to maintain a tight chain of proof. Keep a consistent vocabulary across documents so the examiner recognizes the precise feature–effect linkage. Avoid drifting terminology (e.g., renaming the same parameter) because it weakens the causal narrative.
By combining careful definition, a disciplined articulation template, evidence-centered language, and prosecution-aware phrasing, you create a persuasive record of unexpected results. This approach satisfies both the scientific community’s demand for rigor and the legal system’s requirement for objective indicia. It also reduces the risk of your statements being disregarded as hype or conclusory. In practical terms, aim for a compact, repeatable pattern in your writing: define the expected baseline from the closest art; present a well-quantified, reproducible deviation; and tie that deviation to the claimed features with a plausible mechanism and appropriate hedging. Over time, this pattern becomes an efficient, reliable method to frame unexpected results as strategic evidence of inventive step and non-obviousness under both US and EP standards.
- Define “unexpected results” as credible, evidence-backed deviations from what the closest prior art and skilled person would predict, linked to specific claim features and commensurate with claim scope.
- Use the Context → Contrast → Causation template: set the prior-art baseline, quantify the delta with clear comparators and units, and provide a plausible mechanism tying the effect to the claimed feature.
- Present controlled, reproducible evidence (matched conditions, replication, statistics, and standard methods) and apply calibrated hedging to avoid hype while signaling reliability.
- Align drafting and prosecution with legal frameworks (US KSR and EP problem–solution): compare against the closest prior art, show effects beyond routine optimization or “obvious to try,” and keep terminology consistent across spec, claims, and responses.
Example Sentences
- The closest prior art teaches a titanium catalyst that ordinarily yields 60–65% conversion, whereas the claimed bimetallic catalyst achieves 91% ±2% under otherwise identical conditions, a deviation attributable to cooperative hydrogen spillover at the Ti–Pd interface.
- Contrary to the expectation under Smith 2019, which predicts linear gains below 5% when increasing surfactant concentration, the claimed co-solvent system reduces droplet size by 38% (n=9, p<0.01), consistent with a threshold micelle reorganization enabled by component X.
- Relative to the baseline algorithm disclosed in JP ’412, the claimed quantization scheme surpasses the expected accuracy plateau and improves F1 by 0.07 at 4-bit precision, an effect linked to the asymmetric clipping parameter constrained in claim 3.
- Under conventional annealing (the closest prior art), films crack at strains above 2%, but the claimed gradient-cured film maintains ductility up to 6.4% (95% CI: 6.1–6.7%), which is not predictable in view of routine optimization and arises from a crosslink-density gradient.
- Compared with the control formulation aligned to US ’003, the claimed stabilizer reduces API degradation from 12% to 2.1% over 30 days at 40°C/75% RH, a disproportionate reduction attributable to radical scavenging when parameter R is within 0.3–0.4 as recited.
Example Dialogue
Alex: The examiner says combining A and B would predictably give us a modest yield bump.
Ben: Then anchor the baseline—cite the closest prior art showing 55–60% yield under their exact conditions.
Alex: Good. Next, I’ll present our delta: with the claimed chelating ligand, we consistently hit 88% ±3% in triplicate, relative to that baseline.
Ben: And link it to causation—state that the effect arises when the ligand enforces cis coordination, which the prior art neither teaches nor suggests.
Alex: I’ll add that this behavior is inconsistent with the linear trend reported in Jones ’18, so it’s beyond routine optimization.
Ben: Perfect. That Context → Contrast → Causation flow makes the unexpected result both clear and legally relevant.
Exercises
Multiple Choice
1. Which phrasing best follows the Context → Contrast → Causation template without hype?
- Our invention is revolutionary and unprecedented compared with anything before.
- The closest prior art discloses a single-metal catalyst yielding 60–65% conversion; in contrast, the claimed bimetallic system achieves 91% ±2% under identical conditions, an effect attributable to hydrogen spillover at the interface.
- The prior art is old and wrong; our results are obviously better.
- We always get superior performance under any conditions due to our innovative approach.
Show Answer & Explanation
Correct Answer: The closest prior art discloses a single-metal catalyst yielding 60–65% conversion; in contrast, the claimed bimetallic system achieves 91% ±2% under identical conditions, an effect attributable to hydrogen spillover at the interface.
Explanation: This option explicitly sets the baseline (Context), quantifies the delta (Contrast), and links the effect to a feature/mechanism (Causation), while avoiding hype.
2. Which comparator is most appropriate to substantiate an unexpected result?
- A weaker baseline chosen because it shows a larger difference.
- An industry-average performance from five years ago.
- The closest prior art operating under matched conditions.
- A hypothetical model with assumed parameters.
Show Answer & Explanation
Correct Answer: The closest prior art operating under matched conditions.
Explanation: Unexpected results must be shown relative to the closest prior art under otherwise identical conditions to be persuasive in both US and EP practice.
Fill in the Blanks
"Under conventional conditions described in JP ’412, accuracy plateaus at 4-bit precision; in contrast, the claimed quantization scheme improves F1 by 0.07, a deviation ___ to the asymmetric clipping parameter in claim 3."
Show Answer & Explanation
Correct Answer: attributable
Explanation: The template’s Causation step uses causal linkers like “attributable to” to tie the result to the claimed feature.
"The data ___ a 42–48% reduction (n=12) relative to the baseline, under otherwise identical conditions."
Show Answer & Explanation
Correct Answer: indicate
Explanation: Calibrated hedging prefers verbs like “indicate,” “suggest,” or “support,” which convey credibility without overclaiming.
Error Correction
Incorrect: The closest prior art is worse, and our data proves a huge improvement without needing controls.
Show Correction & Explanation
Correct Sentence: The closest prior art discloses the baseline performance; under otherwise identical conditions, our data indicate a quantified improvement supported by appropriate controls.
Explanation: Avoid hype (“worse,” “proves a huge improvement”) and include matched conditions and calibrated hedging consistent with evidence-centered language.
Incorrect: The observed effect always occurs and is due to many changes we made at once.
Show Correction & Explanation
Correct Sentence: The observed effect arises when the claimed feature is present while other parameters are held constant, and it has been replicated across runs.
Explanation: Isolate the causal variable and avoid absolute claims like “always”; emphasize controlled comparisons and replication per the guidance.