Strategic English for AI Patent Examiner Interviews: Sample Agendas That Surface Training Data Questions
Struggling to keep AI examiner interviews focused—especially when training data drives §101 and §103? This lesson equips you to run a surgical, written agenda that isolates claim interpretation, ties eligibility to reproducible data-driven improvements, and tests obviousness via dataset assumptions. You’ll get crisp frameworks with timing and scripts, high-signal question stems, and documentation checklists—plus targeted exercises to validate your approach. Finish ready to shorten prosecution and memorialize commitments that support AFCP, after-final, or appeal pathways.
Why a written agenda matters in AI examiner interviews—and how training data issues drive §101 and §103
A written agenda does more than list talking points; it controls scope, sequence, and tone. In AI applications, the agenda also signals technical competence and transparency. When an examiner sees an agenda that clearly separates claim interpretation from technical evidence, and technical evidence from legal standards, they can respond more precisely. This precision is essential when the prosecution hinges on training data provenance, dataset bias, and reproducibility—issues that influence both eligibility under §101 and nonobviousness under §103.
Under §101, an AI claim may be characterized as an abstract idea (e.g., mathematical concepts or mental steps). However, if the agenda guides discussion toward how the training data and pipeline produce a specific, repeatable technical improvement (for instance, a measurable change to system performance under defined conditions), the dialogue shifts from abstraction to practical application. Training data provenance—where the data comes from, how it was curated, and why it is fit for purpose—helps frame the AI system as a technical solution based on concrete input conditions. Reproducibility—documented parameter settings, versioned datasets, and stable evaluation metrics—helps show that the improvement is not incidental or purely result-oriented.
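To make that reproducibility showing concrete before the interview, it helps to know what a documented record might contain. The sketch below is a minimal, hypothetical illustration in Python of such a record; every field name and value is an assumption for teaching purposes, not a required format or an actual filing.

```python
# Minimal, hypothetical sketch of a reproducibility record; all names and
# values are illustrative assumptions, not a prescribed evidentiary format.
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class ReproducibilityManifest:
    dataset_name: str         # purpose-built corpus (illustrative name)
    dataset_version: str      # pinned, versioned release of the curated dataset
    curation_criteria: tuple  # documented selection and exclusion rules
    bias_controls: tuple      # bias detection and mitigation steps applied
    random_seed: int          # fixed seed so training runs can be repeated
    hyperparameters: dict     # locked model settings used for every run
    evaluation_set: str       # frozen benchmark split used for all comparisons
    metric: str               # stable evaluation metric reported in the record

    def fingerprint(self) -> str:
        """Hash the manifest so later filings can cite one unambiguous record."""
        payload = json.dumps(asdict(self), sort_keys=True, default=str)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Placeholder values standing in for the facts a declaration would document.
manifest = ReproducibilityManifest(
    dataset_name="curated-domain-corpus",
    dataset_version="v2.1.0",
    curation_criteria=("documented inclusion rules", "duplicate removal"),
    bias_controls=("distributional skew audit", "exclusion of flagged subsets"),
    random_seed=42,
    hyperparameters={"learning_rate": 3e-4, "batch_size": 32, "epochs": 10},
    evaluation_set="held-out-benchmark-split",
    metric="macro-F1",
)
print(manifest.fingerprint())
```

In the interview, the point is not the code itself but that each field maps to a fact the examiner can verify on the record.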
Under §103, obviousness often turns on whether a person of ordinary skill in the art would have combined known techniques with a reasonable expectation of success. Here, training data issues can be determinative. If the prior art uses generic, noisy datasets while your claim relies on a purpose-built corpus with documented bias controls and a validated distribution that enables a non-trivial performance gain, the examiner’s rationale for combining references may weaken. The agenda, therefore, should guide the conversation to isolate which prior art assumptions about data availability, quality, or representativeness are actually supported. By asking focused questions about how the cited references treat dataset characteristics, you spotlight the evidence burden on the Office to substantiate a reasonable expectation of success.
A strong agenda keeps the interview outcomes-oriented. It prioritizes concrete agreements: how the examiner interprets each key limitation; which evidence would overcome the rejections; and what claim adjustments might be persuasive. Time-boxing prevents digression into broad discussions about “AI in general,” and instead anchors the meeting in the record—previous Office actions, cited references, and your pending amendments or declarations. The final benefit is strategic: a clear agenda aligns interview dialogue with the MPEP’s expectations, making it easier to memorialize commitments in a compliant summary that supports later appeal strategy or an AFCP/after-final pathway.
Building the agenda: three frameworks with timing and scripted language
A disciplined agenda uses predictable sections: opening frame, claim focus, evidence alignment, examiner asks, and close-out. Below are three frameworks tailored to common prosecution stages in AI matters. Each framework builds in time allocations and scripted transitions so you can manage the conversation while maintaining a professional tone.
Framework A: First Action on the Merits (FAOM) alignment
- Total duration: 25–30 minutes
- Goal: Align on claim interpretation and target the most efficient evidence paths for §101 and §103.
1) Opening frame (3 minutes)
- Scripted language: “We appreciate your time. Our goal is to align on the interpretation of the key limitations and understand what evidence would meaningfully advance prosecution. We will focus on training data provenance, bias controls, and reproducibility as they relate to §§101 and 103.”
2) Claim anchors (7–8 minutes)
- Focus on 1–3 critical limitations, using precise references to claim numbers and specific claim terms.
- Scripted language: “For ‘training a model using the curated dataset,’ we understand ‘curated’ to require documented selection criteria and bias assessment. Is this consistent with your reading, or do you view ‘curated’ as merely filtered or labeled?”
3) §101 technical improvement nexus (5–6 minutes)
- Tie the improvement to training data characteristics and controlled evaluation (a minimal sketch of such a fixed-conditions comparison appears after this framework).
- Scripted language: “We aim to show a specific, repeatable performance improvement attributable to data curation protocols, not just model tuning. Would evidence demonstrating a statistically significant gain under a fixed evaluation set and locked hyperparameters address your eligibility concerns?”
4) §103 combination logic and expectation of success (5–6 minutes)
- Probe how cited references treat data availability and quality.
- Scripted language: “The prior art appears to assume off-the-shelf datasets without documented bias controls. Would you agree that no reference teaches the claimed provenance requirements? If not, which passage establishes that a person of ordinary skill would have expected the same improvement with generic data?”
5) Examiner evidence preferences and next steps (3–4 minutes)
- Scripted language: “If we provide a brief declaration outlining dataset lineage, curation steps, and reproducibility parameters, would that be sufficient, or do you prefer a claim amendment clarifying the provenance requirements?”
6) Close-out (2 minutes)
- Scripted language: “We will submit a concise summary memorializing our understanding. Please confirm any corrections so we proceed efficiently.”
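Item 3 of this framework offers evidence of a “statistically significant gain under a fixed evaluation set and locked hyperparameters.” The following is a minimal sketch of one way such a comparison could be run, assuming per-seed scores from curated-data and generic-data training runs are already in hand; the numbers are placeholders and the paired t-test is only one of several defensible analyses.

```python
# Minimal sketch: paired comparison of per-seed scores on one frozen evaluation
# set with hyperparameters held constant. All scores are made-up placeholders.
from statistics import mean
from scipy import stats  # requires SciPy

# Accuracy on the same locked evaluation set across five fixed random seeds.
curated_scores = [0.874, 0.869, 0.881, 0.877, 0.872]  # provenance-verified corpus
generic_scores = [0.851, 0.846, 0.855, 0.849, 0.853]  # off-the-shelf data

# Paired test: each seed pairs one curated-data run with one generic-data run.
result = stats.ttest_rel(curated_scores, generic_scores)

print(f"Mean gain: {mean(curated_scores) - mean(generic_scores):.3f}")
print(f"Paired t-test: t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value under these fixed conditions is the kind of repeatable,
# data-attributable improvement the scripted language above offers to document.
```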
Framework B: After-Final or Pre-Appeal triage
- Total duration: 20–25 minutes
- Goal: Identify the narrowest changes or evidence that could move the case under AFCP or justify a Pre-Appeal brief.
1) Opening frame (2–3 minutes)
- Scripted language: “We want to determine whether a targeted amendment or short declaration can place the application in condition for allowance under AFCP, or if a Pre-Appeal is more appropriate.”
2) Pinpointing dispositive limitations (5–6 minutes)
- Scripted language: “Our understanding is that the ‘provenance-verified corpus’ limitation is the central point in both §101 and §103. Do you agree this limitation is dispositive?”
3) Evidence sufficiency and format (5–6 minutes)
- Scripted language: “Would you consider a declaration that links each curation step to measurable performance effects sufficient? If so, what level of statistical detail do you need to see the improvement as a practical application rather than a result-oriented claim?”
4) AFCP feasibility vs. Pre-Appeal criteria (5–6 minutes)
- Scripted language: “If we clarify ‘provenance-verified’ to require versioned sources and documented exclusion of biased subsets, do you foresee an allowability path under AFCP? If not, what specific deficiency would remain for a Pre-Appeal to address?”
5) Commitments and timing (2–3 minutes)
- Scripted language: “We can file within five business days. If acceptable, we will confirm this plan in our interview summary.”
Framework C: Bias and Enablement deep-dive
- Total duration: 30–35 minutes
- Goal: Address concerns related to training data bias, reproducibility, and enablement, which often inform both §101 and §112 arguments and indirectly shape §103.
1) Opening frame (3 minutes)
- Scripted language: “We would like to focus on bias mitigation and reproducibility because these issues appear to underlie the §101 analysis and potential §112 concerns.”
2) Bias identification and mitigation (8–9 minutes)
- Scripted language: “We define ‘bias’ in relation to distributional skew and harmful performance disparities. Is your concern tied to a lack of disclosure on detection methods, on mitigation procedures, or both?” (One illustrative way to quantify skew and disparity is sketched after this framework.)
3) Reproducibility standards (8–9 minutes)
- Scripted language: “Which parameters do you consider necessary for reproducibility—dataset versions, random seeds, preprocessing pipelines, or evaluation benchmarks? If we specify them in the claim or in the specification via amendment, would that resolve your concerns?”
4) §101 practical application bridge (5–6 minutes)
- Scripted language: “If the file history documents concrete, repeatable performance gains stemming from bias-controlled data, would that address the risk of characterizing the claim as a mere mathematical result?”
5) Closing confirmation (3 minutes)
- Scripted language: “We will provide a summary capturing the required disclosures and your preferences for claim scope versus evidence in the record.”
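Item 2 of this framework frames “bias” as distributional skew and harmful performance disparities. As one hypothetical way to quantify those two concepts for the record, consider the sketch below; the group labels, counts, and accuracies are invented placeholders, and many other bias metrics could serve the same purpose.

```python
# Minimal sketch: quantify distributional skew and a subgroup performance gap.
# Group labels, counts, and accuracies are invented placeholders.
from collections import Counter

# Distributional skew: each subgroup's share of the training corpus compared
# with its assumed share of the target population.
training_counts = Counter({"group_a": 7200, "group_b": 1800, "group_c": 1000})
population_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

total = sum(training_counts.values())
for group, count in training_counts.items():
    corpus_share = count / total
    skew = corpus_share - population_share[group]
    print(f"{group}: corpus share {corpus_share:.2f}, skew {skew:+.2f}")

# Performance disparity: gap between the best- and worst-served subgroups on
# the frozen evaluation set.
subgroup_accuracy = {"group_a": 0.91, "group_b": 0.84, "group_c": 0.79}
disparity = max(subgroup_accuracy.values()) - min(subgroup_accuracy.values())
print(f"Maximum subgroup accuracy gap: {disparity:.2f}")
```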
Conduct and probe: question stems that surface examiner reasoning
Your questioning should guide the examiner to articulate their positions on training data, model architecture, claim interpretation, and evidentiary burdens. Use narrow, verifiable stems that lead to actionable outcomes.
- On claim interpretation and training data requirements:
- “Do you read ‘curated dataset’ as requiring documented selection criteria, or would any labeled dataset meet this term?”
- “Which words in the claim, if any, do you view as merely intended use rather than as limiting the training process?”
- “Is ‘provenance-verified’ understood as requiring source traceability and version control, or would a general citation to public data suffice?”
- On §101 and technical improvement:
- “Which aspect of the claim, in your view, fails to show a practical application—data processing operations, evaluation methodology, or deployment context?”
- “Would a declaration linking the performance gain to dataset controls, under fixed model parameters, meet your expectation for a technical improvement?”
- “Are there particular benchmarks you consider industry-standard for recognizing a technical effect here?”
- On §103 combinations and reasonable expectation of success:
- “Where in the cited art is there teaching or motivation to apply the claimed provenance and bias controls to the specific task?”
- “If the references do not identify comparable dataset quality, do you still see a reasonable expectation of achieving the claimed improvement?”
- “Would you consider evidence that generic datasets fail to achieve the result as relevant to rebutting the combination rationale?”
- On model architecture and sufficiency of disclosure:
- “Does your analysis require details of the architecture beyond what is typical for the field, or are your concerns limited to the dataset pipeline?”
- “Are there any parameters you consider essential for a person of ordinary skill to implement the claimed training?”
- On burdens and next steps:
- “Which specific facts would you need documented to withdraw the §101 rejection?”
- “For §103, what claim amendment or evidence would overcome the combination rationale without unduly narrowing scope?”
- “Do you have a preference for a declaration format versus an amendment that embeds the constraints directly in the claim language?”
These stems keep the examiner anchored to the record, compel clarity on how they interpret critical terms, and reveal the minimal sufficient evidence for allowance or targeted narrowing. They also create a clean basis for your interview summary, ensuring that your documentation is consistent with the MPEP and supports future procedural steps.
Document and memorialize: post-interview summaries, commitments, and best practices
The value of an interview is only as strong as its record. Your summary should be concise, neutral in tone, and aligned with MPEP guidance. It should separate factual agreements from your advocacy and should capture the examiner’s articulated positions with specificity.
Key documentation elements:
- Identified claim limitations and interpretations
- Record the examiner’s interpretation of terms like “curated,” “provenance-verified,” “bias-controlled,” and “reproducible.” Avoid paraphrasing that could be read as argumentative. Quote short phrases when they affect scope.
- Evidence requests and sufficiency thresholds
- Document exactly what the examiner said would suffice: a declaration detailing dataset lineage, bias detection methods, versioned sources, fixed random seeds, or particular benchmarks. If a statistical threshold, metric, or evaluation condition was mentioned, restate it clearly.
- §101 and §103 pivot points
- Capture whether the examiner recognizes a technical improvement tied to the data pipeline and what remains missing. For §103, note any statements about a lack of teaching, suggestion, or motivation, or about reasonable expectation of success in the presence of generic datasets.
- Next steps and timing
- Write down commitments, such as filing a targeted amendment, a short declaration, or an AFCP submission within a set time. If the examiner requested review of a proposed amendment, record the expected turnaround.
- Neutral tone and compliance
- Keep the summary factual and avoid subjective adjectives. Confirm that the summary will be placed in the file wrapper and invite corrections to ensure accuracy.
Best practices to support later appeal or after-final strategy:
- Align your summary headings with the agenda sections so the progression is easy to follow in the record.
- Use consistent terminology for datasets, parameters, and benchmarks across amendments and declarations to avoid ambiguity.
- When possible, cross-reference specific claim language and Office Action citations to anchor each issue in context.
- If the examiner indicated that particular evidence would be dispositive, restate that point with precision. This creates a clear marker for Board review if needed.
- Avoid over-characterizing the examiner’s comments. Where uncertainty remains, say so explicitly and propose how you will clarify in the next paper.
Finally, treat the interview and its summary as part of a single workflow: plan, conduct, and document. The agenda sets the structure for a disciplined conversation. The question stems extract the examiner’s factual and legal positions on training data, bias, and reproducibility in ways that inform §101 and §103 outcomes. The summary then memorializes these points in neutral, MPEP-aligned language, preserving a reliable record for AFCP, after-final action, or appeal. When executed together, these steps raise the professionalism of your interactions, shorten prosecution, and improve the coherence of your AI claim strategy.
- Use a written, time-boxed agenda that separates claim interpretation, technical evidence, and legal standards to keep interviews precise and outcomes-oriented for §§101 and 103.
- For §101, anchor eligibility in a specific, repeatable technical improvement tied to training data provenance, bias controls, and reproducibility (fixed parameters, versioned datasets, stable benchmarks).
- For §103, challenge combination rationales by scrutinizing prior art assumptions about dataset quality; show that a purpose-built, provenance-verified, bias-controlled corpus enables non-trivial gains not expected from generic data.
- Document the examiner’s interpretations, exact evidentiary thresholds, and concrete next steps in a neutral summary aligned with MPEP to support AFCP, after-final, or appeal strategy.
Example Sentences
- Our written agenda separates claim interpretation from technical evidence to keep the discussion anchored to §§101 and 103.
- We will highlight training data provenance, bias controls, and reproducibility to demonstrate a specific, repeatable technical improvement.
- Please confirm whether you read 'curated dataset' as requiring documented selection criteria rather than merely labeled data.
- To address §103, we will question the reasonable expectation of success when the prior art relies on generic datasets without provenance verification.
- A concise declaration linking performance gains to a bias-controlled, versioned corpus under fixed hyperparameters should meet the examiner’s evidentiary threshold.
Example Dialogue
Alex: I drafted an interview agenda that leads with claim anchors, then splits §101 improvement from §103 combination logic.
Ben: Good. Do we define 'provenance-verified' clearly enough to show source traceability and version control?
Alex: Yes, and I added a question asking whether a declaration with dataset lineage and fixed random seeds would resolve eligibility.
Ben: Nice. For §103, let’s probe whether the references actually assume high-quality data or just generic corpora.
Alex: Agreed, and we’ll time-box that section so we leave room to confirm the examiner’s evidence preferences.
Ben: Perfect—close by committing to file a short declaration within five business days if they signal it would be dispositive.
Exercises
Multiple Choice
1. In an AI examiner interview, why does a written agenda explicitly separating claim interpretation from technical evidence improve outcomes under §§101 and 103?
- It shortens the interview regardless of content
- It signals technical competence and lets the examiner respond precisely to each category
- It avoids the need to discuss prior art at all
- It ensures the examiner will withdraw all rejections
Show Answer & Explanation
Correct Answer: It signals technical competence and lets the examiner respond precisely to each category
Explanation: The lesson explains that separating claim interpretation from technical evidence improves precision, signaling competence and transparency, which supports targeted analysis for §101 eligibility and §103 obviousness.
2. Which evidence best shifts a §101 analysis from abstract idea to practical application in AI claims?
- A general assertion that the model is novel
- A chart of architecture layers without any data context
- Evidence of a repeatable performance improvement tied to curated training data and fixed evaluation conditions
- A citation to a public dataset without documentation
Show Answer & Explanation
Correct Answer: Evidence of a repeatable performance improvement tied to curated training data and fixed evaluation conditions
Explanation: Under §101, showing a specific, repeatable technical improvement attributable to data provenance and controlled evaluation reframes the claim as a practical application rather than an abstraction.
Fill in the Blanks
Under §103, the examiner’s combination rationale weakens if the prior art relies on generic, noisy datasets while the claim requires a purpose-built, ___ corpus with documented bias controls that yields a non-trivial gain.
Show Answer & Explanation
Correct Answer: provenance-verified
Explanation: The lesson emphasizes that a purpose-built, provenance-verified corpus with bias controls can undermine a reasonable expectation of success in §103 combinations.
A strong agenda should time-box discussion and anchor it to the record—previous Office actions, cited references, and pending amendments—so the interview remains ___-oriented.
Show Answer & Explanation
Correct Answer: outcomes
Explanation: The agenda is described as outcomes-oriented, using time boxing and anchoring to the record to keep focus on concrete agreements and next steps.
Error Correction
Incorrect: Our agenda will show eligibility by stating that the model is better without linking it to the training data or evaluation controls.
Show Correction & Explanation
Correct Sentence: Our agenda will show eligibility by linking the performance improvement to training data provenance and controlled evaluation conditions.
Explanation: For §101, the improvement must be tied to training data characteristics and reproducibility, not merely asserted.
Incorrect: For §103, we will accept that prior art assumes high-quality, bias-controlled datasets unless the examiner proves otherwise.
Show Correction & Explanation
Correct Sentence: For §103, we will probe whether the prior art actually teaches high-quality, bias-controlled datasets and require support for any assumption of reasonable expectation of success.
Explanation: The lesson advises guiding the discussion to examine prior art assumptions about data quality and to place the burden on the record to substantiate a reasonable expectation of success.