Authoritative English for Chiplet Architecture Claims: Drafting with chiplet architecture patent language examples
Struggling to make chiplet claims sound truly integrated—not just another MCM? This lesson equips you to draft authoritative, examiner-ready language that frames partitioning, coherency, and die-to-die links with clear technical rationale and legal breadth. You’ll get concise explanations, patent-grade examples, and targeted exercises covering partitioning rationale, UCIe/BoW link semantics, coherency boundaries, yield/reticle constraints, and heterogeneous nodes. By the end, you’ll write claims that read like SoC-level integration within a package—precise, defensible, and hard to design around.
Step 1: Framing the chiplet claim problem and defining essential vocabulary
When drafting patent claims for chiplet-based architectures, the central problem is positioning your invention against two different baselines: the traditional monolithic system-on-chip (SoC), and older multi-chip modules (MCMs) that merely package separate dies together without modern die-to-die integration. A monolithic SoC realizes all subsystems on a single die; by contrast, chiplet-based architectures deliberately partition functionality across multiple smaller dies (chiplets) that are closely coupled through short-reach interconnects and often share system-level features such as coherency or unified memory semantics. The claim must explain why this partition is not a simple packaging convenience but a technical solution to constraints such as reticle limits, yield, performance, or heterogeneity of process nodes. The language must be precise and authoritative so that an examiner can clearly see the technical contribution over both monolithic and generic multi-die arrangements.
To do this effectively, you need to use chiplet-specific terminology that accurately reflects the state of the art:
- Interposer: A passive or active substrate (e.g., silicon interposer in 2.5D packaging) that provides high-density wiring between chiplets. The interposer can host through-silicon vias (TSVs), redistribution layers (RDLs), and, in some designs, active circuitry. Claims often specify whether the interconnect traverses an interposer, an organic substrate, or a direct-bonded interface.
- Die-to-die link: The electrical or optical link that connects chiplets. In chiplet claims, this is not a generic off-package interface; it typically refers to short-reach, high-bandwidth, low-latency links optimized for within-package communication. Using standard names such as UCIe (Universal Chiplet Interconnect Express) or BoW (Bunch of Wires) for the physical layer (PHY) helps identify the intended characteristics without limiting you unnecessarily.
- Reticle-limited: A fabrication constraint describing the maximum area that can be exposed in a single lithography step. Claims often justify partitioning by stating that the aggregate design exceeds reticle dimensions or that yield and cost are improved by subdividing a large design into smaller dies.
- Coherency domain: The set of agents that participate in cache coherency protocols. Chiplet claims frequently hinge on whether chiplets share a coherency domain across the die-to-die link, or whether coherency is intentionally bounded to isolate traffic and reduce latency or power.
- UCIe/BoW PHY: Standardized physical-level specifications for die-to-die connections. Referencing a PHY indicates signaling, lane aggregation, training, and link features. Claims may cite conformance or compatibility while also describing vendor-specific extensions.
- Chiplet partitioning: The architectural decision to allocate functions across different dies—for example, separating compute cores, memory controllers, I/O, and accelerators. Claims must articulate the partitioning rationale (e.g., yield, process node heterogeneity, thermal zoning) and the resulting system-level behavior.
These terms matter because chiplet novelty often arises at the boundary: where the partition is drawn, how the die-to-die interconnect behaves compared with an on-die interconnect, and how coherency, bandwidth, latency, and power are managed. In monolithic SoCs, coherency and interconnect choices are constrained by on-die fabrics. In chiplet designs, you choose whether to extend coherency across chiplets or to limit it. You also trade off link width and power against area and routability on the interposer. By articulating these constraints and choices, the claim shows the technical insight that differentiates chiplets from mere multi-chip assemblies.
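To make the yield and reticle rationale concrete, here is a minimal arithmetic sketch assuming a simple Poisson defect model (yield = exp(-area x defect density)); the die areas and defect density are illustrative assumptions for this lesson, not values drawn from any claim or standard.

```python
import math

def poisson_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Fraction of good dies under a simple Poisson defect model."""
    return math.exp(-area_cm2 * defects_per_cm2)

# Illustrative assumptions: an 8 cm^2 near-reticle-limit monolithic die versus
# four 2 cm^2 chiplets, both at an assumed defect density of 0.2 per cm^2.
D0 = 0.2
monolithic = poisson_yield(8.0, D0)    # ~0.20
per_chiplet = poisson_yield(2.0, D0)   # ~0.67 for each small die

print(f"monolithic die yield:          {monolithic:.2f}")
print(f"single chiplet yield:          {per_chiplet:.2f}")
print(f"four untested chiplets, blind: {per_chiplet ** 4:.2f}")
```

In this simplified model, blindly assembling four untested chiplets returns roughly the monolithic yield; the gain comes from testing each small die and binning known good die before assembly, and from discarding far less silicon per defect, which is why that vocabulary recurs in the claim patterns of Step 2.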
Step 2: Analyzing model claim patterns for chiplet architectures
To position novelty clearly, consider how independent claims usually establish the architecture and the differentiating features, while dependent claims refine these features with specific embodiments. Five recurring patterns are particularly useful in chiplet claims: partitioning rationale, interconnect characteristics, coherency/isolation boundaries, yield/reticle constraints, and heterogeneous process nodes.
- Partitioning rationale: Independent claims should state not only that functionality is split across chiplets but also why the partition exists. For example, placing high-power compute units on one chiplet and large SRAM or HBM interfaces on another may be justified by thermal isolation, routing complexity, or reticle-area pressure. The rationale signals a technical problem and frames the claim’s solution in more than structural terms. You anchor the partitioning to measurable system properties—latency tolerance of certain paths, bandwidth demands of memory tiles, or cross-chiplet QoS mechanisms.
- Interconnect characteristics: Claims often describe the die-to-die fabric in terms of lanes, encoding, credit-based flow control, flit sizes, and link training (a minimal flow-control sketch follows this list). The goal is to show how the link approximates on-die behavior or, alternatively, creates deliberate bottlenecks for isolation and power control. Terms like “short-reach, within-package interconnect,” “reticle-scale lane aggregation,” “deterministic latency budget,” and “link-level error detection without retransmission beyond N cycles” provide credible, field-specific markers without overcommitting to one vendor’s spec.
- Coherency and isolation boundaries: You can claim novelty by defining which chiplets share a coherency domain and which do not. One pattern is to maintain coherency between compute chiplets while keeping accelerator chiplets outside the domain, interacting via message-passing with protocol shims. Another is to extend coherency to memory-chiplet caches but not to I/O chiplets. The claim should capture the boundary and the reason—latency, power, or protocol complexity—thereby distinguishing over monolithic designs where coherency is uniform.
- Yield and reticle constraints: Independent claims may explicitly reference the reticle limit or statistical yield to justify the partition. This signals a practical manufacturing problem that the architecture solves. It also distances your claim from MCMs that were not driven by these constraints. Phrases like “reticle-limited die size,” “expected defect density,” and “binning to assemble known good die” add technical substance and link partitioning to manufacturability improvements.
- Heterogeneous process nodes: Chiplet-based designs often mix process nodes—for example, advanced logic on a cutting-edge node with analog/PHY or large SRAM on a more mature node. Claims can assert that specific chiplets are fabricated in different nodes to optimize leakage, voltage handling, or density. This heterogeneity supports novelty over monolithic SoCs built on a single node and over prior MCMs that did not coordinate coherency or tight coupling across nodes.
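The flow-control sketch referenced in the interconnect item above follows: a minimal model of credit-based flow control over a die-to-die link, assuming a fixed receiver buffer depth and a deterministic credit-return delay. The buffer size, delay, and drain rate are illustrative assumptions, not values from UCIe or BoW.

```python
from collections import deque

class CreditedLink:
    """Minimal model of credit-based flow control on a die-to-die link.

    The sender holds one credit per receiver buffer slot, so a flit is
    transmitted only when buffer space is guaranteed and the receiver can
    never overflow. Credits are returned a fixed number of cycles after a
    flit is drained (an assumed, deterministic return delay).
    """

    def __init__(self, buffer_slots: int = 8, credit_return_delay: int = 4):
        self.credits = buffer_slots              # credits start equal to buffer depth
        self.rx_buffer = deque()
        self.pending_returns = deque()           # (cycle_due, credit_count)
        self.credit_return_delay = credit_return_delay

    def try_send(self, flit) -> bool:
        """Send one flit if a credit is available; otherwise apply back-pressure."""
        if self.credits == 0:
            return False                         # deterministic stall, no drops
        self.credits -= 1
        self.rx_buffer.append(flit)
        return True

    def drain_one(self, cycle: int) -> None:
        """Receiver consumes one flit and schedules a credit return."""
        if self.rx_buffer:
            self.rx_buffer.popleft()
            self.pending_returns.append((cycle + self.credit_return_delay, 1))

    def tick(self, cycle: int) -> None:
        """Release any credits whose return delay has elapsed."""
        while self.pending_returns and self.pending_returns[0][0] <= cycle:
            _, count = self.pending_returns.popleft()
            self.credits += count

# Usage: a bursty sender is throttled to the receiver's drain rate without
# ever dropping a flit, which is the behavior behind claim phrases such as
# "credit-based flow control to achieve bounded-latency transfers".
link = CreditedLink(buffer_slots=4, credit_return_delay=2)
sent = 0
for cycle in range(20):
    link.tick(cycle)
    if link.try_send(f"flit{sent}"):
        sent += 1
    if cycle % 2 == 0:                           # receiver drains every other cycle
        link.drain_one(cycle)
print("flits sent in 20 cycles:", sent)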
When decomposing an independent claim, identify the core structure: a package including multiple chiplets, at least one high-bandwidth die-to-die interconnect, a defined coherency or isolation scheme, and a partitioning that addresses yield or reticle constraints. For dependent claims, drill into: link training behavior, flow control granularity, error handling, lane repair, topologies (mesh, ring, star), packaging (2.5D interposer, organic substrate, fan-out RDL), thermal zones, power gating across chiplets, or protocol overlays (CXL.cache over UCIe streaming mode, BoW raw mode with packetization).
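As a drafting aid only, that decomposition can be kept visible as a simple checklist structure; the keys and entries below are illustrative assumptions for this lesson, not a required or complete claim outline.

```python
# Illustrative drafting checklist mirroring the decomposition above; the keys
# and entries are assumptions for this lesson, not a required claim outline.
claim_skeleton = {
    "independent": {
        "package": "multiple chiplets on an interposer or substrate",
        "die_to_die_link": "short-reach, within-package, high-bandwidth",
        "coherency_or_isolation": "defined coherency domain or boundary",
        "partitioning_rationale": "reticle limit, yield, or node heterogeneity",
    },
    "dependent": [
        "link training, lane repair, flow-control granularity, error handling",
        "topology: point-to-point, ring, star, mesh, switch chiplet",
        "packaging: 2.5D interposer, organic substrate, fan-out RDL",
        "thermal zones and cross-chiplet power gating",
        "protocol overlays: coherent transactions over a streaming link mode",
    ],
}

def missing_core_elements(skeleton: dict) -> list:
    """List any independent-claim element a draft has left blank."""
    return [name for name, text in skeleton["independent"].items() if not text]

print(missing_core_elements(claim_skeleton))   # [] once every core element is addressed
```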
Step 3: Drafting targeted claim components with authoritative phrases
When writing claim text, use concise, field-appropriate phrases that evoke recognized chiplet practices without narrowing your scope unnecessarily. Aim to name the architectural feature, state its technical effect, and reference the context that makes the choice non-trivial.
- Core package and chiplets:
- “A packaged assembly comprising a first chiplet and a second chiplet disposed on an interposer, the interposer providing within-package wiring configured for short-reach die-to-die signaling.”
- “The first chiplet comprising compute logic; the second chiplet comprising memory interface logic and on-chip SRAM tiles, the compute logic and the memory interface logic being partitioned across distinct dies for yield and reticle-size considerations.”
- Die-to-die interconnect:
- “A die-to-die link formed by aggregated lanes conforming to a chiplet physical layer, the link including training, lane repair, and credit-based flow control to achieve bounded-latency transfers.”
- “The link operating in a streaming mode transporting cache-coherent transactions between chiplets, with packetization and integrity checks implemented at a link-layer adapter.”
- Coherency and isolation:
- “A coherency domain spanning the first chiplet and a subset of units on the second chiplet, the coherency domain excluding an accelerator unit to reduce protocol overhead and power.”
- “A protocol boundary at the die-to-die interface configured to translate between on-chip cache-coherent transactions and message-oriented transactions, thereby isolating accelerator traffic while preserving memory ordering guarantees for compute traffic.”
- Partitioning and heterogeneity:
- “The first chiplet fabricated in a first process node optimized for high-density logic; the second chiplet fabricated in a second, distinct node optimized for analog I/O and large memory arrays.”
- “Partitioning of compute complexes from memory PHYs to enable thermal zoning that limits peak junction temperature within the compute chiplet.”
- Enablement without over-narrowing:
- “The die-to-die link is compatible with UCIe or BoW physical layers, and the link-layer adapter implements flow control to maintain a target quality-of-service under bursty traffic.”
- “The interposer comprises one of: a silicon interposer with TSVs; an organic substrate with redistribution layers; or a direct hybrid-bond interface, each configured to provide short-reach connectivity.”
When addressing specialized contexts like ML accelerators, memory chiplets, or compute tile topologies, keep the language modular. For instance, describe an “accelerator chiplet configured to access a memory chiplet via a die-to-die fabric supporting cache-bypass semantics,” or a “compute chiplet comprising multiple tiles coupled to a local fabric, the fabric interfacing to the die-to-die link via a protocol bridge that preserves ordering class semantics.” Each phrase points to recognized design practices while allowing multiple embodiments.
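To illustrate the protocol boundary and ordering-preserving bridge described above, here is a minimal sketch that treats the coherency domain as an explicit membership set and shims requests from excluded agents into message-oriented transactions. The agent names, classes, and the "posted" ordering class are illustrative assumptions, not claim language or any specific protocol.

```python
from dataclasses import dataclass

# Agents inside the coherency domain issue cache-coherent requests directly;
# excluded agents (e.g., an accelerator chiplet) are translated at the
# die-to-die boundary into message-oriented transactions that carry an
# ordering class. Agent names and fields are illustrative assumptions.
COHERENCY_DOMAIN = {"compute_chiplet", "memory_chiplet_cache"}

@dataclass
class CoherentRead:
    requester: str
    address: int

@dataclass
class MessageRead:
    requester: str
    address: int
    ordering_class: str     # kept so memory ordering guarantees are preserved

def at_die_to_die_boundary(request: CoherentRead):
    """Protocol shim at the die-to-die interface.

    Requests from coherency-domain members pass through unchanged; requests
    from excluded agents are re-expressed as messages, isolating their
    traffic from the coherency protocol while retaining an ordering class.
    """
    if request.requester in COHERENCY_DOMAIN:
        return request                                      # stays cache-coherent
    return MessageRead(request.requester, request.address,
                       ordering_class="posted")             # shimmed accelerator traffic

print(at_die_to_die_boundary(CoherentRead("compute_chiplet", 0x1000)))
print(at_die_to_die_boundary(CoherentRead("accelerator_chiplet", 0x2000)))
```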
For clarity on novelty over monolithic SoCs, emphasize that the claimed interconnect provides functionally similar semantics to on-die fabrics (e.g., coherent transactions, QoS) but does so across distinct dies with physical constraints that demand specialized link behavior. For novelty over generic MCMs, emphasize tight coupling, bounded latency, coherency awareness, link training, and integrated power/thermal management that together deliver SoC-like characteristics within a package.
Step 4: Refining for patentability and breadth
After establishing the core architecture, refine with dependent-claim embellishments that anticipate design-arounds and demonstrate technical depth. The goal is to maintain breadth in the independent claim while adding optional features that distinguish over prior art.
- Protocols and overlays: Add dependent claims that specify protocol layers without locking you to a single standard. For example, state that the link supports a cache-coherent transaction protocol transported over a streaming physical layer. Indicate optional overlays, such as CXL.cache semantics or proprietary coherency messages, framed as compatibility or support for multiple modes. This approach recognizes evolving standards and avoids obsolescence.
- Topologies: Because chiplet systems may use different network topologies, add claims that enumerate alternatives: a point-to-point link, a ring interconnect across compute chiplets, a hub-and-spoke arrangement via a switch chiplet, or a 2D mesh on an interposer. Use “one or more of” constructions to keep coverage broad. Also include dependent claims addressing multi-link striping, path redundancy, and link failover.
- Packaging: Strengthen enablement by covering 2.5D interposers, organic substrates, and direct-bonded stacking. Include claims mentioning TSV density ranges, RDL pitch, or micro-bump pitch in functional terms (e.g., “sufficient to sustain link bandwidth B at a signaling rate R with error probability below E”). This frames packaging as a means to performance targets rather than a narrow material choice.
- Quality of service (QoS): Add claims that specify traffic classes, priority-based arbitration, rate limiting, or credit partitioning on the die-to-die link (a minimal arbitration sketch follows this list). QoS language is a strong differentiator because it illustrates the system’s ability to multiplex different workloads—latency-sensitive coherent traffic versus bulk DMA—without interference. This combats obviousness over simple high-bandwidth links that ignore workload diversity.
- Power and thermals: Include dependent claims that cover link power states, lane parking, adaptive voltage/frequency scaling of the link, and coordinated power gating across chiplets. Add thermal-aware scheduling or throttling informed by temperature sensors on the interposer or on the chiplets. These measures emphasize the integrated, SoC-like management of a multi-die system.
- Error handling and reliability: Claim error-detecting codes on flits, retransmission policies bounded by latency budgets, and lane repair procedures during training or runtime. Reliability mechanisms distinguish advanced chiplet fabrics from prior MCMs that relied on off-the-shelf interfaces without tight latency guarantees.
- Heterogeneous process nodes and binning: Use dependent claims to specify combinations of nodes (e.g., logic at an advanced node with PHY at a mature node) and to assert methods for assembling known good die, including test access through the die-to-die link. This ties manufacturability to system behavior and supports cost/yield advantages.
- Software and firmware interaction: Without claiming software per se, you can include apparatus claims that reference control registers, link bring-up sequences, and over-the-air firmware updates for link-layer adapters. This demonstrates that the hardware is designed for configurable, field-upgradable behavior—another differentiator from fixed-function MCM links.
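The arbitration sketch referenced in the QoS item above follows: strict priority between two traffic classes combined with per-class credit partitioning on a shared die-to-die link. The class names, credit split, and strict-priority policy are illustrative assumptions; a real design might add rate limiting or weighted arbitration so bulk traffic is not starved.

```python
from collections import deque

# Two illustrative traffic classes sharing one die-to-die link: latency-
# sensitive coherent traffic and bulk DMA. Each class has its own credit
# pool (credit partitioning) and coherent traffic wins arbitration, so bulk
# transfers cannot crowd out coherent requests. All values are assumptions.
queues = {"coherent": deque(), "bulk_dma": deque()}
credits = {"coherent": 4, "bulk_dma": 2}        # assumed per-class credit split
PRIORITY = ["coherent", "bulk_dma"]             # strict priority order

def enqueue(traffic_class: str, flit) -> None:
    queues[traffic_class].append(flit)

def arbitrate():
    """Send the next flit from the highest-priority class with work and credits."""
    for cls in PRIORITY:
        if queues[cls] and credits[cls] > 0:
            credits[cls] -= 1                   # returned later by the receiver
            return cls, queues[cls].popleft()
    return None                                 # link idles or back-pressures

# Usage: even behind a long DMA backlog, the coherent request is served first.
for i in range(8):
    enqueue("bulk_dma", f"dma{i}")
enqueue("coherent", "cacheline_fill")
print(arbitrate())   # ('coherent', 'cacheline_fill')
print(arbitrate())   # ('bulk_dma', 'dma0')
```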
Throughout refinement, maintain precise enablement language that captures variations in topology, protocol, and packaging. Use phrases like “configured to,” “operable to,” and “adapted to” to describe capability without implying method steps in an apparatus claim. When you must include method claims, structure them around on-package operations, such as bringing up the die-to-die link, establishing membership in a coherency domain, or performing lane repair, and tie those steps to measurable outcomes (e.g., latency bounds, bandwidth thresholds).
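For the method-claim framing above, a bring-up sequence tied to a measurable outcome could be sketched as follows; the step names, the repair step, the FakePhy stand-in, and the 20 ns round-trip bound are all illustrative assumptions rather than operations defined by any standard.

```python
# Illustrative bring-up sequence for a die-to-die link, phrased as the kind
# of on-package steps a method claim might recite and tied to a measurable
# outcome (a round-trip latency bound). The step names, the repair step, the
# FakePhy stand-in, and the 20 ns bound are assumptions, not standard-defined.

def bring_up_link(phy) -> bool:
    phy.train()                                    # lane training / parameter exchange
    bad_lanes = phy.detect_faulty_lanes()
    if bad_lanes:
        phy.repair_lanes(bad_lanes)                # remap traffic onto spare lanes
    phy.join_coherency_domain()                    # establish coherency-domain membership
    return phy.measured_round_trip_ns() <= 20.0    # claimable, measurable outcome

class FakePhy:
    """Stand-in so the sketch runs; a real PHY driver would replace it."""
    def train(self): pass
    def detect_faulty_lanes(self): return [3]
    def repair_lanes(self, lanes): pass
    def join_coherency_domain(self): pass
    def measured_round_trip_ns(self): return 14.5

print("link up within latency budget:", bring_up_link(FakePhy()))
```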
Finally, ensure that the claim set clearly distinguishes over both monolithic and generic multi-die prior art. For monolithic SoCs, stress the unique constraints and enabling mechanisms required by cross-die communication—training, lane repair, error handling, coherency bridging, QoS. For generic MCMs, emphasize the SoC-like semantics achieved within the package—cache coherency with bounded latency, unified memory addressing, partitioning for reticle-limited die size and yield, heterogeneity across process nodes with coordinated link-layer control. By combining this vocabulary, these structural patterns, and targeted dependent-claim embellishments, you present an authoritative, technically grounded claim set that anticipates how examiners and competitors analyze chiplet inventions.
- Clearly differentiate chiplets from monolithic SoCs and generic MCMs by stating a technical partitioning rationale (e.g., reticle limits, yield, thermal/heterogeneous nodes) and SoC-like within-package behavior.
- Specify die-to-die interconnect features (e.g., UCIe/BoW compatibility, lane aggregation, training, lane repair, credit-based flow control, bounded/deterministic latency) to show tight integration.
- Define coherency/isolation boundaries across chiplets (who is in the coherency domain and why) and, if applicable, use protocol shims for message-passing to excluded units.
- Use dependent-claim refinements to broaden and strengthen coverage: topologies, packaging variants, QoS, power/thermal management, error handling, heterogeneous nodes, and configurability/firmware control.
Example Sentences
- The independent claim recites a packaged assembly with compute and memory chiplets on a silicon interposer, the partitioning being motivated by reticle-limited die size and yield improvement.
- A die-to-die link compatible with UCIe or BoW is configured to provide bounded-latency, short-reach signaling with lane repair and credit-based flow control.
- The coherency domain spans the compute chiplet and a cache on the memory chiplet while expressly excluding an accelerator chiplet to reduce protocol overhead.
- Heterogeneous process nodes are claimed by fabricating logic on an advanced node, and PHY and large SRAM on a mature node, to optimize leakage and voltage handling.
- Dependent claims enumerate a hub-and-spoke topology via a switch chiplet and specify QoS with priority arbitration to separate coherent traffic from bulk DMA.
Example Dialogue
Alex: Our draft still sounds like an MCM—where do we show tight integration?
Ben: Add that the die-to-die link conforms to a chiplet PHY, with training, lane repair, and a deterministic latency budget.
Alex: Good point, and we should state that compute and memory are partitioned for reticle limits and yield, not just convenience.
Ben: Exactly, and define the coherency boundary: compute plus memory cache are coherent, but the accelerator is outside the domain via a message-oriented shim.
Alex: Let’s also claim heterogeneous nodes—logic at 3 nm and PHY at 16 nm—to justify leakage and voltage choices.
Ben: And include QoS on the link so the examiner sees SoC-like semantics within the package, not a generic off-package interface.
Exercises
Multiple Choice
1. Which statement most clearly distinguishes a chiplet architecture from a generic multi-chip module (MCM) in a patent claim?
- The dies are packaged together on a substrate.
- The chiplets communicate via a short-reach, within-package die-to-die link with training and lane repair, providing bounded-latency transfers.
- Each die is labeled as compute or memory.
- The package includes heat spreaders and thermal paste.
Show Answer & Explanation
Correct Answer: The chiplets communicate via a short-reach, within-package die-to-die link with training and lane repair, providing bounded-latency transfers.
Explanation: Chiplet novelty is shown by tight, SoC-like integration across dies: short-reach die-to-die links with training, lane repair, and bounded latency. Generic MCMs lack these integrated, low-latency, coherent-like behaviors.
2. In articulating a partitioning rationale, which option best frames a non-trivial technical justification?
- Compute and memory are on different chiplets to fit marketing preferences.
- Compute and memory are split to reduce BOM cost regardless of performance.
- Compute and memory are partitioned because the aggregate design is reticle-limited and yield improves when subdivided into smaller dies.
- Compute and memory are split to use more package pins.
Show Answer & Explanation
Correct Answer: Compute and memory are partitioned because the aggregate design is reticle-limited and yield improves when subdivided into smaller dies.
Explanation: Referencing reticle limits and yield ties partitioning to manufacturing constraints, a recognized technical problem that supports novelty over monolithic SoCs and generic MCMs.
Fill in the Blanks
The claim recites that the compute and memory chiplets share a ___ domain across the die-to-die link, while the accelerator communicates via message-passing outside that domain.
Show Answer & Explanation
Correct Answer: coherency
Explanation: Defining the coherency domain boundary (who is coherent and who is not) is a common pattern to show tight coupling and protocol choices in chiplet claims.
The die-to-die link is compatible with ___ or BoW physical layers and includes credit-based flow control to maintain a deterministic latency budget.
Show Answer & Explanation
Correct Answer: UCIe
Explanation: UCIe (and BoW) are standardized chiplet PHYs. Referencing them indicates short-reach, high-bandwidth die-to-die characteristics without overcommitting to a single vendor.
Error Correction
Incorrect: The partitioning places compute and memory on separate dies only for packaging convenience, similar to an MCM.
Show Correction & Explanation
Correct Sentence: The partitioning places compute and memory on separate chiplets due to reticle-limited die size and yield improvement, distinguishing over generic MCM packaging convenience.
Explanation: Claims should state a technical rationale (reticle/yield) rather than mere packaging convenience to demonstrate non-obviousness and chiplet-specific motivation.
Incorrect: The die-to-die interface reuses an off-package PCIe link without training or lane repair and provides best-effort latency.
Show Correction & Explanation
Correct Sentence: The die-to-die link conforms to a chiplet PHY (e.g., UCIe or BoW) with training, lane repair, and credit-based flow control to provide bounded-latency, short-reach signaling.
Explanation: Chiplet links differ from generic off-package interfaces by including training, lane repair, and bounded-latency behavior characteristic of within-package, SoC-like interconnects.