From d5ba73b8936e498fd2211ba3d2eee2d8526bfe82 Mon Sep 17 00:00:00 2001 From: Leynos Date: Sun, 20 Jul 2025 03:25:16 +0100 Subject: [PATCH 01/11] Fix footnotes and list formatting --- AGENTS.md | 4 +- README.md | 27 +- .../asynchronous-outbound-messaging-design.md | 62 +- ...havioural-testing-in-rust-with-cucumber.md | 6 +- ...antipatterns-and-refactoring-strategies.md | 229 +++---- docs/contents.md | 3 +- docs/documentation-style-guide.md | 12 +- docs/frame-metadata.md | 4 +- ...ge-fragmentation-and-re-assembly-design.md | 40 +- ...eframe-a-guide-to-production-resilience.md | 36 +- docs/message-versioning.md | 10 +- docs/mocking-network-outages-in-rust.md | 107 ++-- docs/multi-layered-testing-strategy.md | 50 +- ...i-packet-and-streaming-responses-design.md | 12 +- .../observability-operability-and-maturity.md | 59 +- docs/preamble-validator.md | 12 +- docs/roadmap.md | 179 ++++-- docs/rust-binary-router-library-design.md | 447 +++++++------- docs/rust-doctest-dry-guide.md | 562 ++++++++++++++---- docs/rust-testing-with-rstest-fixtures.md | 303 +++++----- ...-set-philosophy-and-capability-maturity.md | 80 +-- ...eframe-1-0-detailed-development-roadmap.md | 70 +-- docs/wireframe-client-design.md | 17 +- docs/wireframe-testing-crate.md | 26 +- 24 files changed, 1374 insertions(+), 983 deletions(-) diff --git a/AGENTS.md b/AGENTS.md index 712e2e25..b0cd6305 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -9,8 +9,8 @@ - **Clarity over cleverness.** Be concise, but favour explicit over terse or obscure idioms. Prefer code that's easy to follow. - **Use functions and composition.** Avoid repetition by extracting reusable - logic. Prefer generators or comprehensions, and declarative code to imperative - repetition when readable. + logic. Prefer generators or comprehensions, and declarative code to + imperative repetition when readable. - **Small, meaningful functions.** Functions must be small, clear in purpose, single responsibility, and obey command/query segregation. - **Clear commit messages.** Commit messages should be descriptive, explaining diff --git a/README.md b/README.md index f576f9f6..37b3fba8 100644 --- a/README.md +++ b/README.md @@ -1,9 +1,9 @@ # Wireframe **Wireframe** is an experimental Rust library that simplifies building servers -and clients for custom binary protocols. The design borrows heavily from -[Actix Web](https://actix.rs/) to provide a familiar, declarative API for -routing, extractors, and middleware. +and clients for custom binary protocols. The design borrows heavily from [Actix +Web](https://actix.rs/) to provide a familiar, declarative API for routing, +extractors, and middleware. ## Motivation @@ -83,7 +83,8 @@ WireframeServer::new(|| { ``` This example showcases how derive macros and the framing abstraction simplify a -binary protocol server【F:docs/rust-binary-router-library-design.md†L1126-L1156】. +binary protocol +server【F:docs/rust-binary-router-library-design.md†L1126-L1156】. ## Custom Envelopes @@ -187,10 +188,10 @@ let app = WireframeApp::new().with_protocol(MySqlProtocolImpl); ## Session Registry -The \[`SessionRegistry`\] stores weak references to \[`PushHandle`\]s for active -connections. Background tasks can look up a handle by \[`ConnectionId`\] to send -frames asynchronously without keeping the connection alive. Stale entries are -removed automatically when looked up and found to be dead. Call +The \[`SessionRegistry`\] stores weak references to \[`PushHandle`\]s for +active connections. 
Background tasks can look up a handle by \[`ConnectionId`\] +to send frames asynchronously without keeping the connection alive. Stale +entries are removed automatically when looked up and found to be dead. Call `active_handles()` to iterate over live sessions for broadcast or diagnostics. ```rust @@ -219,8 +220,8 @@ access app state or expose peer information. Custom extractors let you centralize parsing and validation logic that would otherwise be duplicated across handlers. A session token parser, for example, -can verify the token before any route-specific code executes -[Design Guide: Data Extraction and Type Safety][data-extraction-guide]. +can verify the token before any route-specific code executes [Design Guide: +Data Extraction and Type Safety][data-extraction-guide]. ```rust use wireframe::extractor::{ConnectionInfo, FromMessageRequest, MessageRequest, Payload}; @@ -309,11 +310,13 @@ production use. Development priorities are tracked in [docs/roadmap.md](docs/roadmap.md). Key tasks include building the Actix‑inspired API, implementing middleware and -extractor traits, and providing example applications【F:docs/roadmap.md†L1-L24】. +extractor traits, and providing example +applications【F:docs/roadmap.md†L1-L24】. ## License Wireframe is distributed under the terms of the ISC license. See [LICENSE](LICENSE) for details. -[data-extraction-guide]: docs/rust-binary-router-library-design.md#53-data-extraction-and-type-safety +[data-extraction-guide]: +docs/rust-binary-router-library-design.md#53-data-extraction-and-type-safety diff --git a/docs/asynchronous-outbound-messaging-design.md b/docs/asynchronous-outbound-messaging-design.md index 0833dd27..b4ee4d50 100644 --- a/docs/asynchronous-outbound-messaging-design.md +++ b/docs/asynchronous-outbound-messaging-design.md @@ -17,10 +17,10 @@ worker task—to push frames to a live connection at any time. Earlier releases spawned a short-lived worker per request. This approach made persistent state awkward and required extra synchronisation when multiple tasks -needed to write to the same socket. The new design promotes each connection to a -**stateful actor** that owns its context for the lifetime of the session. Actor -state keeps sequencing rules and push queues local to one task, drastically -simplifying concurrency while enabling unsolicited frames. +needed to write to the same socket. The new design promotes each connection to +a **stateful actor** that owns its context for the lifetime of the session. +Actor state keeps sequencing rules and push queues local to one task, +drastically simplifying concurrency while enabling unsolicited frames. This feature is a cornerstone of the "Road to Wireframe 1.0" and is designed to be synergistic with the planned streaming and fragmentation capabilities, @@ -31,21 +31,21 @@ protocols. Only the queue management utilities in `src/push.rs` exist at present. The connection actor and its write loop are still to be implemented. The remaining -sections describe how to build that actor from first principles using the biased -`select!` loop presented in -[Section 3](#3-core-architecture-the-connection-actor). +sections describe how to build that actor from first principles using the +biased `select!` loop presented in [Section +3](#3-core-architecture-the-connection-actor). ## 2. 
Design Goals & Requirements The implementation must satisfy the following core requirements: -| ID | Requirement | +| ID | Requirement | | --- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | -| G1 | Any async task must be able to push frames to a live connection. | -| G2 | Ordering-safety: Pushed frames must interleave correctly with normal request/response traffic and respect any per-message sequencing rules. | -| G3 | Back-pressure: Writers must block (or fail fast) when the peer cannot drain the socket, preventing unbounded memory consumption. | -| G4 | Generic—independent of any particular protocol; usable by both servers and clients built on wireframe. | -| G5 | Preserve the simple “return a reply” path for code that does not need pushes, ensuring backward compatibility and low friction for existing users. | +| G1 | Any async task must be able to push frames to a live connection. | +| G2 | Ordering-safety: Pushed frames must interleave correctly with normal request/response traffic and respect any per-message sequencing rules. | +| G3 | Back-pressure: Writers must block (or fail fast) when the peer cannot drain the socket, preventing unbounded memory consumption. | +| G4 | Generic—independent of any particular protocol; usable by both servers and clients built on wireframe. | +| G5 | Preserve the simple “return a reply” path for code that does not need pushes, ensuring backward compatibility and low friction for existing users. | ## 3. Core Architecture: The Connection Actor @@ -74,16 +74,16 @@ manage two distinct, bounded `tokio::mpsc` channels for pushed frames: background messages like log forwarding or secondary status updates. The bounded nature of these channels provides an inherent and robust -back-pressure mechanism. When a channel's buffer is full, any task attempting to -push a new message will be asynchronously suspended until space becomes +back-pressure mechanism. When a channel's buffer is full, any task attempting +to push a new message will be asynchronously suspended until space becomes available. ### 3.2 The Prioritised Write Loop -The connection actor's write logic will be implemented within a `tokio::select!` -loop. Crucially, this loop will use the `biased` keyword to ensure a strict, -deterministic polling order. This prevents high-volume yet critical control -messages from being starved by large data streams. +The connection actor's write logic will be implemented within a +`tokio::select!` loop. Crucially, this loop will use the `biased` keyword to +ensure a strict, deterministic polling order. This prevents high-volume yet +critical control messages from being starved by large data streams. The polling order will be: @@ -143,8 +143,8 @@ checks `low_priority_push_rx.try_recv()` and, if a frame is present, processes it and resets the counter. An optional time slice (for example 100 µs) can also be configured. When the -elapsed time spent handling high-priority frames exceeds this slice, and the low -queue is not empty, the actor yields to a low-priority frame. Application +elapsed time spent handling high-priority frames exceeds this slice, and the +low queue is not empty, the actor yields to a low-priority frame. Application builders expose `with_fairness(FairnessConfig)` where `FairnessConfig` groups the counter threshold and an optional `time_slice`. The counter defaults to 8 while `time_slice` is disabled. 
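As an illustrative sketch only — the design text specifies just a counter threshold (default 8) and an optional time slice, so the module path and field names below are assumptions — wiring the fairness settings into the application builder might look like this:

```rust
use std::time::Duration;

use wireframe::app::WireframeApp;
// Assumed module path and field names; the design names only a counter
// threshold (default 8) and an optional `time_slice`.
use wireframe::fairness::FairnessConfig;

let fairness = FairnessConfig {
    // Yield to the low-priority queue after eight consecutive
    // high-priority frames (the documented default).
    max_high_before_low: 8,
    // Optionally also yield once 100 µs have been spent draining
    // high-priority traffic while low-priority frames are waiting.
    time_slice: Some(Duration::from_micros(100)),
};

let app = WireframeApp::new().with_fairness(fairness);
```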
Setting the counter to zero disables the @@ -273,8 +273,8 @@ struct ActorState { } ``` -`total_sources` is calculated when the actor starts. Whenever a receiver returns -`None`, it is set to `None` and `closed_sources` increments. When +`total_sources` is calculated when the actor starts. Whenever a receiver +returns `None`, it is set to `None` and `closed_sources` increments. When `closed_sources == total_sources` the loop exits. This consolidation clarifies progress through the actor lifecycle and reduces manual flag management. @@ -587,8 +587,8 @@ clearly signalling to the producer task that the connection is gone. ### 5.2 Optional Dead Letter Queue (DLQ) for Critical Messages For applications where dropping a message is unacceptable (e.g., critical -notifications, audit events), the framework will support an optional Dead Letter -Queue. +notifications, audit events), the framework will support an optional Dead +Letter Queue. **Implementation:** The `WireframeApp` builder will provide a method, `with_push_dlq(mpsc::Sender)`, to configure a DLQ. If provided, any frame @@ -700,10 +700,10 @@ features of the 1.0 release. that urgent pushes can interrupt a long-running data stream. - **Message Fragmentation:** Pushes occur at the *logical frame* level. The - `FragmentAdapter` will operate at a lower layer in the `FrameProcessor` stack, - transparently splitting any large pushed frames before they are written to the - socket. The `PushHandle` and the application code that uses it remain - completely unaware of fragmentation. + `FragmentAdapter` will operate at a lower layer in the `FrameProcessor` + stack, transparently splitting any large pushed frames before they are + written to the socket. The `PushHandle` and the application code that uses it + remain completely unaware of fragmentation. ## 7. Use Cases @@ -757,8 +757,8 @@ sequenceDiagram ### 7.3 Broker-Side MQTT `PUBLISH` An MQTT broker can deliver retained messages or fan-out new `PUBLISH` frames to -all subscribed clients via their `PushHandle`s. The `try_push` method allows the -broker to drop non-critical messages when a subscriber falls behind. +all subscribed clients via their `PushHandle`s. The `try_push` method allows +the broker to drop non-critical messages when a subscriber falls behind. ```mermaid sequenceDiagram diff --git a/docs/behavioural-testing-in-rust-with-cucumber.md b/docs/behavioural-testing-in-rust-with-cucumber.md index 1e521927..bec9a6f2 100644 --- a/docs/behavioural-testing-in-rust-with-cucumber.md +++ b/docs/behavioural-testing-in-rust-with-cucumber.md @@ -1101,7 +1101,6 @@ aligned with what is needed. [^18]: *Quickstart* — Cucumber Rust Book, accessed on 14 July 2025, -[^19]: *A Beginner’s Guide to Cucumber in Rust* — Florian Reinhard, accessed on 14 July 2025, @@ -1116,7 +1115,7 @@ aligned with what is needed. Stack Overflow, accessed on 14 July 2025, -[^23]: Data tables - Cucumber Rust Book, accessed on 14 July 2025, +[^23]: Data tables - Cucumber Rust Book, accessed on 14 July 2025, [^25]: Best practices for scenario writing | CucumberStudio Documentation @@ -1133,7 +1132,8 @@ aligned with what is needed. 
[^31]: Cucumber in cucumber - Rust - [Docs.rs](http://Docs.rs), accessed on - 14 July 2025, + 14 July 2025, + [^32]: CLI (command-line interface) - Cucumber Rust Book, accessed on 14 July 2025, diff --git a/docs/complexity-antipatterns-and-refactoring-strategies.md b/docs/complexity-antipatterns-and-refactoring-strategies.md index 7f62752d..0dab954a 100644 --- a/docs/complexity-antipatterns-and-refactoring-strategies.md +++ b/docs/complexity-antipatterns-and-refactoring-strategies.md @@ -28,8 +28,8 @@ robust, maintainable, and understandable for the long haul. To effectively manage complexity, it is essential to measure it. Two prominent metrics in software engineering are Cyclomatic Complexity and Cognitive -Complexity, each offering a distinct perspective on the challenges inherent in a -codebase. +Complexity, each offering a distinct perspective on the challenges inherent in +a codebase. ### A. Cyclomatic Complexity: Measuring Testability @@ -38,11 +38,11 @@ quantitative measure of the number of linearly independent paths through a program's source code. It essentially quantifies the structural complexity of a program by counting decision points that can affect the execution flow. This metric is computed using the control-flow graph of the program, where nodes -represent indivisible groups of commands and directed edges connect nodes if one -command can immediately follow another. +represent indivisible groups of commands and directed edges connect nodes if +one command can immediately follow another. -The formula for Cyclomatic Complexity is often given as M=E−N+2P, where E is the -number of edges, N is the number of nodes, and P is the number of connected +The formula for Cyclomatic Complexity is often given as M=E−N+2P, where E is +the number of edges, N is the number of nodes, and P is the number of connected components (typically 1 for a single program or method). A simpler formulation for a single subroutine is @@ -95,25 +95,25 @@ Cognitive Complexity is incremented based on three main rules. 3. **Shorthand Discount:** Structures that allow multiple statements to be read as a single unit (e.g., a well-named method call) do not incur the same penalties as the raw statements they encapsulate. Method calls are generally - "free" in terms of cognitive complexity, as a well-chosen name summarizes the - underlying logic, allowing readers to grasp the high-level view before diving - into details. Recursive calls, however, do increment the score. + "free" in terms of cognitive complexity, as a well-chosen name summarizes + the underlying logic, allowing readers to grasp the high-level view before + diving into details. Recursive calls, however, do increment the score. For instance, a `switch` statement with multiple cases might have a high Cyclomatic Complexity because each case represents a distinct path for testing. However, if the structure is straightforward and easy to follow, its Cognitive -Complexity might be relatively low. Conversely, deeply nested conditional logic, -even with fewer paths, can significantly increase Cognitive Complexity due to -the mental effort required to track the conditions and context. +Complexity might be relatively low. Conversely, deeply nested conditional +logic, even with fewer paths, can significantly increase Cognitive Complexity +due to the mental effort required to track the conditions and context. Thresholds and Implications: Code with high Cognitive Complexity is harder to read, understand, test, and modify. 
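The nesting rule is easiest to see in a small sketch. The Rust illustration below contrasts a nested implementation with a flattened, guard-clause equivalent; the score annotations are approximations of the rules just described, not output from any particular tool.

```rust
struct Customer {
    is_loyal: bool,
}

// Nested form: each additional level of nesting inflates the score.
fn discount_nested(customer: Option<&Customer>, total: u32) -> u32 {
    if let Some(c) = customer {        // +1
        if c.is_loyal {                // +1, plus +1 for nesting
            if total > 100 {           // +1, plus +2 for nesting
                return total - 10;
            }
        }
    }
    total
}

// Flattened form: guard clauses keep every condition at the top level,
// so the same behaviour reads (and scores) much more simply.
fn discount_flat(customer: Option<&Customer>, total: u32) -> u32 {
    let Some(c) = customer else {      // +1
        return total;
    };
    if !c.is_loyal || total <= 100 {   // +1, plus +1 for the `||` sequence
        return total;
    }
    total - 10
}
```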
SonarQube, for example, raises issues when a function's Cognitive -Complexity exceeds a certain threshold, signaling that the code should likely be -refactored into smaller, more manageable pieces. The primary impact of high -Cognitive Complexity is a slowdown in development and an increase in maintenance -costs. +Complexity exceeds a certain threshold, signaling that the code should likely +be refactored into smaller, more manageable pieces. The primary impact of high +Cognitive Complexity is a slowdown in development and an increase in +maintenance costs. ### Table 1: Cyclomatic vs. Cognitive Complexity @@ -152,9 +152,9 @@ pattern when looking at the code's shape. ### A. Definition and Characteristics A method exhibiting the Bumpy Road antipattern typically contains multiple -sections, each characterized by deep nesting of conditional logic or loops. Each -"bump" in the road—a segment of deeply indented code—often signifies a distinct -responsibility or a separate logical chunk that has not been properly +sections, each characterized by deep nesting of conditional logic or loops. +Each "bump" in the road—a segment of deeply indented code—often signifies a +distinct responsibility or a separate logical chunk that has not been properly encapsulated. Key characteristics include: @@ -189,18 +189,18 @@ The severity of a Bumpy Road can be assessed by: model). Fundamentally, a Bumpy Road signifies a function that is trying to do too many -things, violating the Single Responsibility Principle. It acts as an obstacle to -comprehension, forcing developers to slow down and pay meticulous attention, +things, violating the Single Responsibility Principle. It acts as an obstacle +to comprehension, forcing developers to slow down and pay meticulous attention, much like a physical bumpy road slows down driving. ### B. How It Forms and Its Impact The Bumpy Road antipattern, like many software antipatterns, often emerges from -development practices that prioritize short-term speed over long-term structural -integrity. Rushed development cycles, lack of clear design, or cutting corners -on maintenance can lead to the gradual accumulation of conditional logic within -a single function. As new requirements or edge cases are handled, developers -might add more +development practices that prioritize short-term speed over long-term +structural integrity. Rushed development cycles, lack of clear design, or +cutting corners on maintenance can lead to the gradual accumulation of +conditional logic within a single function. As new requirements or edge cases +are handled, developers might add more `if` statements or loops to an existing method rather than stepping back to refactor and create appropriate abstractions. @@ -238,8 +238,8 @@ code. Identifying early warning signs is also crucial. ### A. Avoiding the Antipattern: Proactive Strategies -Preventing the Bumpy Road begins with a commitment to sound software engineering -principles from the outset. +Preventing the Bumpy Road begins with a commitment to sound software +engineering principles from the outset. 1. **Adherence to Single Responsibility Principle (SRP):** Ensure that each function or method has one clear, well-defined responsibility. If a function @@ -337,8 +337,8 @@ perform auto-refactoring for certain languages. ### C. Red Flags Portending the Bumpy Road -Being vigilant for early warning signs can help prevent a minor complexity issue -from escalating into a full-blown Bumpy Road. 
+Being vigilant for early warning signs can help prevent a minor complexity +issue from escalating into a full-blown Bumpy Road. 1. **Increasing Cognitive Complexity Scores:** A rising Cognitive Complexity score for a method in static analysis tools is a direct indicator. @@ -363,52 +363,52 @@ from escalating into a full-blown Bumpy Road. correlates with high complexity that could manifest as a Bumpy Road. 6. **Code "Smells" like Long Method:** A Bumpy Road is often, though not always, - a Long Method. The length itself isn't the core problem, but it provides more - space for bumps to accumulate. + a Long Method. The length itself isn't the core problem, but it provides + more space for bumps to accumulate. 7. **Declining Code Health Metrics:** Tools like CodeScene provide "Code Health" metrics which can degrade if Bumpy Roads are introduced. -By proactively addressing these red flags through disciplined refactoring, teams -can maintain a smoother, more navigable codebase. +By proactively addressing these red flags through disciplined refactoring, +teams can maintain a smoother, more navigable codebase. ## V. Broader Implications and Clean Refactoring Approaches -The challenges posed by high complexity and antipatterns like the Bumpy Road are -deeply intertwined with fundamental software design principles. Understanding -these connections and employing sophisticated refactoring techniques are key to -building truly maintainable systems. +The challenges posed by high complexity and antipatterns like the Bumpy Road +are deeply intertwined with fundamental software design principles. +Understanding these connections and employing sophisticated refactoring +techniques are key to building truly maintainable systems. ### A. Relation to Separation of Concerns and CQRS 1\. Separation of Concerns (SoC) Separation of Concerns is a design principle that advocates for dividing a -computer program into distinct sections, where each section addresses a separate -concern. A "concern" is a set of information that affects the code of a computer -program. Modularity is achieved by encapsulating information within a section of -code that has a well-defined interface. - -The Bumpy Road antipattern is a direct violation of SoC. Each "bump" in the code -often represents a distinct concern or responsibility that has been improperly -co-located within a single method. For example, a single method might handle -input validation, business logic processing for different cases, data -transformation, and error handling for each case, all intermingled. Refactoring -a Bumpy Road by extracting methods inherently applies SoC, as each extracted -method ideally handles a single, well-defined concern. This leads to increased -freedom for simplification, maintenance, module upgrade, reuse, and independent -development. While SoC might introduce more interfaces and potentially more code -to execute, the benefits in clarity and maintainability often outweigh these -costs, especially as systems grow. +computer program into distinct sections, where each section addresses a +separate concern. A "concern" is a set of information that affects the code of +a computer program. Modularity is achieved by encapsulating information within +a section of code that has a well-defined interface. + +The Bumpy Road antipattern is a direct violation of SoC. Each "bump" in the +code often represents a distinct concern or responsibility that has been +improperly co-located within a single method. 
For example, a single method +might handle input validation, business logic processing for different cases, +data transformation, and error handling for each case, all intermingled. +Refactoring a Bumpy Road by extracting methods inherently applies SoC, as each +extracted method ideally handles a single, well-defined concern. This leads to +increased freedom for simplification, maintenance, module upgrade, reuse, and +independent development. While SoC might introduce more interfaces and +potentially more code to execute, the benefits in clarity and maintainability +often outweigh these costs, especially as systems grow. Consider a function that processes different types of user commands. A Bumpy Road approach might have a large `if-else if-else` structure, with each block handling a command type and its associated logic. This mixes the concern of -"dispatching" or "routing" based on command type with the concern of "executing" -each specific command. Applying SoC would involve separating these: perhaps one -component decides which command to execute, and separate components (functions -or classes) handle the execution of each command. This separation makes the -system easier to understand, test, and extend with new commands. +"dispatching" or "routing" based on command type with the concern of +"executing" each specific command. Applying SoC would involve separating these: +perhaps one component decides which command to execute, and separate components +(functions or classes) handle the execution of each command. This separation +makes the system easier to understand, test, and extend with new commands. 2\. Command Query Responsibility Segregation (CQRS) @@ -444,30 +444,30 @@ together. - **God Objects and CQRS:** The "God Object" or "God Class" antipattern, where a single class hoards too much logic and responsibility, often leads to methods - within that class becoming Bumpy Roads. CQRS can help decompose God Objects by - separating their command-handling responsibilities from their query-handling - responsibilities, potentially leading to smaller, more focused classes (e.g., - one class for command processing, another for query processing, or even - finer-grained handlers). This separation simplifies each part, making them - easier to manage and reducing the cognitive load associated with the original - monolithic structure. - -CQRS promotes a clear separation that can prevent the kind of tangled logic that -forms Bumpy Roads. By isolating write operations (commands) from read operations -(queries), and by encouraging task-based commands, the system naturally tends -towards smaller, more cohesive units of behaviour, thus reducing overall -cognitive complexity within individual components. The separation allows for -independent optimization and scaling of read and write sides, but more -importantly for this discussion, it enforces a structural discipline that -discourages methods from accumulating diverse responsibilities. + within that class becoming Bumpy Roads. CQRS can help decompose God Objects + by separating their command-handling responsibilities from their + query-handling responsibilities, potentially leading to smaller, more focused + classes (e.g., one class for command processing, another for query + processing, or even finer-grained handlers). This separation simplifies each + part, making them easier to manage and reducing the cognitive load associated + with the original monolithic structure. 
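A minimal sketch of the shape this separation tends to take — the types and trait names here are purely illustrative and not drawn from any particular framework:

```rust
/// Commands mutate state and report only success or failure.
enum AccountCommand {
    Deposit { account_id: u64, amount: u64 },
    Withdraw { account_id: u64, amount: u64 },
}

/// Queries read state and return data without mutating anything.
enum AccountQuery {
    Balance { account_id: u64 },
}

/// Handles state-changing operations.
trait CommandHandler {
    fn handle(&mut self, command: AccountCommand) -> Result<(), String>;
}

/// Handles read-only operations.
trait QueryHandler {
    fn handle(&self, query: AccountQuery) -> u64;
}
```

Each handler stays small and single-purpose, which is precisely the property that keeps individual methods from accumulating bumps.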
+ +CQRS promotes a clear separation that can prevent the kind of tangled logic +that forms Bumpy Roads. By isolating write operations (commands) from read +operations (queries), and by encouraging task-based commands, the system +naturally tends towards smaller, more cohesive units of behaviour, thus +reducing overall cognitive complexity within individual components. The +separation allows for independent optimization and scaling of read and write +sides, but more importantly for this discussion, it enforces a structural +discipline that discourages methods from accumulating diverse responsibilities. ### B. Avoiding Spaghetti Code Turning into Ravioli Code -When refactoring complex, tangled code (often called "Spaghetti Code"), a common -approach is to break it down into smaller pieces, such as functions or classes. -However, if this is done without careful consideration for cohesion and -appropriate levels of abstraction, it can lead to "Ravioli Code". Ravioli Code -is characterized by a multitude of small, often overly granular classes or +When refactoring complex, tangled code (often called "Spaghetti Code"), a +common approach is to break it down into smaller pieces, such as functions or +classes. However, if this is done without careful consideration for cohesion +and appropriate levels of abstraction, it can lead to "Ravioli Code". Ravioli +Code is characterized by a multitude of small, often overly granular classes or functions, where understanding the overall program flow requires navigating through numerous tiny, disconnected pieces, making it as hard to follow as the original spaghetti. @@ -511,9 +511,9 @@ original spaghetti. 5. **Iterative Refactoring and Review:** Refactoring is not always a one-shot process. Continuously review the abstractions. Are they helping or hindering - understanding? Are there too many trivial classes that could be consolidated? - Pair programming can also help maintain a balanced perspective during - refactoring. + understanding? Are there too many trivial classes that could be + consolidated? Pair programming can also help maintain a balanced perspective + during refactoring. 6. **The "You Aren't Gonna Need It" (YAGNI) Principle:** This principle helps avoid unnecessary abstractions and features, which can contribute to Ravioli @@ -523,8 +523,8 @@ original spaghetti. be simple, the difficulty lies in tracing the overall execution flow. Ensure that the interactions and dependencies between components are clear and easy to follow. Sometimes, a slightly larger, more cohesive component is - preferable to many tiny ones if it improves the clarity of the overall system - behaviour. + preferable to many tiny ones if it improves the clarity of the overall + system behaviour. The goal is not to have the fewest classes or methods, but to have a structure where each component is easy to understand in isolation, and the interactions @@ -559,18 +559,18 @@ and method structure. 1\. Structural Pattern Matching -Structural pattern matching, available in languages like Python (since 3.10 with -match-case) and C#, offers a declarative and expressive way to handle complex -conditional logic, often replacing verbose if-elif-else chains or switch -statements. +Structural pattern matching, available in languages like Python (since 3.10 +with match-case) and C#, offers a declarative and expressive way to handle +complex conditional logic, often replacing verbose if-elif-else chains or +switch statements. 
It works by allowing code to match against the *structure* of data—such as its type, shape, or specific values within sequences (lists, tuples) or mappings (dictionaries)—and simultaneously destructure this data, binding parts of it to variables. This approach can significantly reduce cognitive load. The clarity -comes from the direct mapping of data shapes to code blocks, making it easier to -understand the conditions under which a piece of code executes. For instance, -instead of multiple +comes from the direct mapping of data shapes to code blocks, making it easier +to understand the conditions under which a piece of code executes. For +instance, instead of multiple `isinstance` checks followed by key lookups and value comparisons in a nested `if` structure to parse a JSON object, a single `case` statement with a mapping @@ -578,8 +578,8 @@ pattern can define the expected structure and extract the necessary values concisely. This shifts the focus from an imperative sequence of checks to a declarative description of data shapes, which is often more intuitive. The destructuring capability is particularly powerful, as it eliminates the manual -code otherwise needed to extract values after a condition has been met, reducing -boilerplate and the number of mental steps a developer must follow. +code otherwise needed to extract values after a condition has been met, +reducing boilerplate and the number of mental steps a developer must follow. Consider processing different event types from a UI framework, where events are represented as dictionaries. @@ -635,9 +635,9 @@ minimized. Many declarative approaches also inherently favor immutability and reduce side effects, which are common culprits for bugs and increased cognitive load in imperative code. -Examples include using SQL for database queries (specifying the desired dataset, -not the retrieval algorithm), or employing functional programming constructs -like +Examples include using SQL for database queries (specifying the desired +dataset, not the retrieval algorithm), or employing functional programming +constructs like `map`, `filter`, and `reduce` on collections instead of writing explicit loops. Refactoring imperative code to a declarative style can start small, perhaps by @@ -658,12 +658,12 @@ patterns offer a structured and extensible alternative. The **Command pattern** encapsulates a request or an action as an object. Each command object implements a common interface (e.g., with an -`execute()` method). This decouples the object that invokes the command from the -object that knows how to perform it. Instead of a large conditional checking a -type and then executing logic, different command objects can be instantiated -based on the type, and then their `execute()` method is called. This promotes -SRP, as each command class handles a single action, making the system easier to -test and extend. +`execute()` method). This decouples the object that invokes the command from +the object that knows how to perform it. Instead of a large conditional +checking a type and then executing logic, different command objects can be +instantiated based on the type, and then their `execute()` method is called. +This promotes SRP, as each command class handles a single action, making the +system easier to test and extend. The **Dispatcher pattern** often works in conjunction with the Command pattern. 
A dispatcher is a central component that receives requests (which could be @@ -734,9 +734,9 @@ This approach not only simplifies the original `handleMessage` method but also makes the system more extensible, as new message types can be supported by adding new handler classes and registering them with the dispatcher, often without modifying existing dispatcher code (aligning with the Open/Closed -Principle). However, it's important to ensure that the dispatch mechanism itself -remains clear and that the proliferation of small classes doesn't lead to -Ravioli Code, where the overall system flow becomes obscured. Clear naming +Principle). However, it's important to ensure that the dispatch mechanism +itself remains clear and that the proliferation of small classes doesn't lead +to Ravioli Code, where the overall system flow becomes obscured. Clear naming conventions and logical organisation are vital. The **State pattern** is a related behavioural pattern useful when an object's @@ -749,8 +749,8 @@ effective for refactoring state machines implemented with complex `if/else` or `switch` statements. By thoughtfully applying these refactoring strategies, developers can -significantly reduce cognitive complexity, making codebases more understandable, -maintainable, and adaptable to future changes. +significantly reduce cognitive complexity, making codebases more +understandable, maintainable, and adaptable to future changes. ## VI. Conclusion: Towards a More Maintainable and Understandable Codebase @@ -782,9 +782,10 @@ maintainability over adherence to a pattern for its own sake, to avoid pitfalls like Ravioli Code. A proactive and disciplined approach, where these principles and techniques are -integrated into daily development practices, is essential. This includes regular -code reviews, monitoring complexity metrics, and fostering a team culture that -values code quality and continuous improvement. The oft-quoted wisdom, "Good -programmers write code that humans can understand", remains the guiding -principle. By striving for this ideal, development teams can build systems that -are not only powerful and efficient but also a pleasure to evolve and maintain. +integrated into daily development practices, is essential. This includes +regular code reviews, monitoring complexity metrics, and fostering a team +culture that values code quality and continuous improvement. The oft-quoted +wisdom, "Good programmers write code that humans can understand", remains the +guiding principle. By striving for this ideal, development teams can build +systems that are not only powerful and efficient but also a pleasure to evolve +and maintain. diff --git a/docs/contents.md b/docs/contents.md index b4607b6c..e1b59f86 100644 --- a/docs/contents.md +++ b/docs/contents.md @@ -29,7 +29,8 @@ reference when navigating the project's design and architecture material. - [Wireframe 1.0 roadmap](wireframe-1-0-detailed-development-roadmap.md) Detailed tasks leading to Wireframe 1.0. - [Project roadmap](roadmap.md) High-level development roadmap. -- [1.0 philosophy](the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md) +- [1.0 + philosophy](the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md) Philosophy and feature set for Wireframe 1.0. 
## Testing diff --git a/docs/documentation-style-guide.md b/docs/documentation-style-guide.md index 523d00f1..74c129fb 100644 --- a/docs/documentation-style-guide.md +++ b/docs/documentation-style-guide.md @@ -58,8 +58,8 @@ fn add(a: i32, b: i32) -> i32 { ## API doc comments (Rust) -Use doc comments to document public APIs. Keep them consistent with the contents -of the manual. +Use doc comments to document public APIs. Keep them consistent with the +contents of the manual. - Begin each block with `///`. - Keep the summary line short, followed by further detail. @@ -94,10 +94,10 @@ pub fn add(a: i32, b: i32) -> i32 { ## Diagrams and images -Where it adds clarity, include [Mermaid](https://mermaid.js.org/) diagrams. When -embedding figures, use `![alt text](path/to/image)` and provide concise alt text -describing the content. Add a short description before each Mermaid diagram so -screen readers can understand it. +Where it adds clarity, include [Mermaid](https://mermaid.js.org/) diagrams. +When embedding figures, use `![alt text](path/to/image)` and provide concise +alt text describing the content. Add a short description before each Mermaid +diagram so screen readers can understand it. ```mermaid flowchart TD diff --git a/docs/frame-metadata.md b/docs/frame-metadata.md index d0805fc2..557096dc 100644 --- a/docs/frame-metadata.md +++ b/docs/frame-metadata.md @@ -5,8 +5,8 @@ entire payload. This can be useful for routing decisions when the message type is encoded in a header field. Implement the trait for your serializer or decoder that knows how to read the -header bytes. Only the minimal header portion should be read, returning the full -frame and the number of bytes consumed from the input. +header bytes. Only the minimal header portion should be read, returning the +full frame and the number of bytes consumed from the input. ```rust use wireframe::frame::{FrameMetadata, FrameProcessor}; diff --git a/docs/generic-message-fragmentation-and-re-assembly-design.md b/docs/generic-message-fragmentation-and-re-assembly-design.md index 9bac5d54..bfbe7951 100644 --- a/docs/generic-message-fragmentation-and-re-assembly-design.md +++ b/docs/generic-message-fragmentation-and-re-assembly-design.md @@ -10,14 +10,14 @@ which processes one inbound frame to one logical frame, cannot handle this. This document details the design for a generic, protocol-agnostic fragmentation and re-assembly layer. The core philosophy is to treat this as a **transparent -middleware**. Application-level code, such as handlers, should remain unaware of -the underlying fragmentation, dealing only with complete, logical messages. This -new layer will be responsible for automatically splitting oversized outbound -frames and meticulously re-assembling inbound fragments into a single, coherent -message before they reach the router. - -This feature is a critical component of the "Road to Wireframe 1.0," designed to -seamlessly integrate with and underpin the streaming and server-push +middleware**. Application-level code, such as handlers, should remain unaware +of the underlying fragmentation, dealing only with complete, logical messages. +This new layer will be responsible for automatically splitting oversized +outbound frames and meticulously re-assembling inbound fragments into a single, +coherent message before they reach the router. + +This feature is a critical component of the "Road to Wireframe 1.0," designed +to seamlessly integrate with and underpin the streaming and server-push capabilities. ## 2. 
Design Goals & Requirements @@ -27,7 +27,7 @@ The implementation must satisfy the following core requirements: | ID | Goal | -| -- | ------------------ | +| --- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | G1 | Transparent inbound re-assembly → The router and handlers must always receive one complete, logical Frame. | | G2 | Transparent outbound fragmentation when a payload exceeds a configurable, protocol-specific size. | | G3 | Pluggable Strategy: The logic for parsing and building fragment headers, detecting the final fragment, and managing sequence numbers must be supplied by the protocol implementation, not hard-coded in the framework. | @@ -51,10 +51,10 @@ uncompressed data, as required by most protocol specifications. ### 3.1 State Management for Multiplexing -A critical requirement for modern protocols is the ability to handle interleaved -fragments from different logical messages on the same connection. To support -this, the `FragmentAdapter` will not maintain a single re-assembly state, but a -map of concurrent re-assembly processes. +A critical requirement for modern protocols is the ability to handle +interleaved fragments from different logical messages on the same connection. +To support this, the `FragmentAdapter` will not maintain a single re-assembly +state, but a map of concurrent re-assembly processes. Rust @@ -98,8 +98,8 @@ inject their specific fragmentation rules into the generic `FragmentAdapter`. ### 4.1 Trait Definition -The trait is designed to be context-aware and expressive, allowing it to model a -wide range of protocols. +The trait is designed to be context-aware and expressive, allowing it to model +a wide range of protocols. Rust @@ -181,8 +181,8 @@ WireframeServer::new(|| { ### 5.1 Inbound Path (Re-assembly) -The re-assembly logic is the most complex part of the feature and must be robust -against errors and attacks. +The re-assembly logic is the most complex part of the feature and must be +robust against errors and attacks. 1. **Header Decoding:** The adapter reads from the socket buffer and calls `strategy.decode_header()`. If it returns `Ok(None)`, it waits for more data. @@ -220,8 +220,8 @@ against errors and attacks. - The new payload is appended to the buffer. - **Final Fragment:** If `meta.is_final` is true, the full payload is - extracted from the `PartialMessage`, the entry is removed from the map, and - the complete logical frame is passed down the processor chain. + extracted from the `PartialMessage`, the entry is removed from the map, + and the complete logical frame is passed down the processor chain. 4. **Timeout Handling:** A separate, low-priority background task within the `FragmentAdapter` will periodically iterate over the `reassembly_buffers`, @@ -273,7 +273,7 @@ This feature is designed as a foundational layer that other features build upon. 
| Category | Objective | Success Metric | -| --------------- | --------- | -------------------- | +| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | API Correctness | The FragmentStrategy trait and FragmentAdapter are implemented exactly as specified in this document. | 100% of the public API surface is present and correctly typed. | | Functionality | A large logical frame is correctly split into N fragments, and a sequence of N fragments is correctly re-assembled into the original frame. | An end-to-end test confirms byte-for-byte identity of a 64 MiB payload after being fragmented and re-assembled. | | Multiplexing | The adapter can correctly re-assemble two messages whose fragments are interleaved. | A test sending fragments A1, B1, A2, B2, A3, B3 must result in two correctly re-assembled messages, A and B. | diff --git a/docs/hardening-wireframe-a-guide-to-production-resilience.md b/docs/hardening-wireframe-a-guide-to-production-resilience.md index e545e215..ff9af8f9 100644 --- a/docs/hardening-wireframe-a-guide-to-production-resilience.md +++ b/docs/hardening-wireframe-a-guide-to-production-resilience.md @@ -4,22 +4,22 @@ For a low-level networking library like `wireframe`, resilience is not an optional extra; it is a core, non-functional requirement. A library that is -functionally correct but brittle under load, susceptible to resource exhaustion, -or unable to terminate cleanly is not fit for production. This guide details the -comprehensive strategy for hardening `wireframe`, transforming it into a -framework that is robust by default. +functionally correct but brittle under load, susceptible to resource +exhaustion, or unable to terminate cleanly is not fit for production. This +guide details the comprehensive strategy for hardening `wireframe`, +transforming it into a framework that is robust by default. This document is targeted at implementers. It moves beyond theoretical discussion to provide concrete, actionable patterns and code for building a -resilient system. The philosophy is simple: anticipate failure, manage resources -meticulously, and provide clear mechanisms for control and recovery. We will -cover three critical domains: coordinated shutdown, leak-proof resource +resilient system. The philosophy is simple: anticipate failure, manage +resources meticulously, and provide clear mechanisms for control and recovery. +We will cover three critical domains: coordinated shutdown, leak-proof resource management, and denial-of-service mitigation. ## 2. Coordinated, Graceful Shutdown -A network service that cannot shut down cleanly leaks resources, corrupts state, -and provides a poor experience for clients. `wireframe` will implement a +A network service that cannot shut down cleanly leaks resources, corrupts +state, and provides a poor experience for clients. `wireframe` will implement a canonical, proactive shutdown pattern, ensuring that termination is an orderly and reliable process. @@ -32,18 +32,18 @@ The core mechanism relies on two primitives from the `tokio` ecosystem: - `CancellationToken`**:** A single root token is created at server startup. This token is cloned and distributed to every spawned task, including connection actors and any user-defined background workers. 
When the server - needs to shut down (e.g., on receipt of `SIGINT`), it calls `.cancel()` on the - root token, a signal that is immediately visible to all clones. + needs to shut down (e.g., on receipt of `SIGINT`), it calls `.cancel()` on + the root token, a signal that is immediately visible to all clones. - `TaskTracker`**:** The server uses a `TaskTracker` to `spawn` all tasks. After - triggering cancellation, the main server task calls `tracker.close()` and then - `tracker.wait().await`. This call will only resolve once every single tracked - task has completed, guaranteeing that no tasks are orphaned. + triggering cancellation, the main server task calls `tracker.close()` and + then `tracker.wait().await`. This call will only resolve once every single + tracked task has completed, guaranteeing that no tasks are orphaned. ### 2.2 Implementation in the Connection Actor -Every long-running loop within `wireframe` must be made cancellation-aware. This -is achieved by adding a high-priority branch to every `select!` macro. +Every long-running loop within `wireframe` must be made cancellation-aware. +This is achieved by adding a high-priority branch to every `select!` macro. #### Example: The Main Connection Actor Loop @@ -307,8 +307,8 @@ pub enum WireframeError { When the connection actor receives a `WireframeError::Protocol(e)` from a response stream, it can pass this typed error to a protocol-specific callback. -This allows the protocol implementation to serialize a proper error frame (e.g., -an SQL error code) to send to the client before terminating the current +This allows the protocol implementation to serialize a proper error frame +(e.g., an SQL error code) to send to the client before terminating the current operation, rather than just abruptly closing the connection. ### 5.2 Dead Letter Queues (DLQ) for Guaranteed Pushing diff --git a/docs/message-versioning.md b/docs/message-versioning.md index 7993d652..28bd8447 100644 --- a/docs/message-versioning.md +++ b/docs/message-versioning.md @@ -68,8 +68,8 @@ fallback handler can respond with an error. ### 4. Compatibility Checks Handlers for newer versions can implement `From` to convert legacy -messages. A helper `ensure_compatible` function will attempt the conversion when -a mismatched version is received. +messages. A helper `ensure_compatible` function will attempt the conversion +when a mismatched version is received. ```rust async fn handle_login_v2(req: Message) { /* ... */ } @@ -83,9 +83,9 @@ app.version_guard(1).route(MessageId::Login, |m: Message| { ### 5. Handshake Negotiation During the optional connection preamble, both sides can exchange the highest -protocol version they support. The server selects a version and stores it in the -connection state so extractors and middleware can access it. If no compatible -version exists, the connection is rejected early. +protocol version they support. The server selects a version and stores it in +the connection state so extractors and middleware can access it. If no +compatible version exists, the connection is rejected early. ## Future Work diff --git a/docs/mocking-network-outages-in-rust.md b/docs/mocking-network-outages-in-rust.md index 43a47056..8e439f02 100644 --- a/docs/mocking-network-outages-in-rust.md +++ b/docs/mocking-network-outages-in-rust.md @@ -13,9 +13,9 @@ In this tutorial, we demonstrate how to refactor and test `mxd`’s server components to **simulate unreliable network conditions**. 
We’ll introduce a transport abstraction to inject simulated failures, and use `tokio-test::io::Builder` for custom I/O streams. We’ll also leverage `rstest` -for parameterized tests and `mockall` for mocking, where appropriate. The result -will be a suite of tests ensuring `mxd`’s server remains stable even when the -network is not. +for parameterized tests and `mockall` for mocking, where appropriate. The +result will be a suite of tests ensuring `mxd`’s server remains stable even +when the network is not. ## Overview of `mxd`’s Server Networking @@ -45,16 +45,16 @@ handles network I/O in its server: - On successful handshake, sends back a handshake OK reply and proceeds. - **Transaction Loop:** After handshake, `handle_client` creates a - `TransactionReader` and `TransactionWriter` (from the `transaction` module) to - handle the message framing. It then loops with `tokio::select!`, awaiting + `TransactionReader` and `TransactionWriter` (from the `transaction` module) + to handle the message framing. It then loops with `tokio::select!`, awaiting either: 1. **Incoming Transaction:** `tx_reader.read_transaction()` which reads the - next complete request frame (possibly composed of multiple fragments). If a - transaction is received, it calls `handler::handle_request` to produce a + next complete request frame (possibly composed of multiple fragments). If + a transaction is received, it calls `handler::handle_request` to produce a response and writes the response back with `tx_writer.write_transaction`. - 2. **Shutdown Signal:** A shared shutdown `watch` channel to break the loop on + 1. **Shutdown Signal:** A shared shutdown `watch` channel to break the loop on server shutdown. The loop’s error handling is important for our tests: @@ -100,8 +100,8 @@ scenario. Currently, `handle_client` is tied to a real `TcpStream`. To test network failures, we need to run `handle_client` (or its subroutines) with a *simulated -stream*. We’ll achieve this by abstracting the transport layer behind a trait or -generics, so that in tests we can substitute a mock stream object. +stream*. We’ll achieve this by abstracting the transport layer behind a trait +or generics, so that in tests we can substitute a mock stream object. **Refactoring** `handle_client`**:** A straightforward approach is to make `handle_client` generic over the stream’s reader and writer. The Tokio docs @@ -146,11 +146,12 @@ can apply this by splitting the logic: generic over any `AsyncRead`/`AsyncWrite`, this works seamlessly. The handshake logic will use the provided `reader` and `writer` as well. -With this change, `client_handler` no longer assumes a real network `TcpStream`; -we can pass in any in-memory or mock stream for testing. **Importantly**, the -production code doesn’t lose functionality – we still create actual TCP -listeners/streams, but we hand off to the generic handler. This refactor -maintains the same behaviour while enabling injection of test streams. +With this change, `client_handler` no longer assumes a real network +`TcpStream`; we can pass in any in-memory or mock stream for testing. +**Importantly**, the production code doesn’t lose functionality – we still +create actual TCP listeners/streams, but we hand off to the generic handler. +This refactor maintains the same behaviour while enabling injection of test +streams. *Example – generic handler signature:* @@ -218,9 +219,10 @@ where } ``` -In the above pseudocode, we essentially mirrored the logic from `handle_client`, -but on generic `reader`/`writer`. 
This refactoring sets the stage for injecting -**simulated failures** in tests by providing custom `reader`/`writer` types. +In the above pseudocode, we essentially mirrored the logic from +`handle_client`, but on generic `reader`/`writer`. This refactoring sets the +stage for injecting **simulated failures** in tests by providing custom +`reader`/`writer` types. ## Simulating Network Failures with `tokio-test::io::Builder` @@ -234,10 +236,10 @@ For example, the Tokio documentation demonstrates using `Builder` to simulate a simple echo conversation by preloading expected inputs and outputs. We will use a similar approach for failure scenarios. -**1. Simulating a Handshake Timeout:** In this scenario, the client connects but -**never sends the handshake bytes**, causing the server’s 5-second timeout to -elapse. To test this without an actual 5-second delay, we can take advantage of -Tokio’s ability to **pause time** in tests. By annotating our test with +**1. Simulating a Handshake Timeout:** In this scenario, the client connects +but **never sends the handshake bytes**, causing the server’s 5-second timeout +to elapse. To test this without an actual 5-second delay, we can take advantage +of Tokio’s ability to **pause time** in tests. By annotating our test with `#[tokio::test(start_paused = true)]`, the Tokio runtime’s clock is frozen at start. We can then `.advance` the clock programmatically to trigger the timeout. @@ -273,11 +275,11 @@ async fn handshake_times_out() { ``` In the above test, `Builder::new().build()` for the reader yields an I/O object -that returns EOF immediately on reads (since no `.read` is queued). The server’s -`read_exact` will wait, but after we advance the virtual clock 5+ seconds, the -`timeout` will return `Err`, causing the server to write a timeout error reply. -We expect the reply to be 8 bytes (`"TRTP"` + error code 3), which we queued as -an expected write. The `test_writer` is configured with +that returns EOF immediately on reads (since no `.read` is queued). The +server’s `read_exact` will wait, but after we advance the virtual clock 5+ +seconds, the `timeout` will return `Err`, causing the server to write a timeout +error reply. We expect the reply to be 8 bytes (`"TRTP"` + error code 3), which +we queued as an expected write. The `test_writer` is configured with `.write(&expected_reply)` to assert that those exact bytes are written. If the server fails to write this or writes different bytes, the test will fail. Finally, we assert that `client_handler` returned `Ok(())` – it should return @@ -314,9 +316,9 @@ async fn handshake_invalid_protocol() { In this test, we queue the handshake bytes `"WRNG..."` as the reader input. The server’s `parse_handshake` will return `HandshakeError::InvalidProtocol`. According to `handle_client`, this triggers sending an error reply with code=1 -and returning `Ok(())`. Our `test_writer` expects exactly those 8 bytes. We also -appended `.read_eof()` after the handshake bytes to indicate the client closed -the connection (this ensures the server’s next read sees EOF instead of +and returning `Ok(())`. Our `test_writer` expects exactly those 8 bytes. We +also appended `.read_eof()` after the handshake bytes to indicate the client +closed the connection (this ensures the server’s next read sees EOF instead of hanging). The test verifies that `client_handler` completes without propagating an error (it handled the invalid handshake gracefully). 
@@ -436,14 +438,14 @@ let test_reader = Builder::new() ``` Here, after the handshake, any attempt by the server to read further will -immediately get a `ConnectionReset` error. In `handle_client`, this is caught by -the generic `Err(e)` arm (not EOF), and the function will return an error. We -can assert that result is an `Err` and matches the expected kind. +immediately get a `ConnectionReset` error. In `handle_client`, this is caught +by the generic `Err(e)` arm (not EOF), and the function will return an error. +We can assert that result is an `Err` and matches the expected kind. **6. Simulating Partial Write Failures:** So far, we’ve focused on read-side issues. But what if the server fails while **writing** to the client (for -instance, the client disconnects just as the server sends a response)? In such a -case, `TransactionWriter::write_transaction` might return an error (likely a +instance, the client disconnects just as the server sends a response)? In such +a case, `TransactionWriter::write_transaction` might return an error (likely a broken pipe). Our handler would propagate that error out. To test this, we need a writer that simulates an error on write. The `Builder` @@ -497,8 +499,8 @@ deterministic test suite for network failures. ## Parameterizing Tests with `rstest` -As the above examples show, many scenarios follow a similar pattern of setup and -assertion. We can use the `rstest` crate to avoid repetitive code by +As the above examples show, many scenarios follow a similar pattern of setup +and assertion. We can use the `rstest` crate to avoid repetitive code by parameterizing the scenarios. The `#[rstest]` attribute allows us to define multiple cases for a single test function. @@ -557,24 +559,25 @@ async fn test_network_outage_scenarios(scenario: Scenario) { } ``` -Above, each `case(...)` provides a different `Scenario` variant. The test builds -the appropriate `test_reader`/`test_writer` and then invokes `client_handler`. -We use a `should_error` flag to assert the expected outcome. This single -parametrized test replaces multiple individual tests, reducing duplication. All -scenarios still run in isolation with distinct setups, thanks to `rstest`. +Above, each `case(...)` provides a different `Scenario` variant. The test +builds the appropriate `test_reader`/`test_writer` and then invokes +`client_handler`. We use a `should_error` flag to assert the expected outcome. +This single parametrized test replaces multiple individual tests, reducing +duplication. All scenarios still run in isolation with distinct setups, thanks +to `rstest`. ## Using `mockall` for Additional Flexibility While `tokio-test::io::Builder` covers most needs, there are situations where -explicit mocking might be useful. The `mockall` crate can generate mocks for our -abstractions. For example, if we had defined a trait +explicit mocking might be useful. The `mockall` crate can generate mocks for +our abstractions. For example, if we had defined a trait `trait Transport: AsyncRead + AsyncWrite + Unpin {}` (or a trait with specific async methods for read/write), we could use `mockall` to create a `MockTransport` and program its behaviour (return errors on certain calls, etc.). -However, mocking `AsyncRead/Write` directly can be complex. An easier target for -mocking might be higher-level components: +However, mocking `AsyncRead/Write` directly can be complex. 
An easier target +for mocking might be higher-level components: - **Accept Loop Simulation:** We could define a trait for the listener: @@ -593,10 +596,10 @@ mocking might be higher-level components: `accept_connections` logs the error and continues or exits properly. - **Isolating Business Logic:** In our `client_handler` tests above, we mostly - ignored the actual `handle_request` logic by using dummy minimal transactions. - If we wanted to focus purely on the network layer and not depend on real - database calls or command processing, we could abstract the request handling. - For example, introduce an interface: + ignored the actual `handle_request` logic by using dummy minimal + transactions. If we wanted to focus purely on the network layer and not + depend on real database calls or command processing, we could abstract the + request handling. For example, introduce an interface: ```rust trait RequestHandler { @@ -655,8 +658,8 @@ demonstrated how to simulate timeouts, abrupt disconnects, and I/O errors for both reads and writes. With parameterized tests and careful use of mocks, the server’s resilience under adverse network conditions can be validated thoroughly. This not only prevents regressions but also documents the intended -behaviour (for example, that a timeout should result in a specific error code to -the client, or that an EOF is treated as a graceful shutdown). +behaviour (for example, that a timeout should result in a specific error code +to the client, or that an EOF is treated as a graceful shutdown). **In conclusion**, testing for network outages in async Rust requires a mix of clever abstractions and tools: diff --git a/docs/multi-layered-testing-strategy.md b/docs/multi-layered-testing-strategy.md index 1ff3d7ca..3e2ea5d1 100644 --- a/docs/multi-layered-testing-strategy.md +++ b/docs/multi-layered-testing-strategy.md @@ -5,8 +5,8 @@ This document outlines the comprehensive testing strategy for the `wireframe` library, synthesising the test plans from the designs for server-initiated messages, streaming responses, and message fragmentation. A robust, -multi-layered approach to testing is non-negotiable for a low-level library like -`wireframe`, where correctness, resilience, and performance are paramount. +multi-layered approach to testing is non-negotiable for a low-level library +like `wireframe`, where correctness, resilience, and performance are paramount. The strategy is divided into four distinct layers, each building upon the last. This version has been enhanced with **concrete code examples** and **measurable @@ -31,9 +31,9 @@ form the bedrock of our confidence in the library. This test verifies that when a handler returns a `Response::Stream`, all frames from that stream are written to the outbound buffer in the correct sequence. -**Test Construction:** A mock connection actor is spawned, and a test handler is -invoked that returns a stream of 10 distinct, identifiable frames. The actor's -outbound write buffer is captured after the stream is fully processed. +**Test Construction:** A mock connection actor is spawned, and a test handler +is invoked that returns a stream of 10 distinct, identifiable frames. The +actor's outbound write buffer is captured after the stream is fully processed. ```rust #[tokio::test] @@ -57,14 +57,14 @@ async fn test_streaming_response_order() { **Expected Outcome & Measurable Objective:** The write buffer must contain all 10 frames, correctly serialised, in the exact order they were sent. 
The -`on_logical_response_end` hook must be called exactly once after the final frame -is flushed. The objective is **100% frame delivery with strict ordering +`on_logical_response_end` hook must be called exactly once after the final +frame is flushed. The objective is **100% frame delivery with strict ordering confirmed.** ### 2.2 Push Handle: High-Volume Throughput -This test ensures the `PushHandle` can sustain a high volume of messages without -deadlocking or losing frames. +This test ensures the `PushHandle` can sustain a high volume of messages +without deadlocking or losing frames. **Test Construction:** A fake connection actor is spawned with a large buffer. Its `PushHandle` is cloned and moved into a `tokio::spawn` block that rapidly @@ -197,9 +197,9 @@ async fn test_back_pressure() { ``` **Expected Outcome & Measurable Objective:** The second `push()` call must be -suspended until the first frame is drained from the queue. The objective is that -**the second push call must take at least as long as the mock socket's stall -time.** +suspended until the first frame is drained from the queue. The objective is +that **the second push call must take at least as long as the mock socket's +stall time.** ### 3.2 Socket Write Failure: Error Propagation to Handles @@ -229,10 +229,10 @@ async fn test_socket_write_failure() { ``` -**Expected Outcome & Measurable Objective:** The connection actor must terminate -cleanly. Any subsequent calls on associated `PushHandle`s must immediately -return a `BrokenPipe` error. The objective is that **failure propagation must -occur within one scheduler tick.** +**Expected Outcome & Measurable Objective:** The connection actor must +terminate cleanly. Any subsequent calls on associated `PushHandle`s must +immediately return a `BrokenPipe` error. The objective is that **failure +propagation must occur within one scheduler tick.** ### 3.3 Graceful Shutdown: Co-ordinated Task Termination @@ -240,8 +240,8 @@ This test ensures that a server-wide shutdown signal leads to the clean termination of all active connection tasks. **Test Construction:** `tokio_util::sync::CancellationToken` is used to signal -shutdown to multiple spawned connection tasks. A `tokio_util::task::TaskTracker` -waits for all tasks to complete. +shutdown to multiple spawned connection tasks. A +`tokio_util::task::TaskTracker` waits for all tasks to complete. ```rust #[tokio::test] @@ -271,8 +271,8 @@ terminate. The main server task must exit cleanly. The objective is that ### 3.4 Fragmentation Limit: DoS Protection -This test confirms that the `FragmentAdapter` protects against memory exhaustion -by enforcing the `max_message_size`. +This test confirms that the `FragmentAdapter` protects against memory +exhaustion by enforcing the `max_message_size`. **Test Construction:** An adapter is configured with `max_message_size(1024)`. Fragments are decoded that would total more than this limit. @@ -319,8 +319,8 @@ fn test_fragmentation_sequence_error() { **Expected Outcome & Measurable Objective:** The `decode()` method must return an `InvalidData` error upon detecting the sequence violation. The objective is -that **the specific protocol error ("sequence gap", "duplicate sequence") should -be identifiable from the error message.** +that **the specific protocol error ("sequence gap", "duplicate sequence") +should be identifiable from the error message.** ## 4. Layer 3: Advanced Correctness (Logic & Concurrency) @@ -338,9 +338,9 @@ the `FragmentAdapter`. 
**Test Construction:** A `proptest` strategy generates a sequence of "actions" (e.g., `SendFragment(bytes)`, `SendCompleteFrame(bytes)`). The test runner -applies these actions to both the real `FragmentAdapter` and a simple, validated -"model" of its state. The test asserts that the model and the real adapter's -state always agree. +applies these actions to both the real `FragmentAdapter` and a simple, +validated "model" of its state. The test asserts that the model and the real +adapter's state always agree. ```rust proptest! { diff --git a/docs/multi-packet-and-streaming-responses-design.md b/docs/multi-packet-and-streaming-responses-design.md index 19344a06..80d587a1 100644 --- a/docs/multi-packet-and-streaming-responses-design.md +++ b/docs/multi-packet-and-streaming-responses-design.md @@ -25,7 +25,7 @@ duplex and capable framework. The implementation must satisfy the following core requirements: | ID | Requirement | -| -- | ---------------------------------------------------------------------------------------------------------------------| +| --- | -------------------------------------------------------------------------------------------------------------------- | | G1 | Allow a handler to send zero, one, or many frames for a single logical response. | | G2 | Provide transparent back-pressure: writers must suspend when outbound capacity is exhausted. | | G3 | Integrate with protocol-specific sequencing rules (e.g., per-command counters) without hard-coding any one protocol. | @@ -226,15 +226,15 @@ cancellation-safe, no frames will be lost; the stream will simply be dropped. Similarly, if a handler panics or returns early, the `Stream` object it created is simply dropped. The connection actor will see the stream end as if it had -completed normally, ensuring no resources are leaked and the connection does not -hang. +completed normally, ensuring no resources are leaked and the connection does +not hang. ## 7. Synergy with Other 1.0 Features - **Asynchronous Pushes:** The connection actor's prioritised write loop (as - defined in the outbound messaging design) will always poll for pushed messages - *before* polling the response stream. This ensures that urgent, out-of-band - messages are not starved by a long-running data stream. + defined in the outbound messaging design) will always poll for pushed + messages *before* polling the response stream. This ensures that urgent, + out-of-band messages are not starved by a long-running data stream. - **Message Fragmentation:** Streaming occurs at the logical frame level. The `FragmentAdapter` will operate at a lower layer, transparently splitting any diff --git a/docs/observability-operability-and-maturity.md b/docs/observability-operability-and-maturity.md index 1c31fabc..fade7f1b 100644 --- a/docs/observability-operability-and-maturity.md +++ b/docs/observability-operability-and-maturity.md @@ -2,11 +2,12 @@ ## 1. Introduction: Beyond Functional Correctness -A library is functionally correct when it performs its specified tasks according -to its API. A library is *mature* when it anticipates the realities of its -operational environment. For a low-level networking framework like `wireframe`, -this means acknowledging that production systems are complex, failures are -inevitable, and visibility into runtime behaviour is non-negotiable. +A library is functionally correct when it performs its specified tasks +according to its API. A library is *mature* when it anticipates the realities +of its operational environment. 
For a low-level networking framework like +`wireframe`, this means acknowledging that production systems are complex, +failures are inevitable, and visibility into runtime behaviour is +non-negotiable. This guide details the cross-cutting strategies that will elevate `wireframe` from a simple toolkit to a production-grade, operationally mature framework. It @@ -35,11 +36,11 @@ first-class feature of `wireframe` 1.0. ### 2.1 The `tracing` Span Hierarchy -`tracing`'s key innovation is the `span`, which represents a unit of work with a -distinct beginning and end. By nesting spans, we can create a causal chain of -events, allowing us to understand how work flows through the system, even across -asynchronous boundaries and threads. `wireframe` will adopt a standard span -hierarchy. +`tracing`'s key innovation is the `span`, which represents a unit of work with +a distinct beginning and end. By nesting spans, we can create a causal chain of +events, allowing us to understand how work flows through the system, even +across asynchronous boundaries and threads. `wireframe` will adopt a standard +span hierarchy. - `connection`**:** A root span for each TCP connection, created on `accept()`. It provides the top-level context for all activity on that connection. @@ -78,15 +79,15 @@ async fn run_connection_actor( ``` -**Measurable Objective:** Every spawned task and connection actor in the library -must be instrumented with a `tracing` span containing relevant, queryable -identifiers (e.g., `connection_id`, `peer_addr`). +**Measurable Objective:** Every spawned task and connection actor in the +library must be instrumented with a `tracing` span containing relevant, +queryable identifiers (e.g., `connection_id`, `peer_addr`). ### 2.2 Structured Lifecycle Events Within each span, we will emit structured events at critical points in the -connection lifecycle. Using structured key-value pairs (`field = value`) instead -of simple strings allows logs to be automatically parsed, queried, and +connection lifecycle. Using structured key-value pairs (`field = value`) +instead of simple strings allows logs to be automatically parsed, queried, and aggregated by modern logging platforms. **Implementation Example:** Logging key events. @@ -187,9 +188,9 @@ must exit with a success code. ### 3.2 Intelligent Error Handling with Typed Errors -Conflating unrecoverable transport errors with recoverable protocol errors makes -robust error handling impossible. `wireframe` will provide a generic error enum -to give developers the necessary information to react intelligently. +Conflating unrecoverable transport errors with recoverable protocol errors +makes robust error handling impossible. `wireframe` will provide a generic +error enum to give developers the necessary information to react intelligently. **Implementation:** @@ -208,19 +209,19 @@ pub enum WireframeError { ``` When a handler's response stream yields a `WireframeError::Protocol(e)`, the -connection actor can pass this typed error to a protocol-specific callback. This -allows the implementation to serialize a proper error frame (e.g., an SQL error -code) and send it to the client before terminating the current operation, rather -than just abruptly closing the connection. +connection actor can pass this typed error to a protocol-specific callback. +This allows the implementation to serialize a proper error frame (e.g., an SQL +error code) and send it to the client before terminating the current operation, +rather than just abruptly closing the connection. 
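The split between the two variants is what makes this reaction possible. The
sketch below shows one way a connection actor might branch on them, assuming an
illustrative `MyError` protocol error type and a hypothetical `on_stream_error`
hook; neither name is part of the current API.

```rust
use std::io;

/// Illustrative protocol error carrying an application-level error code.
#[derive(Debug)]
struct MyError {
    code: u16,
    message: String,
}

/// Mirrors the `WireframeError` enum above, repeated here so the sketch is
/// self-contained.
#[derive(Debug)]
enum WireframeError<E> {
    Io(io::Error),
    Protocol(E),
}

/// Hypothetical reaction when a response stream yields an error.
fn on_stream_error(
    err: WireframeError<MyError>,
    outbound: &mut Vec<Vec<u8>>,
) -> Result<(), io::Error> {
    match err {
        // Transport errors are unrecoverable: propagate so the actor can
        // terminate the connection.
        WireframeError::Io(e) => Err(e),
        // Protocol errors are recoverable: serialise an error frame for the
        // client and keep the connection open.
        WireframeError::Protocol(e) => {
            let mut frame = e.code.to_be_bytes().to_vec();
            frame.extend_from_slice(e.message.as_bytes());
            outbound.push(frame);
            Ok(())
        }
    }
}
```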
### 3.3 Resilient Messaging with Dead Letter Queues (DLQ) For applications where dropping a pushed message is not an option (e.g., critical audit logs), `wireframe` will support an optional Dead Letter Queue. -**Implementation:** The `WireframeApp` builder will accept an `mpsc::Sender` for -a DLQ. If a push fails because the primary queue is full, the frame is routed to -this sender instead of being dropped. +**Implementation:** The `WireframeApp` builder will accept an `mpsc::Sender` +for a DLQ. If a push fails because the primary queue is full, the frame is +routed to this sender instead of being dropped. ```rust // Inside PushHandle, when try_send fails with a full queue @@ -292,8 +293,8 @@ abstraction model. ### 4.2 A Commitment to Quality Assurance A mature library demonstrates its commitment to quality through a rigorous and -multi-faceted testing strategy that goes beyond simple unit tests. `wireframe`'s -QA process will be a core part of its development. +multi-faceted testing strategy that goes beyond simple unit tests. +`wireframe`'s QA process will be a core part of its development. - **Stateful Property Testing (**`proptest`**):** For verifying complex, stateful protocol conversations, ensuring that thousands of random-but-valid @@ -305,8 +306,8 @@ QA process will be a core part of its development. cannot. - **Performance Benchmarking (**`criterion`**):** To quantify the performance - impact of new features and prevent regressions, a comprehensive suite of micro - and macro benchmarks will be maintained. + impact of new features and prevent regressions, a comprehensive suite of + micro and macro benchmarks will be maintained. For more detailed information and comprehensive, worked examples of these testing methodologies, please refer to the companion document, *Wireframe: A diff --git a/docs/preamble-validator.md b/docs/preamble-validator.md index 3b908e5f..9e5b715e 100644 --- a/docs/preamble-validator.md +++ b/docs/preamble-validator.md @@ -29,8 +29,8 @@ sequenceDiagram Server-->>Client: (Continues or closes connection) ``` -The success callback receives the decoded preamble and a mutable `TcpStream`. It -may write a handshake response before the connection is passed to +The success callback receives the decoded preamble and a mutable `TcpStream`. +It may write a handshake response before the connection is passed to `WireframeApp`. In the tests, a `HotlinePreamble` struct illustrates the pattern, but any preamble type may be used. Register callbacks via `on_preamble_decode_success` and `on_preamble_decode_failure` on @@ -39,7 +39,7 @@ pattern, but any preamble type may be used. Register callbacks via ## Call Order `WireframeServer::with_preamble::()` must be called **before** registering -callbacks with `on_preamble_decode_success` or `on_preamble_decode_failure`. The -method converts the server to use a custom preamble type, dropping any callbacks -configured on the default `()` preamble. Registering callbacks after calling -`with_preamble::()` ensures they are retained. +callbacks with `on_preamble_decode_success` or `on_preamble_decode_failure`. +The method converts the server to use a custom preamble type, dropping any +callbacks configured on the default `()` preamble. Registering callbacks after +calling `with_preamble::()` ensures they are retained. 
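A minimal sketch of the required ordering is shown below. The builder methods
are the ones named in this document, but the closure signatures and the
`factory` argument are illustrative assumptions rather than the exact API.

```rust
// Correct order: convert the server to the custom preamble type first, then
// register the callbacks so they are retained.
let server = WireframeServer::new(factory)
    .with_preamble::<HotlinePreamble>()
    .on_preamble_decode_success(|preamble, stream| {
        // e.g. write a handshake response before the connection is handed
        // to `WireframeApp`
    })
    .on_preamble_decode_failure(|err| {
        // e.g. log the failure before the connection is closed
    });

// Incorrect order: callbacks registered before `with_preamble::<T>()` are
// dropped when the preamble type changes.
```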
diff --git a/docs/roadmap.md b/docs/roadmap.md index 1f20ee29..a15a58be 100644 --- a/docs/roadmap.md +++ b/docs/roadmap.md @@ -1,6 +1,9 @@ # Wireframe Combined Development Roadmap -This document outlines the development roadmap for the Wireframe library, merging previous roadmap documents into a single source of truth. It details the planned features, enhancements, and the overall trajectory towards a stable and production-ready 1.0 release. +This document outlines the development roadmap for the Wireframe library, +merging previous roadmap documents into a single source of truth. It details +the planned features, enhancements, and the overall trajectory towards a stable +and production-ready 1.0 release. ## Guiding Principles @@ -8,33 +11,42 @@ This document outlines the development roadmap for the Wireframe library, mergin - **Performance:** Maintain high performance and low overhead. -- **Extensibility:** Provide clear extension points, especially through middleware. +- **Extensibility:** Provide clear extension points, especially through + middleware. - **Robustness:** Ensure the library is resilient and handles errors gracefully. ## Phase 1: Core Functionality & API (Complete) -This phase established the foundational components of the Wireframe server and the request/response lifecycle. +This phase established the foundational components of the Wireframe server and +the request/response lifecycle. - [x] **Protocol Definition:** - - [x] Define the basic frame structure for network communication (`src/frame.rs`). + - [x] Define the basic frame structure for network communication + (`src/frame.rs`). - - [x] Implement preamble validation for versioning and compatibility (`src/preamble.rs`, `tests/preamble.rs`). + - [x] Implement preamble validation for versioning and compatibility + (`src/preamble.rs`, `tests/preamble.rs`). - [x] **Core Server Implementation:** - - [x] Implement the `Server` struct with `bind` and `run` methods (`src/server.rs`). + - [x] Implement the `Server` struct with `bind` and `run` methods + (`src/server.rs`). - - [x] Handle incoming TCP connections and spawn connection-handling tasks (`src/connection.rs`). + - [x] Handle incoming TCP connections and spawn connection-handling tasks + (`src/connection.rs`). - - [x] Define `Request`, `Response`, and `Message` structs (`src/message.rs`, `src/response.rs`). + - [x] Define `Request`, `Response`, and `Message` structs (`src/message.rs`, + `src/response.rs`). - [x] **Routing & Handlers:** - - [x] Implement a basic routing mechanism to map requests to handler functions (`src/app.rs`). + - [x] Implement a basic routing mechanism to map requests to handler + functions (`src/app.rs`). - - [x] Support handler functions with flexible, type-safe extractors (`src/extractor.rs`). + - [x] Support handler functions with flexible, type-safe extractors + (`src/extractor.rs`). - [x] **Error Handling:** @@ -42,161 +54,208 @@ This phase established the foundational components of the Wireframe server and t - [x] Implement `From` conversions for ergonomic error handling. - - [x] Ensure `Display` is implemented for all public error types (`tests/error_display.rs`). + - [x] Ensure `Display` is implemented for all public error types + (`tests/error_display.rs`). - [x] **Basic Testing:** - - [x] Develop a suite of integration tests for core request/response functionality (`tests/server.rs`, `tests/routes.rs`). + - [x] Develop a suite of integration tests for core request/response + functionality (`tests/server.rs`, `tests/routes.rs`). 
## Phase 2: Middleware & Extensibility (Complete) -This phase focused on building the middleware system, a key feature for extensibility. +This phase focused on building the middleware system, a key feature for +extensibility. - [x] **Middleware Trait:** - [x] Design and implement the `Middleware` trait (`src/middleware.rs`). - - [x] Define `Next` to allow middleware to pass control to the next in the chain. + - [x] Define `Next` to allow middleware to pass control to the next in the + chain. - [x] **Middleware Integration:** - - [x] Integrate the middleware processing loop into the `App` and `Connection` logic. + - [x] Integrate the middleware processing loop into the `App` and + `Connection` logic. - [x] Ensure middleware can modify requests and responses. - [x] **Testing:** - - [x] Write tests to verify middleware functionality, including correct execution order (`tests/middleware.rs`, `tests/middleware_order.rs`). + - [x] Write tests to verify middleware functionality, including correct + execution order (`tests/middleware.rs`, `tests/middleware_order.rs`). ## Phase 3: Push Messaging & Async Operations (Complete) -This phase introduced capabilities for asynchronous, server-initiated communication and streaming. +This phase introduced capabilities for asynchronous, server-initiated +communication and streaming. - [x] **Push Messaging:** - - [x] Implement the `Push` mechanism for sending messages from server to client without a direct request (`src/push.rs`). + - [x] Implement the `Push` mechanism for sending messages from server to + client without a direct request (`src/push.rs`). - - [x] Develop `PushPolicies` for broadcasting messages to all or a subset of clients. + - [x] Develop `PushPolicies` for broadcasting messages to all or a subset of + clients. - - [x] Create tests for various push scenarios (`tests/push.rs`, `tests/push_policies.rs`). + - [x] Create tests for various push scenarios (`tests/push.rs`, + `tests/push_policies.rs`). - [x] **Async Stream Responses:** - [x] Enable handlers to return `impl Stream` of messages (`src/response.rs`). - - [x] Implement the client and server-side logic to handle streaming responses (`examples/async_stream.rs`, `tests/async_stream.rs`). + - [x] Implement the client and server-side logic to handle streaming + responses (`examples/async_stream.rs`, `tests/async_stream.rs`). ## Phase 4: Advanced Connection Handling & State (Complete) -This phase added sophisticated state management and improved connection lifecycle control. +This phase added sophisticated state management and improved connection +lifecycle control. - [x] **Session Management:** - - [x] Implement a `Session` struct to hold connection-specific state (`src/session.rs`). + - [x] Implement a `Session` struct to hold connection-specific state + (`src/session.rs`). - - [x] Create a `SessionRegistry` for managing all active sessions (`tests/session_registry.rs`). + - [x] Create a `SessionRegistry` for managing all active sessions + (`tests/session_registry.rs`). - - [x] Provide `State` and `Data` extractors for accessing shared and session-specific data. + - [x] Provide `State` and `Data` extractors for accessing shared and + session-specific data. - [x] **Lifecycle Hooks:** - - [x] Implement `on_connect` and `on_disconnect` hooks for session initialisation and cleanup (`src/hooks.rs`). + - [x] Implement `on_connect` and `on_disconnect` hooks for session + initialisation and cleanup (`src/hooks.rs`). - [x] Write tests to verify lifecycle hook behaviour (`tests/lifecycle.rs`). 
- [x] **Graceful Shutdown:** - - [x] Implement a graceful shutdown mechanism for the server, allowing active connections to complete their work. + - [x] Implement a graceful shutdown mechanism for the server, allowing active + connections to complete their work. ## Phase 5: Production Hardening & Observability (In Progress) -This phase focuses on making the library robust, debuggable, and ready for production environments. +This phase focuses on making the library robust, debuggable, and ready for +production environments. - [x] **Logging:** - - [x] Integrate `tracing` throughout the library for structured, level-based logging. + - [x] Integrate `tracing` throughout the library for structured, level-based + logging. - - [x] Create a helper crate for test logging setup (`wireframe_testing/src/logging.rs`). + - [x] Create a helper crate for test logging setup + (`wireframe_testing/src/logging.rs`). - [ ] **Metrics & Observability:** - - [ ] Expose key operational metrics (e.g., active connections, messages per second, error rates). + - [ ] Expose key operational metrics (e.g., active connections, messages per + second, error rates). - - [ ] Provide an integration guide for popular monitoring systems (e.g., Prometheus). + - [ ] Provide an integration guide for popular monitoring systems (e.g., + Prometheus). - [ ] **Advanced Error Handling:** - - [ ] Implement panic handlers in connection tasks to prevent a single connection from crashing the server. + - [ ] Implement panic handlers in connection tasks to prevent a single + connection from crashing the server. - [ ] **Testing:** - - [ ] Implement fuzz testing for the protocol parser (`tests/advanced/interaction_fuzz.rs`). + - [ ] Implement fuzz testing for the protocol parser + (`tests/advanced/interaction_fuzz.rs`). - - [ ] Use `loom` for concurrency testing of shared state (`tests/advanced/concurrency_loom.rs`). + - [ ] Use `loom` for concurrency testing of shared state + (`tests/advanced/concurrency_loom.rs`). ## Phase 6: Application-Level Streaming (Multi-Packet Responses) (Priority Focus) -This is the next major feature set. It enables a handler to return multiple, distinct messages over time in response to a single request, forming a logical stream. +This is the next major feature set. It enables a handler to return multiple, +distinct messages over time in response to a single request, forming a logical +stream. - [ ] **Protocol Enhancement:** - - [ ] Add a `correlation_id` field to the `Frame` header. For a request, this is the unique request ID. For each message in a multi-packet response, this ID must match the original request's ID. + - [ ] Add a `correlation_id` field to the `Frame` header. For a request, this + is the unique request ID. For each message in a multi-packet response, this + ID must match the original request's ID. - - [ ] Define a mechanism to signal the end of a multi-packet stream, such as a frame with a specific flag and no payload. + - [ ] Define a mechanism to signal the end of a multi-packet stream, such as + a frame with a specific flag and no payload. - [ ] **Core Library Implementation:** - - [ ] Introduce a `Response::MultiPacket` variant that contains a channel `Receiver`. + - [ ] Introduce a `Response::MultiPacket` variant that contains a channel + `Receiver`. - - [ ] Modify the `Connection` actor: upon receiving `Response::MultiPacket`, it should consume messages from the receiver and send each one as a `Frame`. 
+ - [ ] Modify the `Connection` actor: upon receiving `Response::MultiPacket`, + it should consume messages from the receiver and send each one as a `Frame`. - - [ ] Each sent frame must carry the correct `correlation_id` from the initial request. + - [ ] Each sent frame must carry the correct `correlation_id` from the + initial request. - [ ] When the channel closes, send the end-of-stream marker frame. - [ ] **Ergonomics & API:** - - [ ] Provide a clean API for handlers to return a multi-packet response, likely by returning a `(Sender, Response)`. + - [ ] Provide a clean API for handlers to return a multi-packet response, + likely by returning a `(Sender, Response)`. - [ ] **Testing:** - - [ ] Develop integration tests where a client sends one request and receives multiple, correlated response messages. + - [ ] Develop integration tests where a client sends one request and receives + multiple, correlated response messages. - - [ ] Test that the end-of-stream marker is sent correctly and handled by the client. + - [ ] Test that the end-of-stream marker is sent correctly and handled by the + client. - - [ ] Test client-side handling of interleaved multi-packet responses from different requests. + - [ ] Test client-side handling of interleaved multi-packet responses from + different requests. ## Phase 7: Transport-Level Fragmentation & Reassembly -This phase will handle the transport of a single message that is too large to fit into a single frame, making the process transparent to the application logic. +This phase will handle the transport of a single message that is too large to +fit into a single frame, making the process transparent to the application +logic. - [ ] **Core Fragmentation & Reassembly (F&R) Layer:** - - [ ] Define a generic `Fragment` header or metadata containing `message_id`, `fragment_index`, and `is_last_fragment` fields. + - [ ] Define a generic `Fragment` header or metadata containing `message_id`, + `fragment_index`, and `is_last_fragment` fields. - - [ ] Implement a `Fragmenter` to split a large `Message` into multiple `Frame`s, each with a `Fragment` header. + - [ ] Implement a `Fragmenter` to split a large `Message` into multiple + `Frame`s, each with a `Fragment` header. - - [ ] Implement a `Reassembler` on the receiving end to collect fragments and reconstruct the original `Message`. + - [ ] Implement a `Reassembler` on the receiving end to collect fragments and + reconstruct the original `Message`. - - [ ] Manage a reassembly buffer with timeouts to prevent resource exhaustion from incomplete messages. + - [ ] Manage a reassembly buffer with timeouts to prevent resource exhaustion + from incomplete messages. - [ ] **Integration with Core Library:** - [ ] Integrate the F&R layer into the `Connection` actor's read/write paths. - - [ ] Ensure the F&R logic is transparent to handler functions; they should continue to send and receive complete `Message` objects. + - [ ] Ensure the F&R logic is transparent to handler functions; they should + continue to send and receive complete `Message` objects. - [ ] **Testing:** - [ ] Create unit tests for the `Fragmenter` and `Reassembler`. - - [ ] Develop integration tests sending and receiving large messages that require fragmentation. + - [ ] Develop integration tests sending and receiving large messages that + require fragmentation. - - [ ] Test edge cases: out-of-order fragments, duplicate fragments, and reassembly timeouts. + - [ ] Test edge cases: out-of-order fragments, duplicate fragments, and + reassembly timeouts. 
## Phase 8: Advanced Features & Ecosystem (Future) -This phase includes features that will broaden the library's applicability and ecosystem. +This phase includes features that will broaden the library's applicability and +ecosystem. - [ ] **Client Library:** @@ -204,11 +263,13 @@ This phase includes features that will broaden the library's applicability and e - [ ] **Alternative Transports:** - - [ ] Abstract the transport layer to support protocols other than raw TCP (e.g., WebSockets, QUIC). + - [ ] Abstract the transport layer to support protocols other than raw TCP + (e.g., WebSockets, QUIC). - [ ] **Message Versioning:** - - [ ] Implement a formal message versioning system to allow for protocol evolution. + - [ ] Implement a formal message versioning system to allow for protocol + evolution. - [ ] **Security:** @@ -216,7 +277,8 @@ This phase includes features that will broaden the library's applicability and e ## Phase 9: Documentation & Community (Ongoing) -Continuous improvement of documentation and examples is essential for adoption and usability. +Continuous improvement of documentation and examples is essential for adoption +and usability. - [x] **Initial Documentation:** @@ -226,7 +288,8 @@ Continuous improvement of documentation and examples is essential for adoption a - [x] **Examples:** - - [x] Create a variety of examples demonstrating core features (`ping_pong`, `echo`, `metadata_routing`, `async_stream`). + - [x] Create a variety of examples demonstrating core features (`ping_pong`, + `echo`, `metadata_routing`, `async_stream`). - [ ] **Website & User Guide:** diff --git a/docs/rust-binary-router-library-design.md b/docs/rust-binary-router-library-design.md index 0f22aad4..bd57c1cd 100644 --- a/docs/rust-binary-router-library-design.md +++ b/docs/rust-binary-router-library-design.md @@ -4,20 +4,21 @@ The development of applications requiring communication over custom binary wire protocols often involves significant complexities in source code. These -complexities arise from manual data serialization and deserialization, intricate -framing logic, stateful connection management, and the imperative dispatch of -messages to appropriate handlers. Such low-level concerns can obscure the core -application logic, increase development time, and introduce a higher propensity -for errors. The Rust programming language, with its emphasis on safety, -performance, and powerful compile-time abstractions, offers a promising -foundation for mitigating these challenges. +complexities arise from manual data serialization and deserialization, +intricate framing logic, stateful connection management, and the imperative +dispatch of messages to appropriate handlers. Such low-level concerns can +obscure the core application logic, increase development time, and introduce a +higher propensity for errors. The Rust programming language, with its emphasis +on safety, performance, and powerful compile-time abstractions, offers a +promising foundation for mitigating these challenges. This report outlines the design of "wireframe," a novel Rust library aimed at substantially reducing source code complexity when building applications that -handle arbitrary frame-based binary wire protocols. The design draws inspiration -from the ergonomic API of Actix Web 4, a popular Rust web framework known for -its intuitive routing, data extraction, and middleware systems.4 "wireframe" -intends to adapt these successful patterns to the domain of binary protocols. +handle arbitrary frame-based binary wire protocols. 
The design draws +inspiration from the ergonomic API of Actix Web 4, a popular Rust web framework +known for its intuitive routing, data extraction, and middleware systems.4 +"wireframe" intends to adapt these successful patterns to the domain of binary +protocols. A key aspect of the proposed design is the utilization of `wire-rs` 6 for message serialization and deserialization, contingent upon its ability to @@ -61,9 +62,9 @@ common manual implementation pitfalls. ## 3. Literature Survey and Precedent Analysis -A survey of the existing Rust ecosystem provides valuable context for the design -of "wireframe," highlighting established patterns for serialization, protocol -implementation, and API ergonomics. +A survey of the existing Rust ecosystem provides valuable context for the +design of "wireframe," highlighting established patterns for serialization, +protocol implementation, and API ergonomics. ### 3.1. Binary Serialization Libraries in Rust @@ -77,13 +78,13 @@ each with distinct characteristics. features `WireReader` and `WireWriter` for manual data reading and writing, with explicit control over endianness.6 However, the available information does not clearly indicate the presence or nature of derivable `Encode` and - `Decode` traits for automatic (de)serialization of complex types.6 The ability - to automatically generate (de)serialization logic via derive macros is crucial - for achieving "wireframe's" goal of reducing source code complexity. If such - derive macros are not a core feature of `wire-rs`, "wireframe" would need to - either contribute them, provide its own wrapper traits that enable derivation - while using `wire-rs` internally, or consider alternative serialization - libraries. + `Decode` traits for automatic (de)serialization of complex types.6 The + ability to automatically generate (de)serialization logic via derive macros + is crucial for achieving "wireframe's" goal of reducing source code + complexity. If such derive macros are not a core feature of `wire-rs`, + "wireframe" would need to either contribute them, provide its own wrapper + traits that enable derivation while using `wire-rs` internally, or consider + alternative serialization libraries. - `bincode`: `bincode` is a widely used binary serialization library that integrates well with Serde.8 It offers high performance and configurable @@ -91,8 +92,8 @@ each with distinct characteristics. optional dependency and provides its own `Encode`/`Decode` traits that can be derived.11 Its flexibility and performance make it a strong candidate if `wire-rs` proves unsuitable for derivable (de)serialization. The choice - between fixed-width integers and Varint encoding offers trade-offs in terms of - size and speed. + between fixed-width integers and Varint encoding offers trade-offs in terms + of size and speed. - `postcard`: `postcard` is another Serde-compatible library, specifically designed for `no_std` and embedded environments, prioritizing resource @@ -104,8 +105,8 @@ each with distinct characteristics. focus on minimalism and a simple specification is appealing. 
- `bin-proto`: `bin-proto` offers simple and fast structured bit-level binary - encoding and decoding.14 It provides `BitEncode` and `BitDecode` traits, along - with custom derive macros (e.g., `#`) for ease of use.14 It allows + encoding and decoding.14 It provides `BitEncode` and `BitDecode` traits, + along with custom derive macros (e.g., `#`) for ease of use.14 It allows fine-grained control over bit-level layout, such as specifying the number of bits for fields and enum discriminants.14 `bin-proto` also supports context-aware parsing, where deserialization logic can depend on external @@ -130,10 +131,11 @@ network protocols, offering insights into effective abstractions. received from any I/O stream, with built-in support for TCP and UDP. Any type implementing its `Parcel` trait can be serialized. The derive macro handles the implementation for custom types, though it requires types to also - implement `Clone`, `Debug`, and `PartialEq`, which might be overly restrictive - for some use cases.16 The library also supports middleware for transforming - sent/received data. This library exemplifies the use of derive macros to - reduce boilerplate in protocol definitions, a core strategy for "wireframe". + implement `Clone`, `Debug`, and `PartialEq`, which might be overly + restrictive for some use cases.16 The library also supports middleware for + transforming sent/received data. This library exemplifies the use of derive + macros to reduce boilerplate in protocol definitions, a core strategy for + "wireframe". - `message-io`: This library provides abstractions for message-based network communication over various transports like TCP, UDP, and WebSockets. Notably, @@ -154,9 +156,9 @@ network protocols, offering insights into effective abstractions. Although designed for RPC, its approach of defining service schemas directly in Rust code (using the `#[tarpc::service]` attribute to generate service traits and client/server boilerplate) is an interesting parallel to - "wireframe's" goal of reducing boilerplate for message handlers. Features like - pluggable transports and serde serialization further highlight its modern - design. + "wireframe's" goal of reducing boilerplate for message handlers. Features + like pluggable transports and serde serialization further highlight its + modern design. A clear pattern emerges from these libraries: the use of derive macros and trait-based designs is a prevalent and effective strategy in Rust for @@ -187,34 +189,34 @@ query as a model for API aesthetics. Its design offers valuable lessons for `FromRequest` trait.4 Built-in extractors include `web::Path` for path parameters, `web::Json` for JSON payloads, `web::Query` for query parameters, and `web::Data` for accessing shared application state.21 Users can also - define custom extractors by implementing `FromRequest`.24 This system promotes - type safety and decouples data parsing and validation from the core handler - logic. + define custom extractors by implementing `FromRequest`.24 This system + promotes type safety and decouples data parsing and validation from the core + handler logic. - **Middleware**: Actix Web has a robust middleware system allowing developers to insert custom processing logic into the request/response lifecycle. Middleware is registered using `App::wrap()` and typically implements `Transform` and `Service` traits.26 Common use cases include logging 27, - authentication, and request/response modification. 
Middleware functions can be - simple `async fn`s when using `middleware::from_fn()`. + authentication, and request/response modification. Middleware functions can + be simple `async fn`s when using `middleware::from_fn()`. - **Application Structure and State**: Applications are built around an `App` - instance, which is then used to configure an `HttpServer`.4 Shared application - state can be managed using `web::Data`, making state accessible to handlers - and middleware.21 For globally shared mutable state, careful use of `Arc` and - `Mutex` (or atomics) is required, initialized outside the `HttpServer::new` - closure. + instance, which is then used to configure an `HttpServer`.4 Shared + application state can be managed using `web::Data`, making state + accessible to handlers and middleware.21 For globally shared mutable state, + careful use of `Arc` and `Mutex` (or atomics) is required, initialized + outside the `HttpServer::new` closure. The ergonomic success of Actix Web for web application development can be significantly attributed to these powerful and intuitive abstractions. Extractors, for example, cleanly separate the concern of deriving data from a request from the business logic within a handler.22 Middleware allows for -modular implementation of cross-cutting concerns like logging or authentication, -preventing handler functions from becoming cluttered.26 If "wireframe" can adapt -these patterns—such as creating extractors for deserialized message payloads or -connection-specific metadata, and a middleware system for binary message -streams—it can achieve comparable benefits in reducing complexity and improving -code organization for its users. +modular implementation of cross-cutting concerns like logging or +authentication, preventing handler functions from becoming cluttered.26 If +"wireframe" can adapt these patterns—such as creating extractors for +deserialized message payloads or connection-specific metadata, and a middleware +system for binary message streams—it can achieve comparable benefits in +reducing complexity and improving code organization for its users. ### 3.4. Key Learnings and Implications for "wireframe" Design @@ -241,9 +243,9 @@ through the qualitative improvements offered by its abstractions. The design must clearly articulate how its features—such as declarative APIs, automatic code generation via macros, and separation of concerns inspired by Actix Web—lead to simpler, more maintainable code compared to hypothetical manual -implementations of binary protocol handlers. This involves illustrating "before" -(manual, complex, error-prone) versus "after" (wireframe, simplified, robust) -scenarios through its API design and examples. +implementations of binary protocol handlers. This involves illustrating +"before" (manual, complex, error-prone) versus "after" (wireframe, simplified, +robust) scenarios through its API design and examples. ## 4. "wireframe" Library Design @@ -261,16 +263,16 @@ The development of "wireframe" adheres to the following principles: development. This is achieved through high-level abstractions, declarative APIs, and minimizing boilerplate code. - **Generality for Arbitrary Frame-Based Protocols**: The library must not be - opinionated about the specific content or structure of the binary protocols it - handles beyond the assumption of a frame-based structure. Users should be able - to define their own framing logic and message types. 
+ opinionated about the specific content or structure of the binary protocols + it handles beyond the assumption of a frame-based structure. Users should be + able to define their own framing logic and message types. - **Performance**: Leveraging Rust's inherent performance characteristics is - crucial.2 While developer ergonomics is a primary focus, the design must avoid - introducing unnecessary overhead. Asynchronous operations, powered by a + crucial.2 While developer ergonomics is a primary focus, the design must + avoid introducing unnecessary overhead. Asynchronous operations, powered by a runtime like Tokio, are essential for efficient I/O and concurrency. - **Safety**: The library will harness Rust's strong type system and ownership - model to prevent common networking bugs, such as data races and use-after-free - errors, contributing to more reliable software. + model to prevent common networking bugs, such as data races and + use-after-free errors, contributing to more reliable software. - **Developer Ergonomics**: The API should be intuitive, well-documented, and easy to learn, particularly for developers familiar with patterns from Actix Web. @@ -279,10 +281,10 @@ A potential tension exists between the goal of supporting "arbitrary" protocols and maintaining simplicity. An overly generic library might force users to implement many low-level details, negating the complexity reduction benefits. Conversely, an excessively opinionated library might not be flexible enough for -diverse protocol needs. "wireframe" aims to strike a balance by providing robust -conventions and helper traits for common protocol patterns (e.g., message -ID-based routing, standard framing methods) while still allowing "escape -hatches" for highly customized requirements. This involves identifying +diverse protocol needs. "wireframe" aims to strike a balance by providing +robust conventions and helper traits for common protocol patterns (e.g., +message ID-based routing, standard framing methods) while still allowing +"escape hatches" for highly customized requirements. This involves identifying commonalities in frame-based binary protocols and offering streamlined abstractions for them, without precluding more bespoke implementations. @@ -322,28 +324,30 @@ handling to be managed and customized independently. the transport layer into discrete logical units called "frames." Conversely, it serializes outgoing messages from the application into byte sequences suitable for transmission, adding any necessary framing information (e.g., - length prefixes, delimiters). This layer will utilize user-defined or built-in - framing logic, potentially by implementing traits like Tokio's `Decoder` and - `Encoder`. + length prefixes, delimiters). This layer will utilize user-defined or + built-in framing logic, potentially by implementing traits like Tokio's + `Decoder` and `Encoder`. - **Deserialization/Serialization Engine**: This engine converts the byte - payload of incoming frames into strongly-typed Rust data structures (messages) - and serializes outgoing Rust messages into byte payloads for outgoing frames. - This is the primary role intended for `wire-rs` 6 or an alternative like - `bincode` 11 or `postcard`.12 A minimal wrapper trait in the library currently - exposes these derives under a convenient `Message` trait, providing `to_bytes` - and `from_bytes` helpers. 
+ payload of incoming frames into strongly-typed Rust data structures + (messages) and serializes outgoing Rust messages into byte payloads for + outgoing frames. This is the primary role intended for `wire-rs` 6 or an + alternative like `bincode` 11 or `postcard`.12 A minimal wrapper trait in the + library currently exposes these derives under a convenient `Message` trait, + providing `to_bytes` and `from_bytes` helpers. - **Routing Engine**: After a message is deserialized (or at least a header containing an identifier is processed), the routing engine inspects it to determine which user-defined handler function is responsible for processing this type of message. - **Handler Invocation Logic**: This component is responsible for calling the appropriate user-defined handler function. It will manage the extraction of - necessary data (e.g., the message payload, connection information) and provide - it to the handler in a type-safe manner, inspired by Actix Web's extractors. + necessary data (e.g., the message payload, connection information) and + provide it to the handler in a type-safe manner, inspired by Actix Web's + extractors. - **Middleware/Interceptor Chain**: This allows for pre-processing of incoming - frames/messages before they reach the handler and post-processing of responses - (or frames generated by the handler) before they are serialized and sent. This - enables cross-cutting concerns like logging, authentication, or metrics. + frames/messages before they reach the handler and post-processing of + responses (or frames generated by the handler) before they are serialized and + sent. This enables cross-cutting concerns like logging, authentication, or + metrics. **Data Flow**: @@ -360,9 +364,10 @@ handling to be managed and customized independently. Function**. 7. **Outgoing**: If the handler produces a response message: a. The response passes through the **Middleware Chain (response path)**. b. The response - message is passed to the **Serialization Engine** to be converted into a byte - payload. c. This payload is given to the **Framing Layer** to be encapsulated - in a frame. d. The framed bytes are sent via the **Transport Layer Adapter**. + message is passed to the **Serialization Engine** to be converted into a + byte payload. c. This payload is given to the **Framing Layer** to be + encapsulated in a frame. d. The framed bytes are sent via the **Transport + Layer Adapter**. ```mermaid sequenceDiagram @@ -384,16 +389,16 @@ sequenceDiagram This layered architecture mirrors the conceptual separation found in network protocol stacks, such as the OSI or TCP/IP models.28 Each component addresses a -distinct set of problems. This modularity is fundamental to managing the overall -complexity of the library and the applications built with it. It allows +distinct set of problems. This modularity is fundamental to managing the +overall complexity of the library and the applications built with it. It allows individual components, such as the framing mechanism or the serialization engine, to be potentially customized or even replaced by users with specific needs, without requiring modifications to other parts of the system. ### 4.3. Frame Definition and Processing -To handle "arbitrary frame-based protocols," "wireframe" must provide a flexible -way to define and process frames. +To handle "arbitrary frame-based protocols," "wireframe" must provide a +flexible way to define and process frames. 
- `FrameProcessor` **(or Tokio** `Decoder`**/**`Encoder` **integration)**: The core of frame handling will revolve around a user-implementable trait, @@ -414,11 +419,11 @@ way to define and process frames. headers, length prefixes, or delimiters. "wireframe" could provide common `FrameProcessor` implementations (e.g., for - length-prefixed frames) as part of its standard library, simplifying setup for - common protocol types. The library ships with a `LengthPrefixedProcessor`. It - accepts a `LengthFormat` specifying the prefix size and byte order—for - example, `LengthFormat::u16_le()` or `LengthFormat::u32_be()`. Applications - configure it via + length-prefixed frames) as part of its standard library, simplifying setup + for common protocol types. The library ships with a + `LengthPrefixedProcessor`. It accepts a `LengthFormat` specifying the prefix + size and byte order—for example, `LengthFormat::u16_le()` or + `LengthFormat::u32_be()`. Applications configure it via `WireframeApp::frame_processor(LengthPrefixedProcessor::new(format))`. The `FrameProcessor` trait remains public, so custom implementations can be supplied when required. @@ -486,11 +491,11 @@ sequenceDiagram This separation of framing logic via traits is crucial. Different binary protocols employ vastly different methods for delimiting messages on the wire. -Some use fixed-size headers with length fields, others rely on special start/end -byte sequences, and some might have no explicit framing beyond what the -transport layer provides (e.g., UDP datagrams). A trait-based approach for frame -processing, akin to how `tokio-util::codec` operates, endows "wireframe" with -the necessary flexibility to adapt to this diversity without embedding +Some use fixed-size headers with length fields, others rely on special +start/end byte sequences, and some might have no explicit framing beyond what +the transport layer provides (e.g., UDP datagrams). A trait-based approach for +frame processing, akin to how `tokio-util::codec` operates, endows "wireframe" +with the necessary flexibility to adapt to this diversity without embedding assumptions about any single framing strategy into its core. ### 4.4. Message Serialization and Deserialization @@ -500,44 +505,45 @@ complexity that "wireframe" aims to simplify. - Primary Strategy: wire-rs with Derivable Traits: - The preferred approach is to utilize wire-rs 6 as the underlying serialization - and deserialization engine. However, this is critically dependent on wire-rs - supporting, or being extended to support, derivable Encode and Decode traits - (e.g., through #). The ability to automatically generate this logic from - struct/enum definitions is paramount. Manual serialization/deserialization - using WireReader::read_u32(), WireWriter::write_string(), etc., for every - field would not meet the complexity reduction goals. + The preferred approach is to utilize wire-rs 6 as the underlying + serialization and deserialization engine. However, this is critically + dependent on wire-rs supporting, or being extended to support, derivable + Encode and Decode traits (e.g., through #). The ability to automatically + generate this logic from struct/enum definitions is paramount. Manual + serialization/deserialization using WireReader::read_u32(), + WireWriter::write_string(), etc., for every field would not meet the + complexity reduction goals. 
If wire-rs itself does not offer derive macros, "wireframe" might need to - provide its own wrapper traits and derive macros that internally use wire-rs's - WireReader and WireWriter primitives but expose a user-friendly, derivable - interface. + provide its own wrapper traits and derive macros that internally use + wire-rs's WireReader and WireWriter primitives but expose a user-friendly, + derivable interface. - Alternative/Fallback Strategies: If wire-rs proves unsuitable (e.g., due to the difficulty of implementing derivable traits, performance characteristics not matching the needs, or - fundamental API incompatibilities), well-supported alternatives with excellent - Serde integration will be considered: + fundamental API incompatibilities), well-supported alternatives with + excellent Serde integration will be considered: - `bincode`: Offers high performance, configurability, and its own derivable `Encode`/`Decode` traits in version 2.x.10 It is a strong contender for general-purpose binary serialization. - `postcard`: Ideal for scenarios where serialized size is a primary concern, such as embedded systems, and also provides Serde-based derive macros.9 Its - simpler configuration might be an advantage.10 The choice would be guided by - the specific requirements of typical "wireframe" use cases. + simpler configuration might be an advantage.10 The choice would be guided + by the specific requirements of typical "wireframe" use cases. - Context-Aware Deserialization: - Some binary protocols require deserialization logic to vary based on preceding - data or external state. For instance, the interpretation of a field might - depend on the value of an earlier field in the same message or on the current - state of the connection. bin-proto hints at such capabilities with its context - parameter in BitDecode::decode.14 "wireframe" should aim to support this, - either through features in the chosen serialization library (e.g., wire-rs - supporting a context argument) or by layering this capability on top. This - could involve a multi-stage deserialization process: + Some binary protocols require deserialization logic to vary based on + preceding data or external state. For instance, the interpretation of a field + might depend on the value of an earlier field in the same message or on the + current state of the connection. bin-proto hints at such capabilities with + its context parameter in BitDecode::decode.14 "wireframe" should aim to + support this, either through features in the chosen serialization library + (e.g., wire-rs supporting a context argument) or by layering this capability + on top. This could involve a multi-stage deserialization process: 1. Deserialize a generic frame header or a preliminary part of the message. 2. Use information from this initial part to establish a context. @@ -546,18 +552,19 @@ complexity that "wireframe" aims to simplify. The cornerstone of "reducing source code complexity" in this domain is the automation of serialization and deserialization. Manual implementation of this logic is not only tedious but also a frequent source of subtle bugs related to -endianness, offset calculations, and data type mismatches. Libraries like Serde, -and by extension `bincode` and `postcard`, have demonstrated the immense value -of derive macros in Rust for automating these tasks.8 If "wireframe" were to -force users into manual field-by-field reading and writing, it would fail to +endianness, offset calculations, and data type mismatches. 
Libraries like +Serde, and by extension `bincode` and `postcard`, have demonstrated the immense +value of derive macros in Rust for automating these tasks.8 If "wireframe" were +to force users into manual field-by-field reading and writing, it would fail to deliver on its primary promise. Therefore, ensuring a smooth, derivable (de)serialization experience is a non-negotiable aspect of its design. ### 4.5. Routing and Dispatch Mechanism -Once a frame is received and its payload (or at least a routing-relevant header) -is deserialized into a Rust message, "wireframe" needs an efficient and clear -mechanism to dispatch this message to the appropriate user-defined handler. +Once a frame is received and its payload (or at least a routing-relevant +header) is deserialized into a Rust message, "wireframe" needs an efficient and +clear mechanism to dispatch this message to the appropriate user-defined +handler. - Route Definition: @@ -578,7 +585,7 @@ mechanism to dispatch this message to the appropriate user-defined handler. //... other routes ``` - 2. **Attribute Macro on Handler Functions**: + 1. **Attribute Macro on Handler Functions**: ```rust # @@ -599,19 +606,19 @@ mechanism to dispatch this message to the appropriate user-defined handler. - Dispatch Logic: Internally, the router will maintain a mapping (e.g., using a HashMap or a - specialized dispatch table if the identifiers are dense integers) from message - identifiers to handler functions. Upon receiving a deserialized message, the - router extracts its identifier, looks up the corresponding handler, and - invokes it. The efficiency of this lookup is critical for high-throughput - systems that process a large number of messages per second. + specialized dispatch table if the identifiers are dense integers) from + message identifiers to handler functions. Upon receiving a deserialized + message, the router extracts its identifier, looks up the corresponding + handler, and invokes it. The efficiency of this lookup is critical for + high-throughput systems that process a large number of messages per second. - Dynamic Routing and Guards: - For more complex routing scenarios where the choice of handler might depend on - multiple fields within a message or the state of the connection, "wireframe" - could incorporate a "guard" system, analogous to Actix Web's route guards. - Guards would be functions that evaluate conditions on the incoming message or - connection context before a handler is chosen. + For more complex routing scenarios where the choice of handler might depend + on multiple fields within a message or the state of the connection, + "wireframe" could incorporate a "guard" system, analogous to Actix Web's + route guards. Guards would be functions that evaluate conditions on the + incoming message or connection context before a handler is chosen. ````rust Router::new() @@ -623,9 +630,9 @@ mechanism to dispatch this message to the appropriate user-defined handler. ```` The routing mechanism essentially implements a form of pattern matching or a -state machine that operates on message identifiers. A clear, declarative API for -this routing, as opposed to manual `match` statements over message types within -a single monolithic receive loop, significantly simplifies the top-level +state machine that operates on message identifiers. 
A clear, declarative API +for this routing, as opposed to manual `match` statements over message types +within a single monolithic receive loop, significantly simplifies the top-level structure of a protocol server. This declarative approach makes it easier for developers to understand the mapping between incoming messages and their respective processing logic, thereby improving the maintainability and @@ -641,8 +648,8 @@ proposed API components. ### 5.1. Router Configuration and Service Definition Similar to Actix Web's `App` and `HttpServer` structure 4, "wireframe" will -provide a builder pattern for configuring the application and a server component -to run it. +provide a builder pattern for configuring the application and a server +component to run it. - `WireframeApp` **or** `Router` **Builder**: A central builder struct, let's call it `WireframeApp`, will serve as the primary point for configuring the @@ -706,16 +713,16 @@ The WireframeApp builder would offer methods like: - .wrap(middleware_factory): Adds middleware to the processing pipeline. - **Server Initialization**: A `WireframeServer` component (analogous to - `HttpServer`) would take the configured `WireframeApp` factory (a closure that - creates an `App` instance per worker thread), bind to a network address, and - manage incoming connections, task spawning for each connection, and the + `HttpServer`) would take the configured `WireframeApp` factory (a closure + that creates an `App` instance per worker thread), bind to a network address, + and manage incoming connections, task spawning for each connection, and the overall server lifecycle. The default number of worker tasks matches the available CPU cores, falling back to a single worker if the count cannot be determined. This would likely be built on Tokio's networking and runtime primitives. -This structural similarity to Actix Web is intentional. Developers familiar with -Actix Web's application setup will find "wireframe's" approach intuitive, +This structural similarity to Actix Web is intentional. Developers familiar +with Actix Web's application setup will find "wireframe's" approach intuitive, reducing the learning curve. Actix Web is a widely adopted framework 5, and reusing its successful patterns makes "wireframe" feel like a natural extension for a different networking domain, rather than an entirely new and unfamiliar @@ -755,8 +762,8 @@ messages and optionally producing responses. (analogous to Actix Web's `Responder` trait 4). This trait defines how the returned value is serialized and sent back to the client. When a handler yields such a value, `wireframe` encodes it using the application’s - configured serializer and passes the resulting bytes to the `FrameProcessor` - for transmission back to the peer. + configured serializer and passes the resulting bytes to the + `FrameProcessor` for transmission back to the peer. - `Result`: For explicit error handling. If `Ok(response_message)`, the message is sent. If `Err(error_value)`, the error is processed by "wireframe's" error handling mechanism (see Section @@ -765,8 +772,9 @@ messages and optionally producing responses. incoming message (e.g., broadcasting a message to other clients, or if the protocol is one-way for this message). 
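
To make these return styles concrete, the sketch below shows one handler per
style. It is illustrative only: the `wireframe::extractor::Message` and
`wireframe::error::WireframeError` paths are assumed names following the
hypothetical API described above, and the response type is presumed to
implement the responder-style trait.

```rust
use wireframe::error::WireframeError; // assumed error type (see the error-handling section below)
use wireframe::extractor::Message;    // hypothetical payload extractor

pub struct Ping;
pub struct Pong {
    pub seq: u32,
}
// Both message types would also derive the configured serialization traits.

// Returns a response message: `Pong` is serialized and framed automatically.
async fn ping(_req: Message<Ping>) -> Pong {
    Pong { seq: 1 }
}

// Fallible handler: an `Err` is routed through the error-handling machinery.
async fn checked_ping(_req: Message<Ping>) -> Result<Pong, WireframeError> {
    Ok(Pong { seq: 1 })
}

// Fire-and-forget handler: no direct response is produced for this message.
async fn record_ping(_req: Message<Ping>) {
    // e.g. update metrics or broadcast to other connections
}
```
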
-The following table outlines core "wireframe" API components and their Actix Web -analogies, illustrating how the "aesthetic sense" of Actix Web is translated: +The following table outlines core "wireframe" API components and their Actix +Web analogies, illustrating how the "aesthetic sense" of Actix Web is +translated: #### Table 1: Core `wireframe` API Components and Actix Web Analogies @@ -782,13 +790,13 @@ analogies, illustrating how the "aesthetic sense" of Actix Web is translated: | `WireframeMiddleware` (Transform) | `impl Transform` | Factory for middleware services that process messages/frames. | This mapping is valuable because it leverages existing mental models for -developers familiar with Actix Web, thereby lowering the barrier to adoption for -"wireframe". The use of `async fn` and extractor patterns directly contributes -to cleaner, more readable, and more testable handler logic. By delegating tasks -like data deserialization and access to connection metadata to extractors -(similar to how Actix Web extractors handle HTTP-specific parsing 22), -"wireframe" handler functions can remain focused on the core business logic -associated with each message type. +developers familiar with Actix Web, thereby lowering the barrier to adoption +for "wireframe". The use of `async fn` and extractor patterns directly +contributes to cleaner, more readable, and more testable handler logic. By +delegating tasks like data deserialization and access to connection metadata to +extractors (similar to how Actix Web extractors handle HTTP-specific parsing +22), "wireframe" handler functions can remain focused on the core business +logic associated with each message type. ### 5.3. Data Extraction and Type Safety @@ -797,8 +805,8 @@ mechanism for accessing data from incoming messages and connection context within handlers. - `FromMessageRequest` **Trait**: A central trait, analogous to Actix Web's - `FromRequest` 24, will be defined. Types implementing `FromMessageRequest` can - be used as handler arguments. + `FromRequest` 24, will be defined. Types implementing `FromMessageRequest` + can be used as handler arguments. ```rust use wireframe::dev::{MessageRequest, Payload}; // Hypothetical types @@ -821,9 +829,9 @@ instance of each type can exist; later registrations overwrite earlier ones. - **Built-in Extractors**: "wireframe" will provide several common extractors: - `Message`: This would be the most common extractor. It attempts to - deserialize the incoming frame's payload into the specified type `T`. `T` must - implement the relevant deserialization trait (e.g., `Decode` from `wire-rs` or - `serde::Deserialize` if using `bincode`/`postcard`). + deserialize the incoming frame's payload into the specified type `T`. `T` + must implement the relevant deserialization trait (e.g., `Decode` from + `wire-rs` or `serde::Deserialize` if using `bincode`/`postcard`). ```rust async fn handle_user_update(update: Message) -> Result<()> { @@ -833,8 +841,8 @@ instance of each type can exist; later registrations overwrite earlier ones. ``` - `ConnectionInfo`: Provides access to metadata about the current connection, - such as the peer's network address, a unique connection identifier assigned by - "wireframe", or transport-specific details. + such as the peer's network address, a unique connection identifier assigned + by "wireframe", or transport-specific details. ```rust async fn handle_connect_event(conn_info: ConnectionInfo) { @@ -1030,8 +1038,8 @@ will provide a comprehensive error handling strategy. 
would include issues like malformed payloads, type mismatches, or data integrity failures (e.g., failed CRC checks if integrated at this level). - `RoutingError`: Errors related to message dispatch, such as a message - identifier not corresponding to any registered handler, or guards preventing - access. + identifier not corresponding to any registered handler, or guards + preventing access. - `HandlerError`: Errors explicitly returned by user-defined handler functions, indicating a problem in the application's business logic. - `IoError`: Errors from the underlying network I/O operations (e.g., @@ -1075,11 +1083,11 @@ will provide a comprehensive error handling strategy. - **Custom Error Responses**: If the specific binary protocol supports sending error messages back to the client, "wireframe" should provide a mechanism for handlers or middleware to return such protocol-specific error responses. This - might involve a special `Responder`-like trait for error types, similar to how - Actix Web's `ResponseError` trait allows custom types to be converted into - HTTP error responses.4 By default, unhandled errors or errors that cannot be - translated into a protocol-specific response might result in logging the error - and closing the connection. + might involve a special `Responder`-like trait for error types, similar to + how Actix Web's `ResponseError` trait allows custom types to be converted + into HTTP error responses.4 By default, unhandled errors or errors that + cannot be translated into a protocol-specific response might result in + logging the error and closing the connection. A well-defined error strategy is paramount for building resilient network applications. It simplifies debugging by providing clear information about the @@ -1092,8 +1100,8 @@ informatively. ### 5.6. Illustrative API Usage Examples To demonstrate the intended simplicity and the Actix-Web-inspired API, concrete -examples are invaluable. They make the abstract design tangible and showcase how -"wireframe" aims to reduce source code complexity. +examples are invaluable. They make the abstract design tangible and showcase +how "wireframe" aims to reduce source code complexity. - **Example 1: Simple Echo Protocol** @@ -1120,7 +1128,7 @@ examples are invaluable. They make the abstract design tangible and showcase how } ``` -1. **Frame Processor Implementation** (Simple length-prefixed framing using +- **Frame Processor Implementation** (Simple length-prefixed framing using `tokio-util`; invalid input or oversized frames return `io::Error` from both decode and encode): @@ -1175,9 +1183,9 @@ impl> Encoder for LengthPrefixedCodec { ``` (Note: "wireframe" would abstract the direct use of `Encoder`/`Decoder` behind -its own `FrameProcessor` trait or provide helpers.) +its own `FrameProcessor` trait or provide helpers.) -1. **Server Setup and Handler**: +- **Server Setup and Handler**: ```rust // Crate: main.rs @@ -1256,7 +1264,7 @@ simplify server implementation. // Assume ClientMessage and ServerMessage have associated IDs for routing/serialization ``` - 2. **Application State**: + 1. **Application State**: ```rust // Crate: main.rs (or app_state.rs) @@ -1271,7 +1279,7 @@ simplify server implementation. pub type SharedChatRoomState = wireframe::SharedState>>; ``` - 3. **Server Setup and Handlers**: + 1. **Server Setup and Handlers**: ```rust // Crate: main.rs @@ -1342,15 +1350,16 @@ simplify server implementation. 
} ``` -This chat example hints at how shared state (SharedChatRoomState) and connection -information (ConnectionInfo) would be used, and how handlers might not always -send a direct response but could trigger other actions (like broadcasting). +This chat example hints at how shared state (SharedChatRoomState) and +connection information (ConnectionInfo) would be used, and how handlers might +not always send a direct response but could trigger other actions (like +broadcasting). These examples, even if simplified, begin to illustrate how "wireframe" aims to abstract away the low-level details of network programming, allowing developers to focus on defining messages and implementing the logic for handling them. The -clarity achieved by these abstractions is central to the goal of reducing source -code complexity. Actix Web's "Hello, world!" 4 effectively showcases its +clarity achieved by these abstractions is central to the goal of reducing +source code complexity. Actix Web's "Hello, world!" 4 effectively showcases its simplicity; "wireframe" aims for similar illustrative power with its examples. ## 6. Addressing Source Code Complexity @@ -1383,8 +1392,8 @@ choices aim to mitigate them. low-level networking details, (de)serialization code, and framing logic, making the codebase harder to understand, test, and modify. - **Error Handling**: Managing and propagating errors from various sources (I/O, - deserialization, framing, application logic) in a consistent and robust manner - can be challenging. + deserialization, framing, application logic) in a consistent and robust + manner can be challenging. **How "wireframe" Abstractions Simplify These Areas**: @@ -1399,20 +1408,20 @@ choices aim to mitigate them. - Declarative Routing: - The proposed #[message_handler(...)] attribute macro or - WireframeApp::route(...) method provides a declarative way to map message - types or identifiers to handler functions. This replaces verbose manual - dispatch logic (e.g., large match statements) with clear, high-level - definitions, making the overall message flow easier to understand and manage. + The proposed #[message_handler(…)] attribute macro or WireframeApp::route(…) + method provides a declarative way to map message types or identifiers to + handler functions. This replaces verbose manual dispatch logic (e.g., large + match statements) with clear, high-level definitions, making the overall + message flow easier to understand and manage. - Extractors for Decoupled Data Access: Inspired by Actix Web, "wireframe" extractors (Message\, ConnectionInfo, - SharedState\, and custom extractors) decouple the process of obtaining data - for a handler from the handler's core business logic. Handlers simply declare - what data they need as function parameters, and "wireframe" takes care of - providing it. This makes handlers cleaner, more focused, and easier to test in - isolation. + SharedState\, and custom extractors) decouple the process of obtaining + data for a handler from the handler's core business logic. Handlers simply + declare what data they need as function parameters, and "wireframe" takes + care of providing it. This makes handlers cleaner, more focused, and easier + to test in isolation. - Middleware for Modular Cross-Cutting Concerns: @@ -1426,9 +1435,9 @@ choices aim to mitigate them. 
The WireframeServer component will abstract away the complexities of setting up network listeners (e.g., TCP listeners via Tokio), accepting incoming - connections, and managing the lifecycle of each connection (including reading, - writing, and handling disconnections). For each connection, it will typically - spawn an asynchronous task to handle message processing, isolating + connections, and managing the lifecycle of each connection (including + reading, writing, and handling disconnections). For each connection, it will + typically spawn an asynchronous task to handle message processing, isolating connection-specific logic. This frees the developer from writing significant networking boilerplate. @@ -1441,19 +1450,19 @@ choices aim to mitigate them. message definitions or handler logic. The library itself can provide common framing implementations. -These abstractions collectively contribute to code that is not only less verbose -but also more readable, maintainable, and testable. By reducing complexity in -these common areas, "wireframe" allows developers to concentrate more on the -unique aspects of their application's protocol and business logic. This focus, -in turn, can lead to faster development cycles and a lower incidence of bugs. -While Rust's inherent safety features prevent many classes of memory-related -errors 1, logical errors in protocol implementation remain a significant -challenge. By providing well-tested, high-level abstractions for common but -error-prone tasks like framing and (de)serialization, "wireframe" aims to help -developers avoid entire categories of these logical bugs, leading to more robust -systems. Ultimately, a simpler, more intuitive library enhances developer -productivity and allows teams to build and iterate on binary protocol-based -applications more effectively. +These abstractions collectively contribute to code that is not only less +verbose but also more readable, maintainable, and testable. By reducing +complexity in these common areas, "wireframe" allows developers to concentrate +more on the unique aspects of their application's protocol and business logic. +This focus, in turn, can lead to faster development cycles and a lower +incidence of bugs. While Rust's inherent safety features prevent many classes +of memory-related errors 1, logical errors in protocol implementation remain a +significant challenge. By providing well-tested, high-level abstractions for +common but error-prone tasks like framing and (de)serialization, "wireframe" +aims to help developers avoid entire categories of these logical bugs, leading +to more robust systems. Ultimately, a simpler, more intuitive library enhances +developer productivity and allows teams to build and iterate on binary +protocol-based applications more effectively. ## 7. Future Development and Roadmap @@ -1466,16 +1475,16 @@ applicability. - **UDP Support**: Explicit, first-class support for UDP-based protocols, adapting the routing and handler model to connectionless message passing. - This would involve a different server setup (e.g., `UdpWireframeServer`) and - potentially different extractor types relevant to datagrams (e.g., source - address). + This would involve a different server setup (e.g., `UdpWireframeServer`) + and potentially different extractor types relevant to datagrams (e.g., + source address). - **Other Transports**: Exploration of support for other transport layers where frame-based binary messaging is relevant. 
While WebSockets are often text-based (JSON), they can carry binary messages; `message-io` lists Ws separately from `FramedTcp` 17, suggesting distinct handling. - **In-Process Communication**: Adapting "wireframe" concepts for efficient, - type-safe in-process message passing, perhaps using Tokio's MPSC channels as - a "transport." + type-safe in-process message passing, perhaps using Tokio's MPSC channels + as a "transport." - **Advanced Framing Options**: @@ -1527,8 +1536,8 @@ applicability. protocol description language. - Basic protocol testing or debugging. - **Enhanced Debugging Support**: Features or integrations that make it easier - to inspect message flows, connection states, and errors within a "wireframe" - application. + to inspect message flows, connection states, and errors within a + "wireframe" application. - **More Examples and Documentation**: Continuously expanding the set of examples and detailed documentation for various use cases and advanced features. @@ -1545,8 +1554,8 @@ library into a more comprehensive ecosystem for binary protocol development in Rust. Anticipating such enhancements demonstrates a commitment to the library's long-term viability and its potential to serve a growing range of user needs. Addressing common advanced requirements like schema evolution 10 or diverse -transport needs early in the roadmap can guide architectural decisions to ensure -future extensibility. +transport needs early in the roadmap can guide architectural decisions to +ensure future extensibility. ## 8. Conclusion @@ -1577,8 +1586,8 @@ By adopting these strategies, "wireframe" seeks to improve developer productivity, enhance code maintainability, and allow applications to fully benefit from Rust's performance and safety guarantees. The design prioritizes creating an intuitive and familiar experience for developers, especially those -with a background in Actix Web, while remaining flexible enough to accommodate a -wide variety of binary protocols. +with a background in Actix Web, while remaining flexible enough to accommodate +a wide variety of binary protocols. The ultimate success of "wireframe" will depend on the effective execution of its primary goal—complexity reduction—while maintaining the robustness and diff --git a/docs/rust-doctest-dry-guide.md b/docs/rust-doctest-dry-guide.md index 85915a73..1f94677c 100644 --- a/docs/rust-doctest-dry-guide.md +++ b/docs/rust-doctest-dry-guide.md @@ -2,71 +2,151 @@ ## The `rustdoc` Compilation Model: A Foundational Perspective -To master the art of writing effective documentation tests in Rust, one must first understand the foundational principles upon which the `rustdoc` tool operates. Its behavior, particularly its testing mechanism, is not an arbitrary collection of features but a direct consequence of a deliberate design philosophy. The core of this philosophy is that every doctest should validate the public API of a crate from the perspective of an external user. This single principle dictates the entire compilation model and explains both the power and the inherent limitations of doctests. +To master the art of writing effective documentation tests in Rust, one must +first understand the foundational principles upon which the `rustdoc` tool +operates. Its behavior, particularly its testing mechanism, is not an arbitrary +collection of features but a direct consequence of a deliberate design +philosophy. 
The core of this philosophy is that every doctest should validate +the public API of a crate from the perspective of an external user. This single +principle dictates the entire compilation model and explains both the power and +the inherent limitations of doctests. ### 1.1 The "Separate Crate" Paradigm -At its heart, `rustdoc` treats each documentation test not as a snippet of code running within the library's own context, but as an entirely separate, temporary crate.1 When a developer executes - -`cargo test --doc`, `rustdoc` initiates a multi-stage process for every code block found in the documentation comments 3: - -1. **Parsing and Extraction**: `rustdoc` first parses the source code of the library, resolving conditional compilation attributes (`#[cfg]`) to determine which items are active and should be documented for the current target.2 It then extracts all code examples enclosed in triple-backtick fences (\`\`\`). - -2. **Code Generation**: For each extracted code block, `rustdoc` performs a textual transformation to create a complete, self-contained Rust program. If the block does not already contain a `fn main()`, the code is wrapped within one. Crucially, `rustdoc` also injects an `extern crate ;` statement, where `` is the name of the library being documented. This makes the library under test available as an external dependency.3 - -3. **Individual Compilation**: `rustdoc` then invokes the Rust compiler (`rustc`) separately for *each* of these newly generated miniature programs. Each one is compiled and linked against the already-compiled version of the main library.2 - -4. **Execution and Verification**: Finally, if compilation succeeds, the resulting executable is run. The test is considered to have passed if the program runs to completion without panicking. The executable is then deleted.2 - -The significance of this model cannot be overstated. It effectively transforms every doctest into a true integration test.6 The test code does not have special access to the library's internals; it interacts with the library's API precisely as a downstream crate would, providing a powerful guarantee that the public-facing examples are correct and functional.1 +At its heart, `rustdoc` treats each documentation test not as a snippet of code +running within the library's own context, but as an entirely separate, +temporary crate.1 When a developer executes + +`cargo test --doc`, `rustdoc` initiates a multi-stage process for every code +block found in the documentation comments 3: + +1. **Parsing and Extraction**: `rustdoc` first parses the source code of the + library, resolving conditional compilation attributes (`#[cfg]`) to + determine which items are active and should be documented for the current + target.2 It then extracts all code examples enclosed in triple-backtick + fences (\`\`\`). + +2. **Code Generation**: For each extracted code block, `rustdoc` performs a + textual transformation to create a complete, self-contained Rust program. If + the block does not already contain a `fn main()`, the code is wrapped within + one. Crucially, `rustdoc` also injects an `extern crate ;` + statement, where `` is the name of the library being documented. + This makes the library under test available as an external dependency.3 + +3. **Individual Compilation**: `rustdoc` then invokes the Rust compiler + (`rustc`) separately for *each* of these newly generated miniature programs. + Each one is compiled and linked against the already-compiled version of the + main library.2 + +4. 
**Execution and Verification**: Finally, if compilation succeeds, the + resulting executable is run. The test is considered to have passed if the + program runs to completion without panicking. The executable is then + deleted.2 + +The significance of this model cannot be overstated. It effectively transforms +every doctest into a true integration test.6 The test code does not have +special access to the library's internals; it interacts with the library's API +precisely as a downstream crate would, providing a powerful guarantee that the +public-facing examples are correct and functional.1 ### 1.2 First-Order Consequences of the Model -This "separate crate" paradigm has two immediate and significant consequences that shape all advanced doctesting patterns. - -First, **API visibility is strictly limited to public items**. Because the doctest is compiled as an external crate, it can only access functions, structs, traits, and modules marked with the `pub` keyword. It has no access to private items or even crate-level public items (e.g., `pub(crate)`). This is not a bug or an oversight but a fundamental aspect of the design, enforcing the perspective of an external consumer.1 - -Second, the model has **profound performance implications**. The process of invoking `rustc` to compile and link a new executable for every single doctest is computationally expensive. For small projects, this overhead is negligible. However, for large libraries with hundreds of doctests, the cumulative compilation time can become a significant bottleneck in the development and CI/CD cycle, a common pain point in the Rust community.2 - -The architectural purity of the `rustdoc` model—its insistence on simulating an external user—creates a fundamental trade-off. On one hand, it provides an unparalleled guarantee that the public documentation is accurate and that the examples work as advertised, creating true "living documentation".8 On the other hand, this same purity prevents the use of doctests for verifying documentation of internal, private APIs. This forces a bifurcation of documentation strategy. Public-facing documentation can be tied directly to working, tested code. Internal documentation for maintainers, which is equally vital for a project's health, cannot be verified with the same tools. Examples for private functions must either be marked as - -`ignore`, forgoing the test guarantee, or be duplicated in separate unit tests, violating the "Don't Repeat Yourself" (DRY) principle.1 This reveals that - -`rustdoc`'s design implicitly prioritizes the integrity of the public contract over the convenience of a single, unified system for testable documentation of both public and private code. +This "separate crate" paradigm has two immediate and significant consequences +that shape all advanced doctesting patterns. + +First, **API visibility is strictly limited to public items**. Because the +doctest is compiled as an external crate, it can only access functions, +structs, traits, and modules marked with the `pub` keyword. It has no access to +private items or even crate-level public items (e.g., `pub(crate)`). This is +not a bug or an oversight but a fundamental aspect of the design, enforcing the +perspective of an external consumer.1 + +Second, the model has **profound performance implications**. The process of +invoking `rustc` to compile and link a new executable for every single doctest +is computationally expensive. For small projects, this overhead is negligible. 
+However, for large libraries with hundreds of doctests, the cumulative +compilation time can become a significant bottleneck in the development and +CI/CD cycle, a common pain point in the Rust community.2 + +The architectural purity of the `rustdoc` model—its insistence on simulating an +external user—creates a fundamental trade-off. On one hand, it provides an +unparalleled guarantee that the public documentation is accurate and that the +examples work as advertised, creating true "living documentation".8 On the +other hand, this same purity prevents the use of doctests for verifying +documentation of internal, private APIs. This forces a bifurcation of +documentation strategy. Public-facing documentation can be tied directly to +working, tested code. Internal documentation for maintainers, which is equally +vital for a project's health, cannot be verified with the same tools. Examples +for private functions must either be marked as + +`ignore`, forgoing the test guarantee, or be duplicated in separate unit tests, +violating the "Don't Repeat Yourself" (DRY) principle.1 This reveals that + +`rustdoc`'s design implicitly prioritizes the integrity of the public contract +over the convenience of a single, unified system for testable documentation of +both public and private code. ## Authoring Effective Doctests: From Basics to Best Practices -With a solid understanding of the `rustdoc` compilation model, one can move on to the practical craft of authoring doctests. An effective doctest is more than just a block of code; it is a piece of technical communication that should be clear, illustrative, and robust. +With a solid understanding of the `rustdoc` compilation model, one can move on +to the practical craft of authoring doctests. An effective doctest is more than +just a block of code; it is a piece of technical communication that should be +clear, illustrative, and robust. ### 2.1 The Anatomy of a Doctest Doctests reside within documentation comments. Rust recognizes two types: -- **Outer doc comments (**`///`**)**: These document the item that follows them (e.g., a function, struct, or module). This is the most common type.8 - -- **Inner doc comments (**`//!`**)**: These document the item they are inside of (e.g., a module or the crate itself). They are typically used at the top of `lib.rs` or `mod.rs` to provide crate- or module-level documentation.9 - -Within these comments, a code block is denoted by triple backticks (`). While rustdoc defaults to assuming the language is Rust, explicitly adding the rust language specifier (e.g., `rust\`) is considered good practice for clarity.3 +- **Outer doc comments (**`///`**)**: These document the item that follows them + (e.g., a function, struct, or module). This is the most common type.8 -A doctest is considered to "pass" if it compiles successfully and runs to completion without panicking. To verify that a function produces a specific output, developers should use the standard assertion macros, such as `assert!`, `assert_eq!`, and `assert_ne!`.3 +- **Inner doc comments (**`//!`**)**: These document the item they are inside + of (e.g., a module or the crate itself). They are typically used at the top + of `lib.rs` or `mod.rs` to provide crate- or module-level documentation.9 + Within these comments, a code block is denoted by triple backticks (`). 
While rustdoc defaults to assuming the language is Rust, explicitly add the
`rust` language specifier for clarity.[^3]

A doctest is considered to "pass" if it compiles successfully and runs to
completion without panicking. To verify that a function produces a specific
output, developers should use the standard assertion macros, such as
`assert!`, `assert_eq!`, and `assert_ne!`.3

### 2.2 The Philosophy of a Good Example

The purpose of a documentation example extends beyond merely demonstrating
syntax. A reader can typically be expected to understand the mechanics of
calling a function or instantiating a struct. A truly valuable example
illustrates *why* and in *what context* an item should be used.10 It should
tell a small story or solve a miniature problem that illuminates the item's
purpose. For instance, an example for `String::clone()` should not just show
`hello.clone();`, but should demonstrate a scenario where ownership rules
necessitate creating a copy.10

To achieve this, examples must be clear and concise. Any code that is not
directly relevant to the point being made—such as complex setup, boilerplate,
or unrelated logic—should be hidden to avoid distracting the reader.3

### 2.3 Ergonomic Error Handling: Taming the `?` Operator

One of the most common ergonomic hurdles in writing doctests involves handling
functions that return a `Result`. The question mark (`?`) operator is the
idiomatic way to propagate errors in Rust, but it presents a challenge for
doctests. The implicit `fn main()` wrapper generated by `rustdoc` has a return
type of `()`, while the `?` operator can only be used in a function that
returns a `Result` or `Option`. This mismatch leads to a compilation error.3

Using `.unwrap()` or `.expect()` in examples is strongly discouraged. It is
considered an anti-pattern because users often copy example code verbatim, and
encouraging panicking on errors is contrary to robust application design.10
Instead, two canonical solutions exist.

Solution 1: The Explicit main Function -The most transparent and recommended approach is to manually write a main function within the doctest that returns a Result. This leverages the Termination trait, which is implemented for Result. The surrounding boilerplate can then be hidden from the rendered documentation. +The most transparent and recommended approach is to manually write a main +function within the doctest that returns a Result. This leverages the +Termination trait, which is implemented for Result. The surrounding boilerplate +can then be hidden from the rendered documentation. Rust @@ -85,11 +165,14 @@ Rust /// ``` ``` -In this pattern, the reader only sees the core, fallible code, while the test itself is a complete, well-behaved program.10 +In this pattern, the reader only sees the core, fallible code, while the test +itself is a complete, well-behaved program.10 Solution 2: The Implicit Result-Returning main -rustdoc provides a lesser-known but more concise shorthand for this exact scenario. If a code block ends with the literal token (()), rustdoc will automatically wrap the code in a main function that returns a Result. +rustdoc provides a lesser-known but more concise shorthand for this exact +scenario. If a code block ends with the literal token (()), rustdoc will +automatically wrap the code in a main function that returns a Result. Rust @@ -103,61 +186,133 @@ Rust /// ``` ``` -This is functionally equivalent to the explicit `main` but requires less boilerplate. However, it is critical that the `(())` be written as a single, contiguous sequence of characters, as `rustdoc`'s detection mechanism is purely textual and will not recognize `( () )`.3 +This is functionally equivalent to the explicit `main` but requires less +boilerplate. However, it is critical that the `(())` be written as a single, +contiguous sequence of characters, as `rustdoc`'s detection mechanism is purely +textual and will not recognize `( () )`.3 ### 2.4 The Power of Hidden Lines (`#`): Creating Clean Examples -The mechanism that makes clean, focused examples possible is the "hidden line" syntax. Any line in a doctest code block that begins with a `#` character (optionally preceded by whitespace) will be compiled and executed as part of the test, but it will be completely omitted from the final HTML documentation rendered for the user.3 - -This feature is essential for bridging the gap between what makes a good, human-readable example and what constitutes a complete, compilable program. Its primary use cases include: - -1. **Hiding** `main` **Wrappers**: As demonstrated in the error-handling examples, the entire `fn main() -> Result<...> {... }` and `Ok(())` scaffolding can be hidden, presenting the user with only the relevant code.10 - -2. **Hiding Setup Code**: If an example requires some preliminary setup—like creating a temporary file, defining a helper struct for the test, or initializing a server—this logic can be hidden to keep the example focused on the API item being documented.3 - -3. **Hiding** `use` **Statements**: While often useful to show which types are involved, `use` statements can sometimes be hidden to de-clutter very simple examples. - -The existence of features like hidden lines and the `(())` shorthand reveals a core tension in `rustdoc`'s design. 
The compilation model is rigid: every test must be a valid, standalone program.2 However, the ideal documentation example is often just a small, illustrative snippet that is not a valid program on its own.10 These ergonomic features are pragmatic "patches" designed to resolve this conflict. They allow the developer to inject the necessary boilerplate to satisfy the compiler without burdening the human reader with irrelevant details. Understanding them as clever workarounds, rather than as first-class language features, helps explain their sometimes quirky, text-based behavior. +The mechanism that makes clean, focused examples possible is the "hidden line" +syntax. Any line in a doctest code block that begins with a `#` character +(optionally preceded by whitespace) will be compiled and executed as part of +the test, but it will be completely omitted from the final HTML documentation +rendered for the user.3 + +This feature is essential for bridging the gap between what makes a good, +human-readable example and what constitutes a complete, compilable program. Its +primary use cases include: + +1. **Hiding** `main` **Wrappers**: As demonstrated in the error-handling + examples, the entire `fn main() -> Result<...> {... }` and `Ok(())` + scaffolding can be hidden, presenting the user with only the relevant code.10 + +2. **Hiding Setup Code**: If an example requires some preliminary setup—like + creating a temporary file, defining a helper struct for the test, or + initializing a server—this logic can be hidden to keep the example focused + on the API item being documented.3 + +3. **Hiding** `use` **Statements**: While often useful to show which types are + involved, `use` statements can sometimes be hidden to de-clutter very simple + examples. + +The existence of features like hidden lines and the `(())` shorthand reveals a +core tension in `rustdoc`'s design. The compilation model is rigid: every test +must be a valid, standalone program.2 However, the ideal documentation example +is often just a small, illustrative snippet that is not a valid program on its +own.10 These ergonomic features are pragmatic "patches" designed to resolve +this conflict. They allow the developer to inject the necessary boilerplate to +satisfy the compiler without burdening the human reader with irrelevant +details. Understanding them as clever workarounds, rather than as first-class +language features, helps explain their sometimes quirky, text-based behavior. ## Advanced Doctest Control and Attributes -Beyond basic pass/fail checks, `rustdoc` provides a suite of attributes to control doctest behavior with fine-grained precision. These attributes, placed in the header of a code block (e.g., \`\`\`\`ignore\`), allow developers to handle expected failures, non-executable examples, and other complex scenarios. +Beyond basic pass/fail checks, `rustdoc` provides a suite of attributes to +control doctest behavior with fine-grained precision. These attributes, placed +in the header of a code block (e.g., \`\`\`\`ignore\`), allow developers to +handle expected failures, non-executable examples, and other complex scenarios. ### 3.1 A Comparative Analysis of Doctest Attributes -Choosing the correct attribute is critical for communicating the intent of an example and ensuring the test suite provides meaningful feedback. The following table provides a comparative reference for the most common doctest attributes. 
+Choosing the correct attribute is critical for communicating the intent of an +example and ensuring the test suite provides meaningful feedback. The following +table provides a comparative reference for the most common doctest attributes. - - -

Attribute

Action

Test Outcome

Primary Use Case & Caveats

ignore

Skips both compilation and execution.

ignored

Use Case: For pseudo-code, examples known to be broken, or to temporarily disable a test. Caveat: Provides no guarantee that the code is even syntactically correct. Generally discouraged in favor of more specific attributes.3

should_panic

Compiles and runs the code. The test passes if the code panics.

ok on panic, failed if it does not panic.

Use Case: Demonstrating functions that are designed to panic on invalid input (e.g., indexing out of bounds).

compile_fail

Attempts to compile the code. The test passes if compilation fails.

ok on compilation failure, failed if it compiles successfully.

Use Case: Illustrating language rules, such as the borrow checker or type system constraints. Caveat: Highly brittle. A future Rust version might make the code valid, causing the test to unexpectedly fail.4

no_run

Compiles the code but does not execute it.

ok if compilation succeeds.

Use Case: Essential for examples with undesirable side effects in a test environment, such as network requests, filesystem I/O, or launching a GUI. Guarantees the example is valid Rust code without running it.5

edition2021

Compiles the code using the specified Rust edition's rules.

ok on success.

Use Case: Demonstrating syntax or idioms that are specific to a particular Rust edition (e.g., edition2018, edition2021).4

+| Attribute | Action | Test Outcome | Primary Use Case & Caveats | +| ------------ | ------------------------------------------------------------------- | -------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| ignore | Skips both compilation and execution. | ignored | Use Case: For pseudo-code, examples known to be broken, or to temporarily disable a test. Caveat: Provides no guarantee that the code is even syntactically correct. Generally discouraged in favor of more specific attributes.3 | +| should_panic | Compiles and runs the code. The test passes if the code panics. | ok on panic, failed if it does not panic. | Use Case: Demonstrating functions that are designed to panic on invalid input (e.g., indexing out of bounds). | +| compile_fail | Attempts to compile the code. The test passes if compilation fails. | ok on compilation failure, failed if it compiles successfully. | Use Case: Illustrating language rules, such as the borrow checker or type system constraints. Caveat: Highly brittle. A future Rust version might make the code valid, causing the test to unexpectedly fail.4 | +| no_run | Compiles the code but does not execute it. | ok if compilation succeeds. | Use Case: Essential for examples with undesirable side effects in a test environment, such as network requests, filesystem I/O, or launching a GUI. Guarantees the example is valid Rust code without running it.5 | +| edition2021 | Compiles the code using the specified Rust edition's rules. | ok on success. | Use Case: Demonstrating syntax or idioms that are specific to a particular Rust edition (e.g., edition2018, edition2021).4 | ### 3.2 Detailed Attribute Breakdown -- `ignore`: This is the bluntest instrument in the toolbox. It tells `rustdoc` to do nothing with the code block. It is almost always better to either fix the example using hidden lines or use a more descriptive attribute like `no_run`.3 Its main legitimate use is for non-Rust code blocks or illustrative pseudo-code. - -- `should_panic`: This attribute inverts the normal test condition. It is used to document and verify behavior that intentionally results in a panic. The test will fail if the code completes successfully or panics for a reason other than the one expected (if a specific panic message is asserted).3 - -- `compile_fail`: This is a powerful tool for creating educational examples that demonstrate what *not* to do. It is frequently used in documentation about ownership, borrowing, and lifetimes to show code that the compiler will correctly reject. However, developers must be aware of its fragility. An evolution in the Rust language or compiler could make previously invalid code compile, which would break the test.4 - -- `no_run`: This attribute strikes a crucial balance between test verification and practicality. For an example that demonstrates how to download a file from the internet, you want to ensure the example code is syntactically correct and uses the API properly, but you do not want your CI server to actually perform a network request every time tests are run. `no_run` provides this guarantee by compiling the code without executing it.5 - -- `edition20xx`: This attribute allows an example to be tested against a specific Rust edition. 
This is important for crates that support multiple editions and need to demonstrate edition-specific features or migration paths.4 +- `ignore`: This is the bluntest instrument in the toolbox. It tells `rustdoc` + to do nothing with the code block. It is almost always better to either fix + the example using hidden lines or use a more descriptive attribute like + `no_run`.3 Its main legitimate use is for non-Rust code blocks or + illustrative pseudo-code. + +- `should_panic`: This attribute inverts the normal test condition. It is used + to document and verify behavior that intentionally results in a panic. The + test will fail if the code completes successfully or panics for a reason + other than the one expected (if a specific panic message is asserted).3 + +- `compile_fail`: This is a powerful tool for creating educational examples + that demonstrate what *not* to do. It is frequently used in documentation + about ownership, borrowing, and lifetimes to show code that the compiler will + correctly reject. However, developers must be aware of its fragility. An + evolution in the Rust language or compiler could make previously invalid code + compile, which would break the test.4 + +- `no_run`: This attribute strikes a crucial balance between test verification + and practicality. For an example that demonstrates how to download a file + from the internet, you want to ensure the example code is syntactically + correct and uses the API properly, but you do not want your CI server to + actually perform a network request every time tests are run. `no_run` + provides this guarantee by compiling the code without executing it.5 + +- `edition20xx`: This attribute allows an example to be tested against a + specific Rust edition. This is important for crates that support multiple + editions and need to demonstrate edition-specific features or migration + paths.4 ## The DRY Principle in Doctests: Managing Shared and Complex Logic -The "Don't Repeat Yourself" (DRY) principle is a cornerstone of software engineering, and it applies to test code as much as it does to production code. As a project grows, it is common for multiple doctests to require the same complex setup logic. Copying and pasting this setup into every doctest using hidden lines is tedious, error-prone, and a clear violation of the DRY principle. +The "Don't Repeat Yourself" (DRY) principle is a cornerstone of software +engineering, and it applies to test code as much as it does to production code. +As a project grows, it is common for multiple doctests to require the same +complex setup logic. Copying and pasting this setup into every doctest using +hidden lines is tedious, error-prone, and a clear violation of the DRY +principle. ### 4.1 The Problem of Shared Setup -Consider a library for interacting with a database. Nearly every doctest might need to perform the same initial steps: spin up a temporary database instance, connect to it, and seed it with some initial data. Repeating this multi-line setup in every single example is inefficient and makes maintenance difficult. A change to the setup process would require updating dozens of doctests. +Consider a library for interacting with a database. Nearly every doctest might +need to perform the same initial steps: spin up a temporary database instance, +connect to it, and seed it with some initial data. Repeating this multi-line +setup in every single example is inefficient and makes maintenance difficult. A +change to the setup process would require updating dozens of doctests. 
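
As a sketch of the problem, the hidden preamble below (using hypothetical
`Database`, `insert_user`, and `Error` items) is exactly the kind of block
that ends up copied into every doctest in such a crate:

```rust
/// Inserts a user and returns the new row's identifier.
///
/// ```
/// # // Hidden setup that every doctest in the crate duplicates verbatim:
/// # let db = my_crate::Database::connect_in_memory()?;
/// # db.seed_default_fixtures()?;
/// let id = my_crate::insert_user(&db, "alice")?;
/// assert!(id > 0);
/// # Ok::<(), my_crate::Error>(())
/// ```
```

The pattern described next hoists that preamble into a single helper that is
compiled only while doctests run.
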
### 4.2 The `#[cfg(doctest)]` Pattern for Shared Helpers -The canonical solution to this problem involves using a special configuration flag provided by `rustdoc`: `doctest`. A common mistake is to try to place shared test logic in a block guarded by `#[cfg(test)]`. This will not work, because `rustdoc` does not enable the `test` configuration flag during its compilation process; `#[cfg(test)]` is reserved for unit and integration tests run directly by `cargo test`.12 +The canonical solution to this problem involves using a special configuration +flag provided by `rustdoc`: `doctest`. A common mistake is to try to place +shared test logic in a block guarded by `#[cfg(test)]`. This will not work, +because `rustdoc` does not enable the `test` configuration flag during its +compilation process; `#[cfg(test)]` is reserved for unit and integration tests +run directly by `cargo test`.12 -Instead, `rustdoc` sets its own unique `doctest` flag. By guarding a module or function with `#[cfg(doctest)]`, developers can write helper code that is compiled and available *only* when `cargo test --doc` is running. This code is excluded from normal production builds and standard unit test runs, preventing any pollution of the final binary or the public API. +Instead, `rustdoc` sets its own unique `doctest` flag. By guarding a module or +function with `#[cfg(doctest)]`, developers can write helper code that is +compiled and available *only* when `cargo test --doc` is running. This code is +excluded from normal production builds and standard unit test runs, preventing +any pollution of the final binary or the public API. -The typical implementation pattern is to create a private helper module within your library: +The typical implementation pattern is to create a private helper module within +your library: Rust @@ -205,23 +360,45 @@ mod doctest_helpers { pub struct TestContext { /*... */ } ``` -This pattern is the most effective way to achieve DRY doctests. It centralizes setup logic, improves maintainability, and cleanly separates testing concerns from production code.12 +This pattern is the most effective way to achieve DRY doctests. It centralizes +setup logic, improves maintainability, and cleanly separates testing concerns +from production code.12 ### 4.3 Advanced DRY: Programmatic Doctest Generation -For highly specialized use cases, such as authoring procedural macros, the DRY principle can be taken a step further. A procedural macro generates code, and it is often desirable to test that the generated code itself contains valid and working documentation. Writing these doctests manually can be exceptionally repetitive. +For highly specialized use cases, such as authoring procedural macros, the DRY +principle can be taken a step further. A procedural macro generates code, and +it is often desirable to test that the generated code itself contains valid and +working documentation. Writing these doctests manually can be exceptionally +repetitive. -Crates like `quote-doctest` address this by allowing developers to programmatically construct a doctest from a `TokenStream`. This enables the generation of doctests from the same source of truth that generates the code they are intended to test, representing the ultimate application of the DRY principle in this domain.14 +Crates like `quote-doctest` address this by allowing developers to +programmatically construct a doctest from a `TokenStream`. 
This enables the +generation of doctests from the same source of truth that generates the code +they are intended to test, representing the ultimate application of the DRY +principle in this domain.14 ## Conditional Compilation Strategies for Doctests -Conditional compilation is a powerful feature of Rust, but it introduces significant complexity when interacting with `rustdoc`. A common source of confusion stems from the failure to distinguish between two separate goals: (1) ensuring platform-specific or feature-gated items *appear* in the documentation, and (2) ensuring doctests for those items *execute* only under the correct conditions. These two goals are achieved with different mechanisms that operate at different stages of the `rustdoc` pipeline. +Conditional compilation is a powerful feature of Rust, but it introduces +significant complexity when interacting with `rustdoc`. A common source of +confusion stems from the failure to distinguish between two separate goals: (1) +ensuring platform-specific or feature-gated items *appear* in the +documentation, and (2) ensuring doctests for those items *execute* only under +the correct conditions. These two goals are achieved with different mechanisms +that operate at different stages of the `rustdoc` pipeline. ### 5.1 Documenting Conditionally Compiled Items: `#[cfg(doc)]` -**The Goal**: To ensure that an item, such as a `struct UnixSocket` that is only available on Unix-like systems, is included in the documentation regardless of which platform `rustdoc` is run on (e.g., when generating docs on a Windows machine). +**The Goal**: To ensure that an item, such as a `struct UnixSocket` that is +only available on Unix-like systems, is included in the documentation +regardless of which platform `rustdoc` is run on (e.g., when generating docs on +a Windows machine). -**The Mechanism**: `rustdoc` always invokes the compiler with the `--cfg doc` flag set. By adding `doc` to an item's `#[cfg]` attribute, a developer can instruct the compiler to include that item specifically for documentation builds.15 +**The Mechanism**: `rustdoc` always invokes the compiler with the `--cfg doc` +flag set. By adding `doc` to an item's `#[cfg]` attribute, a developer can +instruct the compiler to include that item specifically for documentation +builds.15 **The Pattern**: @@ -233,21 +410,33 @@ Rust pub struct UnixSocket; ``` -This `any` directive ensures the struct is compiled either when the target OS is `unix` OR when `rustdoc` is running. This correctly makes the item visible in the generated HTML. However, it is crucial to understand that this **does not** make the doctest for `UnixSocket` pass on non-Unix platforms. +This `any` directive ensures the struct is compiled either when the target OS +is `unix` OR when `rustdoc` is running. This correctly makes the item visible +in the generated HTML. However, it is crucial to understand that this **does +not** make the doctest for `UnixSocket` pass on non-Unix platforms. -This distinction highlights the "cfg duality." The `#[cfg(doc)]` attribute controls the *table of contents* of the documentation; it determines which items are parsed and rendered. The actual compilation of a doctest, however, happens in a separate, later stage. In that stage, the `doc` cfg is *not* passed to the compiler.15 The compiler only sees the host +This distinction highlights the "cfg duality." The `#[cfg(doc)]` attribute +controls the *table of contents* of the documentation; it determines which +items are parsed and rendered. 
The actual compilation of a doctest, however, +happens in a separate, later stage. In that stage, the `doc` cfg is *not* +passed to the compiler.15 The compiler only sees the host -`cfg` (e.g., `target_os = "windows"`), so the `UnixSocket` type is not available, and the test fails to compile. `#[cfg(doc)]` affects what is documented, not what is testable. +`cfg` (e.g., `target_os = "windows"`), so the `UnixSocket` type is not +available, and the test fails to compile. `#[cfg(doc)]` affects what is +documented, not what is testable. ### 5.2 Executing Doctests Conditionally: Feature Flags -**The Goal**: To ensure a doctest that relies on an optional crate feature (e.g., a feature named `"serde"`) is only executed when that feature is enabled via `cargo test --doc --features "serde"`. +**The Goal**: To ensure a doctest that relies on an optional crate feature +(e.g., a feature named `"serde"`) is only executed when that feature is enabled +via `cargo test --doc --features "serde"`. Two primary patterns exist to achieve this. Pattern 1: #\[cfg\] Inside the Code Block -This pattern involves placing a #\[cfg\] attribute directly on the code within the doctest itself. +This pattern involves placing a #\[cfg\] attribute directly on the code within +the doctest itself. Rust @@ -264,11 +453,16 @@ Rust /// ``` ``` -When the `"serde"` feature is disabled, the code inside the block is compiled out. The doctest becomes an empty program that runs, does nothing, and is reported as `ok`. While simple to write, this can be misleading, as the test suite reports a "pass" for a test that was effectively skipped.16 +When the `"serde"` feature is disabled, the code inside the block is compiled +out. The doctest becomes an empty program that runs, does nothing, and is +reported as `ok`. While simple to write, this can be misleading, as the test +suite reports a "pass" for a test that was effectively skipped.16 Pattern 2: cfg_attr to Conditionally ignore the Test -A more explicit and accurate pattern uses the cfg_attr attribute to conditionally add the ignore flag to the doctest's header. This is typically done with inner doc comments (//!). +A more explicit and accurate pattern uses the cfg_attr attribute to +conditionally add the ignore flag to the doctest's header. This is typically +done with inner doc comments (//!). Rust @@ -282,11 +476,19 @@ Rust //! ``` ``` -With this pattern, if the `"serde"` feature is disabled, the test is marked as `ignored` in the test results, which more accurately reflects its status. If the feature is enabled, the `ignore` is omitted, and the test runs normally. This approach provides clearer feedback but is significantly more verbose and less ergonomic, especially when applied to outer (`///`) doc comments, as the `cfg_attr` must be applied to every single line of the comment.16 +With this pattern, if the `"serde"` feature is disabled, the test is marked as +`ignored` in the test results, which more accurately reflects its status. If +the feature is enabled, the `ignore` is omitted, and the test runs normally. +This approach provides clearer feedback but is significantly more verbose and +less ergonomic, especially when applied to outer (`///`) doc comments, as the +`cfg_attr` must be applied to every single line of the comment.16 ### 5.3 Displaying Feature Requirements in Docs: `#[doc(cfg(...))]` -To complement conditional execution, Rust provides a way to visually flag feature-gated items in the generated documentation. 
This is achieved with the `#[doc(cfg(...))]` attribute, which requires enabling the `#![feature(doc_cfg)]` feature gate at the crate root. +To complement conditional execution, Rust provides a way to visually flag +feature-gated items in the generated documentation. This is achieved with the +`#[doc(cfg(...))]` attribute, which requires enabling the +`#![feature(doc_cfg)]` feature gate at the crate root. Rust @@ -300,96 +502,192 @@ Rust pub fn function_requiring_serde() { /*... */ } ``` -This will render a banner in the documentation for `function_requiring_serde` that reads, "This is only available when the `serde` feature is enabled." This attribute is purely for documentation generation and is independent of, but often used alongside, the conditional test execution patterns.16 +This will render a banner in the documentation for `function_requiring_serde` +that reads, "This is only available when the `serde` feature is enabled." This +attribute is purely for documentation generation and is independent of, but +often used alongside, the conditional test execution patterns.16 ## Doctests in the Wider Project Ecosystem -Doctests are a powerful tool, but they are just one component of a comprehensive testing strategy. Understanding their specific role and limitations is key to maintaining a healthy and well-tested Rust project. +Doctests are a powerful tool, but they are just one component of a +comprehensive testing strategy. Understanding their specific role and +limitations is key to maintaining a healthy and well-tested Rust project. ### 6.1 Choosing the Right Test Type: A Decision Framework -A robust testing strategy leverages three distinct types of tests, each with its own purpose: +A robust testing strategy leverages three distinct types of tests, each with +its own purpose: -- **Doctests**: These are ideal for simple, "happy-path" examples of your public API. Their dual purpose is to provide clear documentation for users and to act as a basic sanity check that the examples remain correct over time. They should be easy to read and focused on illustrating a single concept.6 +- **Doctests**: These are ideal for simple, "happy-path" examples of your + public API. Their dual purpose is to provide clear documentation for users + and to act as a basic sanity check that the examples remain correct over + time. They should be easy to read and focused on illustrating a single + concept.6 -- **Unit Tests (**`#[test]` **in** `src/`**)**: These are for testing the nitty-gritty details of your implementation. They are placed in submodules within your source files (often `mod tests {... }`) and are compiled only with `#[cfg(test)]`. Because they live inside the crate, they can access private functions and modules, making them perfect for testing internal logic, edge cases, and specific error conditions.1 +- **Unit Tests (**`#[test]` **in** `src/`**)**: These are for testing the + nitty-gritty details of your implementation. They are placed in submodules + within your source files (often `mod tests {... }`) and are compiled only + with `#[cfg(test)]`. Because they live inside the crate, they can access + private functions and modules, making them perfect for testing internal + logic, edge cases, and specific error conditions.1 -- **Integration Tests (in the** `tests/` **directory)**: These test the crate from a completely external perspective, much like doctests. However, they are not constrained by the need to be readable documentation. 
They are suited for testing complex user workflows, interactions between multiple API entry points, and the overall behavior of the library as a black box.6 +- **Integration Tests (in the** `tests/` **directory)**: These test the crate + from a completely external perspective, much like doctests. However, they are + not constrained by the need to be readable documentation. They are suited for + testing complex user workflows, interactions between multiple API entry + points, and the overall behavior of the library as a black box.6 ### 6.2 The Unsolved Problem: Testing Private APIs -As established, the `rustdoc` compilation model makes testing private items in doctests impossible by design.1 The community has developed several workarounds, but each comes with significant trade-offs 1: +As established, the `rustdoc` compilation model makes testing private items in +doctests impossible by design.1 The community has developed several +workarounds, but each comes with significant trade-offs 1: -1. `ignore` **the test**: This allows the example to exist in the documentation but sacrifices the guarantee of correctness. It is the least desirable option. +1. `ignore` **the test**: This allows the example to exist in the documentation + but sacrifices the guarantee of correctness. It is the least desirable + option. -2. **Make items** `pub` **in a** `detail` **or** `internal` **module**: This compromises API design by polluting the public namespace and exposing implementation details that should be encapsulated. It can lead to misuse by users and makes future refactoring difficult. +2. **Make items** `pub` **in a** `detail` **or** `internal` **module**: This + compromises API design by polluting the public namespace and exposing + implementation details that should be encapsulated. It can lead to misuse by + users and makes future refactoring difficult. -3. **Use** `cfg_attr` **to conditionally make items public**: This involves adding an attribute like `#[cfg_attr(feature = "doctest-private", visibility::make(pub))]` to every private item you wish to test. While robust, it is highly invasive and adds significant boilerplate throughout the codebase. +3. **Use** `cfg_attr` **to conditionally make items public**: This involves + adding an attribute like + `#[cfg_attr(feature = "doctest-private", visibility::make(pub))]` to every + private item you wish to test. While robust, it is highly invasive and adds + significant boilerplate throughout the codebase. -The expert recommendation is to acknowledge this limitation and not fight the tool. Do not compromise a clean API design for the sake of doctests. Use doctests for their intended purpose—verifying public API examples—and rely on dedicated unit tests for verifying private logic. The lack of a clean solution for test-verifying private documentation is a known and accepted trade-off within the Rust ecosystem. +The expert recommendation is to acknowledge this limitation and not fight the +tool. Do not compromise a clean API design for the sake of doctests. Use +doctests for their intended purpose—verifying public API examples—and rely on +dedicated unit tests for verifying private logic. The lack of a clean solution +for test-verifying private documentation is a known and accepted trade-off +within the Rust ecosystem. ### 6.3 Practical Challenges and Solutions -Beyond architectural considerations, developers face several practical, real-world challenges when working with doctests. 
- -- **The** `README.md` **Dilemma**: A project's `README.md` file serves multiple audiences. It needs to render cleanly on platforms like GitHub and [crates.io](http://crates.io), where hidden lines (`#...`) look like ugly, commented-out code. At the same time, it should contain testable examples, which often require hidden lines for setup.11 The best practice is to avoid maintaining the README manually. Instead, use a tool like - - `cargo-readme`. This tool generates a `README.md` file from your crate-level documentation (in `lib.rs`), automatically stripping out the hidden lines. This provides a single source of truth that is both fully testable via `cargo test --doc` and produces a clean, professional README for external sites.11 - -- **Developer Ergonomics in IDEs**: Writing code inside documentation comments can be a subpar experience. IDEs and tools like `rust-analyzer` often provide limited or no autocompletion, real-time error checking, or refactoring support for code within a comment block.18 A common and effective workflow to mitigate this is to first write and debug the example as a standard - - `#[test]` function in a temporary file or test module. This allows the developer to leverage the full power of the IDE. Once the code is working correctly, it can be copied into the doc comment, and the necessary formatting (`///`, `#`, etc.) can be applied.18 +Beyond architectural considerations, developers face several practical, +real-world challenges when working with doctests. + +- **The** `README.md` **Dilemma**: A project's `README.md` file serves multiple + audiences. It needs to render cleanly on platforms like GitHub and + [crates.io](http://crates.io), where hidden lines (`#...`) look like ugly, + commented-out code. At the same time, it should contain testable examples, + which often require hidden lines for setup.11 The best practice is to avoid + maintaining the README manually. Instead, use a tool like + + `cargo-readme`. This tool generates a `README.md` file from your crate-level + documentation (in `lib.rs`), automatically stripping out the hidden lines. + This provides a single source of truth that is both fully testable via + `cargo test --doc` and produces a clean, professional README for external + sites.11 + +- **Developer Ergonomics in IDEs**: Writing code inside documentation comments + can be a subpar experience. IDEs and tools like `rust-analyzer` often provide + limited or no autocompletion, real-time error checking, or refactoring + support for code within a comment block.18 A common and effective workflow to + mitigate this is to first write and debug the example as a standard + + `#[test]` function in a temporary file or test module. This allows the + developer to leverage the full power of the IDE. Once the code is working + correctly, it can be copied into the doc comment, and the necessary + formatting (`///`, `#`, etc.) can be applied.18 ## Conclusion and Recommendations -Rust's documentation testing framework is a uniquely powerful feature that promotes the creation of high-quality, reliable "living documentation." By deeply understanding its underlying compilation model and the patterns that have evolved to manage its constraints, developers can write doctests that are effective, ergonomic, and maintainable. To summarize the key principles for mastering doctests: +Rust's documentation testing framework is a uniquely powerful feature that +promotes the creation of high-quality, reliable "living documentation." 
By +deeply understanding its underlying compilation model and the patterns that +have evolved to manage its constraints, developers can write doctests that are +effective, ergonomic, and maintainable. To summarize the key principles for +mastering doctests: -1. **Embrace the Model**: Always remember that a doctest is an external integration test compiled in a separate crate. This mental model explains nearly all of its behavior. +1. **Embrace the Model**: Always remember that a doctest is an external + integration test compiled in a separate crate. This mental model explains + nearly all of its behavior. -2. **Prioritize Clarity**: Write examples that teach the *why*, not just the *how*. Use hidden lines (`#`) ruthlessly to eliminate boilerplate and focus the reader's attention on the relevant code. +2. **Prioritize Clarity**: Write examples that teach the *why*, not just the + *how*. Use hidden lines (`#`) ruthlessly to eliminate boilerplate and focus + the reader's attention on the relevant code. -3. **Handle Errors Gracefully**: For examples of fallible functions, always use the `fn main() -> Result<...>` pattern, hiding the boilerplate. Avoid `.unwrap()` to promote robust error-handling practices. +3. **Handle Errors Gracefully**: For examples of fallible functions, always use + the `fn main() -> Result<...>` pattern, hiding the boilerplate. Avoid + `.unwrap()` to promote robust error-handling practices. -4. **Be DRY**: When setup logic is shared across multiple examples, centralize it in a helper module guarded by `#[cfg(doctest)]` to avoid repetition. +4. **Be DRY**: When setup logic is shared across multiple examples, centralize + it in a helper module guarded by `#[cfg(doctest)]` to avoid repetition. -5. **Master** `cfg`: Use `#[cfg(doc)]` to control an item's *visibility* in the final documentation. Use `#[cfg(feature = "...")]` or other `cfg` flags *inside* the test block to control its conditional *execution*. Do not confuse the two. +5. **Master** `cfg`: Use `#[cfg(doc)]` to control an item's *visibility* in the + final documentation. Use `#[cfg(feature = "...")]` or other `cfg` flags + *inside* the test block to control its conditional *execution*. Do not + confuse the two. -6. **Know When to Stop**: A doctest is not the right tool for every job. When an example becomes overly complex, requires testing intricate error paths, or needs to access private implementation details, move it to a dedicated unit or integration test. Do not compromise your API design or test clarity by forcing a square peg into a round hole. Use the right tool for the job. +6. **Know When to Stop**: A doctest is not the right tool for every job. When + an example becomes overly complex, requires testing intricate error paths, + or needs to access private implementation details, move it to a dedicated + unit or integration test. Do not compromise your API design or test clarity + by forcing a square peg into a round hole. Use the right tool for the job. -#### **Works cited** +### **Works cited** - 1. rust - How can I write documentation tests for private modules ..., accessed on July 15, 2025, + 1. rust - How can I write documentation tests for private modules …, + accessed on July 15, 2025, + - 2. Rustdoc doctests need fixing - Swatinem, accessed on July 15, 2025, + 1. Rustdoc doctests need fixing - Swatinem, accessed on July 15, 2025, + - 3. Documentation tests - The rustdoc book - Rust Documentation, accessed on July 15, 2025, + 1. 
Documentation tests - The rustdoc book - Rust Documentation, accessed on + July 15, 2025, - 4. Documentation tests - - GitHub Pages, accessed on July 15, 2025, + 1. Documentation tests - - GitHub Pages, accessed on July 15, 2025, + - 5. Documentation tests - - MIT, accessed on July 15, 2025, + 1. Documentation tests - - MIT, accessed on July 15, 2025, + - 6. How to organize your Rust tests - LogRocket Blog, accessed on July 15, 2025, + 1. How to organize your Rust tests - LogRocket Blog, accessed on July 15, + 2025, - 7. Best way to organise tests in Rust - Reddit, accessed on July 15, 2025, + 1. Best way to organise tests in Rust - Reddit, accessed on July 15, 2025, + - 8. Writing Rust Documentation - DEV Community, accessed on July 15, 2025, + 1. Writing Rust Documentation - DEV Community, accessed on July 15, 2025, + - 9. The rustdoc book, accessed on July 15, 2025, + 1. The rustdoc book, accessed on July 15, 2025, + -10. Documentation - Rust API Guidelines, accessed on July 15, 2025, + 1. Documentation - Rust API Guidelines, accessed on July 15, 2025, + -11. Best practice for doc testing README - help - The Rust Programming Language Forum, accessed on July 15, 2025, + 1. Best practice for doc testing README - help - The Rust Programming Language + Forum, accessed on July 15, 2025, + -12. Compile_fail doc test ignored in cfg(test) - help - The Rust Programming Language Forum, accessed on July 15, 2025, + 1. Compile_fail doc test ignored in cfg(test) - help - The Rust Programming + Language Forum, accessed on July 15, 2025, + -13. Test setup for doctests - help - The Rust Programming Language Forum, accessed on July 15, 2025, + 1. Test setup for doctests - help - The Rust Programming Language Forum, + accessed on July 15, 2025, + -14. quote_doctest - Rust - [Docs.rs](http://Docs.rs), accessed on July 15, 2025, + 1. quote_doctest - Rust - [Docs.rs](http://Docs.rs), accessed on July 15, + 2025, -15. Advanced features - The rustdoc book - Rust Documentation, accessed on July 15, 2025, + 1. Advanced features - The rustdoc book - Rust Documentation, accessed on July + 15, 2025, -16. rust - How can I conditionally execute a module-level doctest based ..., accessed on July 15, 2025, + 1. rust - How can I conditionally execute a module-level doctest based …, + accessed on July 15, 2025, + -17. How would one achieve conditional compilation with Rust projects that have doctests?, accessed on July 15, 2025, + 1. How would one achieve conditional compilation with Rust projects that have + doctests?, accessed on July 15, 2025, + -18. How do you write your doc tests? : r/rust - Reddit, accessed on July 15, 2025, + 1. How do you write your doc tests? : r/rust - Reddit, accessed on July 15, + 2025, + diff --git a/docs/rust-testing-with-rstest-fixtures.md b/docs/rust-testing-with-rstest-fixtures.md index 4cf47893..d83678e7 100644 --- a/docs/rust-testing-with-rstest-fixtures.md +++ b/docs/rust-testing-with-rstest-fixtures.md @@ -8,8 +8,8 @@ crate () emerges as a powerful solution, offering a sophisticated fixture-based and parameterized testing framework that significantly simplifies these tasks through the use of procedural macros. This document provides a comprehensive exploration of `rstest`, from fundamental -concepts to advanced techniques, enabling Rust developers to write cleaner, more -expressive, and robust tests. +concepts to advanced techniques, enabling Rust developers to write cleaner, +more expressive, and robust tests. ## I. 
Introduction to `rstest` and Test Fixtures in Rust @@ -25,8 +25,8 @@ Managing this setup and teardown logic within each test function can lead to considerable boilerplate code and repetition, making tests harder to read and maintain. -Fixtures address this by encapsulating these dependencies and their setup logic. -For instance, if multiple tests require a logged-in user object or a +Fixtures address this by encapsulating these dependencies and their setup +logic. For instance, if multiple tests require a logged-in user object or a pre-populated database, instead of creating these in every test, a fixture can provide them. This approach allows developers to focus on the specific logic being tested rather than the auxiliary utilities. @@ -35,9 +35,9 @@ Fundamentally, the use of fixtures promotes a crucial separation of concerns: the *preparation* of the test environment is decoupled from the *execution* of the test logic. Traditional testing approaches often intermingle setup, action, and assertion logic within a single test function. This can result in lengthy -and convoluted tests that are difficult to comprehend at a glance. By extracting -the setup logic into reusable components (fixtures), the actual test functions -become shorter, more focused, and thus more readable and maintainable. +and convoluted tests that are difficult to comprehend at a glance. By +extracting the setup logic into reusable components (fixtures), the actual test +functions become shorter, more focused, and thus more readable and maintainable. ### B. Introducing `rstest`: Simplifying Fixture-Based Testing in Rust @@ -45,14 +45,15 @@ become shorter, more focused, and thus more readable and maintainable. by leveraging the concept of fixtures and providing powerful parameterization capabilities. It is available on `crates.io` and its source code is hosted at , distinguishing it from other software -projects that may share the same name but operate in different ecosystems (e.g., -a JavaScript/TypeScript framework mentioned). +projects that may share the same name but operate in different ecosystems +(e.g., a JavaScript/TypeScript framework mentioned). The `rstest` crate utilizes Rust's procedural macros, such as `#[rstest]` and `#[fixture]`, to achieve its declarative and expressive syntax. These macros allow developers to define fixtures and inject them into test functions simply -by listing them as arguments. This compile-time mechanism analyzes test function -signatures and fixture definitions to wire up dependencies automatically. +by listing them as arguments. This compile-time mechanism analyzes test +function signatures and fixture definitions to wire up dependencies +automatically. This reliance on procedural macros is a key architectural decision. It enables `rstest` to offer a remarkably clean and intuitive syntax at the test-writing @@ -72,8 +73,8 @@ quality and developer productivity: - **Readability:** By injecting dependencies as function arguments, `rstest` makes the requirements of a test explicit and easy to understand. The test function's signature clearly documents what it needs to run. This allows - developers to "focus on the important stuff in your tests" by abstracting away - the setup details. + developers to "focus on the important stuff in your tests" by abstracting + away the setup details. - **Reusability:** Fixtures defined with `rstest` are reusable components. 
A single fixture, such as one setting up a database connection or creating a complex data structure, can be used across multiple tests, eliminating @@ -84,12 +85,13 @@ quality and developer productivity: variations from a single function. The declarative nature of `rstest` is central to these benefits. Instead of -imperatively writing setup code within each test (the *how*), developers declare -the fixtures they need (the *what*) in the test function's signature. This -shifts the cognitive load from managing setup details in every test to designing -a system of well-defined, reusable fixtures. Over time, particularly in larger -projects, this can lead to a more robust, maintainable, and understandable test -suite as common setup patterns are centralized and managed effectively. +imperatively writing setup code within each test (the *how*), developers +declare the fixtures they need (the *what*) in the test function's signature. +This shifts the cognitive load from managing setup details in every test to +designing a system of well-defined, reusable fixtures. Over time, particularly +in larger projects, this can lead to a more robust, maintainable, and +understandable test suite as common setup patterns are centralized and managed +effectively. ## II. Getting Started with `rstest` @@ -147,21 +149,23 @@ pub fn answer_to_life() -> u32 { } ``` -In this example, `answer_to_life` is a public function marked with `#[fixture]`. -It takes no arguments and returns a `u32` value of 42. The `#[fixture]` macro -effectively registers this function with the `rstest` system, transforming it -into a component that `rstest` can discover and utilize. The return type of the -fixture function (here, `u32`) defines the type of the data that will be -injected into tests requesting this fixture. Fixtures can return any valid Rust -type, from simple primitives to complex structs or trait objects. Fixtures can -also depend on other fixtures, allowing for compositional setup. +In this example, `answer_to_life` is a public function marked with +`#[fixture]`. It takes no arguments and returns a `u32` value of 42. The +`#[fixture]` macro effectively registers this function with the `rstest` +system, transforming it into a component that `rstest` can discover and +utilize. The return type of the fixture function (here, `u32`) defines the type +of the data that will be injected into tests requesting this fixture. Fixtures +can return any valid Rust type, from simple primitives to complex structs or +trait objects. Fixtures can also depend on other fixtures, allowing for +compositional setup. ### C. Injecting Fixtures into Tests with `#[rstest]` Once a fixture is defined, it can be used in a test function. Test functions that utilize `rstest` features, including fixture injection, must be annotated -with the `#[rstest]` attribute. The fixture is then injected by simply declaring -an argument in the test function with the same name as the fixture function. +with the `#[rstest]` attribute. The fixture is then injected by simply +declaring an argument in the test function with the same name as the fixture +function. Here’s how to use the `answer_to_life` fixture in a test: @@ -199,11 +203,11 @@ leveraging `rstest` effectively. ### A. Simple Fixture Examples -The flexibility of `rstest` fixtures allows them to provide a wide array of data -types and perform various setup tasks. Fixtures are not limited by the kind of -data they can return; any valid Rust type is permissible. 
This enables fixtures -to encapsulate diverse setup logic, providing ready-to-use dependencies for -tests. +The flexibility of `rstest` fixtures allows them to provide a wide array of +data types and perform various setup tasks. Fixtures are not limited by the +kind of data they can return; any valid Rust type is permissible. This enables +fixtures to encapsulate diverse setup logic, providing ready-to-use +dependencies for tests. Here are a few examples illustrating different kinds of fixtures: @@ -297,11 +301,11 @@ implementation. ### B. Understanding fixture scope and lifetime (default behaviour) By default, `rstest` calls a fixture function anew for each test that uses it. -This means if five different tests inject the same fixture, the fixture function -will be executed five times, and each test will receive a fresh, independent -instance of the fixture's result. This behaviour is crucial for test isolation. -The `rstest` macro effectively desugars a test like `fn the_test(injected: i32)` -into something conceptually similar to +This means if five different tests inject the same fixture, the fixture +function will be executed five times, and each test will receive a fresh, +independent instance of the fixture's result. This behaviour is crucial for +test isolation. The `rstest` macro effectively desugars a test like +`fn the_test(injected: i32)` into something conceptually similar to `#[test] fn the_test() { let injected = injected_fixture_func(); /*... */ }` within the test body, implying a new call each time. @@ -318,9 +322,9 @@ concern or when the cost of fixture creation is prohibitive. ## IV. Parameterized Tests with `rstest` -`rstest` excels at creating parameterized tests, allowing a single test logic to -be executed with multiple sets of input data. This is achieved primarily through -the `#[case]` and `#[values]` attributes. +`rstest` excels at creating parameterized tests, allowing a single test logic +to be executed with multiple sets of input data. This is achieved primarily +through the `#[case]` and `#[values]` attributes. ### A. Table-Driven Tests with `#[case]`: Defining Specific Scenarios @@ -354,21 +358,22 @@ fn test_fibonacci(#[case] input: u32, #[case] expected: u32) { } ``` -For each `#[case(input_val, expected_val)]` line, `rstest` generates a separate, -independent test. If one case fails, the others are still executed and reported -individually by the test runner. These generated tests are often named by -appending `::case_N` to the original test function name (e.g., +For each `#[case(input_val, expected_val)]` line, `rstest` generates a +separate, independent test. If one case fails, the others are still executed +and reported individually by the test runner. These generated tests are often +named by appending `::case_N` to the original test function name (e.g., `test_fibonacci::case_1`, `test_fibonacci::case_2`, etc.), which aids in -identifying specific failing cases. This individual reporting mechanism provides -clearer feedback than a loop within a single test, where the first failure might -obscure subsequent ones. +identifying specific failing cases. This individual reporting mechanism +provides clearer feedback than a loop within a single test, where the first +failure might obscure subsequent ones. ### B. Combinatorial Testing with `#[values]`: Generating Test Matrices The `#[values(...)]` attribute is used on test function arguments to generate tests for every possible combination of the provided values (the Cartesian -product). 
This is particularly useful for testing interactions between different -parameters or ensuring comprehensive coverage across various input states. +product). This is particularly useful for testing interactions between +different parameters or ensuring comprehensive coverage across various input +states. Consider testing a state machine's transition logic based on current state and an incoming event: @@ -424,12 +429,13 @@ representative values or using `#[case]` for more targeted scenarios. Fixtures can be seamlessly combined with parameterized arguments (`#[case]` or `#[values]`) in the same test function. This powerful combination allows for testing different aspects of a component (varied by parameters) within a -consistent environment or context (provided by fixtures). The "Complete Example" -in the `rstest` documentation hints at this synergy, stating that all features -can be used together, mixing fixture variables, fixed cases, and value lists. +consistent environment or context (provided by fixtures). The "Complete +Example" in the `rstest` documentation hints at this synergy, stating that all +features can be used together, mixing fixture variables, fixed cases, and value +lists. -For example, a test might use a fixture to obtain a database connection and then -use `#[case]` arguments to test operations with different user IDs: +For example, a test might use a fixture to obtain a database connection and +then use `#[case]` arguments to test operations with different user IDs: ```rust use rstest::*; @@ -496,8 +502,8 @@ In this example, `derived_value` depends on `base_value`, and `configured_item` depends on `derived_value`. When `test_composed_fixture` requests `configured_item`, `rstest` first calls `base_value()`, then `derived_value(10)`, and finally `configured_item(20, "item_".to_string())`. -This hierarchical dependency resolution mirrors good software design principles, -promoting modularity and maintainability in test setups. +This hierarchical dependency resolution mirrors good software design +principles, promoting modularity and maintainability in test setups. ### B. Controlling Fixture Initialization: `#[once]` for Shared State @@ -555,10 +561,11 @@ consideration for resource management. ### C. Renaming Fixtures for Clarity: The `#[from]` Attribute -Sometimes a fixture's function name might be long and descriptive, but a shorter -or different name is preferred for the argument in a test or another fixture. -The `#[from(original_fixture_name)]` attribute on an argument allows renaming. -This is particularly useful when destructuring the result of a fixture. +Sometimes a fixture's function name might be long and descriptive, but a +shorter or different name is preferred for the argument in a test or another +fixture. The `#[from(original_fixture_name)]` attribute on an argument allows +renaming. This is particularly useful when destructuring the result of a +fixture. ```rust use rstest::*; @@ -580,10 +587,10 @@ fn test_with_destructured_fixture(#[from(complex_user_data_fixture)] (name, _, _ ``` The `#[from]` attribute decouples the fixture's actual function name from the -variable name used within the consuming function. As shown, if a fixture returns -a tuple or struct and the test only cares about some parts or wants to use more -idiomatic names for destructured elements, `#[from]` is essential to link the -argument pattern to the correct source fixture. +variable name used within the consuming function. 
As shown, if a fixture +returns a tuple or struct and the test only cares about some parts or wants to +use more idiomatic names for destructured elements, `#[from]` is essential to +link the argument pattern to the correct source fixture. ### D. Partial Fixture Injection & Default Arguments @@ -678,14 +685,15 @@ allowing the direct use of string representations for types that support it. However, if the `FromStr` conversion fails (e.g., due to a malformed string), the error will typically occur at test runtime, potentially leading to a panic. For types with complex parsing logic or many failure modes, it might be clearer -to perform the conversion explicitly within a fixture or at the beginning of the -test to handle errors more gracefully or provide more specific diagnostic +to perform the conversion explicitly within a fixture or at the beginning of +the test to handle errors more gracefully or provide more specific diagnostic messages. ## VI. Asynchronous Testing with `rstest` -`rstest` provides robust support for testing asynchronous Rust code, integrating -with common async runtimes and offering syntactic sugar for managing futures. +`rstest` provides robust support for testing asynchronous Rust code, +integrating with common async runtimes and offering syntactic sugar for +managing futures. ### A. Defining Asynchronous Fixtures (`async fn`) @@ -714,10 +722,10 @@ default async runtime support, but the fixture logic can be any async code. Test functions themselves can also be `async fn`. `rstest` will manage the execution of these async tests. By default, `rstest` often uses -`#[async_std::test]` to annotate the generated async test functions. However, it -is designed to be largely runtime-agnostic and can be integrated with other -popular async runtimes like Tokio or Actix. This is typically done by adding the -runtime's specific test attribute (e.g., `#[tokio::test]` or +`#[async_std::test]` to annotate the generated async test functions. However, +it is designed to be largely runtime-agnostic and can be integrated with other +popular async runtimes like Tokio or Actix. This is typically done by adding +the runtime's specific test attribute (e.g., `#[tokio::test]` or `#[actix_rt::test]`) alongside `#[rstest]`. ```rust @@ -743,10 +751,10 @@ The order of procedural macro attributes can sometimes matter. While `rstest` documentation and examples show flexibility (e.g., `#[rstest]` then `#[tokio::test]`, or vice versa), users should ensure their chosen async runtime's test macro is correctly placed to provide the necessary execution -context for the async test body and any async fixtures. `rstest` itself does not -bundle a runtime; it integrates with existing ones. The "Inject Test Attribute" -feature mentioned in `rstest` documentation may offer more explicit control over -which test runner attribute is applied. +context for the async test body and any async fixtures. `rstest` itself does +not bundle a runtime; it integrates with existing ones. The "Inject Test +Attribute" feature mentioned in `rstest` documentation may offer more explicit +control over which test runner attribute is applied. ### C. Managing Futures: `#[future]` and `#[awt]` Attributes @@ -901,27 +909,28 @@ fn test_read_from_temp_file(temp_file_with_content: PathBuf) { By encapsulating temporary resource management within fixtures, tests become cleaner and less prone to errors related to resource setup or cleanup. 
The RAII -(Resource Acquisition Is Initialization) pattern, common in Rust and exemplified -by `tempfile::TempDir` (which cleans up the directory when dropped), works -effectively with `rstest`'s fixture model. When a regular (non-`#[once]`) -fixture returns a `TempDir` object, or an object that owns it, the resource is -typically cleaned up after the test finishes, as the fixture's return value goes -out of scope. This localizes resource management logic to the fixture, keeping -the test focused on its assertions. For temporary resources, regular (per-test) -fixtures are generally preferred over `#[once]` fixtures to ensure proper -cleanup, as `#[once]` fixtures are never dropped. +(Resource Acquisition Is Initialization) pattern, common in Rust and +exemplified by `tempfile::TempDir` (which cleans up the directory when +dropped), works effectively with `rstest`'s fixture model. When a regular +(non-`#[once]`) fixture returns a `TempDir` object, or an object that owns it, +the resource is typically cleaned up after the test finishes, as the fixture's +return value goes out of scope. This localizes resource management logic to the +fixture, keeping the test focused on its assertions. For temporary resources, +regular (per-test) fixtures are generally preferred over `#[once]` fixtures to +ensure proper cleanup, as `#[once]` fixtures are never dropped. ### B. Mocking External Services (e.g., Database Connections, HTTP APIs) For unit and integration tests that depend on external services like databases or HTTP APIs, mocking is a crucial technique. Mocks allow tests to run in -isolation, without relying on real external systems, making them faster and more -reliable. `rstest` fixtures are an ideal place to encapsulate the setup and -configuration of mock objects. Crates like `mockall` can be used to create -mocks, or they can be hand-rolled. The fixture would then provide the configured -mock instance to the test. General testing advice also strongly recommends -mocking external dependencies. The `rstest` documentation itself shows examples -with fakes or mocks like `empty_repository` and `string_processor`. +isolation, without relying on real external systems, making them faster and +more reliable. `rstest` fixtures are an ideal place to encapsulate the setup +and configuration of mock objects. Crates like `mockall` can be used to create +mocks, or they can be hand-rolled. The fixture would then provide the +configured mock instance to the test. General testing advice also strongly +recommends mocking external dependencies. The `rstest` documentation itself +shows examples with fakes or mocks like `empty_repository` and +`string_processor`. A conceptual example using a hypothetical mocking library: @@ -1002,11 +1011,11 @@ readable and maintainable. ### C. Using `#[files(...)]` for Test Input from Filesystem Paths -For tests that need to process data from multiple input files, `rstest` provides -the `#[files("glob_pattern")]` attribute. This attribute can be used on a test -function argument to inject file paths that match a given glob pattern. The -argument type is typically `PathBuf`. It can also inject file contents directly -as `&str` or `&[u8]` by specifying a mode, e.g., +For tests that need to process data from multiple input files, `rstest` +provides the `#[files("glob_pattern")]` attribute. This attribute can be used +on a test function argument to inject file paths that match a given glob +pattern. The argument type is typically `PathBuf`. 
It can also inject file +contents directly as `&str` or `&[u8]` by specifying a mode, e.g., `#[files("glob_pattern", mode = "str")]`. Additional attributes like `#[base_dir = "…"]` can specify a base directory for the glob, and `#[exclude("regex")]` can filter out paths matching a regular expression. @@ -1046,15 +1055,15 @@ significantly increase binary size if used with large data files. ## VIII. Reusability and Organization As test suites grow, maintaining reusability and clear organization becomes -paramount. `rstest` and its ecosystem provide tools and encourage practices that -support these goals. +paramount. `rstest` and its ecosystem provide tools and encourage practices +that support these goals. ### A. Leveraging `rstest_reuse` for Test Templates While `rstest`'s `#[case]` attribute is excellent for parameterization, repeating the same set of `#[case]` attributes across multiple test functions -can lead to duplication. The `rstest_reuse` crate addresses this by allowing the -definition of reusable test templates. +can lead to duplication. The `rstest_reuse` crate addresses this by allowing +the definition of reusable test templates. `rstest_reuse` introduces two main attributes: @@ -1121,21 +1130,21 @@ for maintainability and scalability. `src/lib.rs` or `src/fixtures.rs` under `#[cfg(test)]`) and `use` them in integration tests. - **Naming Conventions:** Use clear, descriptive names for fixtures that - indicate what they provide or set up. Test function names should clearly state - what behaviour they are verifying. + indicate what they provide or set up. Test function names should clearly + state what behaviour they are verifying. - **Fixture Responsibility:** Aim for fixtures with a single, well-defined responsibility. Complex setups can be achieved by composing smaller, focused fixtures. - **Scope Management (**`#[once]` **vs. Regular):** Make conscious decisions about fixture lifetimes. Use `#[once]` sparingly, only for genuinely - expensive, read-only, and safely static resources, being mindful of its "never - dropped" nature. Prefer regular (per-test) fixtures for test isolation and - proper resource management. + expensive, read-only, and safely static resources, being mindful of its + "never dropped" nature. Prefer regular (per-test) fixtures for test isolation + and proper resource management. - **Modularity:** Group related fixtures and tests into modules. This improves navigation and understanding of the test suite. - **Readability:** Utilize features like `#[from]` for renaming and - `#[default]` / `#[with]` for configurable fixtures to enhance the clarity of - both fixture definitions and their usage in tests. + `#[default]` / `#[with]` for configurable fixtures to enhance the clarity + of both fixture definitions and their usage in tests. General testing advice, such as keeping tests small and focused and mocking external dependencies, also applies and is well-supported by `rstest`'s design. @@ -1160,21 +1169,21 @@ become verbose for scenarios involving shared setup or parameterization. `#[test]` functions with slight variations. `rstest`'s `#[case]` and `#[values]` attributes provide a much cleaner and more powerful solution. - **Readability and Boilerplate:** `rstest` generally leads to less boilerplate - code and more readable tests because dependencies are explicit in the function - signature, and parameterization is handled declaratively. 
+ code and more readable tests because dependencies are explicit in the + function signature, and parameterization is handled declaratively. The following table summarizes key differences: **Table 1:** `rstest` **vs. Standard Rust** `#[test]` **for Fixture Management and Parameterization** -| Feature | Standard #[test] Approach | rstest Approach | -| ------------------------------------------------------------- | ------------------------------------------------------------- | -------------------------------------------------------------------------------- | -| Fixture Injection | Manual calls to setup functions within each test. | Fixture name as argument in #[rstest] function; fixture defined with #[fixture]. | -| Parameterized Tests (Specific Cases) | Loop inside one test, or multiple distinct #[test] functions. | #[case(...)] attributes on #[rstest] function. | -| Parameterized Tests (Value Combinations) | Nested loops inside one test, or complex manual generation. | #[values(...)] attributes on arguments of #[rstest] function. | -| Async Fixture Setup | Manual async block and .await calls inside test. | async fn fixtures, with #[future] and #[awt] for ergonomic `.await`ing. | -| Reusing Parameter Sets | Manual duplication of cases or custom helper macros. | rstest_reuse crate with #[template] and #[apply] attributes. | +| Feature | Standard #[test] Approach | rstest Approach | +| ---------------------------------------- | ------------------------------------------------------------- | -------------------------------------------------------------------------------- | +| Fixture Injection | Manual calls to setup functions within each test. | Fixture name as argument in #[rstest] function; fixture defined with #[fixture]. | +| Parameterized Tests (Specific Cases) | Loop inside one test, or multiple distinct #[test] functions. | #[case(…)] attributes on #[rstest] function. | +| Parameterized Tests (Value Combinations) | Nested loops inside one test, or complex manual generation. | #[values(…)] attributes on arguments of #[rstest] function. | +| Async Fixture Setup | Manual async block and .await calls inside test. | async fn fixtures, with #[future] and #[awt] for ergonomic `.await`ing. | +| Reusing Parameter Sets | Manual duplication of cases or custom helper macros. | rstest_reuse crate with #[template] and #[apply] attributes. | This comparison highlights how `rstest`'s attribute-based, declarative approach streamlines common testing patterns, reducing manual effort and improving the @@ -1221,8 +1230,8 @@ mind: `#[files]`) are defined and discovered at compile time. This means the structure of the tests is validated by the Rust compiler, which can catch structural errors (like type mismatches in `#[case]` arguments or references - to non-existent fixtures) earlier than runtime test discovery mechanisms. This - compile-time validation is a strength, offering a degree of static + to non-existent fixtures) earlier than runtime test discovery mechanisms. + This compile-time validation is a strength, offering a degree of static verification for the test suite itself. However, it also means that dynamically generating test cases at runtime based on external factors (not known at compile time) is not directly supported by `rstest`'s core model. @@ -1246,8 +1255,8 @@ For developers who rely on logging frameworks like `log` or `tracing` for debugging tests, the `rstest-log` crate can simplify integration. 
Test runners often capture standard output and error streams, and logging frameworks require proper initialization. `rstest-log` likely provides attributes or wrappers to -ensure that logging is correctly set up before each `rstest`-generated test case -runs, making it easier to get consistent log output from tests. +ensure that logging is correctly set up before each `rstest`-generated test +case runs, making it easier to get consistent log output from tests. ### B. `logtest`: Verifying Log Output @@ -1275,17 +1284,17 @@ are logged under specific conditions. ### C. `test-with`: Conditional Testing with `rstest` -The `test-with` crate allows for conditional execution of tests based on various -runtime conditions, such as the presence of environment variables, the existence -of specific files or folders, or the availability of network services. It can be -used with `rstest`. For example, an `rstest` test could be further annotated -with `test-with` attributes to ensure it only runs if a particular database -configuration file exists or if a dependent web service is reachable. The order -of macros is important: `rstest` should typically generate the test cases first, -and then `test-with` can apply its conditional execution logic to these -generated tests. This allows `rstest` to focus on test structure and data -provision, while `test-with` provides an orthogonal layer of control over test -execution conditions. +The `test-with` crate allows for conditional execution of tests based on +various runtime conditions, such as the presence of environment variables, the +existence of specific files or folders, or the availability of network +services. It can be used with `rstest`. For example, an `rstest` test could be +further annotated with `test-with` attributes to ensure it only runs if a +particular database configuration file exists or if a dependent web service is +reachable. The order of macros is important: `rstest` should typically generate +the test cases first, and then `test-with` can apply its conditional execution +logic to these generated tests. This allows `rstest` to focus on test structure +and data provision, while `test-with` provides an orthogonal layer of control +over test execution conditions. ## XI. Conclusion and Further Resources @@ -1299,9 +1308,9 @@ equips developers with the tools to build comprehensive and maintainable test suites. While considerations such as compile-time impact and the learning curve for -advanced features exist, the benefits in terms of cleaner, more robust, and more -expressive tests often outweigh these for projects with non-trivial testing -requirements. +advanced features exist, the benefits in terms of cleaner, more robust, and +more expressive tests often outweigh these for projects with non-trivial +testing requirements. ### A. Recap of `rstest`'s Power for Fixture-Based Testing @@ -1339,17 +1348,17 @@ provided by `rstest`: | ---------------------------- | -------------------------------------------------------------------------------------------- | | #[rstest] | Marks a function as an rstest test; enables fixture injection and parameterization. | | #[fixture] | Defines a function that provides a test fixture (setup data or services). | -| #[case(...)] | Defines a single parameterized test case with specific input values. | -| #[values(...)] | Defines a list of values for an argument, generating tests for each value or combination. | +| #[case(…)] | Defines a single parameterized test case with specific input values. 
| +| #[values(…)] | Defines a list of values for an argument, generating tests for each value or combination. | | #[once] | Marks a fixture to be initialized only once and shared (as a static reference) across tests. | | #[future] | Simplifies async argument types by removing impl Future boilerplate. | | #[awt] | (Function or argument level) Automatically .awaits future arguments in async tests. | | #[from(original_name)] | Allows renaming an injected fixture argument in the test function. | -| #[with(...)] | Overrides default arguments of a fixture for a specific test. | -| #[default(...)] | Provides default values for arguments within a fixture function. | -| #[timeout(...)] | Sets a timeout for an asynchronous test. | -| #[files("glob_pattern",...)] | Injects file paths (or contents, with mode=) matching a glob pattern as test arguments. | - -By mastering `rstest`, Rust developers can significantly elevate the quality and -efficiency of their testing practices, leading to more reliable and maintainable -software. +| #[with(…)] | Overrides default arguments of a fixture for a specific test. | +| #[default(…)] | Provides default values for arguments within a fixture function. | +| #[timeout(…)] | Sets a timeout for an asynchronous test. | +| #[files("glob_pattern",…)] | Injects file paths (or contents, with mode=) matching a glob pattern as test arguments. | + +By mastering `rstest`, Rust developers can significantly elevate the quality +and efficiency of their testing practices, leading to more reliable and +maintainable software. diff --git a/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md b/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md index e0866dc2..8bad69b2 100644 --- a/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md +++ b/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md @@ -16,12 +16,13 @@ network programming, `wireframe` must evolve. This document outlines the road to `wireframe` 1.0. It is a vision built upon three pillars of new functionality: **asynchronous duplex messaging**, -**multi-packet streaming responses**, and **transparent message fragmentation**. -More than just a list of features, this is a statement of capability maturity. -It details a cohesive strategy for hardening the library with production-grade -resilience, first-class observability, and a rigorous quality assurance process. -The result will be a framework that is not only powerful and flexible but also -exceptionally robust, debuggable, and a pleasure to use. +**multi-packet streaming responses**, and **transparent message +fragmentation**. More than just a list of features, this is a statement of +capability maturity. It details a cohesive strategy for hardening the library +with production-grade resilience, first-class observability, and a rigorous +quality assurance process. The result will be a framework that is not only +powerful and flexible but also exceptionally robust, debuggable, and a pleasure +to use. ## II. The Core Feature Set: A Duplex Frame Highway @@ -33,17 +34,17 @@ server-initiated, multi-part, or larger than a single network packet. ### A. Asynchronous, Bidirectional Messaging The strict "one frame in, one frame out" model is the primary limitation to be -addressed. The 1.0 architecture will embrace a fully asynchronous, bidirectional -communication model where any participant—client or server—can originate frames -at any time. 
This is achieved through two tightly integrated features: -server-initiated pushes and streaming responses. +addressed. The 1.0 architecture will embrace a fully asynchronous, +bidirectional communication model where any participant—client or server—can +originate frames at any time. This is achieved through two tightly integrated +features: server-initiated pushes and streaming responses. #### The Unified `Response` Enum and Declarative Handler Model To provide a clean, unified API, the handler return type will evolve. A more -ergonomic, declarative approach replaces the previous imperative model. Handlers -will return an enhanced `Response` enum, giving developers clear and efficient -ways to express their intent. +ergonomic, declarative approach replaces the previous imperative model. +Handlers will return an enhanced `Response` enum, giving developers clear and +efficient ways to express their intent. Rust @@ -97,9 +98,9 @@ eliminating the need for complex locking and making the system easier to reason about. The core of this actor is a `tokio::select!` loop that multiplexes frames from -multiple sources onto the outbound socket. To ensure that time-sensitive control -messages (like heartbeats or session notifications) are not delayed by large -data transfers, this loop will be explicitly prioritized using +multiple sources onto the outbound socket. To ensure that time-sensitive +control messages (like heartbeats or session notifications) are not delayed by +large data transfers, this loop will be explicitly prioritized using `select!(biased;)`. The polling order will be: @@ -129,8 +130,8 @@ fragments. The protocol-specific rules—how to parse a fragment header, determine the payload length, and identify the final fragment—are provided by the user via a -`FragmentStrategy` trait. This keeps the core library completely agnostic of any -specific wire format. +`FragmentStrategy` trait. This keeps the core library completely agnostic of +any specific wire format. The `FragmentStrategy` trait will be enhanced to be more expressive and context-aware: @@ -164,8 +165,8 @@ pub trait FragmentStrategy: 'static + Send + Sync { #### Robust Re-assembly for Modern Protocols A critical enhancement to the initial design is support for multiplexing. The -re-assembly logic will not assume that fragments arrive sequentially. By using a -concurrent hash map (e.g., `dashmap::DashMap`) keyed by `msg_id`, the +re-assembly logic will not assume that fragments arrive sequentially. By using +a concurrent hash map (e.g., `dashmap::DashMap`) keyed by `msg_id`, the `FragmentAdapter` can re-assemble multiple logical messages concurrently on the same connection. This is essential for supporting modern protocols like HTTP/2 or gRPC. @@ -174,8 +175,8 @@ or gRPC. A feature-complete library is not necessarily a mature one. The road to `wireframe` 1.0 is paved with a deep commitment to the cross-cutting concerns -that define production-ready software: hardening, ergonomics, observability, and -quality assurance. +that define production-ready software: hardening, ergonomics, observability, +and quality assurance. ### A. Hardening and Resilience @@ -246,8 +247,8 @@ expressive error-handling strategy. ### B. First-Class Developer Ergonomics -A powerful library that is difficult to use will not be used. `wireframe` 1.0 is -committed to an API that is intuitive, flexible, and idiomatic. +A powerful library that is difficult to use will not be used. 
`wireframe` 1.0 +is committed to an API that is intuitive, flexible, and idiomatic. - **Fluent Builder API:** All configuration will be done through a fluent builder pattern (`WireframeApp::new().with_feature_x().with_config_y()`), @@ -265,15 +266,16 @@ committed to an API that is intuitive, flexible, and idiomatic. ### C. Pervasive Observability -A production system is a black box without good instrumentation. `wireframe` 1.0 -will treat observability as a first-class feature, integrating the `tracing` -crate throughout its core. +A production system is a black box without good instrumentation. `wireframe` +1.0 will treat observability as a first-class feature, integrating the +`tracing` crate throughout its core. - **Structured Logging and Tracing:** The entire lifecycle of a connection, request, and response will be wrapped in `tracing::span!`s. 83 This provides - invaluable, context-aware diagnostic information that correlates events across - asynchronous boundaries. Key events—such as frame receipt, back-pressure - application, and connection termination—will be logged with structured data. + invaluable, context-aware diagnostic information that correlates events + across asynchronous boundaries. Key events—such as frame receipt, + back-pressure application, and connection termination—will be logged with + structured data. - **Metrics and OpenTelemetry:** The structured data from `tracing` can be consumed by subscribers that export it as metrics. 88 This enables the @@ -324,8 +326,8 @@ traditional unit and integration tests. - **Stateful Property Testing:** For validating complex, stateful protocol conversations (like fragmentation and re-assembly), `proptest` will be used. - This technique generates thousands of random-but-valid sequences of operations - to uncover edge cases that manual tests would miss. + This technique generates thousands of random-but-valid sequences of + operations to uncover edge cases that manual tests would miss. - **Concurrency Verification:** The `loom` crate will be used for permutation testing of concurrency hotspots, such as the connection actor's `select!` @@ -342,11 +344,11 @@ components. -| Phase | Focus | Key Deliverables | -| 1. Foundational Mechanics | Implement the core, non-public machinery. | Internal actor loop with select!(biased!), dual-channel push plumbing, basic FragmentAdapter logic. | -| 2. Public APIs & Ergonomics | Expose functionality to users in a clean, idiomatic way. | Fluent WireframeApp builder, WireframeProtocol trait, enhanced Response enum, FragmentStrategy trait, SessionRegistry with Weak references. | -| 3. Production Hardening | Add features for resilience and security. | CancellationToken-based graceful shutdown, re-assembly timeouts, per-connection rate limiting, optional Dead Letter Queue. | -| 4. Maturity and Polish | Focus on observability, advanced testing, and documentation. | Full tracing instrumentation, criterion benchmarks, loom and proptest test suites, comprehensive user guides and API documentation. | +| Phase | Focus | Key Deliverables | +| 1. Foundational Mechanics | Implement the core, non-public machinery. | Internal actor loop with select!(biased!), dual-channel push plumbing, basic FragmentAdapter logic. | +| 2. Public APIs & Ergonomics | Expose functionality to users in a clean, idiomatic way. | Fluent WireframeApp builder, WireframeProtocol trait, enhanced Response enum, FragmentStrategy trait, SessionRegistry with Weak references. | +| 3. 
Production Hardening | Add features for resilience and security. | CancellationToken-based graceful shutdown, re-assembly timeouts, per-connection rate limiting, optional Dead Letter Queue. | +| 4. Maturity and Polish | Focus on observability, advanced testing, and documentation. | Full tracing instrumentation, criterion benchmarks, loom and proptest test suites, comprehensive user guides and API documentation. | @@ -355,7 +357,7 @@ components. The road to `wireframe` 1.0 is an ambitious one. It represents a significant evolution from a simple routing library to a comprehensive, production-grade framework for building sophisticated, asynchronous network services. By wedding -a powerful new feature-set—duplex messaging, streaming, and fragmentation—with a -deep commitment to the principles of resilience, observability, and developer +a powerful new feature-set—duplex messaging, streaming, and fragmentation—with +a deep commitment to the principles of resilience, observability, and developer ergonomics, `wireframe` will provide the Rust community with a best-in-class tool for tackling the challenges of modern network programming. diff --git a/docs/wireframe-1-0-detailed-development-roadmap.md b/docs/wireframe-1-0-detailed-development-roadmap.md index 847f6998..7cfacd7c 100644 --- a/docs/wireframe-1-0-detailed-development-roadmap.md +++ b/docs/wireframe-1-0-detailed-development-roadmap.md @@ -16,33 +16,33 @@ public consumption. ## Phase 1: Foundational Mechanics -*Focus: Implementing the core, non-public machinery for duplex communication and -message processing. This phase establishes the internal architecture upon which -all public-facing features will be built.* +*Focus: Implementing the core, non-public machinery for duplex communication +and message processing. This phase establishes the internal architecture upon +which all public-facing features will be built.* -| Item | Name | Details | Size | Depends on | -| ---- | ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------- | -| 1.1 | Core Response & Error Types | Define the new `Response` enum with `Single`, `Vec`, `Stream` and `Empty` variants. Implement the generic `WireframeError` enum to distinguish between I/O and protocol errors. | Small | — | -| 1.2 | Priority Push Channels | Implement the internal dual-channel `mpsc` mechanism within the connection state to handle high-priority and low-priority pushed frames. | Medium | — | -| 1.3 | Connection Actor Write Loop | Convert per-request workers into stateful connection actors. Implement a `select!(biased; ...)` loop that polls for shutdown signals, high/low priority pushes and the handler response stream in that strict order. | Large | #1.2 | -| 1.4 | Initial FragmentStrategy Trait | Define the initial `FragmentStrategy` trait and the `FragmentMeta` struct. Focus on the core methods: `decode_header` and `encode_header`. | Medium | — | -| 1.5 | Basic FragmentAdapter | Implement the `FragmentAdapter` as a `FrameProcessor`. Build the inbound reassembly logic for a single, non-multiplexed stream of fragments and the outbound logic for splitting a single large frame. | Large | #1.4 | -| 1.6 | Internal Hook Plumbing | Add the invocation points for the protocol-specific hooks (`before_send`, `on_command_end`, etc.) 
within the connection actor, even if the public trait is not yet defined. | Small | #1.3 | +| Item | Name | Details | Size | Depends on | +| ---- | ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ---------- | +| 1.1 | Core Response & Error Types | Define the new `Response` enum with `Single`, `Vec`, `Stream` and `Empty` variants. Implement the generic `WireframeError` enum to distinguish between I/O and protocol errors. | Small | — | +| 1.2 | Priority Push Channels | Implement the internal dual-channel `mpsc` mechanism within the connection state to handle high-priority and low-priority pushed frames. | Medium | — | +| 1.3 | Connection Actor Write Loop | Convert per-request workers into stateful connection actors. Implement a `select!(biased; ...)` loop that polls for shutdown signals, high/low priority pushes and the handler response stream in that strict order. | Large | #1.2 | +| 1.4 | Initial FragmentStrategy Trait | Define the initial `FragmentStrategy` trait and the `FragmentMeta` struct. Focus on the core methods: `decode_header` and `encode_header`. | Medium | — | +| 1.5 | Basic FragmentAdapter | Implement the `FragmentAdapter` as a `FrameProcessor`. Build the inbound reassembly logic for a single, non-multiplexed stream of fragments and the outbound logic for splitting a single large frame. | Large | #1.4 | +| 1.6 | Internal Hook Plumbing | Add the invocation points for the protocol-specific hooks (`before_send`, `on_command_end`, etc.) within the connection actor, even if the public trait is not yet defined. | Small | #1.3 | ## Phase 2: Public APIs & Developer Ergonomics -*Focus: Exposing the new functionality to developers through a clean, ergonomic, -and idiomatic API. This phase is about making the powerful new mechanics usable -and intuitive.* +*Focus: Exposing the new functionality to developers through a clean, +ergonomic, and idiomatic API. This phase is about making the powerful new +mechanics usable and intuitive.* -| Item | Name | Details | Size | Depends on | +| Item | Name | Details | Size | Depends on | | ---- | --------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ---------------- | -| 2.1 | WireframeProtocol Trait & Builder | Define the cohesive `WireframeProtocol` trait to encapsulate all protocol-specific logic. Refactor the `WireframeApp` builder to use a fluent `.with_protocol(MyProtocol)` method instead of multiple closures. | Medium | #1.6 | -| 2.2 | Public PushHandle API | Implement the public `PushHandle` struct with its `push`, `try_push` and policy-based `push_with_policy` methods. This handle interacts with the dual-channel system from #1.2. | Medium | #1.2 | -| 2.3 | Leak-Proof SessionRegistry | Implement the `SessionRegistry` for discovering connection handles. This must use `dashmap` with `Weak` pointers to prevent memory leaks from terminated connections. | Medium | #2.2 | -| 2.4 | async-stream Integration & Docs | Remove the proposed `FrameSink` from the design. Update the `Response::Stream` handling and document `async-stream` as the canonical way to create streams imperatively. 
| Small | #1.1 | -| 2.5 | Initial Test Suite | Write unit and integration tests for the new public APIs. Verify that `Response::Vec` and `Response::Stream` work, and that `PushHandle` can successfully send frames that are received by a client. | Large | #2.1, #2.3, #2.4 | -| 2.6 | Basic Fragmentation Example | Implement a simple `FragmentStrategy` (e.g. `LenFlag32K`) and an example showing the `FragmentAdapter` in use. This validates the adapter's basic functionality. | Medium | #1.5, #2.5 | +| 2.1 | WireframeProtocol Trait & Builder | Define the cohesive `WireframeProtocol` trait to encapsulate all protocol-specific logic. Refactor the `WireframeApp` builder to use a fluent `.with_protocol(MyProtocol)` method instead of multiple closures. | Medium | #1.6 | +| 2.2 | Public PushHandle API | Implement the public `PushHandle` struct with its `push`, `try_push` and policy-based `push_with_policy` methods. This handle interacts with the dual-channel system from #1.2. | Medium | #1.2 | +| 2.3 | Leak-Proof SessionRegistry | Implement the `SessionRegistry` for discovering connection handles. This must use `dashmap` with `Weak` pointers to prevent memory leaks from terminated connections. | Medium | #2.2 | +| 2.4 | async-stream Integration & Docs | Remove the proposed `FrameSink` from the design. Update the `Response::Stream` handling and document `async-stream` as the canonical way to create streams imperatively. | Small | #1.1 | +| 2.5 | Initial Test Suite | Write unit and integration tests for the new public APIs. Verify that `Response::Vec` and `Response::Stream` work, and that `PushHandle` can successfully send frames that are received by a client. | Large | #2.1, #2.3, #2.4 | +| 2.6 | Basic Fragmentation Example | Implement a simple `FragmentStrategy` (e.g. `LenFlag32K`) and an example showing the `FragmentAdapter` in use. This validates the adapter's basic functionality. | Medium | #1.5, #2.5 | ## Phase 3: Production Hardening & Resilience @@ -50,24 +50,24 @@ and intuitive.* operation in a production environment. This phase moves the library from "functional" to "resilient".* -| Item | Name | Details | Size | Depends on | +| Item | Name | Details | Size | Depends on | | ---- | ------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ---------- | -| 3.1 | Graceful Shutdown | Implement the server-wide graceful shutdown pattern. Use `tokio_util::sync::CancellationToken` for signalling and `tokio_util::task::TaskTracker` to ensure all connection actors terminate cleanly. | Large | #1.3 | -| 3.2 | Re-assembly DoS Protection | Harden the `FragmentAdapter` by adding a non-optional, configurable timeout for partial message re-assembly and strictly enforcing the `max_message_size` limit to prevent memory exhaustion. | Medium | #1.5 | -| 3.3 | Multiplexed Re-assembly | Enhance the `FragmentAdapter`'s inbound logic to support concurrent re-assembly of multiple messages. Use the `msg_id` from `FragmentMeta` as a key into a `dashmap::DashMap` of partial messages. | Large | #3.2 | -| 3.4 | Per-Connection Rate Limiting | Integrate an asynchronous, token-bucket rate limiter into the `PushHandle`. The rate limit should be configurable on the `WireframeApp` builder and enforced on every push. | Medium | #2.2 | -| 3.5 | Dead Letter Queue (DLQ) | Implement the optional Dead Letter Queue mechanism. 
Allow a user to provide a DLQ channel sender during app setup; failed pushes (due to a full queue) can be routed there instead of being dropped. | Medium | #2.2 | -| 3.6 | Context-Aware FragmentStrategy | Enhance the `FragmentStrategy` trait. `max_fragment_payload` and `encode_header` should receive a reference to the logical `Frame` being processed, allowing for more dynamic fragmentation rules. | Small | #1.4 | +| 3.1 | Graceful Shutdown | Implement the server-wide graceful shutdown pattern. Use `tokio_util::sync::CancellationToken` for signalling and `tokio_util::task::TaskTracker` to ensure all connection actors terminate cleanly. | Large | #1.3 | +| 3.2 | Re-assembly DoS Protection | Harden the `FragmentAdapter` by adding a non-optional, configurable timeout for partial message re-assembly and strictly enforcing the `max_message_size` limit to prevent memory exhaustion. | Medium | #1.5 | +| 3.3 | Multiplexed Re-assembly | Enhance the `FragmentAdapter`'s inbound logic to support concurrent re-assembly of multiple messages. Use the `msg_id` from `FragmentMeta` as a key into a `dashmap::DashMap` of partial messages. | Large | #3.2 | +| 3.4 | Per-Connection Rate Limiting | Integrate an asynchronous, token-bucket rate limiter into the `PushHandle`. The rate limit should be configurable on the `WireframeApp` builder and enforced on every push. | Medium | #2.2 | +| 3.5 | Dead Letter Queue (DLQ) | Implement the optional Dead Letter Queue mechanism. Allow a user to provide a DLQ channel sender during app setup; failed pushes (due to a full queue) can be routed there instead of being dropped. | Medium | #2.2 | +| 3.6 | Context-Aware FragmentStrategy | Enhance the `FragmentStrategy` trait. `max_fragment_payload` and `encode_header` should receive a reference to the logical `Frame` being processed, allowing for more dynamic fragmentation rules. | Small | #1.4 | *Focus: Finalizing the library with comprehensive instrumentation, advanced testing, and high-quality documentation to ensure it is stable, debuggable, and ready for a 1.0 release.* -| Item | Name | Details | Size | Depends on | +| Item | Name | Details | Size | Depends on | | ---- | ------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ---------- | -| 4.1 | Pervasive tracing instrumentation | Instrument the entire library with `tracing`. Add `span!` calls for connection and request lifecycles and detailed `event!` calls for key state transitions (e.g., back-pressure applied, frame dropped, connection terminated). | Large | All | -| 4.2 | Advanced Testing: Concurrency & Logic | Implement the advanced test suite. Use `loom` to verify the concurrency correctness of the `select!` loop and `PushHandle`. Use `proptest` for stateful property-based testing of complex protocol interactions (e.g., fragmentation and streaming). | Large | #3.3, #3.5 | -| 4.3 | Advanced Testing: Performance | Implement the criterion benchmark suite. Create micro-benchmarks for individual components (e.g., `PushHandle` contention) and macro-benchmarks for end-to-end throughput and latency. | Medium | All | -| 4.4 | Comprehensive User Guides | Write the official documentation for the new features. Create separate guides for "Duplex Messaging & Pushes", "Streaming Responses", and "Message Fragmentation". 
Each guide must include runnable examples and explain the relevant concepts and APIs. | Large | All | -| 4.5 | High-Quality Examples | Create at least two complete, high-quality examples demonstrating real-world use cases. These should include server-initiated MySQL packets (e.g., LOCAL INFILE and session-state trackers) and a push-driven protocol such as WebSocket heart-beats or MQTT broker fan-out. | Medium | All | -| 4.6 | Changelog & 1.0 Release | Finalise the `CHANGELOG.md` with a comprehensive summary of all new features, enhancements, and breaking changes. Tag and publish the 1.0.0 release. | Small | All | +| 4.1 | Pervasive tracing instrumentation | Instrument the entire library with `tracing`. Add `span!` calls for connection and request lifecycles and detailed `event!` calls for key state transitions (e.g., back-pressure applied, frame dropped, connection terminated). | Large | All | +| 4.2 | Advanced Testing: Concurrency & Logic | Implement the advanced test suite. Use `loom` to verify the concurrency correctness of the `select!` loop and `PushHandle`. Use `proptest` for stateful property-based testing of complex protocol interactions (e.g., fragmentation and streaming). | Large | #3.3, #3.5 | +| 4.3 | Advanced Testing: Performance | Implement the criterion benchmark suite. Create micro-benchmarks for individual components (e.g., `PushHandle` contention) and macro-benchmarks for end-to-end throughput and latency. | Medium | All | +| 4.4 | Comprehensive User Guides | Write the official documentation for the new features. Create separate guides for "Duplex Messaging & Pushes", "Streaming Responses", and "Message Fragmentation". Each guide must include runnable examples and explain the relevant concepts and APIs. | Large | All | +| 4.5 | High-Quality Examples | Create at least two complete, high-quality examples demonstrating real-world use cases. These should include server-initiated MySQL packets (e.g., LOCAL INFILE and session-state trackers) and a push-driven protocol such as WebSocket heart-beats or MQTT broker fan-out. | Medium | All | +| 4.6 | Changelog & 1.0 Release | Finalise the `CHANGELOG.md` with a comprehensive summary of all new features, enhancements, and breaking changes. Tag and publish the 1.0.0 release. | Small | All | diff --git a/docs/wireframe-client-design.md b/docs/wireframe-client-design.md index e1df1483..32ce4531 100644 --- a/docs/wireframe-client-design.md +++ b/docs/wireframe-client-design.md @@ -1,9 +1,9 @@ # Client Support in Wireframe -This document proposes an initial design for adding client-side protocol support -to `wireframe`. The goal is to reuse the existing framing, serialization, and -message abstractions while providing a small API for connecting to a server and -exchanging messages. +This document proposes an initial design for adding client-side protocol +support to `wireframe`. The goal is to reuse the existing framing, +serialization, and message abstractions while providing a small API for +connecting to a server and exchanging messages. ## Motivation @@ -12,15 +12,16 @@ are intentionally generic: transport adapters, framing, serialization, routing, and middleware form a pipeline that is largely independent of server-specific logic. The design document outlines these layers, which process frames from raw bytes to typed messages and back -again【F:docs/rust-binary-router-library-design.md†L316-L371】. By reusing these -pieces, we can implement a lightweight client without duplicating protocol code. 
+again【F:docs/rust-binary-router-library-design.md†L316-L371】. By reusing +these pieces, we can implement a lightweight client without duplicating +protocol code. ## Core Components ### `WireframeClient` -A new `WireframeClient` type manages a single connection to a server. It mirrors -`WireframeServer` but operates in the opposite direction: +A new `WireframeClient` type manages a single connection to a server. It +mirrors `WireframeServer` but operates in the opposite direction: - Connect to a `TcpStream`. - Optionally, send a preamble using the existing `Preamble` helpers. diff --git a/docs/wireframe-testing-crate.md b/docs/wireframe-testing-crate.md index 9564eedd..c96eed66 100644 --- a/docs/wireframe-testing-crate.md +++ b/docs/wireframe-testing-crate.md @@ -8,8 +8,8 @@ frames, enabling fast tests without opening real network connections. The existing tests in [`tests/`](../tests) use helper functions such as `run_app_with_frame` and `run_app_with_frames` to feed length‑prefixed frames -through an in‑memory duplex stream. These helpers simplify testing handlers -by allowing assertions on encoded responses without spinning up a full server. +through an in‑memory duplex stream. These helpers simplify testing handlers by +allowing assertions on encoded responses without spinning up a full server. Encapsulating this logic in a dedicated crate keeps test code concise and reusable across projects. @@ -65,21 +65,21 @@ where ``` These functions mirror the behaviour of `run_app_with_frame` and -`run_app_with_frames` found in the repository’s test utilities. They create -a `tokio::io::duplex` stream, spawn the application as a background task, and +`run_app_with_frames` found in the repository’s test utilities. They create a +`tokio::io::duplex` stream, spawn the application as a background task, and write the provided frame(s) to the client side of the stream. After the app finishes processing, the helpers collect the bytes written back and return them for inspection. -Any I/O errors surfaced by the duplex stream or failures while decoding a length -prefix propagate through the returned `IoResult`. Malformed or truncated frames -therefore cause the future to resolve with an error, allowing tests to assert on -these failure conditions directly. +Any I/O errors surfaced by the duplex stream or failures while decoding a +length prefix propagate through the returned `IoResult`. Malformed or truncated +frames therefore cause the future to resolve with an error, allowing tests to +assert on these failure conditions directly. ### Custom Buffer Capacity -A variant accepting a buffer `capacity` allows fine‑tuning the -size of the in‑memory duplex channel, matching the existing +A variant accepting a buffer `capacity` allows fine‑tuning the size of the +in‑memory duplex channel, matching the existing `run_app_with_frame_with_capacity` and `run_app_with_frames_with_capacity` helpers. @@ -107,9 +107,9 @@ pub async fn drive_with_frames_mut(app: &mut WireframeApp, frames: Vec>) For most tests the input frame is preassembled from raw bytes. A small wrapper can accept any `serde::Serialize` value and perform the encoding and framing -before delegating to `drive_with_frame`. This mirrors the patterns in `tests/ -routes.rs` where structs are converted to bytes with `BincodeSerializer` and -then wrapped in a length‑prefixed frame. +before delegating to `drive_with_frame`. 
This mirrors the patterns in +`tests/ routes.rs` where structs are converted to bytes with +`BincodeSerializer` and then wrapped in a length‑prefixed frame. ```rust #[derive(serde::Serialize)] From 7c093866fa278316a1b2cccbe0628729d2600768 Mon Sep 17 00:00:00 2001 From: Leynos Date: Sun, 20 Jul 2025 15:32:20 +0100 Subject: [PATCH 02/11] Update docs per review --- AGENTS.md | 2 +- README.md | 6 +- ...antipatterns-and-refactoring-strategies.md | 2 +- docs/documentation-style-guide.md | 4 +- ...eframe-a-guide-to-production-resilience.md | 4 +- .../observability-operability-and-maturity.md | 8 +- docs/rust-binary-router-library-design.md | 4 +- docs/rust-doctest-dry-guide.md | 246 ++++++++---------- docs/rust-testing-with-rstest-fixtures.md | 6 +- ...eframe-1-0-detailed-development-roadmap.md | 2 +- docs/wireframe-client-design.md | 4 +- docs/wireframe-testing-crate.md | 4 +- 12 files changed, 137 insertions(+), 155 deletions(-) diff --git a/AGENTS.md b/AGENTS.md index b0cd6305..49a06d62 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -9,7 +9,7 @@ - **Clarity over cleverness.** Be concise, but favour explicit over terse or obscure idioms. Prefer code that's easy to follow. - **Use functions and composition.** Avoid repetition by extracting reusable - logic. Prefer generators or comprehensions, and declarative code to + logic. Prefer generators or comprehensions, and declarative code, over imperative repetition when readable. - **Small, meaningful functions.** Functions must be small, clear in purpose, single responsibility, and obey command/query segregation. diff --git a/README.md b/README.md index 37b3fba8..360b5981 100644 --- a/README.md +++ b/README.md @@ -313,10 +313,10 @@ tasks include building the Actix‑inspired API, implementing middleware and extractor traits, and providing example applications【F:docs/roadmap.md†L1-L24】. -## License +## Licence -Wireframe is distributed under the terms of the ISC license. See -[LICENSE](LICENSE) for details. +Wireframe is distributed under the terms of the ISC licence. See +[LICENCE](LICENSE) for details. [data-extraction-guide]: docs/rust-binary-router-library-design.md#53-data-extraction-and-type-safety diff --git a/docs/complexity-antipatterns-and-refactoring-strategies.md b/docs/complexity-antipatterns-and-refactoring-strategies.md index 0dab954a..a97be85f 100644 --- a/docs/complexity-antipatterns-and-refactoring-strategies.md +++ b/docs/complexity-antipatterns-and-refactoring-strategies.md @@ -196,7 +196,7 @@ much like a physical bumpy road slows down driving. ### B. How It Forms and Its Impact The Bumpy Road antipattern, like many software antipatterns, often emerges from -development practices that prioritize short-term speed over long-term +development practices that prioritise short-term speed over long-term structural integrity. Rushed development cycles, lack of clear design, or cutting corners on maintenance can lead to the gradual accumulation of conditional logic within a single function. As new requirements or edge cases diff --git a/docs/documentation-style-guide.md b/docs/documentation-style-guide.md index 74c129fb..8df08e78 100644 --- a/docs/documentation-style-guide.md +++ b/docs/documentation-style-guide.md @@ -96,8 +96,8 @@ pub fn add(a: i32, b: i32) -> i32 { Where it adds clarity, include [Mermaid](https://mermaid.js.org/) diagrams. When embedding figures, use `![alt text](path/to/image)` and provide concise -alt text describing the content. Add a short description before each Mermaid -diagram so screen readers can understand it. 
+alt text describing the content. Add a brief description before each Mermaid +diagram, so screen readers can understand it. ```mermaid flowchart TD diff --git a/docs/hardening-wireframe-a-guide-to-production-resilience.md b/docs/hardening-wireframe-a-guide-to-production-resilience.md index ff9af8f9..03337a80 100644 --- a/docs/hardening-wireframe-a-guide-to-production-resilience.md +++ b/docs/hardening-wireframe-a-guide-to-production-resilience.md @@ -13,8 +13,8 @@ This document is targeted at implementers. It moves beyond theoretical discussion to provide concrete, actionable patterns and code for building a resilient system. The philosophy is simple: anticipate failure, manage resources meticulously, and provide clear mechanisms for control and recovery. -We will cover three critical domains: coordinated shutdown, leak-proof resource -management, and denial-of-service mitigation. +This guide covers three critical domains: coordinated shutdown, leak-proof +resource management, and denial-of-service mitigation. ## 2. Coordinated, Graceful Shutdown diff --git a/docs/observability-operability-and-maturity.md b/docs/observability-operability-and-maturity.md index fade7f1b..e47a2824 100644 --- a/docs/observability-operability-and-maturity.md +++ b/docs/observability-operability-and-maturity.md @@ -37,10 +37,10 @@ first-class feature of `wireframe` 1.0. ### 2.1 The `tracing` Span Hierarchy `tracing`'s key innovation is the `span`, which represents a unit of work with -a distinct beginning and end. By nesting spans, we can create a causal chain of -events, allowing us to understand how work flows through the system, even -across asynchronous boundaries and threads. `wireframe` will adopt a standard -span hierarchy. +a distinct beginning and end. Nesting spans creates a causal chain of events, +enabling an understanding of how work flows through the system, even across +asynchronous boundaries and threads. `wireframe` will adopt a standard span +hierarchy. - `connection`**:** A root span for each TCP connection, created on `accept()`. It provides the top-level context for all activity on that connection. diff --git a/docs/rust-binary-router-library-design.md b/docs/rust-binary-router-library-design.md index bd57c1cd..96fdc395 100644 --- a/docs/rust-binary-router-library-design.md +++ b/docs/rust-binary-router-library-design.md @@ -332,8 +332,8 @@ handling to be managed and customized independently. (messages) and serializes outgoing Rust messages into byte payloads for outgoing frames. This is the primary role intended for `wire-rs` 6 or an alternative like `bincode` 11 or `postcard`.12 A minimal wrapper trait in the - library currently exposes these derives under a convenient `Message` trait, - providing `to_bytes` and `from_bytes` helpers. + library currently exposes these derive macros under a convenient `Message` + trait, providing `to_bytes` and `from_bytes` helpers. - **Routing Engine**: After a message is deserialized (or at least a header containing an identifier is processed), the routing engine inspects it to determine which user-defined handler function is responsible for processing diff --git a/docs/rust-doctest-dry-guide.md b/docs/rust-doctest-dry-guide.md index 1f94677c..5c1e5e7f 100644 --- a/docs/rust-doctest-dry-guide.md +++ b/docs/rust-doctest-dry-guide.md @@ -15,7 +15,7 @@ the inherent limitations of doctests. 
At its heart, `rustdoc` treats each documentation test not as a snippet of code running within the library's own context, but as an entirely separate, -temporary crate.1 When a developer executes +temporary crate.[^1] When a developer executes `cargo test --doc`, `rustdoc` initiates a multi-stage process for every code block found in the documentation comments 3: @@ -23,7 +23,7 @@ block found in the documentation comments 3: 1. **Parsing and Extraction**: `rustdoc` first parses the source code of the library, resolving conditional compilation attributes (`#[cfg]`) to determine which items are active and should be documented for the current - target.2 It then extracts all code examples enclosed in triple-backtick + target.[^2] It then extracts all code examples enclosed in triple-backtick fences (\`\`\`). 2. **Code Generation**: For each extracted code block, `rustdoc` performs a @@ -31,23 +31,23 @@ block found in the documentation comments 3: the block does not already contain a `fn main()`, the code is wrapped within one. Crucially, `rustdoc` also injects an `extern crate ;` statement, where `` is the name of the library being documented. - This makes the library under test available as an external dependency.3 + This makes the library under test available as an external dependency.[^3] 3. **Individual Compilation**: `rustdoc` then invokes the Rust compiler (`rustc`) separately for *each* of these newly generated miniature programs. Each one is compiled and linked against the already-compiled version of the - main library.2 + main library.[^2] 4. **Execution and Verification**: Finally, if compilation succeeds, the resulting executable is run. The test is considered to have passed if the program runs to completion without panicking. The executable is then - deleted.2 + deleted.[^2] The significance of this model cannot be overstated. It effectively transforms -every doctest into a true integration test.6 The test code does not have +every doctest into a true integration test.[^6] The test code does not have special access to the library's internals; it interacts with the library's API precisely as a downstream crate would, providing a powerful guarantee that the -public-facing examples are correct and functional.1 +public-facing examples are correct and functional.[^1] ### 1.2 First-Order Consequences of the Model @@ -59,19 +59,19 @@ doctest is compiled as an external crate, it can only access functions, structs, traits, and modules marked with the `pub` keyword. It has no access to private items or even crate-level public items (e.g., `pub(crate)`). This is not a bug or an oversight but a fundamental aspect of the design, enforcing the -perspective of an external consumer.1 +perspective of an external consumer.[^1] Second, the model has **profound performance implications**. The process of invoking `rustc` to compile and link a new executable for every single doctest is computationally expensive. For small projects, this overhead is negligible. However, for large libraries with hundreds of doctests, the cumulative compilation time can become a significant bottleneck in the development and -CI/CD cycle, a common pain point in the Rust community.2 +CI/CD cycle, a common pain point in the Rust community.[^2] The architectural purity of the `rustdoc` model—its insistence on simulating an external user—creates a fundamental trade-off. 
On one hand, it provides an unparalleled guarantee that the public documentation is accurate and that the -examples work as advertised, creating true "living documentation".8 On the +examples work as advertised, creating true "living documentation".[^8] On the other hand, this same purity prevents the use of doctests for verifying documentation of internal, private APIs. This forces a bifurcation of documentation strategy. Public-facing documentation can be tied directly to @@ -80,7 +80,7 @@ vital for a project's health, cannot be verified with the same tools. Examples for private functions must either be marked as `ignore`, forgoing the test guarantee, or be duplicated in separate unit tests, -violating the "Don't Repeat Yourself" (DRY) principle.1 This reveals that +violating the "Don't Repeat Yourself" (DRY) principle.[^1] This reveals that `rustdoc`'s design implicitly prioritizes the integrity of the public contract over the convenience of a single, unified system for testable documentation of @@ -98,34 +98,34 @@ clear, illustrative, and robust. Doctests reside within documentation comments. Rust recognizes two types: - **Outer doc comments (**`///`**)**: These document the item that follows them - (e.g., a function, struct, or module). This is the most common type.8 + (e.g., a function, struct, or module). This is the most common type.[^8] - **Inner doc comments (**`//!`**)**: These document the item they are inside of (e.g., a module or the crate itself). They are typically used at the top - of `lib.rs` or `mod.rs` to provide crate- or module-level documentation.9 - Within these comments, a code block is denoted by triple backticks (`). While - rustdoc defaults to assuming the language is Rust, - explicitly add the`rust` language specifier for clarity.[^3] A doctest is - considered to "pass" if it compiles successfully and runs to completion - without panicking. To verify that a function produces a specific output, - developers should use the standard assertion macros, such as `assert!`, - `assert_eq!`, and`assert_ne!`.3 + of `lib.rs` or `mod.rs` to provide crate- or module-level documentation.[^9] + Within these comments, a code block is + denoted by triple backticks + (`). While rustdoc defaults to assuming the language is Rust, explicitly add the` + rust + ` language specifier for clarity.[^3] A doctest is considered to "pass" if it compiles successfully and runs to completion without panicking. To verify that a function produces a specific output, developers should use the standard assertion macros, such as ` + assert!`,`assert_eq!`, and`assert_ne!`.[^3] ### 2.2 The Philosophy of a Good Example The purpose of a documentation example extends beyond merely demonstrating syntax. A reader can typically be expected to understand the mechanics of calling a function or instantiating a struct. A truly valuable example -illustrates *why* and in *what context* an item should be used.10 It should +illustrates *why* and in *what context* an item should be used.[^10] It should tell a small story or solve a miniature problem that illuminates the item's purpose. For instance, an example for `String::clone()` should not just show `hello.clone();`, but should demonstrate -a scenario where ownership rules necessitate creating a copy.10 +a scenario where ownership rules necessitate creating a copy.[^10] To achieve this, examples must be clear and concise. 
Any code that is not directly relevant to the point being made—such as complex setup, boilerplate, -or unrelated logic—should be hidden to avoid distracting the reader.3 +or unrelated logic—should be hidden to avoid distracting the reader.[^3] ### 2.3 Ergonomic Error Handling: Taming the `?` Operator @@ -134,11 +134,11 @@ functions that return a `Result`. The question mark (`?`) operator is the idiomatic way to propagate errors in Rust, but it presents a challenge for doctests. The implicit `fn main()` wrapper generated by `rustdoc` has a return type of `()`, while the `?` operator can only be used in a function that -returns a `Result` or `Option`. This mismatch leads to a compilation error.3 +returns a `Result` or `Option`. This mismatch leads to a compilation error.[^3] Using `.unwrap()` or `.expect()` in examples is strongly discouraged. It is considered an anti-pattern because users often copy example code verbatim, and -encouraging panicking on errors is contrary to robust application design.10 +encouraging panicking on errors is contrary to robust application design.[^10] Instead, two canonical solutions exist. Solution 1: The Explicit main Function @@ -166,7 +166,7 @@ Rust ``` In this pattern, the reader only sees the core, fallible code, while the test -itself is a complete, well-behaved program.10 +itself is a complete, well-behaved program.[^10] Solution 2: The Implicit Result-Returning main @@ -189,7 +189,7 @@ Rust This is functionally equivalent to the explicit `main` but requires less boilerplate. However, it is critical that the `(())` be written as a single, contiguous sequence of characters, as `rustdoc`'s detection mechanism is purely -textual and will not recognize `( () )`.3 +textual and will not recognize `( () )`.[^3] ### 2.4 The Power of Hidden Lines (`#`): Creating Clean Examples @@ -197,7 +197,7 @@ The mechanism that makes clean, focused examples possible is the "hidden line" syntax. Any line in a doctest code block that begins with a `#` character (optionally preceded by whitespace) will be compiled and executed as part of the test, but it will be completely omitted from the final HTML documentation -rendered for the user.3 +rendered for the user.[^3] This feature is essential for bridging the gap between what makes a good, human-readable example and what constitutes a complete, compilable program. Its @@ -205,12 +205,13 @@ primary use cases include: 1. **Hiding** `main` **Wrappers**: As demonstrated in the error-handling examples, the entire `fn main() -> Result<...> {... }` and `Ok(())` - scaffolding can be hidden, presenting the user with only the relevant code.10 + scaffolding can be hidden, presenting the user with only the relevant + code.[^10] 2. **Hiding Setup Code**: If an example requires some preliminary setup—like creating a temporary file, defining a helper struct for the test, or initializing a server—this logic can be hidden to keep the example focused - on the API item being documented.3 + on the API item being documented.[^3] 3. **Hiding** `use` **Statements**: While often useful to show which types are involved, `use` statements can sometimes be hidden to de-clutter very simple @@ -218,13 +219,14 @@ primary use cases include: The existence of features like hidden lines and the `(())` shorthand reveals a core tension in `rustdoc`'s design. 
The compilation model is rigid: every test -must be a valid, standalone program.2 However, the ideal documentation example -is often just a small, illustrative snippet that is not a valid program on its -own.10 These ergonomic features are pragmatic "patches" designed to resolve -this conflict. They allow the developer to inject the necessary boilerplate to -satisfy the compiler without burdening the human reader with irrelevant -details. Understanding them as clever workarounds, rather than as first-class -language features, helps explain their sometimes quirky, text-based behavior. +must be a valid, standalone program.[^2] However, the ideal documentation +example is often just a small, illustrative snippet that is not a valid program +on its own.[^10] These ergonomic features are pragmatic "patches" designed to +resolve this conflict. They allow the developer to inject the necessary +boilerplate to satisfy the compiler without burdening the human reader with +irrelevant details. Understanding them as clever workarounds, rather than as +first-class language features, helps explain their sometimes quirky, text-based +behavior. ## Advanced Doctest Control and Attributes @@ -239,45 +241,45 @@ Choosing the correct attribute is critical for communicating the intent of an example and ensuring the test suite provides meaningful feedback. The following table provides a comparative reference for the most common doctest attributes. -| Attribute | Action | Test Outcome | Primary Use Case & Caveats | -| ------------ | ------------------------------------------------------------------- | -------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| ignore | Skips both compilation and execution. | ignored | Use Case: For pseudo-code, examples known to be broken, or to temporarily disable a test. Caveat: Provides no guarantee that the code is even syntactically correct. Generally discouraged in favor of more specific attributes.3 | -| should_panic | Compiles and runs the code. The test passes if the code panics. | ok on panic, failed if it does not panic. | Use Case: Demonstrating functions that are designed to panic on invalid input (e.g., indexing out of bounds). | -| compile_fail | Attempts to compile the code. The test passes if compilation fails. | ok on compilation failure, failed if it compiles successfully. | Use Case: Illustrating language rules, such as the borrow checker or type system constraints. Caveat: Highly brittle. A future Rust version might make the code valid, causing the test to unexpectedly fail.4 | -| no_run | Compiles the code but does not execute it. | ok if compilation succeeds. | Use Case: Essential for examples with undesirable side effects in a test environment, such as network requests, filesystem I/O, or launching a GUI. Guarantees the example is valid Rust code without running it.5 | -| edition2021 | Compiles the code using the specified Rust edition's rules. | ok on success. 
| Use Case: Demonstrating syntax or idioms that are specific to a particular Rust edition (e.g., edition2018, edition2021).4 | +| Attribute | Action | Test Outcome | Primary Use Case & Caveats | +| ------------ | ------------------------------------------------------------------- | -------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| ignore | Skips both compilation and execution. | ignored | Use Case: For pseudo-code, examples known to be broken, or to temporarily disable a test. Caveat: Provides no guarantee that the code is even syntactically correct. Generally discouraged in favor of more specific attributes.[^3] | +| should_panic | Compiles and runs the code. The test passes if the code panics. | ok on panic, failed if it does not panic. | Use Case: Demonstrating functions that are designed to panic on invalid input (e.g., indexing out of bounds). | +| compile_fail | Attempts to compile the code. The test passes if compilation fails. | ok on compilation failure, failed if it compiles successfully. | Use Case: Illustrating language rules, such as the borrow checker or type system constraints. Caveat: Highly brittle. A future Rust version might make the code valid, causing the test to unexpectedly fail.[^4] | +| no_run | Compiles the code but does not execute it. | ok if compilation succeeds. | Use Case: Essential for examples with undesirable side effects in a test environment, such as network requests, filesystem I/O, or launching a GUI. Guarantees the example is valid Rust code without running it.[^5] | +| edition2021 | Compiles the code using the specified Rust edition's rules. | ok on success. | Use Case: Demonstrating syntax or idioms that are specific to a particular Rust edition (e.g., edition2018, edition2021).[^4] | ### 3.2 Detailed Attribute Breakdown - `ignore`: This is the bluntest instrument in the toolbox. It tells `rustdoc` to do nothing with the code block. It is almost always better to either fix the example using hidden lines or use a more descriptive attribute like - `no_run`.3 Its main legitimate use is for non-Rust code blocks or + `no_run`.[^3] Its main legitimate use is for non-Rust code blocks or illustrative pseudo-code. - `should_panic`: This attribute inverts the normal test condition. It is used to document and verify behavior that intentionally results in a panic. The test will fail if the code completes successfully or panics for a reason - other than the one expected (if a specific panic message is asserted).3 + other than the one expected (if a specific panic message is asserted).[^3] - `compile_fail`: This is a powerful tool for creating educational examples that demonstrate what *not* to do. It is frequently used in documentation about ownership, borrowing, and lifetimes to show code that the compiler will correctly reject. However, developers must be aware of its fragility. An evolution in the Rust language or compiler could make previously invalid code - compile, which would break the test.4 + compile, which would break the test.[^4] - `no_run`: This attribute strikes a crucial balance between test verification and practicality. 
For an example that demonstrates how to download a file from the internet, you want to ensure the example code is syntactically correct and uses the API properly, but you do not want your CI server to actually perform a network request every time tests are run. `no_run` - provides this guarantee by compiling the code without executing it.5 + provides this guarantee by compiling the code without executing it.[^5] - `edition20xx`: This attribute allows an example to be tested against a specific Rust edition. This is important for crates that support multiple editions and need to demonstrate edition-specific features or migration - paths.4 + paths.[^4] ## The DRY Principle in Doctests: Managing Shared and Complex Logic @@ -303,7 +305,7 @@ flag provided by `rustdoc`: `doctest`. A common mistake is to try to place shared test logic in a block guarded by `#[cfg(test)]`. This will not work, because `rustdoc` does not enable the `test` configuration flag during its compilation process; `#[cfg(test)]` is reserved for unit and integration tests -run directly by `cargo test`.12 +run directly by `cargo test`.[^12] Instead, `rustdoc` sets its own unique `doctest` flag. By guarding a module or function with `#[cfg(doctest)]`, developers can write helper code that is @@ -362,7 +364,7 @@ pub struct TestContext { /*... */ } This pattern is the most effective way to achieve DRY doctests. It centralizes setup logic, improves maintainability, and cleanly separates testing concerns -from production code.12 +from production code.[^12] ### 4.3 Advanced DRY: Programmatic Doctest Generation @@ -376,7 +378,7 @@ Crates like `quote-doctest` address this by allowing developers to programmatically construct a doctest from a `TokenStream`. This enables the generation of doctests from the same source of truth that generates the code they are intended to test, representing the ultimate application of the DRY -principle in this domain.14 +principle in this domain.[^14] ## Conditional Compilation Strategies for Doctests @@ -398,7 +400,7 @@ a Windows machine). **The Mechanism**: `rustdoc` always invokes the compiler with the `--cfg doc` flag set. By adding `doc` to an item's `#[cfg]` attribute, a developer can instruct the compiler to include that item specifically for documentation -builds.15 +builds.[^15] **The Pattern**: @@ -419,7 +421,7 @@ This distinction highlights the "cfg duality." The `#[cfg(doc)]` attribute controls the *table of contents* of the documentation; it determines which items are parsed and rendered. The actual compilation of a doctest, however, happens in a separate, later stage. In that stage, the `doc` cfg is *not* -passed to the compiler.15 The compiler only sees the host +passed to the compiler.[^15] The compiler only sees the host `cfg` (e.g., `target_os = "windows"`), so the `UnixSocket` type is not available, and the test fails to compile. `#[cfg(doc)]` affects what is @@ -456,7 +458,7 @@ Rust When the `"serde"` feature is disabled, the code inside the block is compiled out. The doctest becomes an empty program that runs, does nothing, and is reported as `ok`. While simple to write, this can be misleading, as the test -suite reports a "pass" for a test that was effectively skipped.16 +suite reports a "pass" for a test that was effectively skipped.[^16] Pattern 2: cfg_attr to Conditionally ignore the Test @@ -481,7 +483,7 @@ With this pattern, if the `"serde"` feature is disabled, the test is marked as the feature is enabled, the `ignore` is omitted, and the test runs normally. 
This approach provides clearer feedback but is significantly more verbose and less ergonomic, especially when applied to outer (`///`) doc comments, as the -`cfg_attr` must be applied to every single line of the comment.16 +`cfg_attr` must be applied to every single line of the comment.[^16] ### 5.3 Displaying Feature Requirements in Docs: `#[doc(cfg(...))]` @@ -505,7 +507,7 @@ pub fn function_requiring_serde() { /*... */ } This will render a banner in the documentation for `function_requiring_serde` that reads, "This is only available when the `serde` feature is enabled." This attribute is purely for documentation generation and is independent of, but -often used alongside, the conditional test execution patterns.16 +often used alongside, the conditional test execution patterns.[^16] ## Doctests in the Wider Project Ecosystem @@ -518,29 +520,29 @@ limitations is key to maintaining a healthy and well-tested Rust project. A robust testing strategy leverages three distinct types of tests, each with its own purpose: -- **Doctests**: These are ideal for simple, "happy-path" examples of your +- **Doctests**: These are ideal for simple, "happy-path" examples of the public API. Their dual purpose is to provide clear documentation for users and to act as a basic sanity check that the examples remain correct over time. They should be easy to read and focused on illustrating a single - concept.6 + concept.[^6] - **Unit Tests (**`#[test]` **in** `src/`**)**: These are for testing the - nitty-gritty details of your implementation. They are placed in submodules - within your source files (often `mod tests {... }`) and are compiled only - with `#[cfg(test)]`. Because they live inside the crate, they can access - private functions and modules, making them perfect for testing internal - logic, edge cases, and specific error conditions.1 + nitty-gritty details of the implementation. They are placed in submodules + within the source files (often `mod tests {... }`) and are compiled only with + `#[cfg(test)]`. Because they live inside the crate, they can access private + functions and modules, making them perfect for testing internal logic, edge + cases, and specific error conditions.[^1] - **Integration Tests (in the** `tests/` **directory)**: These test the crate from a completely external perspective, much like doctests. However, they are not constrained by the need to be readable documentation. They are suited for testing complex user workflows, interactions between multiple API entry - points, and the overall behavior of the library as a black box.6 + points, and the overall behavior of the library as a black box.[^6] ### 6.2 The Unsolved Problem: Testing Private APIs As established, the `rustdoc` compilation model makes testing private items in -doctests impossible by design.1 The community has developed several +doctests impossible by design.[^1] The community has developed several workarounds, but each comes with significant trade-offs 1: 1. `ignore` **the test**: This allows the example to exist in the documentation @@ -574,25 +576,25 @@ real-world challenges when working with doctests. audiences. It needs to render cleanly on platforms like GitHub and [crates.io](http://crates.io), where hidden lines (`#...`) look like ugly, commented-out code. At the same time, it should contain testable examples, - which often require hidden lines for setup.11 The best practice is to avoid - maintaining the README manually. 
Instead, use a tool like + which often require hidden lines for setup.[^11] The best practice is to + avoid maintaining the README manually. Instead, use a tool like `cargo-readme`. This tool generates a `README.md` file from your crate-level documentation (in `lib.rs`), automatically stripping out the hidden lines. This provides a single source of truth that is both fully testable via `cargo test --doc` and produces a clean, professional README for external - sites.11 + sites.[^11] - **Developer Ergonomics in IDEs**: Writing code inside documentation comments can be a subpar experience. IDEs and tools like `rust-analyzer` often provide limited or no autocompletion, real-time error checking, or refactoring - support for code within a comment block.18 A common and effective workflow to - mitigate this is to first write and debug the example as a standard + support for code within a comment block.[^18] A common and effective workflow + to mitigate this is to first write and debug the example as a standard `#[test]` function in a temporary file or test module. This allows the developer to leverage the full power of the IDE. Once the code is working correctly, it can be copied into the doc comment, and the necessary - formatting (`///`, `#`, etc.) can be applied.18 + formatting (`///`, `#`, etc.) can be applied.[^18] ## Conclusion and Recommendations @@ -631,63 +633,43 @@ mastering doctests: ### **Works cited** - 1. rust - How can I write documentation tests for private modules …, - accessed on July 15, 2025, - - - 1. Rustdoc doctests need fixing - Swatinem, accessed on July 15, 2025, - - - 1. Documentation tests - The rustdoc book - Rust Documentation, accessed on - July 15, 2025, - - 1. Documentation tests - - GitHub Pages, accessed on July 15, 2025, - - - 1. Documentation tests - - MIT, accessed on July 15, 2025, - - - 1. How to organize your Rust tests - LogRocket Blog, accessed on July 15, - 2025, - - 1. Best way to organise tests in Rust - Reddit, accessed on July 15, 2025, - - - 1. Writing Rust Documentation - DEV Community, accessed on July 15, 2025, - - - 1. The rustdoc book, accessed on July 15, 2025, - - - 1. Documentation - Rust API Guidelines, accessed on July 15, 2025, - - - 1. Best practice for doc testing README - help - The Rust Programming Language - Forum, accessed on July 15, 2025, - - - 1. Compile_fail doc test ignored in cfg(test) - help - The Rust Programming - Language Forum, accessed on July 15, 2025, - - - 1. Test setup for doctests - help - The Rust Programming Language Forum, - accessed on July 15, 2025, - - - 1. quote_doctest - Rust - [Docs.rs](http://Docs.rs), accessed on July 15, - 2025, - - 1. Advanced features - The rustdoc book - Rust Documentation, accessed on July - 15, 2025, - - 1. rust - How can I conditionally execute a module-level doctest based …, - accessed on July 15, 2025, - - - 1. How would one achieve conditional compilation with Rust projects that have - doctests?, accessed on July 15, 2025, - - - 1. How do you write your doc tests? 
: r/rust - Reddit, accessed on July 15, - 2025, - +[^1]: rust - How can I write documentation tests for private modules …, +accessed on July 15, 2025, + +[^2]: Rustdoc doctests need fixing - Swatinem, accessed on July 15, 2025, + +[^3]: Documentation tests - The rustdoc book - Rust Documentation, accessed on +July 15, 2025, +[^4]: Documentation tests - - GitHub Pages, accessed on July 15, 2025, + +[^5]: Documentation tests - - MIT, accessed on July 15, 2025, + +[^6]: How to organize your Rust tests - LogRocket Blog, accessed on July 15, +2025, + +[^8]: Writing Rust Documentation - DEV Community, accessed on July 15, 2025, + +[^9]: The rustdoc book, accessed on July 15, 2025, + +[^10]: Documentation - Rust API Guidelines, accessed on July 15, 2025, + +[^11]: Best practice for doc testing README - help - The Rust Programming + Language Forum, accessed on July 15, 2025, + +[^12]: Compile_fail doc test ignored in cfg(test) - help - The Rust Programming +Language Forum, accessed on July 15, 2025, + +accessed on July 15, 2025, + +[^14]: quote_doctest - Rust - [Docs.rs](http://Docs.rs), accessed on July 15, +2025, +[^15]: Advanced features - The rustdoc book - Rust Documentation, accessed on + July 15, 2025, +[^16]: rust - How can I conditionally execute a module-level doctest based …, +accessed on July 15, 2025, + + have doctests?, accessed on July 15, 2025, + +[^18]: How do you write your doc tests? : r/rust - Reddit, accessed on July 15, +2025, + diff --git a/docs/rust-testing-with-rstest-fixtures.md b/docs/rust-testing-with-rstest-fixtures.md index d83678e7..0e83c682 100644 --- a/docs/rust-testing-with-rstest-fixtures.md +++ b/docs/rust-testing-with-rstest-fixtures.md @@ -73,8 +73,8 @@ quality and developer productivity: - **Readability:** By injecting dependencies as function arguments, `rstest` makes the requirements of a test explicit and easy to understand. The test function's signature clearly documents what it needs to run. This allows - developers to "focus on the important stuff in your tests" by abstracting - away the setup details. + developers to focus on the important aspects of tests by abstracting away the + setup details. - **Reusability:** Fixtures defined with `rstest` are reusable components. A single fixture, such as one setting up a database connection or creating a complex data structure, can be used across multiple tests, eliminating @@ -306,7 +306,7 @@ function will be executed five times, and each test will receive a fresh, independent instance of the fixture's result. This behaviour is crucial for test isolation. The `rstest` macro effectively desugars a test like `fn the_test(injected: i32)` into something conceptually similar to -`#[test] fn the_test() { let injected = injected_fixture_func(); /*... */ }` +`#[test] fn the_test() { let injected = injected_fixture_func(); /* … */ }` within the test body, implying a new call each time. Test isolation prevents the state from one test from inadvertently affecting diff --git a/docs/wireframe-1-0-detailed-development-roadmap.md b/docs/wireframe-1-0-detailed-development-roadmap.md index 7cfacd7c..577aed2d 100644 --- a/docs/wireframe-1-0-detailed-development-roadmap.md +++ b/docs/wireframe-1-0-detailed-development-roadmap.md @@ -70,4 +70,4 @@ ready for a 1.0 release.* | 4.3 | Advanced Testing: Performance | Implement the criterion benchmark suite. Create micro-benchmarks for individual components (e.g., `PushHandle` contention) and macro-benchmarks for end-to-end throughput and latency. 
| Medium | All | | 4.4 | Comprehensive User Guides | Write the official documentation for the new features. Create separate guides for "Duplex Messaging & Pushes", "Streaming Responses", and "Message Fragmentation". Each guide must include runnable examples and explain the relevant concepts and APIs. | Large | All | | 4.5 | High-Quality Examples | Create at least two complete, high-quality examples demonstrating real-world use cases. These should include server-initiated MySQL packets (e.g., LOCAL INFILE and session-state trackers) and a push-driven protocol such as WebSocket heart-beats or MQTT broker fan-out. | Medium | All | -| 4.6 | Changelog & 1.0 Release | Finalise the `CHANGELOG.md` with a comprehensive summary of all new features, enhancements, and breaking changes. Tag and publish the 1.0.0 release. | Small | All | +| 4.6 | Changelog & 1.0 Release | Finalize the `CHANGELOG.md` with a comprehensive summary of all new features, enhancements, and breaking changes. Tag and publish the 1.0.0 release. | Small | All | diff --git a/docs/wireframe-client-design.md b/docs/wireframe-client-design.md index 32ce4531..01bc4cca 100644 --- a/docs/wireframe-client-design.md +++ b/docs/wireframe-client-design.md @@ -12,8 +12,8 @@ are intentionally generic: transport adapters, framing, serialization, routing, and middleware form a pipeline that is largely independent of server-specific logic. The design document outlines these layers, which process frames from raw bytes to typed messages and back -again【F:docs/rust-binary-router-library-design.md†L316-L371】. By reusing -these pieces, we can implement a lightweight client without duplicating +again【F:docs/rust-binary-router-library-design.md†L316-L371】. Reusing these +pieces enables the implementation of a lightweight client without duplicating protocol code. ## Core Components diff --git a/docs/wireframe-testing-crate.md b/docs/wireframe-testing-crate.md index c96eed66..20316914 100644 --- a/docs/wireframe-testing-crate.md +++ b/docs/wireframe-testing-crate.md @@ -108,8 +108,8 @@ pub async fn drive_with_frames_mut(app: &mut WireframeApp, frames: Vec>) For most tests the input frame is preassembled from raw bytes. A small wrapper can accept any `serde::Serialize` value and perform the encoding and framing before delegating to `drive_with_frame`. This mirrors the patterns in -`tests/ routes.rs` where structs are converted to bytes with -`BincodeSerializer` and then wrapped in a length‑prefixed frame. +`tests/routes.rs`, where structs convert to bytes with `BincodeSerializer` and +are then wrapped in a length‑prefixed frame. 
```rust #[derive(serde::Serialize)] From c70c853d0527d6945ae9ef3852abb78ad5f19f89 Mon Sep 17 00:00:00 2001 From: Leynos Date: Sun, 20 Jul 2025 23:29:01 +0100 Subject: [PATCH 03/11] Fix link formatting --- ...antipatterns-and-refactoring-strategies.md | 2 +- docs/contents.md | 8 ++- docs/mocking-network-outages-in-rust.md | 20 +++---- docs/rust-binary-router-library-design.md | 32 ++++++----- docs/rust-doctest-dry-guide.md | 56 +++++++++---------- docs/rust-testing-with-rstest-fixtures.md | 2 +- ...-set-philosophy-and-capability-maturity.md | 4 +- ...eframe-1-0-detailed-development-roadmap.md | 2 +- docs/wireframe-client-design.md | 8 +-- docs/wireframe-testing-crate.md | 12 ++-- 10 files changed, 74 insertions(+), 72 deletions(-) diff --git a/docs/complexity-antipatterns-and-refactoring-strategies.md b/docs/complexity-antipatterns-and-refactoring-strategies.md index a97be85f..77395fa4 100644 --- a/docs/complexity-antipatterns-and-refactoring-strategies.md +++ b/docs/complexity-antipatterns-and-refactoring-strategies.md @@ -200,7 +200,7 @@ development practices that prioritise short-term speed over long-term structural integrity. Rushed development cycles, lack of clear design, or cutting corners on maintenance can lead to the gradual accumulation of conditional logic within a single function. As new requirements or edge cases -are handled, developers might add more +are handled, developers might add more. `if` statements or loops to an existing method rather than stepping back to refactor and create appropriate abstractions. diff --git a/docs/contents.md b/docs/contents.md index e1b59f86..a2f01f7a 100644 --- a/docs/contents.md +++ b/docs/contents.md @@ -29,9 +29,11 @@ reference when navigating the project's design and architecture material. - [Wireframe 1.0 roadmap](wireframe-1-0-detailed-development-roadmap.md) Detailed tasks leading to Wireframe 1.0. - [Project roadmap](roadmap.md) High-level development roadmap. -- [1.0 - philosophy](the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md) - Philosophy and feature set for Wireframe 1.0. + +- [1.0 philosophy][philosophy] Philosophy and feature set for Wireframe 1.0. + +[philosophy]: +the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md ## Testing diff --git a/docs/mocking-network-outages-in-rust.md b/docs/mocking-network-outages-in-rust.md index 8e439f02..30599f49 100644 --- a/docs/mocking-network-outages-in-rust.md +++ b/docs/mocking-network-outages-in-rust.md @@ -9,13 +9,12 @@ outages** (timeouts, connection resets, partial sends) and verify that the server handles them gracefully. However, inducing such failures reliably is tricky without a controlled test environment. -In this tutorial, we demonstrate how to refactor and test `mxd`’s server -components to **simulate unreliable network conditions**. We’ll introduce a -transport abstraction to inject simulated failures, and use -`tokio-test::io::Builder` for custom I/O streams. We’ll also leverage `rstest` -for parameterized tests and `mockall` for mocking, where appropriate. The -result will be a suite of tests ensuring `mxd`’s server remains stable even -when the network is not. +This tutorial demonstrates how to refactor and test `mxd`’s server components +to **simulate unreliable network conditions**. The approach introduces a +transport abstraction to inject failures and uses `tokio-test::io::Builder` for +custom I/O streams. `rstest` is leveraged for parameterised tests and `mockall` +is used for mocking where appropriate. 
The outcome is a suite of tests that +ensures `mxd`’s server remains stable even when the network is not. ## Overview of `mxd`’s Server Networking @@ -238,10 +237,9 @@ a similar approach for failure scenarios. **1. Simulating a Handshake Timeout:** In this scenario, the client connects but **never sends the handshake bytes**, causing the server’s 5-second timeout -to elapse. To test this without an actual 5-second delay, we can take advantage -of Tokio’s ability to **pause time** in tests. By annotating our test with -`#[tokio::test(start_paused = true)]`, the Tokio runtime’s clock is frozen at -start. We can then `.advance` the clock programmatically to trigger the timeout. +to elapse. Testing without a real 5-second delay is possible by pausing the +Tokio clock in tests via `#[tokio::test(start_paused = true)]`, then advancing +the clock programmatically to trigger the timeout. Using `Builder`, we create a `reader` that yields **no data at all** (so the server will be stuck waiting), and after advancing time past 5 seconds, the diff --git a/docs/rust-binary-router-library-design.md b/docs/rust-binary-router-library-design.md index 96fdc395..542a0bd3 100644 --- a/docs/rust-binary-router-library-design.md +++ b/docs/rust-binary-router-library-design.md @@ -20,8 +20,8 @@ known for its intuitive routing, data extraction, and middleware systems.4 "wireframe" intends to adapt these successful patterns to the domain of binary protocols. -A key aspect of the proposed design is the utilization of `wire-rs` 6 for -message serialization and deserialization, contingent upon its ability to +A key aspect of the proposed design is the utilization of `wire-rs`[^wire-rs] +for message serialization and deserialization, contingent upon its ability to support or be augmented with derivable `Encode` and `Decode` traits. This, combined with a layered architecture and high-level abstractions, seeks to provide developers with a more declarative and less error-prone environment for @@ -72,19 +72,20 @@ Effective and efficient binary serialization is fundamental to any library dealing with wire protocols. Several Rust crates offer solutions in this space, each with distinct characteristics. -- `wire-rs`: The user query specifically suggests considering `wire-rs`.6 This - library provides an extensible interface for converting data to and from wire - protocols, supporting non-contiguous buffers and `no_std` environments. It - features `WireReader` and `WireWriter` for manual data reading and writing, - with explicit control over endianness.6 However, the available information - does not clearly indicate the presence or nature of derivable `Encode` and - `Decode` traits for automatic (de)serialization of complex types.6 The - ability to automatically generate (de)serialization logic via derive macros - is crucial for achieving "wireframe's" goal of reducing source code - complexity. If such derive macros are not a core feature of `wire-rs`, - "wireframe" would need to either contribute them, provide its own wrapper - traits that enable derivation while using `wire-rs` internally, or consider - alternative serialization libraries. +- `wire-rs`: The user query specifically suggests considering + `wire-rs`.[^wire-rs] This library provides an extensible interface for + converting data to and from wire protocols, supporting non-contiguous buffers + and `no_std` environments. 
It features `WireReader` and `WireWriter` for + manual data reading and writing, with explicit control over + endianness.[^wire-rs] However, the available information does not clearly + indicate the presence or nature of derivable `Encode` and `Decode` traits for + automatic (de)serialization of complex types.[^wire-rs] The ability to + automatically generate (de)serialization logic via derive macros is crucial + for achieving "wireframe's" goal of reducing source code complexity. If such + derive macros are not a core feature of `wire-rs`, "wireframe" would need to + either contribute them, provide its own wrapper traits that enable derivation + while using `wire-rs` internally, or consider alternative serialization + libraries. - `bincode`: `bincode` is a widely used binary serialization library that integrates well with Serde.8 It offers high performance and configurable @@ -1597,3 +1598,4 @@ integration with a (de)serialization library offering derivable traits and the Actix-like API components, along with gathering community feedback, will be crucial next steps to validate this approach and refine the library's features into a valuable tool for the Rust ecosystem. +[^wire-rs]: diff --git a/docs/rust-doctest-dry-guide.md b/docs/rust-doctest-dry-guide.md index 5c1e5e7f..f3a76d36 100644 --- a/docs/rust-doctest-dry-guide.md +++ b/docs/rust-doctest-dry-guide.md @@ -4,12 +4,12 @@ To master the art of writing effective documentation tests in Rust, one must first understand the foundational principles upon which the `rustdoc` tool -operates. Its behavior, particularly its testing mechanism, is not an arbitrary -collection of features but a direct consequence of a deliberate design -philosophy. The core of this philosophy is that every doctest should validate -the public API of a crate from the perspective of an external user. This single -principle dictates the entire compilation model and explains both the power and -the inherent limitations of doctests. +operates. Its behaviour, particularly its testing mechanism, is not an +arbitrary collection of features but a direct consequence of a deliberate +design philosophy. The core of this philosophy is that every doctest should +validate the public API of a crate from the perspective of an external user. +This single principle dictates the entire compilation model and explains both +the power and the inherent limitations of doctests. ### 1.1 The "Separate Crate" Paradigm @@ -106,9 +106,9 @@ Doctests reside within documentation comments. Rust recognizes two types: Within these comments, a code block is denoted by triple backticks (`). While rustdoc defaults to assuming the language is Rust, explicitly add the` - rust + rust ` language specifier for clarity.[^3] A doctest is considered to "pass" if it compiles successfully and runs to completion without panicking. To verify that a function produces a specific output, developers should use the standard assertion macros, such as ` - assert!`,`assert_eq!`, and`assert_ne!`.[^3] ### 2.2 The Philosophy of a Good Example @@ -226,12 +226,12 @@ resolve this conflict. They allow the developer to inject the necessary boilerplate to satisfy the compiler without burdening the human reader with irrelevant details. Understanding them as clever workarounds, rather than as first-class language features, helps explain their sometimes quirky, text-based -behavior. +behaviour. 
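
To make the preceding discussion concrete, the following sketch combines the two
devices — hidden setup lines and the trailing `Ok::<(), _>(())` shorthand. The
function, values, and error type are invented purely for illustration and are
not drawn from any particular crate:

```rust
/// Parses a TCP port from its textual form.
///
/// ```
/// # use std::num::ParseIntError; // hidden import: boilerplate the reader never sees
/// let port: u16 = "8080".parse()?; // `?` works because `main` implicitly returns a Result
/// assert_eq!(port, 8080);
/// # Ok::<(), ParseIntError>(()) // hidden closer: gives the implicit `main` its return value
/// ```
pub fn parse_port(text: &str) -> Result<u16, std::num::ParseIntError> {
    text.parse()
}
```

Rendered documentation shows only the three unhidden lines, while `cargo test
--doc` still compiles and runs the full, well-formed program.
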
## Advanced Doctest Control and Attributes Beyond basic pass/fail checks, `rustdoc` provides a suite of attributes to -control doctest behavior with fine-grained precision. These attributes, placed +control doctest behaviour with fine-grained precision. These attributes, placed in the header of a code block (e.g., \`\`\`\`ignore\`), allow developers to handle expected failures, non-executable examples, and other complex scenarios. @@ -241,13 +241,13 @@ Choosing the correct attribute is critical for communicating the intent of an example and ensuring the test suite provides meaningful feedback. The following table provides a comparative reference for the most common doctest attributes. -| Attribute | Action | Test Outcome | Primary Use Case & Caveats | -| ------------ | ------------------------------------------------------------------- | -------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| ignore | Skips both compilation and execution. | ignored | Use Case: For pseudo-code, examples known to be broken, or to temporarily disable a test. Caveat: Provides no guarantee that the code is even syntactically correct. Generally discouraged in favor of more specific attributes.[^3] | -| should_panic | Compiles and runs the code. The test passes if the code panics. | ok on panic, failed if it does not panic. | Use Case: Demonstrating functions that are designed to panic on invalid input (e.g., indexing out of bounds). | -| compile_fail | Attempts to compile the code. The test passes if compilation fails. | ok on compilation failure, failed if it compiles successfully. | Use Case: Illustrating language rules, such as the borrow checker or type system constraints. Caveat: Highly brittle. A future Rust version might make the code valid, causing the test to unexpectedly fail.[^4] | -| no_run | Compiles the code but does not execute it. | ok if compilation succeeds. | Use Case: Essential for examples with undesirable side effects in a test environment, such as network requests, filesystem I/O, or launching a GUI. Guarantees the example is valid Rust code without running it.[^5] | -| edition2021 | Compiles the code using the specified Rust edition's rules. | ok on success. | Use Case: Demonstrating syntax or idioms that are specific to a particular Rust edition (e.g., edition2018, edition2021).[^4] | +| Attribute | Action | Test Outcome | Primary Use Case & Caveats | +| ------------ | ------------------------------------------------------------------- | -------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| ignore | Skips both compilation and execution. | ignored | Use Case: For pseudocode, examples known to be broken, or to temporarily disable a test. Warning: Provides no guarantee that the code is even syntactically correct. Generally discouraged in favour of more specific attributes.[^3] | +| should_panic | Compiles and runs the code. The test passes if the code panics. | OK on panic, failed if it does not panic. 
| Use Case: Demonstrating functions that are designed to panic on invalid input (e.g., indexing out of bounds). | +| compile_fail | Attempts to compile the code. The test passes if compilation fails. | OK on compilation failure, failed if it compiles successfully. | Use Case: Illustrating language rules, such as the borrow checker or type system constraints. Warning: Highly brittle. A future Rust version might make the code valid, causing the test to unexpectedly fail.[^4] | +| no_run | Compiles the code but does not execute it. | OK if compilation succeeds. | Use Case: Essential for examples with undesirable side effects in a test environment, such as network requests, filesystem I/O, or launching a GUI. Guarantees the example is valid Rust code without running it.[^5] | +| edition2021 | Compiles the code using the specified Rust edition's rules. | OK on success. | Use Case: Demonstrating syntax or idioms that are specific to a particular Rust edition (e.g., edition2018, edition2021).[^4] | ### 3.2 Detailed Attribute Breakdown @@ -255,10 +255,10 @@ table provides a comparative reference for the most common doctest attributes. to do nothing with the code block. It is almost always better to either fix the example using hidden lines or use a more descriptive attribute like `no_run`.[^3] Its main legitimate use is for non-Rust code blocks or - illustrative pseudo-code. + illustrative pseudocode. - `should_panic`: This attribute inverts the normal test condition. It is used - to document and verify behavior that intentionally results in a panic. The + to document and verify behaviour that intentionally results in a panic. The test will fail if the code completes successfully or panics for a reason other than the one expected (if a specific panic message is asserted).[^3] @@ -271,10 +271,10 @@ table provides a comparative reference for the most common doctest attributes. - `no_run`: This attribute strikes a crucial balance between test verification and practicality. For an example that demonstrates how to download a file - from the internet, you want to ensure the example code is syntactically - correct and uses the API properly, but you do not want your CI server to - actually perform a network request every time tests are run. `no_run` - provides this guarantee by compiling the code without executing it.[^5] + from the internet, the example code must be syntactically correct and use the + API properly, but it is undesirable for the CI server to perform a network + request during every test run. `no_run` provides this guarantee by compiling + the code without executing it.[^5] - `edition20xx`: This attribute allows an example to be tested against a specific Rust edition. This is important for crates that support multiple @@ -537,7 +537,7 @@ its own purpose: from a completely external perspective, much like doctests. However, they are not constrained by the need to be readable documentation. They are suited for testing complex user workflows, interactions between multiple API entry - points, and the overall behavior of the library as a black box.[^6] + points, and the overall behaviour of the library as a black box.[^6] ### 6.2 The Unsolved Problem: Testing Private APIs @@ -574,7 +574,7 @@ real-world challenges when working with doctests. - **The** `README.md` **Dilemma**: A project's `README.md` file serves multiple audiences. 
It needs to render cleanly on platforms like GitHub and - [crates.io](http://crates.io), where hidden lines (`#...`) look like ugly, + [crates.io](http://crates.io), where hidden lines (`#...`) loOK like ugly, commented-out code. At the same time, it should contain testable examples, which often require hidden lines for setup.[^11] The best practice is to avoid maintaining the README manually. Instead, use a tool like @@ -607,7 +607,7 @@ mastering doctests: 1. **Embrace the Model**: Always remember that a doctest is an external integration test compiled in a separate crate. This mental model explains - nearly all of its behavior. + nearly all of its behaviour. 2. **Prioritize Clarity**: Write examples that teach the *why*, not just the *how*. Use hidden lines (`#`) ruthlessly to eliminate boilerplate and focus @@ -638,7 +638,7 @@ accessed on July 15, 2025, [^2]: Rustdoc doctests need fixing - Swatinem, accessed on July 15, 2025, -[^3]: Documentation tests - The rustdoc book - Rust Documentation, accessed on +[^3]: Documentation tests - The rustdoc boOK - Rust Documentation, accessed on July 15, 2025, [^4]: Documentation tests - - GitHub Pages, accessed on July 15, 2025, @@ -663,7 +663,7 @@ accessed on July 15, 2025, [^14]: quote_doctest - Rust - [Docs.rs](http://Docs.rs), accessed on July 15, 2025, -[^15]: Advanced features - The rustdoc book - Rust Documentation, accessed on +[^15]: Advanced features - The rustdoc boOK - Rust Documentation, accessed on July 15, 2025, [^16]: rust - How can I conditionally execute a module-level doctest based …, accessed on July 15, 2025, diff --git a/docs/rust-testing-with-rstest-fixtures.md b/docs/rust-testing-with-rstest-fixtures.md index 0e83c682..11fda497 100644 --- a/docs/rust-testing-with-rstest-fixtures.md +++ b/docs/rust-testing-with-rstest-fixtures.md @@ -369,7 +369,7 @@ failure might obscure subsequent ones. ### B. Combinatorial Testing with `#[values]`: Generating Test Matrices -The `#[values(...)]` attribute is used on test function arguments to generate +The `#[values(…)]` attribute is used on test function arguments to generate tests for every possible combination of the provided values (the Cartesian product). This is particularly useful for testing interactions between different parameters or ensuring comprehensive coverage across various input diff --git a/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md b/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md index 8bad69b2..ea48f7e5 100644 --- a/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md +++ b/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md @@ -4,7 +4,7 @@ The `wireframe` library began with a simple premise: to provide a fast, safe, and ergonomic toolkit for building request-response servers over custom binary -protocols. Its success to date is rooted in a core philosophy that prioritizes +protocols. Its success to date is rooted in a core philosophy that prioritises protocol-agnosticism, developer experience, and the full power of Rust's safety and performance guarantees. @@ -100,7 +100,7 @@ about. The core of this actor is a `tokio::select!` loop that multiplexes frames from multiple sources onto the outbound socket. 
To ensure that time-sensitive control messages (like heartbeats or session notifications) are not delayed by -large data transfers, this loop will be explicitly prioritized using +large data transfers, this loop will be explicitly prioritised using `select!(biased;)`. The polling order will be: diff --git a/docs/wireframe-1-0-detailed-development-roadmap.md b/docs/wireframe-1-0-detailed-development-roadmap.md index 577aed2d..dd37301e 100644 --- a/docs/wireframe-1-0-detailed-development-roadmap.md +++ b/docs/wireframe-1-0-detailed-development-roadmap.md @@ -24,7 +24,7 @@ which all public-facing features will be built.* | ---- | ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ---------- | | 1.1 | Core Response & Error Types | Define the new `Response` enum with `Single`, `Vec`, `Stream` and `Empty` variants. Implement the generic `WireframeError` enum to distinguish between I/O and protocol errors. | Small | — | | 1.2 | Priority Push Channels | Implement the internal dual-channel `mpsc` mechanism within the connection state to handle high-priority and low-priority pushed frames. | Medium | — | -| 1.3 | Connection Actor Write Loop | Convert per-request workers into stateful connection actors. Implement a `select!(biased; ...)` loop that polls for shutdown signals, high/low priority pushes and the handler response stream in that strict order. | Large | #1.2 | +| 1.3 | Connection Actor Write Loop | Convert per-request workers into stateful connection actors. Implement a `select!(biased; …)` loop that polls for shutdown signals, high/low priority pushes and the handler response stream in that strict order. | Large | #1.2 | | 1.4 | Initial FragmentStrategy Trait | Define the initial `FragmentStrategy` trait and the `FragmentMeta` struct. Focus on the core methods: `decode_header` and `encode_header`. | Medium | — | | 1.5 | Basic FragmentAdapter | Implement the `FragmentAdapter` as a `FrameProcessor`. Build the inbound reassembly logic for a single, non-multiplexed stream of fragments and the outbound logic for splitting a single large frame. | Large | #1.4 | | 1.6 | Internal Hook Plumbing | Add the invocation points for the protocol-specific hooks (`before_send`, `on_command_end`, etc.) within the connection actor, even if the public trait is not yet defined. | Small | #1.3 | diff --git a/docs/wireframe-client-design.md b/docs/wireframe-client-design.md index 01bc4cca..67e27e80 100644 --- a/docs/wireframe-client-design.md +++ b/docs/wireframe-client-design.md @@ -11,10 +11,8 @@ The library currently focuses on server development. However, the core layers are intentionally generic: transport adapters, framing, serialization, routing, and middleware form a pipeline that is largely independent of server-specific logic. The design document outlines these layers, which process frames from raw -bytes to typed messages and back -again【F:docs/rust-binary-router-library-design.md†L316-L371】. Reusing these -pieces enables the implementation of a lightweight client without duplicating -protocol code. +bytes to typed messages and back[^router-design]. Reusing these pieces enables +the implementation of a lightweight client without duplicating protocol code. 
## Core Components @@ -94,3 +92,5 @@ extensions might include: By leveraging the existing abstractions for framing and serialization, client support can share most of the server’s implementation while providing a small ergonomic API. +[^router-design]: See [wireframe router + design](rust-binary-router-library-design.md#implementation-details). diff --git a/docs/wireframe-testing-crate.md b/docs/wireframe-testing-crate.md index 20316914..9e86f823 100644 --- a/docs/wireframe-testing-crate.md +++ b/docs/wireframe-testing-crate.md @@ -7,8 +7,8 @@ frames, enabling fast tests without opening real network connections. ## Motivation The existing tests in [`tests/`](../tests) use helper functions such as -`run_app_with_frame` and `run_app_with_frames` to feed length‑prefixed frames -through an in‑memory duplex stream. These helpers simplify testing handlers by +`run_app_with_frame` and `run_app_with_frames` to feed length-prefixed frames +through an in-memory duplex stream. These helpers simplify testing handlers by allowing assertions on encoded responses without spinning up a full server. Encapsulating this logic in a dedicated crate keeps test code concise and reusable across projects. @@ -78,8 +78,8 @@ assert on these failure conditions directly. ### Custom Buffer Capacity -A variant accepting a buffer `capacity` allows fine‑tuning the size of the -in‑memory duplex channel, matching the existing +A variant accepting a buffer `capacity` allows fine-tuning the size of the +in-memory duplex channel, matching the existing `run_app_with_frame_with_capacity` and `run_app_with_frames_with_capacity` helpers. @@ -109,7 +109,7 @@ For most tests the input frame is preassembled from raw bytes. A small wrapper can accept any `serde::Serialize` value and perform the encoding and framing before delegating to `drive_with_frame`. This mirrors the patterns in `tests/routes.rs`, where structs convert to bytes with `BincodeSerializer` and -are then wrapped in a length‑prefixed frame. +are then wrapped in a length-prefixed frame. ```rust #[derive(serde::Serialize)] @@ -149,7 +149,7 @@ with prebuilt frames and their responses decoded for assertions. - **Isolation**: Handlers can be tested without spinning up a full server or opening sockets. - **Reusability**: Projects consuming `wireframe` can depend on - `wireframe_testing` in their dev‑dependencies to leverage the same helpers. + `wireframe_testing` in their dev-dependencies to leverage the same helpers. - **Clarity**: Abstracting the duplex stream logic keeps test cases focused on behaviour instead of transport details. 
From d2aa7f8d00c3f9f3a3dea0126224773655185703 Mon Sep 17 00:00:00 2001 From: Leynos Date: Mon, 21 Jul 2025 00:21:12 +0100 Subject: [PATCH 04/11] Refine docs per review --- ...eframe-a-guide-to-production-resilience.md | 5 +-- docs/mocking-network-outages-in-rust.md | 33 +++++++++---------- docs/rust-binary-router-library-design.md | 7 ++-- docs/rust-doctest-dry-guide.md | 12 +++---- ...-set-philosophy-and-capability-maturity.md | 4 +-- ...eframe-1-0-detailed-development-roadmap.md | 16 ++++----- docs/wireframe-testing-crate.md | 12 +++---- 7 files changed, 45 insertions(+), 44 deletions(-) diff --git a/docs/hardening-wireframe-a-guide-to-production-resilience.md b/docs/hardening-wireframe-a-guide-to-production-resilience.md index 03337a80..e3420892 100644 --- a/docs/hardening-wireframe-a-guide-to-production-resilience.md +++ b/docs/hardening-wireframe-a-guide-to-production-resilience.md @@ -32,8 +32,9 @@ The core mechanism relies on two primitives from the `tokio` ecosystem: - `CancellationToken`**:** A single root token is created at server startup. This token is cloned and distributed to every spawned task, including connection actors and any user-defined background workers. When the server - needs to shut down (e.g., on receipt of `SIGINT`), it calls `.cancel()` on - the root token, a signal that is immediately visible to all clones. + needs to shut down (e.g., on receipt of `SIGINT`), + it calls `.cancel()` on the root token, a signal that is immediately visible + to all clones. - `TaskTracker`**:** The server uses a `TaskTracker` to `spawn` all tasks. After triggering cancellation, the main server task calls `tracker.close()` and diff --git a/docs/mocking-network-outages-in-rust.md b/docs/mocking-network-outages-in-rust.md index 30599f49..c2f978a7 100644 --- a/docs/mocking-network-outages-in-rust.md +++ b/docs/mocking-network-outages-in-rust.md @@ -98,9 +98,9 @@ scenario. ## Introducing a Testable Transport Abstraction Currently, `handle_client` is tied to a real `TcpStream`. To test network -failures, we need to run `handle_client` (or its subroutines) with a *simulated -stream*. We’ll achieve this by abstracting the transport layer behind a trait -or generics, so that in tests we can substitute a mock stream object. +failures, the function must run with a *simulated stream*. The design abstracts +the transport layer behind a trait or generics, enabling tests to substitute a +mock stream object. **Refactoring** `handle_client`**:** A straightforward approach is to make `handle_client` generic over the stream’s reader and writer. The Tokio docs @@ -146,11 +146,10 @@ can apply this by splitting the logic: handshake logic will use the provided `reader` and `writer` as well. With this change, `client_handler` no longer assumes a real network -`TcpStream`; we can pass in any in-memory or mock stream for testing. -**Importantly**, the production code doesn’t lose functionality – we still -create actual TCP listeners/streams, but we hand off to the generic handler. -This refactor maintains the same behaviour while enabling injection of test -streams. +`TcpStream`; any in-memory or mock stream can be supplied for testing. +**Importantly**, production code retains full functionality – actual TCP +listeners/streams still hand off to the generic handler. The refactor maintains +the same behaviour while enabling injection of test streams. 
*Example – generic handler signature:* @@ -438,7 +437,7 @@ let test_reader = Builder::new() Here, after the handshake, any attempt by the server to read further will immediately get a `ConnectionReset` error. In `handle_client`, this is caught by the generic `Err(e)` arm (not EOF), and the function will return an error. -We can assert that result is an `Err` and matches the expected kind. +Assert that the result is an `Err` matching the expected kind. **6. Simulating Partial Write Failures:** So far, we’ve focused on read-side issues. But what if the server fails while **writing** to the client (for @@ -497,10 +496,10 @@ deterministic test suite for network failures. ## Parameterizing Tests with `rstest` -As the above examples show, many scenarios follow a similar pattern of setup -and assertion. We can use the `rstest` crate to avoid repetitive code by -parameterizing the scenarios. The `#[rstest]` attribute allows us to define -multiple cases for a single test function. +The preceding examples reveal a common pattern of setup and assertion across +scenarios. Leveraging the `rstest` crate avoids repetitive code by +parameterising the scenarios. The `#[rstest]` attribute defines multiple cases +for a single test function. For instance, we might create a single test function `test_network_outage_scenarios` with parameters indicating the scenario type. @@ -557,10 +556,10 @@ async fn test_network_outage_scenarios(scenario: Scenario) { } ``` -Above, each `case(...)` provides a different `Scenario` variant. The test -builds the appropriate `test_reader`/`test_writer` and then invokes -`client_handler`. We use a `should_error` flag to assert the expected outcome. -This single parametrized test replaces multiple individual tests, reducing +In the snippet above, each `case(...)` macro provides a different `Scenario` +variant. The test builds the appropriate `test_reader`/`test_writer` and then +invokes `client_handler`. A `should_error` flag asserts the expected outcome. +This single parametrised test replaces multiple individual tests, reducing duplication. All scenarios still run in isolation with distinct setups, thanks to `rstest`. diff --git a/docs/rust-binary-router-library-design.md b/docs/rust-binary-router-library-design.md index 542a0bd3..cbe624a6 100644 --- a/docs/rust-binary-router-library-design.md +++ b/docs/rust-binary-router-library-design.md @@ -16,9 +16,9 @@ This report outlines the design of "wireframe," a novel Rust library aimed at substantially reducing source code complexity when building applications that handle arbitrary frame-based binary wire protocols. The design draws inspiration from the ergonomic API of Actix Web 4, a popular Rust web framework -known for its intuitive routing, data extraction, and middleware systems.4 -"wireframe" intends to adapt these successful patterns to the domain of binary -protocols. +known for its intuitive routing, data extraction, and middleware +systems.[^actix-web] "wireframe" intends to adapt these successful patterns to +the domain of binary protocols. A key aspect of the proposed design is the utilization of `wire-rs`[^wire-rs] for message serialization and deserialization, contingent upon its ability to @@ -1599,3 +1599,4 @@ Actix-like API components, along with gathering community feedback, will be crucial next steps to validate this approach and refine the library's features into a valuable tool for the Rust ecosystem. 
[^wire-rs]: +[^actix-web]: Actix Web 4 – diff --git a/docs/rust-doctest-dry-guide.md b/docs/rust-doctest-dry-guide.md index f3a76d36..e1045cdf 100644 --- a/docs/rust-doctest-dry-guide.md +++ b/docs/rust-doctest-dry-guide.md @@ -18,7 +18,7 @@ running within the library's own context, but as an entirely separate, temporary crate.[^1] When a developer executes `cargo test --doc`, `rustdoc` initiates a multi-stage process for every code -block found in the documentation comments 3: +block found in the documentation comments[^3]: 1. **Parsing and Extraction**: `rustdoc` first parses the source code of the library, resolving conditional compilation attributes (`#[cfg]`) to @@ -241,7 +241,7 @@ Choosing the correct attribute is critical for communicating the intent of an example and ensuring the test suite provides meaningful feedback. The following table provides a comparative reference for the most common doctest attributes. -| Attribute | Action | Test Outcome | Primary Use Case & Caveats | +| Attribute | Action | Test Outcome | Primary Use Case & Warnings | | ------------ | ------------------------------------------------------------------- | -------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | ignore | Skips both compilation and execution. | ignored | Use Case: For pseudocode, examples known to be broken, or to temporarily disable a test. Warning: Provides no guarantee that the code is even syntactically correct. Generally discouraged in favour of more specific attributes.[^3] | | should_panic | Compiles and runs the code. The test passes if the code panics. | OK on panic, failed if it does not panic. | Use Case: Demonstrating functions that are designed to panic on invalid input (e.g., indexing out of bounds). | @@ -314,7 +314,7 @@ excluded from normal production builds and standard unit test runs, preventing any pollution of the final binary or the public API. The typical implementation pattern is to create a private helper module within -your library: +the library: Rust @@ -543,7 +543,7 @@ its own purpose: As established, the `rustdoc` compilation model makes testing private items in doctests impossible by design.[^1] The community has developed several -workarounds, but each comes with significant trade-offs 1: +workarounds, but each comes with significant trade-offs[^1]: 1. `ignore` **the test**: This allows the example to exist in the documentation but sacrifices the guarantee of correctness. It is the least desirable @@ -574,12 +574,12 @@ real-world challenges when working with doctests. - **The** `README.md` **Dilemma**: A project's `README.md` file serves multiple audiences. It needs to render cleanly on platforms like GitHub and - [crates.io](http://crates.io), where hidden lines (`#...`) loOK like ugly, + [crates.io](http://crates.io), where hidden lines (`#...`) look like ugly, commented-out code. At the same time, it should contain testable examples, which often require hidden lines for setup.[^11] The best practice is to avoid maintaining the README manually. Instead, use a tool like - `cargo-readme`. This tool generates a `README.md` file from your crate-level + `cargo-readme`. This tool generates a `README.md` file from the crate-level documentation (in `lib.rs`), automatically stripping out the hidden lines. 
This provides a single source of truth that is both fully testable via `cargo test --doc` and produces a clean, professional README for external diff --git a/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md b/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md index ea48f7e5..8bad69b2 100644 --- a/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md +++ b/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md @@ -4,7 +4,7 @@ The `wireframe` library began with a simple premise: to provide a fast, safe, and ergonomic toolkit for building request-response servers over custom binary -protocols. Its success to date is rooted in a core philosophy that prioritises +protocols. Its success to date is rooted in a core philosophy that prioritizes protocol-agnosticism, developer experience, and the full power of Rust's safety and performance guarantees. @@ -100,7 +100,7 @@ about. The core of this actor is a `tokio::select!` loop that multiplexes frames from multiple sources onto the outbound socket. To ensure that time-sensitive control messages (like heartbeats or session notifications) are not delayed by -large data transfers, this loop will be explicitly prioritised using +large data transfers, this loop will be explicitly prioritized using `select!(biased;)`. The polling order will be: diff --git a/docs/wireframe-1-0-detailed-development-roadmap.md b/docs/wireframe-1-0-detailed-development-roadmap.md index dd37301e..4b6095c1 100644 --- a/docs/wireframe-1-0-detailed-development-roadmap.md +++ b/docs/wireframe-1-0-detailed-development-roadmap.md @@ -20,14 +20,14 @@ public consumption. and message processing. This phase establishes the internal architecture upon which all public-facing features will be built.* -| Item | Name | Details | Size | Depends on | -| ---- | ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ---------- | -| 1.1 | Core Response & Error Types | Define the new `Response` enum with `Single`, `Vec`, `Stream` and `Empty` variants. Implement the generic `WireframeError` enum to distinguish between I/O and protocol errors. | Small | — | -| 1.2 | Priority Push Channels | Implement the internal dual-channel `mpsc` mechanism within the connection state to handle high-priority and low-priority pushed frames. | Medium | — | -| 1.3 | Connection Actor Write Loop | Convert per-request workers into stateful connection actors. Implement a `select!(biased; …)` loop that polls for shutdown signals, high/low priority pushes and the handler response stream in that strict order. | Large | #1.2 | -| 1.4 | Initial FragmentStrategy Trait | Define the initial `FragmentStrategy` trait and the `FragmentMeta` struct. Focus on the core methods: `decode_header` and `encode_header`. | Medium | — | -| 1.5 | Basic FragmentAdapter | Implement the `FragmentAdapter` as a `FrameProcessor`. Build the inbound reassembly logic for a single, non-multiplexed stream of fragments and the outbound logic for splitting a single large frame. | Large | #1.4 | -| 1.6 | Internal Hook Plumbing | Add the invocation points for the protocol-specific hooks (`before_send`, `on_command_end`, etc.) within the connection actor, even if the public trait is not yet defined. 
| Small | #1.3 | +| Item | Name | Details | Size | Depends on | +| ---- | ------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ---------- | +| 1.1 | Core Response & Error Types | Define the new `Response` enum with `Single`, `Vec`, `Stream` and `Empty` variants. Implement the generic `WireframeError` enum to distinguish between I/O and protocol errors. | Small | — | +| 1.2 | Priority Push Channels | Implement the internal dual-channel `mpsc` mechanism within the connection state to handle high-priority and low-priority pushed frames. | Medium | — | +| 1.3 | Connection Actor Write Loop | Convert per-request workers into stateful connection actors. Implement a `select!(biased; …)` loop that polls for shutdown signals, high-/low-priority pushes and the handler response stream in that strict order. | Large | #1.2 | +| 1.4 | Initial FragmentStrategy Trait | Define the initial `FragmentStrategy` trait and the `FragmentMeta` struct. Focus on the core methods: `decode_header` and `encode_header`. | Medium | — | +| 1.5 | Basic FragmentAdapter | Implement the `FragmentAdapter` as a `FrameProcessor`. Build the inbound reassembly logic for a single, non-multiplexed stream of fragments and the outbound logic for splitting a single large frame. | Large | #1.4 | +| 1.6 | Internal Hook Plumbing | Add the invocation points for the protocol-specific hooks (`before_send`, `on_command_end`, etc.) within the connection actor, even if the public trait is not yet defined. | Small | #1.3 | ## Phase 2: Public APIs & Developer Ergonomics diff --git a/docs/wireframe-testing-crate.md b/docs/wireframe-testing-crate.md index 9e86f823..dc48d80d 100644 --- a/docs/wireframe-testing-crate.md +++ b/docs/wireframe-testing-crate.md @@ -67,9 +67,9 @@ where These functions mirror the behaviour of `run_app_with_frame` and `run_app_with_frames` found in the repository’s test utilities. They create a `tokio::io::duplex` stream, spawn the application as a background task, and -write the provided frame(s) to the client side of the stream. After the app -finishes processing, the helpers collect the bytes written back and return them -for inspection. +write the provided frame(s) to the client side of the stream. After the +application finishes processing, the helpers collect the bytes written back and +return them for inspection. Any I/O errors surfaced by the duplex stream or failures while decoding a length prefix propagate through the returned `IoResult`. Malformed or truncated @@ -107,9 +107,9 @@ pub async fn drive_with_frames_mut(app: &mut WireframeApp, frames: Vec>) For most tests the input frame is preassembled from raw bytes. A small wrapper can accept any `serde::Serialize` value and perform the encoding and framing -before delegating to `drive_with_frame`. This mirrors the patterns in -`tests/routes.rs`, where structs convert to bytes with `BincodeSerializer` and -are then wrapped in a length-prefixed frame. +before delegating to `drive_with_frame`. The approach mirrors patterns in +`tests/routes.rs`, where structs convert to bytes with `BincodeSerializer` +before being wrapped in a length-prefixed frame. 
```rust #[derive(serde::Serialize)] From 5bc18062d5c82f3d8f871b44d919215606858226 Mon Sep 17 00:00:00 2001 From: Leynos Date: Mon, 21 Jul 2025 01:23:34 +0100 Subject: [PATCH 05/11] Address review feedback --- ...ening-wireframe-a-guide-to-production-resilience.md | 9 ++++----- docs/roadmap.md | 2 +- docs/rust-binary-router-library-design.md | 10 ++++++---- docs/rust-doctest-dry-guide.md | 6 +++--- docs/wireframe-1-0-detailed-development-roadmap.md | 2 +- 5 files changed, 15 insertions(+), 14 deletions(-) diff --git a/docs/hardening-wireframe-a-guide-to-production-resilience.md b/docs/hardening-wireframe-a-guide-to-production-resilience.md index e3420892..bbef9224 100644 --- a/docs/hardening-wireframe-a-guide-to-production-resilience.md +++ b/docs/hardening-wireframe-a-guide-to-production-resilience.md @@ -29,14 +29,13 @@ The core mechanism relies on two primitives from the `tokio` ecosystem: `tokio_util::sync::CancellationToken` for signalling and `tokio_util::task::TaskTracker` for synchronisation. -- `CancellationToken`**:** A single root token is created at server startup. +- `CancellationToken`: A single root token is created at server startup. This token is cloned and distributed to every spawned task, including connection actors and any user-defined background workers. When the server - needs to shut down (e.g., on receipt of `SIGINT`), - it calls `.cancel()` on the root token, a signal that is immediately visible - to all clones. + needs to shut down (e.g., on receipt of `SIGINT`), it calls `.cancel()` on + the root token, a signal that is immediately visible to all clones. -- `TaskTracker`**:** The server uses a `TaskTracker` to `spawn` all tasks. After +- `TaskTracker`: The server uses a `TaskTracker` to `spawn` all tasks. After triggering cancellation, the main server task calls `tracker.close()` and then `tracker.wait().await`. This call will only resolve once every single tracked task has completed, guaranteeing that no tasks are orphaned. diff --git a/docs/roadmap.md b/docs/roadmap.md index a15a58be..842e3249 100644 --- a/docs/roadmap.md +++ b/docs/roadmap.md @@ -171,7 +171,7 @@ production environments. - [ ] Use `loom` for concurrency testing of shared state (`tests/advanced/concurrency_loom.rs`). -## Phase 6: Application-Level Streaming (Multi-Packet Responses) (Priority Focus) +## Phase 6: Multi-Packet Streaming Responses (Priority Focus) This is the next major feature set. It enables a handler to return multiple, distinct messages over time in response to a single request, forming a logical diff --git a/docs/rust-binary-router-library-design.md b/docs/rust-binary-router-library-design.md index cbe624a6..1b51fd2b 100644 --- a/docs/rust-binary-router-library-design.md +++ b/docs/rust-binary-router-library-design.md @@ -268,9 +268,9 @@ The development of "wireframe" adheres to the following principles: it handles beyond the assumption of a frame-based structure. Users should be able to define their own framing logic and message types. - **Performance**: Leveraging Rust's inherent performance characteristics is - crucial.2 While developer ergonomics is a primary focus, the design must - avoid introducing unnecessary overhead. Asynchronous operations, powered by a - runtime like Tokio, are essential for efficient I/O and concurrency. + crucial.[^perf] While developer ergonomics is a primary focus, the design + must avoid introducing unnecessary overhead. Asynchronous operations, powered + by a runtime like Tokio, are essential for efficient I/O and concurrency. 
- **Safety**: The library will harness Rust's strong type system and ownership model to prevent common networking bugs, such as data races and use-after-free errors, contributing to more reliable software. @@ -329,7 +329,7 @@ handling to be managed and customized independently. built-in framing logic, potentially by implementing traits like Tokio's `Decoder` and `Encoder`. - **Deserialization/Serialization Engine**: This engine converts the byte - payload of incoming frames into strongly-typed Rust data structures + payload of incoming frames into strongly typed Rust data structures (messages) and serializes outgoing Rust messages into byte payloads for outgoing frames. This is the primary role intended for `wire-rs` 6 or an alternative like `bincode` 11 or `postcard`.12 A minimal wrapper trait in the @@ -1600,3 +1600,5 @@ crucial next steps to validate this approach and refine the library's features into a valuable tool for the Rust ecosystem. [^wire-rs]: [^actix-web]: Actix Web 4 – +[^perf]: See *Rust Performance Book* – + diff --git a/docs/rust-doctest-dry-guide.md b/docs/rust-doctest-dry-guide.md index e1045cdf..7c2b851e 100644 --- a/docs/rust-doctest-dry-guide.md +++ b/docs/rust-doctest-dry-guide.md @@ -214,7 +214,7 @@ primary use cases include: on the API item being documented.[^3] 3. **Hiding** `use` **Statements**: While often useful to show which types are - involved, `use` statements can sometimes be hidden to de-clutter very simple + involved, `use` statements can sometimes be hidden to de-clutter simple examples. The existence of features like hidden lines and the `(())` shorthand reveals a @@ -557,8 +557,8 @@ workarounds, but each comes with significant trade-offs[^1]: 3. **Use** `cfg_attr` **to conditionally make items public**: This involves adding an attribute like `#[cfg_attr(feature = "doctest-private", visibility::make(pub))]` to every - private item you wish to test. While robust, it is highly invasive and adds - significant boilerplate throughout the codebase. + private item that requires testing. While robust, it is highly invasive and + adds significant boilerplate throughout the codebase. The expert recommendation is to acknowledge this limitation and not fight the tool. Do not compromise a clean API design for the sake of doctests. Use diff --git a/docs/wireframe-1-0-detailed-development-roadmap.md b/docs/wireframe-1-0-detailed-development-roadmap.md index 4b6095c1..bc753bcd 100644 --- a/docs/wireframe-1-0-detailed-development-roadmap.md +++ b/docs/wireframe-1-0-detailed-development-roadmap.md @@ -26,7 +26,7 @@ which all public-facing features will be built.* | 1.2 | Priority Push Channels | Implement the internal dual-channel `mpsc` mechanism within the connection state to handle high-priority and low-priority pushed frames. | Medium | — | | 1.3 | Connection Actor Write Loop | Convert per-request workers into stateful connection actors. Implement a `select!(biased; …)` loop that polls for shutdown signals, high-/low-priority pushes and the handler response stream in that strict order. | Large | #1.2 | | 1.4 | Initial FragmentStrategy Trait | Define the initial `FragmentStrategy` trait and the `FragmentMeta` struct. Focus on the core methods: `decode_header` and `encode_header`. | Medium | — | -| 1.5 | Basic FragmentAdapter | Implement the `FragmentAdapter` as a `FrameProcessor`. Build the inbound reassembly logic for a single, non-multiplexed stream of fragments and the outbound logic for splitting a single large frame. 
| Large | #1.4 | +| 1.5 | Basic FragmentAdapter | Implement the `FragmentAdapter` as a `FrameProcessor`. Build the inbound reassembly logic for a single, non-multiplexed stream of fragments, and the outbound logic for splitting a single large frame. | Large | #1.4 | | 1.6 | Internal Hook Plumbing | Add the invocation points for the protocol-specific hooks (`before_send`, `on_command_end`, etc.) within the connection actor, even if the public trait is not yet defined. | Small | #1.3 | ## Phase 2: Public APIs & Developer Ergonomics From 5bf1e8570fe2487f624cf333d78e36d30c690051 Mon Sep 17 00:00:00 2001 From: Leynos Date: Mon, 21 Jul 2025 02:59:56 +0100 Subject: [PATCH 06/11] Wrap docs and fix footnotes --- docs/roadmap.md | 2 +- docs/rust-doctest-dry-guide.md | 88 +++++++++++++++++----------------- 2 files changed, 45 insertions(+), 45 deletions(-) diff --git a/docs/roadmap.md b/docs/roadmap.md index 842e3249..6beba8d3 100644 --- a/docs/roadmap.md +++ b/docs/roadmap.md @@ -128,7 +128,7 @@ lifecycle control. - [x] **Lifecycle Hooks:** - [x] Implement `on_connect` and `on_disconnect` hooks for session - initialisation and cleanup (`src/hooks.rs`). + initialization and cleanup (`src/hooks.rs`). - [x] Write tests to verify lifecycle hook behaviour (`tests/lifecycle.rs`). diff --git a/docs/rust-doctest-dry-guide.md b/docs/rust-doctest-dry-guide.md index 7c2b851e..73d19084 100644 --- a/docs/rust-doctest-dry-guide.md +++ b/docs/rust-doctest-dry-guide.md @@ -71,7 +71,7 @@ CI/CD cycle, a common pain point in the Rust community.[^2] The architectural purity of the `rustdoc` model—its insistence on simulating an external user—creates a fundamental trade-off. On one hand, it provides an unparalleled guarantee that the public documentation is accurate and that the -examples work as advertised, creating true "living documentation".[^8] On the +examples work as advertised, creating true "living documentation".[^7] On the other hand, this same purity prevents the use of doctests for verifying documentation of internal, private APIs. This forces a bifurcation of documentation strategy. Public-facing documentation can be tied directly to @@ -95,33 +95,33 @@ clear, illustrative, and robust. ### 2.1 The Anatomy of a Doctest -Doctests reside within documentation comments. Rust recognizes two types: +Doctests reside within documentation comments. Rust recognises two types: -- **Outer doc comments (**`///`**)**: These document the item that follows them - (e.g., a function, struct, or module). This is the most common type.[^8] +- **Outer doc comments (`///`)**: These document the item that follows them + (e.g., a function, struct, or module). This is the most common type.[^7] -- **Inner doc comments (**`//!`**)**: These document the item they are inside - of (e.g., a module or the crate itself). They are typically used at the top - of `lib.rs` or `mod.rs` to provide crate- or module-level documentation.[^9] - Within these comments, a code block is - denoted by triple backticks - (`). While rustdoc defaults to assuming the language is Rust, explicitly add the` - rust - ` language specifier for clarity.[^3] A doctest is considered to "pass" if it compiles successfully and runs to completion without panicking. To verify that a function produces a specific output, developers should use the standard assertion macros, such as ` - assert!`,`assert_eq!`, and`assert_ne!`.[^3] +- **Inner doc comments (`//!`)**: These document the item they are inside + (e.g., a module or the crate itself). 
They are typically used at the top of + `lib.rs` or `mod.rs` to provide crate- or module-level documentation.[^8] + + Within these comments, a code block is +denoted by triple back-ticks (```). While `rustdoc` defaults to Rust syntax, +explicitly add the `rust` language specifier for clarity.[^3] A doctest +"passes" when it compiles and runs without panicking. To assert specific +outcomes, use the standard macros `assert!`, `assert_eq!`, and +`assert_ne!`.[^3] ### 2.2 The Philosophy of a Good Example The purpose of a documentation example extends beyond merely demonstrating syntax. A reader can typically be expected to understand the mechanics of calling a function or instantiating a struct. A truly valuable example -illustrates *why* and in *what context* an item should be used.[^10] It should +illustrates *why* and in *what context* an item should be used.[^9] It should tell a small story or solve a miniature problem that illuminates the item's purpose. For instance, an example for `String::clone()` should not just show `hello.clone();`, but should demonstrate -a scenario where ownership rules necessitate creating a copy.[^10] +a scenario where ownership rules necessitate creating a copy.[^9] To achieve this, examples must be clear and concise. Any code that is not directly relevant to the point being made—such as complex setup, boilerplate, @@ -138,7 +138,7 @@ returns a `Result` or `Option`. This mismatch leads to a compilation error.[^3] Using `.unwrap()` or `.expect()` in examples is strongly discouraged. It is considered an anti-pattern because users often copy example code verbatim, and -encouraging panicking on errors is contrary to robust application design.[^10] +encouraging panicking on errors is contrary to robust application design.[^9] Instead, two canonical solutions exist. Solution 1: The Explicit main Function @@ -166,7 +166,7 @@ Rust ``` In this pattern, the reader only sees the core, fallible code, while the test -itself is a complete, well-behaved program.[^10] +itself is a complete, well-behaved program.[^9] Solution 2: The Implicit Result-Returning main @@ -206,7 +206,7 @@ primary use cases include: 1. **Hiding** `main` **Wrappers**: As demonstrated in the error-handling examples, the entire `fn main() -> Result<...> {... }` and `Ok(())` scaffolding can be hidden, presenting the user with only the relevant - code.[^10] + code.[^9] 2. **Hiding Setup Code**: If an example requires some preliminary setup—like creating a temporary file, defining a helper struct for the test, or @@ -221,7 +221,7 @@ The existence of features like hidden lines and the `(())` shorthand reveals a core tension in `rustdoc`'s design. The compilation model is rigid: every test must be a valid, standalone program.[^2] However, the ideal documentation example is often just a small, illustrative snippet that is not a valid program -on its own.[^10] These ergonomic features are pragmatic "patches" designed to +on its own.[^9] These ergonomic features are pragmatic "patches" designed to resolve this conflict. They allow the developer to inject the necessary boilerplate to satisfy the compiler without burdening the human reader with irrelevant details. Understanding them as clever workarounds, rather than as @@ -305,7 +305,7 @@ flag provided by `rustdoc`: `doctest`. A common mistake is to try to place shared test logic in a block guarded by `#[cfg(test)]`. 
This will not work, because `rustdoc` does not enable the `test` configuration flag during its compilation process; `#[cfg(test)]` is reserved for unit and integration tests -run directly by `cargo test`.[^12] +run directly by `cargo test`.[^11] Instead, `rustdoc` sets its own unique `doctest` flag. By guarding a module or function with `#[cfg(doctest)]`, developers can write helper code that is @@ -364,7 +364,7 @@ pub struct TestContext { /*... */ } This pattern is the most effective way to achieve DRY doctests. It centralizes setup logic, improves maintainability, and cleanly separates testing concerns -from production code.[^12] +from production code.[^11] ### 4.3 Advanced DRY: Programmatic Doctest Generation @@ -378,7 +378,7 @@ Crates like `quote-doctest` address this by allowing developers to programmatically construct a doctest from a `TokenStream`. This enables the generation of doctests from the same source of truth that generates the code they are intended to test, representing the ultimate application of the DRY -principle in this domain.[^14] +principle in this domain.[^12] ## Conditional Compilation Strategies for Doctests @@ -400,7 +400,7 @@ a Windows machine). **The Mechanism**: `rustdoc` always invokes the compiler with the `--cfg doc` flag set. By adding `doc` to an item's `#[cfg]` attribute, a developer can instruct the compiler to include that item specifically for documentation -builds.[^15] +builds.[^13] **The Pattern**: @@ -421,7 +421,7 @@ This distinction highlights the "cfg duality." The `#[cfg(doc)]` attribute controls the *table of contents* of the documentation; it determines which items are parsed and rendered. The actual compilation of a doctest, however, happens in a separate, later stage. In that stage, the `doc` cfg is *not* -passed to the compiler.[^15] The compiler only sees the host +passed to the compiler.[^13] The compiler only sees the host `cfg` (e.g., `target_os = "windows"`), so the `UnixSocket` type is not available, and the test fails to compile. `#[cfg(doc)]` affects what is @@ -458,7 +458,7 @@ Rust When the `"serde"` feature is disabled, the code inside the block is compiled out. The doctest becomes an empty program that runs, does nothing, and is reported as `ok`. While simple to write, this can be misleading, as the test -suite reports a "pass" for a test that was effectively skipped.[^16] +suite reports a "pass" for a test that was effectively skipped.[^14] Pattern 2: cfg_attr to Conditionally ignore the Test @@ -483,7 +483,7 @@ With this pattern, if the `"serde"` feature is disabled, the test is marked as the feature is enabled, the `ignore` is omitted, and the test runs normally. This approach provides clearer feedback but is significantly more verbose and less ergonomic, especially when applied to outer (`///`) doc comments, as the -`cfg_attr` must be applied to every single line of the comment.[^16] +`cfg_attr` must be applied to every single line of the comment.[^14] ### 5.3 Displaying Feature Requirements in Docs: `#[doc(cfg(...))]` @@ -507,7 +507,7 @@ pub fn function_requiring_serde() { /*... */ } This will render a banner in the documentation for `function_requiring_serde` that reads, "This is only available when the `serde` feature is enabled." 
This attribute is purely for documentation generation and is independent of, but -often used alongside, the conditional test execution patterns.[^16] +often used alongside, the conditional test execution patterns.[^14] ## Doctests in the Wider Project Ecosystem @@ -576,25 +576,25 @@ real-world challenges when working with doctests. audiences. It needs to render cleanly on platforms like GitHub and [crates.io](http://crates.io), where hidden lines (`#...`) look like ugly, commented-out code. At the same time, it should contain testable examples, - which often require hidden lines for setup.[^11] The best practice is to + which often require hidden lines for setup.[^10] The best practice is to avoid maintaining the README manually. Instead, use a tool like `cargo-readme`. This tool generates a `README.md` file from the crate-level documentation (in `lib.rs`), automatically stripping out the hidden lines. This provides a single source of truth that is both fully testable via `cargo test --doc` and produces a clean, professional README for external - sites.[^11] + sites.[^10] - **Developer Ergonomics in IDEs**: Writing code inside documentation comments can be a subpar experience. IDEs and tools like `rust-analyzer` often provide limited or no autocompletion, real-time error checking, or refactoring - support for code within a comment block.[^18] A common and effective workflow + support for code within a comment block.[^15] A common and effective workflow to mitigate this is to first write and debug the example as a standard `#[test]` function in a temporary file or test module. This allows the developer to leverage the full power of the IDE. Once the code is working correctly, it can be copied into the doc comment, and the necessary - formatting (`///`, `#`, etc.) can be applied.[^18] + formatting (`///`, `#`, etc.) can be applied.[^15] ## Conclusion and Recommendations @@ -605,9 +605,9 @@ have evolved to manage its constraints, developers can write doctests that are effective, ergonomic, and maintainable. To summarize the key principles for mastering doctests: -1. **Embrace the Model**: Always remember that a doctest is an external - integration test compiled in a separate crate. This mental model explains - nearly all of its behaviour. +1. **Embrace the Model**: Treat a doctest as an external integration test + compiled in a separate crate; this mental model explains nearly all of its + behaviour. 2. **Prioritize Clarity**: Write examples that teach the *why*, not just the *how*. 
Use hidden lines (`#`) ruthlessly to eliminate boilerplate and focus @@ -647,29 +647,29 @@ July 15, 2025, [^6]: How to organize your Rust tests - LogRocket Blog, accessed on July 15, 2025, -[^8]: Writing Rust Documentation - DEV Community, accessed on July 15, 2025, +[^7]: Writing Rust Documentation - DEV Community, accessed on July 15, 2025, -[^9]: The rustdoc book, accessed on July 15, 2025, +[^8]: The rustdoc book, accessed on July 15, 2025, -[^10]: Documentation - Rust API Guidelines, accessed on July 15, 2025, +[^9]: Documentation - Rust API Guidelines, accessed on July 15, 2025, -[^11]: Best practice for doc testing README - help - The Rust Programming +[^10]: Best practice for doc testing README - help - The Rust Programming Language Forum, accessed on July 15, 2025, -[^12]: Compile_fail doc test ignored in cfg(test) - help - The Rust Programming +[^11]: Compile_fail doc test ignored in cfg(test) - help - The Rust Programming Language Forum, accessed on July 15, 2025, accessed on July 15, 2025, -[^14]: quote_doctest - Rust - [Docs.rs](http://Docs.rs), accessed on July 15, +[^12]: quote_doctest - Rust - [Docs.rs](http://Docs.rs), accessed on July 15, 2025, -[^15]: Advanced features - The rustdoc boOK - Rust Documentation, accessed on +[^13]: Advanced features - The rustdoc boOK - Rust Documentation, accessed on July 15, 2025, -[^16]: rust - How can I conditionally execute a module-level doctest based …, +[^14]: rust - How can I conditionally execute a module-level doctest based …, accessed on July 15, 2025, have doctests?, accessed on July 15, 2025, -[^18]: How do you write your doc tests? : r/rust - Reddit, accessed on July 15, +[^15]: How do you write your doc tests? : r/rust - Reddit, accessed on July 15, 2025, From 9966fe198c85e8c36357f91c3e96ef6c565374ed Mon Sep 17 00:00:00 2001 From: Leynos Date: Mon, 21 Jul 2025 12:26:17 +0100 Subject: [PATCH 07/11] Address formatting feedback --- docs/rust-doctest-dry-guide.md | 32 ++++++-------------- docs/rust-testing-with-rstest-fixtures.md | 36 ++++++++++++++++------- docs/wireframe-client-design.md | 3 +- 3 files changed, 37 insertions(+), 34 deletions(-) diff --git a/docs/rust-doctest-dry-guide.md b/docs/rust-doctest-dry-guide.md index 73d19084..76bbccd9 100644 --- a/docs/rust-doctest-dry-guide.md +++ b/docs/rust-doctest-dry-guide.md @@ -148,9 +148,7 @@ function within the doctest that returns a Result. This leverages the Termination trait, which is implemented for Result. The surrounding boilerplate can then be hidden from the rendered documentation. -Rust - -``` +```Rust /// # Examples /// /// ``` @@ -174,9 +172,7 @@ rustdoc provides a lesser-known but more concise shorthand for this exact scenario. If a code block ends with the literal token (()), rustdoc will automatically wrap the code in a main function that returns a Result. -Rust - -``` +```Rust /// # Examples /// /// ``` @@ -214,7 +210,7 @@ primary use cases include: on the API item being documented.[^3] 3. **Hiding** `use` **Statements**: While often useful to show which types are - involved, `use` statements can sometimes be hidden to de-clutter simple + involved, `use` statements can sometimes be hidden to declutter simple examples. The existence of features like hidden lines and the `(())` shorthand reveals a @@ -316,9 +312,7 @@ any pollution of the final binary or the public API. 
The typical implementation pattern is to create a private helper module within the library: -Rust - -``` +```Rust // In lib.rs or a submodule /// A function that requires a complex environment to test. @@ -404,9 +398,7 @@ builds.[^13] **The Pattern**: -Rust - -``` +```Rust /// A socket that is only available on Unix platforms. #[cfg(any(target_os = "unix", doc))] pub struct UnixSocket; @@ -440,9 +432,7 @@ Pattern 1: #\[cfg\] Inside the Code Block This pattern involves placing a #\[cfg\] attribute directly on the code within the doctest itself. -Rust - -``` +```Rust /// This example only runs if the "serde" feature is enabled. /// /// ``` @@ -466,9 +456,7 @@ A more explicit and accurate pattern uses the cfg_attr attribute to conditionally add the ignore flag to the doctest's header. This is typically done with inner doc comments (//!). -Rust - -``` +```Rust //! #![cfg_attr(not(feature = "serde"), doc = "```ignore")] //! #![cfg_attr(feature = "serde", doc = "```")] //! // Example code that requires the "serde" feature. @@ -492,9 +480,7 @@ feature-gated items in the generated documentation. This is achieved with the `#[doc(cfg(...))]` attribute, which requires enabling the `#![feature(doc_cfg)]` feature gate at the crate root. -Rust - -``` +```Rust // At the crate root (lib.rs) #![feature(doc_cfg)] @@ -526,7 +512,7 @@ its own purpose: time. They should be easy to read and focused on illustrating a single concept.[^6] -- **Unit Tests (**`#[test]` **in** `src/`**)**: These are for testing the +- **Unit tests (`#[test]` in `src/`)**: These are for testing the nitty-gritty details of the implementation. They are placed in submodules within the source files (often `mod tests {... }`) and are compiled only with `#[cfg(test)]`. Because they live inside the crate, they can access private diff --git a/docs/rust-testing-with-rstest-fixtures.md b/docs/rust-testing-with-rstest-fixtures.md index 11fda497..1034415b 100644 --- a/docs/rust-testing-with-rstest-fixtures.md +++ b/docs/rust-testing-with-rstest-fixtures.md @@ -289,8 +289,11 @@ Here are a few examples illustrating different kinds of fixtures: #[rstest] fn test_add_to_repository(mut empty_repository: impl Repository) { - empty_repository.add_item("item1", "Test Item"); - assert_eq!(empty_repository.get_item_name("item1"), Some("Test Item".to_string())); + empty_repository.add_item("item1", "Test Item"); + assert_eq!( + empty_repository.get_item_name("item1"), + Some("Test Item".to_string()) + ); } ``` @@ -404,7 +407,8 @@ fn test_state_transitions( # initial_state: State, #[values(Event::Process, Event::Error, Event::Fatal)] event: Event ) { - // In a real test, you'd have more specific assertions based on expected_next_state + // In a real test, you'd have more specific assertions + // based on `expected_next_state`. 
let next_state = initial_state.process(event); println!("Testing: {:?} + {:?} -> {:?}", initial_state, event, next_state); // For demonstration, a generic assertion: @@ -445,7 +449,11 @@ use rstest::*; // fn db_connection() -> UserDb { UserDb::new() } // #[rstest] -// fn test_user_retrieval(db_connection: UserDb, #[case] user_id: u32, #[case] expected_name: Option<&str>) { +// fn test_user_retrieval( +// db_connection: UserDb, +// #[case] user_id: u32, +// #[case] expected_name: Option<&str>, +// ) { // let user = db_connection.fetch_user(user_id); // assert_eq!(user.map(|u| u.name), expected_name.map(String::from)); // } @@ -483,7 +491,10 @@ fn derived_value(base_value: i32) -> i32 { } #[fixture] -fn configured_item(derived_value: i32, #[default("item_")] prefix: String) -> String { +fn configured_item( + derived_value: i32, + #[default("item_")] prefix: String, +) -> String { format!("{}{}", prefix, derived_value) } @@ -493,7 +504,9 @@ fn test_composed_fixture(configured_item: String) { } #[rstest] -fn test_composed_fixture_with_override(#[with("special_")] configured_item: String) { +fn test_composed_fixture_with_override( + #[with("special_")] configured_item: String, +) { assert_eq!(configured_item, "special_20"); } ``` @@ -565,9 +578,8 @@ Sometimes a fixture's function name might be long and descriptive, but a shorter or different name is preferred for the argument in a test or another fixture. The `#[from(original_fixture_name)]` attribute on an argument allows renaming. This is particularly useful when destructuring the result of a -fixture. -```rust +```fixture. use rstest::*; #[fixture] @@ -576,12 +588,16 @@ fn complex_user_data_fixture() -> (String, u32, String) { } #[rstest] -fn test_with_renamed_fixture(#[from(complex_user_data_fixture)] user_info: (String, u32, String)) { +fn test_with_renamed_fixture( + #[from(complex_user_data_fixture)] user_info: (String, u32, String), +) { assert_eq!(user_info.0, "Alice"); } #[rstest] -fn test_with_destructured_fixture(#[from(complex_user_data_fixture)] (name, _, _): (String, u32, String)) { +fn test_with_destructured_fixture( + #[from(complex_user_data_fixture)] (name, _, _): (String, u32, String), +) { assert_eq!(name, "Alice"); } ``` diff --git a/docs/wireframe-client-design.md b/docs/wireframe-client-design.md index 67e27e80..7f5f3547 100644 --- a/docs/wireframe-client-design.md +++ b/docs/wireframe-client-design.md @@ -11,7 +11,8 @@ The library currently focuses on server development. However, the core layers are intentionally generic: transport adapters, framing, serialization, routing, and middleware form a pipeline that is largely independent of server-specific logic. The design document outlines these layers, which process frames from raw -bytes to typed messages and back[^router-design]. Reusing these pieces enables +bytes to typed messages and back[^router-design]. +Reusing these pieces enables the implementation of a lightweight client without duplicating protocol code. 
## Core Components From 972a08181e9ec69653905502e10c035eac7dcd7c Mon Sep 17 00:00:00 2001 From: Leynos Date: Mon, 21 Jul 2025 13:35:50 +0100 Subject: [PATCH 08/11] Apply review suggestions --- README.md | 6 +++--- .../asynchronous-outbound-messaging-design.md | 5 ++--- ...havioural-testing-in-rust-with-cucumber.md | 4 +--- docs/documentation-style-guide.md | 4 ++-- ...ge-fragmentation-and-re-assembly-design.md | 12 +++-------- ...i-packet-and-streaming-responses-design.md | 20 +++++-------------- docs/rust-binary-router-library-design.md | 10 ++++------ docs/rust-doctest-dry-guide.md | 4 ++-- docs/rust-testing-with-rstest-fixtures.md | 6 +++--- ...-set-philosophy-and-capability-maturity.md | 12 +++-------- docs/wireframe-client-design.md | 7 +++---- docs/wireframe-testing-crate.md | 6 ++---- 12 files changed, 33 insertions(+), 63 deletions(-) diff --git a/README.md b/README.md index 360b5981..84055884 100644 --- a/README.md +++ b/README.md @@ -1,9 +1,9 @@ # Wireframe **Wireframe** is an experimental Rust library that simplifies building servers -and clients for custom binary protocols. The design borrows heavily from [Actix -Web](https://actix.rs/) to provide a familiar, declarative API for routing, -extractors, and middleware. +and clients for custom binary protocols. The design borrows heavily from +[Actix Web](https://actix.rs/) to provide a familiar, declarative API for +routing, extractors, and middleware. ## Motivation diff --git a/docs/asynchronous-outbound-messaging-design.md b/docs/asynchronous-outbound-messaging-design.md index b4ee4d50..94af1fb6 100644 --- a/docs/asynchronous-outbound-messaging-design.md +++ b/docs/asynchronous-outbound-messaging-design.md @@ -32,8 +32,8 @@ protocols. Only the queue management utilities in `src/push.rs` exist at present. The connection actor and its write loop are still to be implemented. The remaining sections describe how to build that actor from first principles using the -biased `select!` loop presented in [Section -3](#3-core-architecture-the-connection-actor). +biased `select!` loop presented in +[Section 3](#3-core-architecture-the-connection-actor). ## 2. Design Goals & Requirements @@ -162,7 +162,6 @@ The flow diagram below summarises the fairness logic. after N high-priority frames. - ```mermaid flowchart TD A[Start select! loop] --> B{High-priority frame available?} diff --git a/docs/behavioural-testing-in-rust-with-cucumber.md b/docs/behavioural-testing-in-rust-with-cucumber.md index bec9a6f2..720f2215 100644 --- a/docs/behavioural-testing-in-rust-with-cucumber.md +++ b/docs/behavioural-testing-in-rust-with-cucumber.md @@ -988,9 +988,7 @@ update the specification. **Worked Example (GitHub Actions):** -YAML - -```yaml +```YAML # .github/workflows/ci.yml name: Rust CI diff --git a/docs/documentation-style-guide.md b/docs/documentation-style-guide.md index 8df08e78..6504167b 100644 --- a/docs/documentation-style-guide.md +++ b/docs/documentation-style-guide.md @@ -71,7 +71,7 @@ contents of the manual. they do not execute during documentation tests. - Put function attributes after the doc comment. -````rust +```rust /// Returns the sum of `a` and `b`. /// /// # Parameters @@ -90,7 +90,7 @@ contents of the manual. 
pub fn add(a: i32, b: i32) -> i32 { a + b } -```` +``` ## Diagrams and images diff --git a/docs/generic-message-fragmentation-and-re-assembly-design.md b/docs/generic-message-fragmentation-and-re-assembly-design.md index bfbe7951..027bf0e8 100644 --- a/docs/generic-message-fragmentation-and-re-assembly-design.md +++ b/docs/generic-message-fragmentation-and-re-assembly-design.md @@ -56,9 +56,7 @@ interleaved fragments from different logical messages on the same connection. To support this, the `FragmentAdapter` will not maintain a single re-assembly state, but a map of concurrent re-assembly processes. -Rust - -``` +```Rust use dashmap::DashMap; use std::sync::atomic::AtomicU64; use std::time::{Duration, Instant}; @@ -101,9 +99,7 @@ inject their specific fragmentation rules into the generic `FragmentAdapter`. The trait is designed to be context-aware and expressive, allowing it to model a wide range of protocols. -Rust - -``` +```Rust use bytes::BytesMut; use std::io; @@ -162,9 +158,7 @@ pub trait FragmentStrategy: 'static + Send + Sync { Developers will enable fragmentation by adding the `FragmentAdapter` to their `FrameProcessor` chain via the `WireframeApp` builder. -Rust - -``` +```Rust // Example: Configuring a server for MySQL-style fragmentation. WireframeServer::new(|| { WireframeApp::new() diff --git a/docs/multi-packet-and-streaming-responses-design.md b/docs/multi-packet-and-streaming-responses-design.md index 80d587a1..a9c55766 100644 --- a/docs/multi-packet-and-streaming-responses-design.md +++ b/docs/multi-packet-and-streaming-responses-design.md @@ -73,9 +73,7 @@ The public API is designed for clarity, performance, and ergonomic flexibility. The `Response` enum is the primary return type for all handlers. It is enhanced to provide optimised paths for common response patterns. -Rust - -``` +```Rust use futures_core::stream::Stream; use std::pin::Pin; @@ -111,9 +109,7 @@ To enable more robust error handling, a generic error enum will be introduced. This allows the framework and protocol implementations to distinguish between unrecoverable transport failures and logical, protocol-level errors. -Rust - -``` +```Rust /// A generic error type for wireframe operations. # pub enum WireframeError { @@ -143,9 +139,7 @@ The following examples illustrate how developers will use the new API. Existing code continues to work without modification, fulfilling goal **G4**. -Rust - -``` +```Rust async fn handle_ping(_req: Request) -> Result, MyError> { // `MyFrame` implements `Into>` Ok(build_pong_frame().into()) @@ -157,9 +151,7 @@ async fn handle_ping(_req: Request) -> Result, MyErro For simple, fixed-size multi-part responses, like a MySQL result set header, `Response::Vec` is both ergonomic and performant. -Rust - -``` +```Rust async fn handle_select_headers(_req: Request) -> Result, MyError> { // Pre-build frames for: column-count, column-def, EOF let frames = vec!; @@ -172,9 +164,7 @@ async fn handle_select_headers(_req: Request) -> Result Result, PgError> { diff --git a/docs/rust-binary-router-library-design.md b/docs/rust-binary-router-library-design.md index 1b51fd2b..555da2ff 100644 --- a/docs/rust-binary-router-library-design.md +++ b/docs/rust-binary-router-library-design.md @@ -621,14 +621,14 @@ handler. route guards. Guards would be functions that evaluate conditions on the incoming message or connection context before a handler is chosen. 
- ````rust + ```rust Router::new() .message_guarded( MessageType::GenericCommand, | msg_header: &CommandHeader | msg_header.sub_type == CommandSubType::Special, handle_special_command ).message(MessageType::GenericCommand, handle_generic_command) // Fallback ``` | - ```` + ``` The routing mechanism essentially implements a form of pattern matching or a state machine that operates on message identifiers. A clear, declarative API @@ -878,9 +878,8 @@ and verifying a session token from a custom frame header) to be encapsulated into reusable components. This further reduces code duplication across multiple handlers and keeps the handler functions lean and focused on their specific business tasks, mirroring the benefits seen with Actix Web's `FromRequest` -trait. -```mermaid +```trait. classDiagram class FromMessageRequest { <> @@ -1052,9 +1051,8 @@ will provide a comprehensive error handling strategy. - `WireframeError`: A top-level public error enum will be defined to encompass all possible errors that can occur within the "wireframe" system. This provides a single error type that users can match on for top-level error - management. - ```rust +```management. pub enum WireframeError { Io(std::io::Error), Framing(FramingError), diff --git a/docs/rust-doctest-dry-guide.md b/docs/rust-doctest-dry-guide.md index 76bbccd9..033f31b9 100644 --- a/docs/rust-doctest-dry-guide.md +++ b/docs/rust-doctest-dry-guide.md @@ -95,7 +95,7 @@ clear, illustrative, and robust. ### 2.1 The Anatomy of a Doctest -Doctests reside within documentation comments. Rust recognises two types: +Doctests reside within documentation comments. Rust recognizes two types: - **Outer doc comments (`///`)**: These document the item that follows them (e.g., a function, struct, or module). This is the most common type.[^7] @@ -243,7 +243,7 @@ table provides a comparative reference for the most common doctest attributes. | should_panic | Compiles and runs the code. The test passes if the code panics. | OK on panic, failed if it does not panic. | Use Case: Demonstrating functions that are designed to panic on invalid input (e.g., indexing out of bounds). | | compile_fail | Attempts to compile the code. The test passes if compilation fails. | OK on compilation failure, failed if it compiles successfully. | Use Case: Illustrating language rules, such as the borrow checker or type system constraints. Warning: Highly brittle. A future Rust version might make the code valid, causing the test to unexpectedly fail.[^4] | | no_run | Compiles the code but does not execute it. | OK if compilation succeeds. | Use Case: Essential for examples with undesirable side effects in a test environment, such as network requests, filesystem I/O, or launching a GUI. Guarantees the example is valid Rust code without running it.[^5] | -| edition2021 | Compiles the code using the specified Rust edition's rules. | OK on success. | Use Case: Demonstrating syntax or idioms that are specific to a particular Rust edition (e.g., edition2018, edition2021).[^4] | +| edition20xx | Compiles the code using the specified Rust edition's rules. | OK on success. 
| Use Case: Demonstrating syntax or idioms that are specific to a particular Rust edition (e.g., edition2018, edition2021).[^4] | ### 3.2 Detailed Attribute Breakdown diff --git a/docs/rust-testing-with-rstest-fixtures.md b/docs/rust-testing-with-rstest-fixtures.md index 1034415b..af026b25 100644 --- a/docs/rust-testing-with-rstest-fixtures.md +++ b/docs/rust-testing-with-rstest-fixtures.md @@ -131,7 +131,7 @@ tokio = { version = "1", default-features = false, features = ["test-util"] } rstest = "0.18" ``` -### B. Your First Fixture: Defining with `#[fixture]` +### B. First Fixture: Defining with `#[fixture]` A fixture in `rstest` is essentially a Rust function that provides some data or performs some setup action, with its result being injectable into tests. To @@ -1029,7 +1029,7 @@ readable and maintainable. For tests that need to process data from multiple input files, `rstest` provides the `#[files("glob_pattern")]` attribute. This attribute can be used -on a test function argument to inject file paths that match a given glob +to inject file paths into a test function argument that match a given glob pattern. The argument type is typically `PathBuf`. It can also inject file contents directly as `&str` or `&[u8]` by specifying a mode, e.g., `#[files("glob_pattern", mode = "str")]`. Additional attributes like @@ -1041,7 +1041,7 @@ use rstest::*; use std::path::PathBuf; use std::fs; -// Assume you have files in `tests/test_data/` like `file1.txt`, `file2.json` +// Assume files exist in `tests/test_data/` like `file1.txt`, `file2.json` #[rstest] #[files("tests/test_data/*.txt")] // Injects PathBuf for each.txt file diff --git a/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md b/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md index 8bad69b2..a319beca 100644 --- a/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md +++ b/docs/the-road-to-wireframe-1-0-feature-set-philosophy-and-capability-maturity.md @@ -46,9 +46,7 @@ ergonomic, declarative approach replaces the previous imperative model. Handlers will return an enhanced `Response` enum, giving developers clear and efficient ways to express their intent. -Rust - -``` +```Rust pub enum Response { /// A single frame, as before. Single(F), @@ -68,9 +66,7 @@ generating frames with minimal API complexity. The project recommends `async-stream` as the canonical method for constructing `Response::Stream` values. -Rust - -``` +```Rust // Example of a declarative handler using async-stream async fn handle_large_query(req: Request) -> io::Result> { let stream = async_stream::try_stream! { @@ -136,9 +132,7 @@ any specific wire format. The `FragmentStrategy` trait will be enhanced to be more expressive and context-aware: -Rust - -``` +```Rust /// Metadata decoded from a single fragment's header. pub struct FragmentMeta { pub payload_len: usize, diff --git a/docs/wireframe-client-design.md b/docs/wireframe-client-design.md index 7f5f3547..abca896c 100644 --- a/docs/wireframe-client-design.md +++ b/docs/wireframe-client-design.md @@ -11,9 +11,8 @@ The library currently focuses on server development. However, the core layers are intentionally generic: transport adapters, framing, serialization, routing, and middleware form a pipeline that is largely independent of server-specific logic. The design document outlines these layers, which process frames from raw -bytes to typed messages and back[^router-design]. 
-Reusing these pieces enables -the implementation of a lightweight client without duplicating protocol code. +bytes to typed messages and back[^1]. Reusing these pieces enables the +implementation of a lightweight client without duplicating protocol code. ## Core Components @@ -93,5 +92,5 @@ extensions might include: By leveraging the existing abstractions for framing and serialization, client support can share most of the server’s implementation while providing a small ergonomic API. -[^router-design]: See [wireframe router +[^1]: See [wireframe router design](rust-binary-router-library-design.md#implementation-details). diff --git a/docs/wireframe-testing-crate.md b/docs/wireframe-testing-crate.md index dc48d80d..4b055ea3 100644 --- a/docs/wireframe-testing-crate.md +++ b/docs/wireframe-testing-crate.md @@ -19,9 +19,8 @@ reusable across projects. - `Cargo.toml` enabling the `tokio` and `rstest` dependencies used by the helpers. - `src/lib.rs` exposing asynchronous functions for driving apps with raw - frames. -```toml +```frames. [dependencies] tokio = { version = "1", features = ["macros", "rt"] } @@ -81,9 +80,8 @@ assert on these failure conditions directly. A variant accepting a buffer `capacity` allows fine-tuning the size of the in-memory duplex channel, matching the existing `run_app_with_frame_with_capacity` and `run_app_with_frames_with_capacity` -helpers. -```rust +```helpers. pub async fn drive_with_frame_with_capacity( app: WireframeApp, frame: Vec, From 3a2eaa58dc6bd9b21c6b5a11878af0641caaa853 Mon Sep 17 00:00:00 2001 From: Leynos Date: Mon, 21 Jul 2025 13:45:18 +0100 Subject: [PATCH 09/11] Fixed accidental broken fence --- docs/rust-testing-with-rstest-fixtures.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/rust-testing-with-rstest-fixtures.md b/docs/rust-testing-with-rstest-fixtures.md index af026b25..d28ae45e 100644 --- a/docs/rust-testing-with-rstest-fixtures.md +++ b/docs/rust-testing-with-rstest-fixtures.md @@ -578,8 +578,9 @@ Sometimes a fixture's function name might be long and descriptive, but a shorter or different name is preferred for the argument in a test or another fixture. The `#[from(original_fixture_name)]` attribute on an argument allows renaming. This is particularly useful when destructuring the result of a +fixture. -```fixture. +```rust use rstest::*; #[fixture] From 1fdc2febf4491f590b9203c97b4f0df122c1f1d3 Mon Sep 17 00:00:00 2001 From: Leynos Date: Mon, 21 Jul 2025 13:46:32 +0100 Subject: [PATCH 10/11] Fix spelling of LICENSE file --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 84055884..252d5cf4 100644 --- a/README.md +++ b/README.md @@ -316,7 +316,7 @@ applications【F:docs/roadmap.md†L1-L24】. ## Licence Wireframe is distributed under the terms of the ISC licence. See -[LICENCE](LICENSE) for details. +[LICENSE](LICENSE) for details. 
[data-extraction-guide]: docs/rust-binary-router-library-design.md#53-data-extraction-and-type-safety From bdacbc608620b019e51734698ebc89d68a733f57 Mon Sep 17 00:00:00 2001 From: Leynos Date: Mon, 21 Jul 2025 13:49:08 +0100 Subject: [PATCH 11/11] Fix some more accidentally broken fences --- docs/rust-binary-router-library-design.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/docs/rust-binary-router-library-design.md b/docs/rust-binary-router-library-design.md index 555da2ff..0e3309d8 100644 --- a/docs/rust-binary-router-library-design.md +++ b/docs/rust-binary-router-library-design.md @@ -878,8 +878,9 @@ and verifying a session token from a custom frame header) to be encapsulated into reusable components. This further reduces code duplication across multiple handlers and keeps the handler functions lean and focused on their specific business tasks, mirroring the benefits seen with Actix Web's `FromRequest` +trait. -```trait. +```mermaid classDiagram class FromMessageRequest { <> @@ -1051,8 +1052,9 @@ will provide a comprehensive error handling strategy. - `WireframeError`: A top-level public error enum will be defined to encompass all possible errors that can occur within the "wireframe" system. This provides a single error type that users can match on for top-level error + management. -```management. +```rust pub enum WireframeError { Io(std::io::Error), Framing(FramingError),