docs: Stdcoroutine-0: Boost.Coroutine to C++20 std::coroutine migration plan#6643
pratikmankawde wants to merge 1 commit into `develop`
Conversation
Three issues flagged inline: a year typo in the document header, and two design gaps in the gRPC migration path — CallData lifetime analysis is missing (potential use-after-free), and ServerContext cancellation propagation is unaddressed for suspended coroutines.
Review by Claude Opus 4.6 · Prompt: V12
> **Status:** Implementation Complete
> **Author:** Pratik Mankawde
> **Created:** 2026-02-25
Typo in year — should be 2025-02-25:

```diff
- > **Created:** 2026-02-25
+ > **Created:** 2025-02-25
```
I work in the future, not in the past!
- A client (e.g., a wallet app) sends an RPC request to the rippled server.
- The server wraps the request in a coroutine and schedules it on a worker thread from the JobQueue.
- The handler processes the request. Most handlers finish immediately and return a response.
Plan gap: CallData ownership chain under C++20 not analyzed — potential use-after-free risk.
In the Boost model, shared_ptr<Coro> inside the lambda ensures CallData outlives the coroutine. With C++20, if the gRPC completion queue fires and destroys CallData while the coroutine frame still holds a reference (via RPC::Context), this is a use-after-free — the exact dangling reference risk from Concern 5, but unaddressed for the gRPC code path.
Suggested addition in Milestone 2, task 2.3: Explicitly audit CallData object lifetime relative to the CoroTaskRunner frame. Ensure CallData is kept alive (e.g., via shared_from_this() or explicit capture) for the full coroutine duration. Add a TSAN/ASAN test specifically for gRPC request lifetime.
See: gRPC
```
e.g. doRipplePathFind`"]
YIELD["`**coro.yield()**
Suspends execution`"]
RESUME["`**coro.post()**
```
Plan gap: gRPC ServerContext cancellation propagation not addressed in the migration.
When CallData::process() is migrated to postCoroTask() in Phase 2 (task 2.3), there is no discussion of what happens if the gRPC client disconnects or times out while the coroutine is suspended (e.g., during pathfinding). The coroutine will resume on the JobQueue with no awareness of cancellation — wasting resources and potentially writing to a dead stream.
Suggested addition in Phase 2, task 2.3: Document whether grpc::ServerContext* is threaded through RPC::Context. If so, add a cancellation check in JobQueueAwaiter::await_suspend() or at the co_await resume point: if grpc_context->IsCancelled(), return codes.Canceled rather than continuing into the handler body.
See: gRPC
Force-pushed 255ecc1 to 956c105
BUILD.md (outdated)
```bash
for recipe in "${recipes[@]}"; do
  echo "Checking out recipe '${recipe}'..."
  git sparse-checkout add recipes/${recipe}

for ((index = 1; index <= ${#recipes[@]}; index++)); do
```
Off-by-one: Bash arrays are 0-indexed, so this skips recipes[0] (ed25519) and reads an out-of-bounds element at the end. Fix the loop bounds:
```diff
- for ((index = 1; index <= ${#recipes[@]}; index++)); do
+ for ((index = 0; index < ${#recipes[@]}; index++)); do
```
BUILD.md (outdated)
```bash
cd ../..

# Export the recipes into the local cache.
for ((index = 1; index <= ${#recipes[@]}; index++)); do
```
Same off-by-one bug as line 161 — skips first recipe and reads past the end of the array.
```diff
- for ((index = 1; index <= ${#recipes[@]}; index++)); do
+ for ((index = 0; index < ${#recipes[@]}; index++)); do
```
> **Status:** Implementation Complete
> **Author:** Pratik Mankawde
> **Created:** 2026-02-25
Year looks like a typo — should this be 2025-02-25?
```diff
- > **Created:** 2026-02-25
+ > **Created:** 2025-02-25
```
Comprehensive migration plan documenting the switch from Boost.Coroutine2 to C++20 standard coroutines in rippled, including research analysis, implementation phases, risk assessment, and testing strategy.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed 956c105 to b78202a
Five issues flagged inline: a year typo in the document header, and four architectural gaps in the gRPC migration coverage — missing streaming RPC audit, missing CompletionQueue lifecycle analysis, missing shutdown handling in task 2.3, and a high-severity regression where the FAQ documents a 30-second thread-blocking synchronous wait that contradicts the migration's core goals.
Review by Claude Opus 4.6 · Prompt: V12
> **Status:** Implementation Complete
> **Author:** Pratik Mankawde
> **Created:** 2026-02-25
Typo in year — should be 2025, not 2026:
```diff
- > **Created:** 2026-02-25
+ > **Created:** 2025-02-25
```
| `coroutine<void>::push_type` | `JobQueue.h:53` | Yield function type |
| `boost::context::protected_fixedsize_stack(1536 * 1024)` | `Coro.ipp:14` | Stack size configuration |
| `#include <boost/coroutine2/all.hpp>` | `JobQueue.h:11` | Header inclusion |
Section 5.4 lists only the unary gRPC entry point — streaming RPC handlers are not audited. Add a paragraph to Section 5.4 confirming: (a) which rippled proto methods are unary vs streaming, (b) whether any streaming handler calls postCoro() or yield(), and (c) whether streaming handlers use a separate code path unaffected by this migration. Without this, a streaming RPC could silently retain the old Boost path after Phase 4 cleanup removes Coro.
```
(parallel to postCoro)`"]
P1D["Unit tests for new primitives"]
P1A --> P1B --> P1C --> P1D
end
```
No design notes on CallData lifecycle with gRPC's CompletionQueue. The plan identifies GRPCServer.cpp:102 as an entry point but doesn't verify that CoroTaskRunner lifetime outlives all CompletionQueue callbacks that reference it, or that coroutine frame ownership is safe across tag firings. Add an analysis tracing: CQ tag posted → process() called → coroutine suspended → CQ tag fires again → coroutine resumed, and confirm no raw coroutine_handle<> is stored in CQ tags without RAII ownership.
See: gRPC
- Replace `m_jobQueue.postCoro(jtCLIENT_RPC, ...)` with `postCoroTask()`
- Update lambda to return `CoroTask<void>` (add `co_return`)
- Update `processSession` to accept new coroutine type
Task 2.3 is missing gRPC shutdown handling. The old Coro::post() returned false when the JobQueue was stopping, letting the CallData handler detect shutdown and call Finish() with an appropriate status. Add a sub-task: Verify that when addJob() returns false during shutdown, the awaiter causes the coroutine to terminate and the gRPC call is finished with grpc::StatusCode::UNAVAILABLE. Write a test that shuts down the JobQueue while a gRPC coroutine is suspended and confirms no RPC hangs indefinitely.
See: gRPC
| # | File | Phase | Purpose |
| --- | ------------------------------------- | ----- | ---------------------------------------- |
| 1 | `include/xrpl/core/CoroTask.h` | 1 | `CoroTask<T>` return type + promise_type |
The FAQ admits blocking a worker thread for up to 30 seconds via std::condition_variable, directly contradicting the migration's goal of freeing threads during suspension and voiding the performance gains claimed in Section 4.4 for this code path. Either implement PathFindAwaiter (task 3.2) to properly suspend the coroutine, or at minimum document this as a known regression and ensure the pathfinding timeout is capped below the gRPC deadline so the thread is guaranteed to be released before the client times out.
See: std::condition_variable | gRPC
Given that our coroutine use case isn't different from others at all, I think we can use the existing coroutine implementation in boost.asio instead of reinventing the wheel and implementing our own. This approach gives us some benefits:

To make it work, we'll need to implement an executor that meets asio's requirements and refactor JobQueue and Coro to use the executor. As the second phase, we refactor to use the boost.asio coroutine, and then we can replace Workers with boost::thread_pool. I propose this plan:

Phase 1: Implement an asio-compatible executor and refactor JobQueue/Coro to use it
Phase 2: Replace Coro with C++20 coroutines
Phase 3: Replace Workers with boost::asio::thread_pool
High Level Overview of Change
Adds a comprehensive migration plan document (`BoostToStdCoroutineSwitchPlan.md`) for switching rippled from Boost.Coroutine2 (stackful) to C++20 standard coroutines (stackless). This is PR 0 in the StdCoroutineSwitch chain — it contains only the plan document, no code changes.
PR Chain
- `pratik/Swtich-to-std-coroutines` → `develop`
- `pratik/std-coro/add-coroutine-primitives` → `develop`
- `pratik/std-coro/migrate-entry-points` → `add-coroutine-primitives`
- `pratik/std-coro/migrate-test-code` → `migrate-entry-points`
- `pratik/std-coro/cleanup-boost-coroutine` → `migrate-test-code`
- `pratik/std-coro/tsan-fixes` → `cleanup-boost-coroutine`

Context of Change
The plan covers:
- `JobQueue::Coro` internals, entry points, handlers
- `CoroTask<T>`, `JobQueueAwaiter`, `CoroTaskRunner`, API mapping
The following aspects evolved during implementation and differ from the plan:
- `co_await`-based migration was planned; the actual implementation uses `std::condition_variable` synchronous blocking (simpler, no coroutine needed since only one handler suspends)
- `RPC::Context::coro` field — plan proposed replacing the type; actual implementation removed it entirely
- `CoroTaskRunner` — not in the original plan; emerged as a needed lifecycle manager wrapping `CoroTask<void>`
- `yieldAndPost()` API — added to work around a GCC-12 compiler bug with external awaiters at multiple `co_await` points
- `Boost::context` retained — Section 1.7 claims the migration will "Remove external dependency on Boost.Coroutine (and transitively Boost.Context)". This is incorrect: `Boost::context` cannot be removed because `boost::asio::spawn` (used by `yield_to.h` test infra and `Spawn.h` server code) still depends on it. The cleanup branch correctly replaced `Boost::coroutine` → `Boost::context` in CMake and added `BOOST_USE_ASAN`/`BOOST_USE_TSAN`/`BOOST_USE_UCONTEXT` defines for sanitizer fiber-switching annotations — none of which is discussed in the plan.

API Impact
`libxrpl` change (any change that may affect `libxrpl` or dependents of `libxrpl`)