Canopy is designed to be coroutine-library agnostic. While the library is currently written against libcoro, the core abstractions have been isolated to enable porting to other coroutine libraries with minimal changes.
The coroutine support is encapsulated in a single header file that defines macros for all coroutine primitives. This allows the underlying implementation to be swapped without modifying the rest of the codebase.
All coroutine abstractions are defined in `rpc/include/rpc/internal/coroutine_support.h`.
To support a new coroutine library, the following abstractions must be provided:
| Macro | Purpose | Blocking mode | Requirements |
|---|---|---|---|
| `CORO_TASK(x)` | Return type for coroutine functions | `x` | Must be awaitable, copyable/movable |
| `CO_RETURN` | Return from coroutine | `return` | Coroutine return statement |
| `CO_AWAIT` | Suspend until completion | (expands to nothing) | Must work with a `co_await` expression |
| `SPAWN(x)` | Launch a task without waiting | Must spawn a separate thread or use a pool | |
| `SYNC_WAIT(x)` | Blocking wait for a coroutine | `x` | Must block the current thread/task until completion |
The current libcoro-based implementation:

```cpp
#ifdef CANOPY_BUILD_COROUTINE
#include <coro/coro.hpp>
#define CORO_TASK(x) coro::task<x>
#define CO_RETURN co_return
#define CO_AWAIT co_await
#define SYNC_WAIT(x) coro::sync_wait(x)
#else
#define CORO_TASK(x) x
#define CO_RETURN return
#define CO_AWAIT
#define SYNC_WAIT(x) x
#endif
```

Replace the libcoro includes and macros with your chosen library:
```cpp
#ifdef CANOPY_BUILD_COROUTINE
#include <your_library/task.hpp>
#include <your_library/sync_wait.hpp>
#define CORO_TASK(x) your_library::task<x>
#define CO_RETURN co_return
#define CO_AWAIT co_await
#define SYNC_WAIT(x) your_library::sync_wait(x)
#else
#define CORO_TASK(x) x
#define CO_RETURN return
#define CO_AWAIT
#define SYNC_WAIT(x) x
#endif
```

Modify the CMakeLists.txt to link against your chosen library:
```cmake
# Remove libcoro dependency
# target_link_libraries(target PUBLIC libcoro)

# Add your library
find_package(YourCoroutineLibrary REQUIRED)
target_link_libraries(target PUBLIC YourCoroutineLibrary::YourCoroutineLibrary)
```

For transports that use async I/O (TCP, SPSC), you may need to adapt the networking primitives:
- libcoro: uses `coro::scheduler` and `coro::net::tcp::*`
- Asio: uses `asio::io_context` and `asio::ip::tcp::*`
- libunifex: uses `unifex::single_thread_context` and sender-based operations
- libcoro - Current implementation, C++20 coroutine library
- libunifex - Facebook's sender/receiver framework
- cppcoro - Lewis Baker's foundational coroutine library
- Asio - Cross-platform async I/O library
| Feature | libcoro | libunifex | cppcoro | Asio |
|---|---|---|---|---|
| task | Yes | Via sender | Yes | awaitable |
| sync_wait | Yes | Via sync_wait | Yes | Via io_context |
| Thread pool | Yes | Yes | static_thread_pool | io_context |
| TCP I/O | Yes | Via libunifex | Limited | Yes |
| UDP I/O | Yes | Via libunifex | Limited | Yes |
| Timers | Yes | Via libunifex | Limited | Yes |
| Active development | Yes | No | No | Yes |
Some advanced features may require library-specific extensions:
- io_scheduler integration - Currently tied to libcoro's io_scheduler for TCP and SPSC transports
- Network primitives - TCP client/server abstractions are libcoro-specific
- Channel/back-channel support - May require adaptation for sender/receiver models
The current macro layer around CORO_TASK, CO_AWAIT, and SYNC_WAIT is enough to swap the coroutine syntax, but it is not yet enough to make Canopy independent from a specific async runtime. The next step is to move the ownership boundary up so that coroutine scheduling, networking, timers, and socket status are Canopy abstractions rather than direct libcoro types.
Canopy should own the public async surface used by transports and streams:
- `canopy::task<T>`
- `canopy::sync_wait()`
- `canopy::scheduler`
- `canopy::io_status`
- `canopy::tcp_client`
- `canopy::tcp_listener`
- `canopy::stream_socket`
- `canopy::timer` or an equivalent timeout primitive
Transport and stream code should depend only on these Canopy abstractions. Backend-specific code should live behind an adapter layer.
The preferred model is:
- Canopy defines runtime-facing interfaces and value types.
- A backend adapter implements those interfaces for a concrete coroutine library.
- `libcoro` becomes one backend rather than the type system used directly by streaming, transports, and demos.
This keeps the rest of Canopy insulated from:
- `coro::scheduler`
- `coro::task`
- `coro::net::tcp::client` / `coro::net::tcp::server`
- `coro::net::io_status`
- backend-specific timeout and polling mechanics
Add a Canopy-owned header set that defines:
- coroutine task aliases or wrappers
- scheduler abstraction
- I/O result/status types
- socket and listener interfaces
- timeout and timer abstractions
At this phase the implementation can still delegate entirely to libcoro, but direct libcoro types should stop appearing in transport-facing public headers.
Refactor:
- `stream`
- `stream_acceptor`
- TCP stream classes
- io_uring stream classes
- listener classes
so their public APIs use Canopy types rather than coro::* and coro::net::*.
This is the point where streaming stops leaking the backend choice into the wider codebase.
Move backend-specific code into implementation-specific areas, for example:
- `streaming/backends/libcoro/...`
- `streaming/backends/io_uring/...`
The libcoro backend would adapt:
- task execution
- scheduler integration
- TCP connect/accept
- polling and timeouts
The io_uring path can then be treated as a Canopy backend implementation decision rather than a transport class that is hardwired to libcoro scheduling semantics.
Once the facade is in place, Canopy can choose a backend-specific execution strategy without changing the transport API:
- shared io_uring ring per runtime or per worker
- dedicated Canopy I/O thread or I/O executor
- backend-specific timeout strategy
- backend-specific accept/connect implementation
This is the point where io_uring can be tuned for Canopy rather than shaped around a generic external scheduler.
After libcoro is behind the facade, other backends can be introduced incrementally:
- Asio
- cppcoro
- libunifex
- a Canopy-owned runtime
Wrapping only coro::scheduler does not fully decouple the codebase. The following also need to move behind Canopy abstractions:
- task type
- blocking wait
- I/O status values
- TCP client and listener types
- stream/socket ownership
- timer and timeout support
If these remain as libcoro types in public APIs, the dependency still leaks through the entire transport layer.
This separation is especially important for io_uring. A direct io_uring implementation often benefits from a different execution model than a generic TCP scheduler, for example:
- shared rings instead of one ring per connection
- dedicated completion processing
- batching submits and completions
- transport-aware timeout policies
Keeping io_uring behind a Canopy backend makes those choices local to the backend implementation and avoids coupling transport classes to a specific coroutine library.
The first useful implementation milestone is:
- Introduce Canopy async facade headers.
- Convert `stream` and `stream_acceptor` APIs to Canopy-owned async and I/O types.
- Provide a `libcoro` adapter that preserves current behaviour.
- Move TCP and io_uring stream implementations behind that adapter boundary.
At that point Canopy remains functional with the current runtime, while the dependency surface is narrow enough to support alternative backends and a Canopy-owned I/O runtime later.
After porting, ensure all tests pass in both blocking and coroutine modes:
```sh
# Coroutine mode
cmake --preset Debug_Coroutine
cmake --build build --target all
ctest --test-dir build

# Blocking mode (default)
cmake --preset Debug
cmake --build build --target all
ctest --test-dir build
```