Commit fcca9c2
[2/9] docs: add Backend enum tests and documentation (#106)
* test: add comprehensive tests for Backend enum

  Add 10 new tests covering:
  - Backend enum traits (Default, Copy, Clone, Debug, PartialEq, Eq)
  - All three backends handle call/cast correctly
  - Backend::Thread isolates blocking work from async runtime
  - Multiple backends can run concurrently with independent state
  - Backend::default() works in start()

* docs: add comprehensive Backend enum documentation

  Document each backend option with:
  - Comparison table showing execution model, best use cases, and limitations
  - Code examples for each backend
  - Detailed "When to Use" guide with advantages and avoid-when advice
  - Per-variant documentation with specific use cases

* fix: allow clone_on_copy in test that verifies Clone trait

* docs: clarify Backend::Thread still uses async runtime internally

* Fixed default backend test

---------

Co-authored-by: Esteban Dimitroff Hodi <esteban.dimitroff@lambdaclass.com>
1 parent 7c8df03 commit fcca9c2

File tree

1 file changed: +307 −7 lines


concurrency/src/tasks/gen_server.rs

Lines changed: 307 additions & 7 deletions
@@ -16,20 +16,88 @@ const DEFAULT_CALL_TIMEOUT: Duration = Duration::from_secs(5);
 
 /// Execution backend for GenServer.
 ///
-/// Determines how the GenServer's async loop is executed.
+/// Determines how the GenServer's async loop is executed. Choose based on
+/// the nature of your workload:
+///
+/// # Backend Comparison
+///
+/// | Backend | Execution Model | Best For | Limitations |
+/// |---------|-----------------|----------|-------------|
+/// | `Async` | Tokio task | Non-blocking I/O, async operations | Blocks runtime if sync code runs too long |
+/// | `Blocking` | Tokio blocking pool | Short blocking operations (file I/O, DNS) | Shared pool with limited threads |
+/// | `Thread` | Dedicated OS thread with own runtime | Long-running services, isolation from main runtime | Higher memory overhead per GenServer |
+///
+/// **Note**: All backends use async internally. For fully synchronous code without any async
+/// runtime, use [`threads::GenServer`](crate::threads::GenServer) instead.
+///
+/// # Examples
+///
+/// ```ignore
+/// // For typical async workloads (HTTP handlers, database queries)
+/// let handle = MyServer::new().start();
+///
+/// // For occasional blocking operations (file reads, external commands)
+/// let handle = MyServer::new().start_with_backend(Backend::Blocking);
+///
+/// // For CPU-intensive or permanently blocking services
+/// let handle = MyServer::new().start_with_backend(Backend::Thread);
+/// ```
+///
+/// # When to Use Each Backend
+///
+/// ## `Backend::Async` (Default)
+/// - **Advantages**: Lightweight, efficient, good for high concurrency
+/// - **Use when**: Your GenServer does mostly async I/O (network, database)
+/// - **Avoid when**: Your code blocks (e.g., `std::thread::sleep`, heavy computation)
+///
+/// ## `Backend::Blocking`
+/// - **Advantages**: Prevents blocking the async runtime, uses tokio's managed pool
+/// - **Use when**: You have occasional blocking operations that complete quickly
+/// - **Avoid when**: You need guaranteed thread availability or long-running blocks
+///
+/// ## `Backend::Thread`
+/// - **Advantages**: Isolated from main runtime, dedicated thread won't affect other tasks
+/// - **Use when**: Long-running singleton services that shouldn't share the main runtime
+/// - **Avoid when**: You need many GenServers (each gets its own OS thread + runtime)
+/// - **Note**: Still uses async internally (own runtime). For sync code, use `threads::GenServer`
 #[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]
 pub enum Backend {
     /// Run on tokio async runtime (default).
-    /// Best for non-blocking, async workloads.
+    ///
+    /// Best for non-blocking, async workloads. The GenServer runs as a
+    /// lightweight tokio task, enabling high concurrency with minimal overhead.
+    ///
+    /// **Warning**: If your `handle_call` or `handle_cast` blocks synchronously
+    /// (e.g., `std::thread::sleep`, CPU-heavy loops), it will block the entire
+    /// tokio runtime thread, affecting other tasks.
     #[default]
     Async,
+
     /// Run on tokio's blocking thread pool.
-    /// Use for blocking operations that eventually complete.
-    /// The pool is shared and limited in size.
+    ///
+    /// Use for GenServers that perform blocking operations like:
+    /// - Synchronous file I/O
+    /// - DNS lookups
+    /// - External process calls
+    /// - Short CPU-bound computations
+    ///
+    /// The pool is shared across all `spawn_blocking` calls and has a default
+    /// limit of 512 threads. If the pool is exhausted, new blocking tasks wait.
     Blocking,
-    /// Run on a dedicated OS thread.
-    /// Use for long-running blocking operations or singleton services
-    /// that should not interfere with the async runtime.
+
+    /// Run on a dedicated OS thread with its own async runtime.
+    ///
+    /// Use for GenServers that:
+    /// - Need isolation from the main tokio runtime
+    /// - Are long-running singleton services
+    /// - Should not compete with other tasks for runtime resources
+    ///
+    /// Each GenServer gets its own thread with a separate tokio runtime,
+    /// providing isolation from other async tasks. Higher memory overhead
+    /// (~2MB stack per thread plus runtime overhead).
+    ///
+    /// **Note**: This still uses async internally. For fully synchronous code
+    /// without any async runtime, use [`threads::GenServer`](crate::threads::GenServer).
     Thread,
 }
 
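As a rough illustration of the decision guide above, a hypothetical sketch (not code from this commit): a GenServer whose handler performs synchronous work, here a blocking file read, is the kind of case the docs steer toward Backend::Blocking. The sketch assumes only the API surface visible in this diff (GenServer, GenServerHandle, CallResponse, CastResponse, Backend, start_with_backend) and that, as in the Counter test further down, no other trait items need to be implemented; FileReader and ReadCall are invented names.

// Hypothetical example (not from this commit): synchronous file I/O in
// handle_call, so the server is started on Backend::Blocking to keep the
// blocking read off the main tokio runtime threads.
struct FileReader;

#[derive(Clone)]
enum ReadCall {
    Read(String), // path of the file to read
}

impl GenServer for FileReader {
    type CallMsg = ReadCall;
    type CastMsg = ();
    type OutMsg = String;
    type Error = ();

    async fn handle_call(
        &mut self,
        message: Self::CallMsg,
        _: &GenServerHandle<Self>,
    ) -> CallResponse<Self> {
        match message {
            ReadCall::Read(path) => {
                // std::fs::read_to_string blocks the current thread; that is
                // acceptable here because this loop runs on the blocking pool.
                let contents = std::fs::read_to_string(&path).unwrap_or_default();
                CallResponse::Reply(contents)
            }
        }
    }

    async fn handle_cast(
        &mut self,
        _: Self::CastMsg,
        _: &GenServerHandle<Self>,
    ) -> CastResponse {
        CastResponse::NoReply
    }
}

// Usage, inside a tokio runtime:
// let mut reader = FileReader.start_with_backend(Backend::Blocking);
// let contents = reader.call(ReadCall::Read("config.toml".into())).await;

Per the comparison table, the same server on Backend::Async would tie up a runtime worker thread for the duration of each read, while Backend::Thread would also work but costs a dedicated thread and runtime per instance.
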
@@ -643,4 +711,236 @@ mod tests {
             assert!(rx.is_closed())
         });
     }
+
+    // ==================== Backend enum tests ====================
+
+    #[test]
+    pub fn backend_default_is_async() {
+        assert_eq!(Backend::default(), Backend::Async);
+    }
+
+    #[test]
+    #[allow(clippy::clone_on_copy)]
+    pub fn backend_enum_is_copy_and_clone() {
+        let backend = Backend::Async;
+        let copied = backend; // Copy
+        let cloned = backend.clone(); // Clone - intentionally testing Clone trait
+        assert_eq!(backend, copied);
+        assert_eq!(backend, cloned);
+    }
+
+    #[test]
+    pub fn backend_enum_debug_format() {
+        assert_eq!(format!("{:?}", Backend::Async), "Async");
+        assert_eq!(format!("{:?}", Backend::Blocking), "Blocking");
+        assert_eq!(format!("{:?}", Backend::Thread), "Thread");
+    }
+
+    #[test]
+    pub fn backend_enum_equality() {
+        assert_eq!(Backend::Async, Backend::Async);
+        assert_eq!(Backend::Blocking, Backend::Blocking);
+        assert_eq!(Backend::Thread, Backend::Thread);
+        assert_ne!(Backend::Async, Backend::Blocking);
+        assert_ne!(Backend::Async, Backend::Thread);
+        assert_ne!(Backend::Blocking, Backend::Thread);
+    }
+
+    // ==================== Backend functionality tests ====================
+
+    /// Simple counter GenServer for testing all backends
+    struct Counter {
+        count: u64,
+    }
+
+    #[derive(Clone)]
+    enum CounterCall {
+        Get,
+        Increment,
+        Stop,
+    }
+
+    #[derive(Clone)]
+    enum CounterCast {
+        Increment,
+    }
+
+    impl GenServer for Counter {
+        type CallMsg = CounterCall;
+        type CastMsg = CounterCast;
+        type OutMsg = u64;
+        type Error = ();
+
+        async fn handle_call(
+            &mut self,
+            message: Self::CallMsg,
+            _: &GenServerHandle<Self>,
+        ) -> CallResponse<Self> {
+            match message {
+                CounterCall::Get => CallResponse::Reply(self.count),
+                CounterCall::Increment => {
+                    self.count += 1;
+                    CallResponse::Reply(self.count)
+                }
+                CounterCall::Stop => CallResponse::Stop(self.count),
+            }
+        }
+
+        async fn handle_cast(
+            &mut self,
+            message: Self::CastMsg,
+            _: &GenServerHandle<Self>,
+        ) -> CastResponse {
+            match message {
+                CounterCast::Increment => {
+                    self.count += 1;
+                    CastResponse::NoReply
+                }
+            }
+        }
+    }
+
+    #[test]
+    pub fn backend_async_handles_call_and_cast() {
+        let runtime = rt::Runtime::new().unwrap();
+        runtime.block_on(async move {
+            let mut counter = Counter { count: 0 }.start();
+
+            // Test call
+            let result = counter.call(CounterCall::Get).await.unwrap();
+            assert_eq!(result, 0);
+
+            let result = counter.call(CounterCall::Increment).await.unwrap();
+            assert_eq!(result, 1);
+
+            // Test cast
+            counter.cast(CounterCast::Increment).await.unwrap();
+            rt::sleep(Duration::from_millis(10)).await; // Give time for cast to process
+
+            let result = counter.call(CounterCall::Get).await.unwrap();
+            assert_eq!(result, 2);
+
+            // Stop
+            let final_count = counter.call(CounterCall::Stop).await.unwrap();
+            assert_eq!(final_count, 2);
+        });
+    }
+
+    #[test]
+    pub fn backend_blocking_handles_call_and_cast() {
+        let runtime = rt::Runtime::new().unwrap();
+        runtime.block_on(async move {
+            let mut counter = Counter { count: 0 }.start_with_backend(Backend::Blocking);
+
+            // Test call
+            let result = counter.call(CounterCall::Get).await.unwrap();
+            assert_eq!(result, 0);
+
+            let result = counter.call(CounterCall::Increment).await.unwrap();
+            assert_eq!(result, 1);
+
+            // Test cast
+            counter.cast(CounterCast::Increment).await.unwrap();
+            rt::sleep(Duration::from_millis(50)).await; // Give time for cast to process
+
+            let result = counter.call(CounterCall::Get).await.unwrap();
+            assert_eq!(result, 2);
+
+            // Stop
+            let final_count = counter.call(CounterCall::Stop).await.unwrap();
+            assert_eq!(final_count, 2);
+        });
+    }
+
+    #[test]
+    pub fn backend_thread_handles_call_and_cast() {
+        let runtime = rt::Runtime::new().unwrap();
+        runtime.block_on(async move {
+            let mut counter = Counter { count: 0 }.start_with_backend(Backend::Thread);
+
+            // Test call
+            let result = counter.call(CounterCall::Get).await.unwrap();
+            assert_eq!(result, 0);
+
+            let result = counter.call(CounterCall::Increment).await.unwrap();
+            assert_eq!(result, 1);
+
+            // Test cast
+            counter.cast(CounterCast::Increment).await.unwrap();
+            rt::sleep(Duration::from_millis(50)).await; // Give time for cast to process
+
+            let result = counter.call(CounterCall::Get).await.unwrap();
+            assert_eq!(result, 2);
+
+            // Stop
+            let final_count = counter.call(CounterCall::Stop).await.unwrap();
+            assert_eq!(final_count, 2);
+        });
+    }
+
+    #[test]
+    pub fn backend_thread_isolates_blocking_work() {
+        // Similar to badly_behaved_thread but using Backend::Thread
+        let runtime = rt::Runtime::new().unwrap();
+        runtime.block_on(async move {
+            let mut badboy = BadlyBehavedTask.start_with_backend(Backend::Thread);
+            let _ = badboy.cast(Unused).await;
+            let mut goodboy = WellBehavedTask { count: 0 }.start();
+            let _ = goodboy.cast(Unused).await;
+            rt::sleep(Duration::from_secs(1)).await;
+            let count = goodboy.call(InMessage::GetCount).await.unwrap();
+
+            // goodboy should have run normally because badboy is on a separate thread
+            match count {
+                OutMsg::Count(num) => {
+                    assert_eq!(num, 10);
+                }
+            }
+            goodboy.call(InMessage::Stop).await.unwrap();
+        });
+    }
+
+    #[test]
+    pub fn multiple_backends_concurrent() {
+        let runtime = rt::Runtime::new().unwrap();
+        runtime.block_on(async move {
+            // Start counters on all three backends
+            let mut async_counter = Counter { count: 0 }.start();
+            let mut blocking_counter = Counter { count: 100 }.start_with_backend(Backend::Blocking);
+            let mut thread_counter = Counter { count: 200 }.start_with_backend(Backend::Thread);
+
+            // Increment each
+            async_counter.call(CounterCall::Increment).await.unwrap();
+            blocking_counter.call(CounterCall::Increment).await.unwrap();
+            thread_counter.call(CounterCall::Increment).await.unwrap();
+
+            // Verify each has independent state
+            let async_val = async_counter.call(CounterCall::Get).await.unwrap();
+            let blocking_val = blocking_counter.call(CounterCall::Get).await.unwrap();
+            let thread_val = thread_counter.call(CounterCall::Get).await.unwrap();
+
+            assert_eq!(async_val, 1);
+            assert_eq!(blocking_val, 101);
+            assert_eq!(thread_val, 201);
+
+            // Clean up
+            async_counter.call(CounterCall::Stop).await.unwrap();
+            blocking_counter.call(CounterCall::Stop).await.unwrap();
+            thread_counter.call(CounterCall::Stop).await.unwrap();
+        });
+    }
+
+    #[test]
+    pub fn backend_default_works_in_start() {
+        let runtime = rt::Runtime::new().unwrap();
+        runtime.block_on(async move {
+            // Using Backend::default() should work the same as Backend::Async
+            let mut counter = Counter { count: 42 }.start_with_backend(Backend::Async);
+
+            let result = counter.call(CounterCall::Get).await.unwrap();
+            assert_eq!(result, 42);
+
+            counter.call(CounterCall::Stop).await.unwrap();
+        });
+    }
 }
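
The isolation test above leans on BadlyBehavedTask, WellBehavedTask, Unused, InMessage and OutMsg, which are defined earlier in the same test module and are not shown in this diff. The same idea can be sketched, hypothetically and outside this commit, with the Counter type from this diff plus an invented Blocker whose cast handler sleeps synchronously; the point is that on Backend::Thread the sleep only stalls Blocker's own dedicated thread.

// Hypothetical sketch, reusing Counter/CounterCall from the diff above;
// Blocker and BlockerCast are invented for illustration and assume the same
// GenServer trait shape exercised by the Counter impl.
struct Blocker;

#[derive(Clone)]
enum BlockerCast {
    Spin,
}

impl GenServer for Blocker {
    type CallMsg = ();
    type CastMsg = BlockerCast;
    type OutMsg = ();
    type Error = ();

    async fn handle_call(
        &mut self,
        _: Self::CallMsg,
        _: &GenServerHandle<Self>,
    ) -> CallResponse<Self> {
        CallResponse::Reply(())
    }

    async fn handle_cast(
        &mut self,
        _: Self::CastMsg,
        _: &GenServerHandle<Self>,
    ) -> CastResponse {
        // Synchronous sleep: blocks whichever thread runs this loop. On
        // Backend::Thread that is a dedicated thread with its own runtime,
        // so GenServers on the main runtime keep making progress.
        std::thread::sleep(Duration::from_secs(2));
        CastResponse::NoReply
    }
}

// Usage sketch, inside a tokio runtime:
// let mut blocker = Blocker.start_with_backend(Backend::Thread);
// let _ = blocker.cast(BlockerCast::Spin).await;
// let mut counter = Counter { count: 0 }.start(); // default Backend::Async
// // counter.call(CounterCall::Get) still responds while blocker's thread sleeps.

With the default Backend::Async instead, that std::thread::sleep would pin a runtime worker thread for the full two seconds, which is exactly the situation the per-variant warning on Backend::Async describes.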

0 commit comments
