
Commit 074215f

Minor tweaks, marked as reviewed
1 parent ab32b7e commit 074215f

1 file changed (+21 -20 lines)

nservicebus/handlers/async-handlers.md

Lines changed: 21 additions & 20 deletions
@@ -1,19 +1,19 @@
 ---
 title: Asynchronous Handlers
 summary: How to deal with synchronous and asynchronous code inside asynchronous handlers
-reviewed: 2022-06-28
+reviewed: 2025-01-17
 component: Core
 versions: '[6.0,)'
 ---

 > [!WARNING]
-> It is difficult to give generic advice on how asynchronous code should be structured. It is important to understand compute-bound vs. I/O-bound operations and avoid copying and pasting snippets without analysing the benefits they provide for a given business scenarios. Don't assume; measure it.
+> It is difficult to give generic advice on structuring asynchronous code. It is important to understand compute-bound vs. I/O-bound operations and avoid copying and pasting snippets without analyzing their benefits for a given business scenario. Don't assume; measure it.

-[Handlers](/nservicebus/handlers/) and [sagas](/nservicebus/sagas/) are executed by threads from the thread pool. Depending on the transport implementation the worker thread pool thread or the I/O thread pool thread might be used. Typically message handlers and sagas issue I/O-bound work, such as sending or publishing messages, storing information into databases, and calling web services. In other cases, message handlers are used to schedule compute-bound work. To be able to write efficient message handlers and sagas, it is crucial to understand the difference between those scenarios.
+[Handlers](/nservicebus/handlers/) and [sagas](/nservicebus/sagas/) are executed by threads from the thread pool. Depending on the transport implementation, the worker thread pool thread or the I/O thread pool thread might be used. Message handlers and sagas typically issue I/O-bound work, such as sending or publishing messages, storing information in databases, and calling web services. In other cases, message handlers are used to schedule compute-bound work. To write efficient message handlers and sagas, it is crucial to understand the difference between those scenarios.

 ## Thread pool

-A thread pool is associated with a process and manages the execution of asynchronous callbacks on behalf of the application. Its primary purpose is to reduce the number of application threads and provide efficient management of threads. Every thread pool manages a pool of threads designated to handle one class of workload: either I/O-bound or compute-bound work.
+A thread pool is associated with a process and manages the execution of asynchronous callbacks on behalf of the application. Its primary purpose is to reduce the number of application threads and provide efficient thread management. Every thread pool manages a pool of threads designated to handle one class of workload: I/O-bound or compute-bound work.

 Further reading:

@@ -27,47 +27,48 @@ Further reading:

 Parallel / Compute-bound blocking work happens on the worker thread pool. Things like [`Task.Run`](https://msdn.microsoft.com/en-us/library/system.threading.tasks.task.run.aspx), [`Task.Factory.StartNew`](https://msdn.microsoft.com/en-au/library/dd321439.aspx), or [`Parallel.For`](https://msdn.microsoft.com/en-us/library/system.threading.tasks.parallel.for.aspx) schedule tasks on the worker thread pool.

-Whenever a compute-bound work is scheduled, the worker thread pool will start expanding its worker threads (ramp-up phase). Ramping up more worker threads is expensive. The thread injection rate of the worker thread pool is limited.
+Whenever compute-bound work is scheduled, the worker thread pool will expand its worker threads (ramp-up phase). Ramping up more worker threads is expensive. The thread injection rate of the worker thread pool is limited.

 #### Compute-bound recommendations:

-* Manual scheduling of compute-bound work to the worker thread pool is a top-level concern only. Use [`Task.Run`](https://msdn.microsoft.com/en-us/library/system.threading.tasks.task.run.aspx) or [`Task.Factory.StartNew`](https://msdn.microsoft.com/en-au/library/dd321439.aspx) as high up in the call hierarchy as possible (e.g. in the `Handle` methods of either a [handler](/nservicebus/handlers/) or [saga](/nservicebus/sagas/).
+* Manual scheduling of compute-bound work to the worker thread pool is a top-level concern only. Use [`Task.Run`](https://msdn.microsoft.com/en-us/library/system.threading.tasks.task.run.aspx) or [`Task.Factory.StartNew`](https://msdn.microsoft.com/en-au/library/dd321439.aspx) as high up in the call hierarchy as possible (e.g., in the `Handle` methods of either a [handler](/nservicebus/handlers/) or [saga](/nservicebus/sagas/)).
+
 * Avoid those operations deeper in the call hierarchy.
 * Group compute-bound operations together as much as possible.
 * Make compute-bound operations coarse-grained instead of fine-grained.

 ### I/O-thread pool

-I/O-bound work is scheduled on the I/O-thread pool. The I/O-bound thread pool has a fixed number of worker threads (usually equal to the number of cores) which can work concurrently on thousands of I/O-bound tasks. I/O-bound work under Windows uses [I/O completion ports (IOCP)](https://msdn.microsoft.com/en-us/library/windows/desktop/aa365198.aspx) to get notifications when an I/O-bound operation is completed. IOCP enables efficient offloading of I/O-bound work from the user code to the kernel, driver, and hardware without blocking the user code until the I/O work is done. To achieve that, the user code registers notifications in the form of a callback. The callback occurs on an I/O thread which is a pool thread managed by the I/O system that is made available to the user code.
+I/O-bound work is scheduled on the I/O-thread pool. The I/O-bound thread pool has a fixed number of worker threads (usually equal to the number of cores), which can work concurrently on thousands of I/O-bound tasks. I/O-bound work under Windows uses [I/O completion ports (IOCP)](https://msdn.microsoft.com/en-us/library/windows/desktop/aa365198.aspx) to get notifications when an I/O-bound operation is completed. IOCP enables efficient offloading of I/O-bound work from the user code to the kernel, driver, and hardware without blocking the user code until the I/O work is done. To achieve that, the user code registers notifications in the form of a callback. The callback occurs on an I/O thread, which is a pool thread managed by the I/O system and made available to the user code.

-I/O-bound work typically takes longer to complete compared to compute-bound work. The I/O system is optimized to keep the thread count low and schedule all callbacks, and therefore the execution of interleaved user code on that one thread. Due to those optimizations, all work gets serialized, and there is minimal context switching as the OS scheduler owns the threads. In general, asynchronous code can handle bursting traffic much better because of the "always-on" nature of the IOCP.
+I/O-bound work typically takes longer to complete compared to compute-bound work. The I/O system is optimized to keep the thread count low and schedule all callbacks, thereby allowing the execution of interleaved user code on that one thread. Due to those optimizations, all work gets serialized, and there is minimal context switching as the OS scheduler owns the threads. In general, asynchronous code can handle bursting traffic much better because of the "always-on" nature of the IOCP.

 ### Memory and allocations

 Asynchronous code tends to use much less memory because the amount of memory saved by freeing up a thread in the worker thread pool dwarfs the amount of memory used for all the compiler-generated async structures combined.

 ### Synchronous vs. asynchronous

-If each request is examined in isolation, asynchronous code would be slightly slower than the corresponding synchronous version. There might be extra kernel transitions, task scheduling, etc. involved but the scalability more than makes up for it.
+If each request is examined in isolation, asynchronous code would be slightly slower than the corresponding synchronous version. There might be extra kernel transitions, task scheduling, etc., but the scalability more than compensates for this.

 From a server perspective, if asynchronous code is compared to synchronous code by looking at one method or one request at a time, then synchronous might make more sense. But if asynchronous code is compared to parallelism — watching the server as a whole — asynchronous wins. Every worker thread that can be freed up on a server is worth freeing up. It reduces the amount of memory needed and frees up the CPU for compute-bound work while saturating the I/O system completely.

 ## Calling short-running, compute-bound code

-Short-running, compute-bound code that is executed in the handler should be executed directly on the I/O-thread that is executing the handler code.
+Short-running, compute-bound code that is executed in the handler should be executed directly on the I/O thread that is executing the handler code.

 snippet: ShortComputeBoundMessageHandler
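
The `snippet:` reference above is resolved from the documentation's sample code, which is not part of this diff. Purely as an illustrative sketch of the idea, assuming a hypothetical `MyMessage` command and `ChecksumCalculated` event, a handler running short compute-bound work inline might look like this:

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical message types used for illustration only.
public class MyMessage : ICommand
{
    public byte[] Payload { get; set; }
}

public class ChecksumCalculated : IEvent
{
    public int Checksum { get; set; }
}

public class ShortComputeBoundMessageHandler : IHandleMessages<MyMessage>
{
    public Task Handle(MyMessage message, IMessageHandlerContext context)
    {
        // Cheap, short-running CPU work runs inline on the thread that is
        // executing the handler; it is deliberately not wrapped in Task.Run.
        var checksum = 0;
        foreach (var b in message.Payload)
        {
            checksum = unchecked(checksum + b);
        }

        return context.Publish(new ChecksumCalculated { Checksum = checksum });
    }
}
```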

-Call the code directly and **do not** wrap it with a [`Task.Run`](https://msdn.microsoft.com/en-us/library/system.threading.tasks.task.run.aspx) or [`Task.Factory.StartNew`](https://msdn.microsoft.com/en-au/library/dd321439.aspx).
+Call the code directly, and **do not** wrap it with a [`Task.Run`](https://msdn.microsoft.com/en-us/library/system.threading.tasks.task.run.aspx) or [`Task.Factory.StartNew`](https://msdn.microsoft.com/en-au/library/dd321439.aspx).

-For the majority of business scenarios, this approach is acceptable since many of the asynchronous base class library methods in the .NET Framework will schedule continuations on the worker thread pool; the likelihood that no I/O-thread is blocked is high.
+This approach is acceptable for most business scenarios since many of the asynchronous base class library methods in the .NET Framework will schedule continuations on the worker thread pool; the likelihood that no I/O thread is blocked is high.

 ## Calling long-running, compute-bound code

 > [!WARNING]
-> This approach should be used only after a thorough analysis of the runtime behavior and the code involved in the call hierarchy of a handler. Wrapping code inside the handler with `Task.Run` or `Task.Factory.StartNew` can seriously harm the throughput if applied incorrectly. It should be used when multiple long-running compute-bound tasks need to be executed in parallel.
+> This approach should be used only after a thorough analysis of the runtime behavior and the code involved in the call hierarchy of a handler. Wrapping code inside the handler with `Task.Run` or `Task.Factory.StartNew` can seriously harm the throughput if applied incorrectly. It should be used when multiple long-running compute-bound tasks must be executed in parallel.

-Long-running compute-bound code that is executed in a handler could be offloaded to the worker thread pool.
+Long-running compute-bound code executed in a handler could be offloaded to the worker thread pool.

 snippet: LongComputeBoundMessageHandler
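
Again, the referenced snippet is not included in this diff. A rough sketch of offloading long-running compute-bound work with `Task.Run` at the top of the `Handle` method, using hypothetical `ResizeImages` and `ImagesResized` message types, could look like this:

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical message types used for illustration only.
public class ResizeImages : ICommand
{
    public string[] ImagePaths { get; set; }
}

public class ImagesResized : IEvent
{
}

public class LongComputeBoundMessageHandler : IHandleMessages<ResizeImages>
{
    public async Task Handle(ResizeImages message, IMessageHandlerContext context)
    {
        // Offload the long-running, compute-bound part to the worker thread pool
        // at the top of the call hierarchy and await its completion.
        await Task.Run(() => ResizeAll(message.ImagePaths));

        await context.Publish(new ImagesResized());
    }

    static void ResizeAll(string[] imagePaths)
    {
        // Placeholder for expensive, CPU-bound work (e.g. image processing).
    }
}
```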
@@ -83,7 +84,7 @@ snippet: HandlerAwaitsTheTask

 ### Return the task

-For high-throughput scenarios and if there are only one or two asynchronous exit points in the Handle method, the `async` keyword can be avoided completely by returning the task instead of awaiting it. This will omit the state machine creation which drives the async code and reduce the number of allocations on the given code path.
+For high-throughput scenarios, and if there are only one or two asynchronous exit points in the Handle method, the `async` keyword can be avoided entirely by returning the task instead of awaiting it. This will omit the state machine creation, which drives the async code, and reduce the number of allocations on the given code path.

 snippet: HandlerReturnsATask
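
As an illustration only (the real snippet is not part of this diff), a handler with a single asynchronous exit point that returns the task directly, using hypothetical `OrderPlaced` and `ShipOrder` message types, might look like this:

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical message types used for illustration only.
public class OrderPlaced : IEvent
{
    public string OrderId { get; set; }
}

public class ShipOrder : ICommand
{
    public string OrderId { get; set; }
}

public class HandlerReturnsATask : IHandleMessages<OrderPlaced>
{
    // No async keyword: with a single asynchronous exit point, the task from
    // context.Send can be returned directly, avoiding the async state machine.
    public Task Handle(OrderPlaced message, IMessageHandlerContext context)
    {
        return context.Send(new ShipOrder { OrderId = message.OrderId });
    }
}
```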
@@ -97,13 +98,13 @@ Task-based APIs enable better composition of asynchronous code and allow conscio

 #### Batched

-By default, all outgoing message operations on the message handler contexts are [batched](/nservicebus/messaging/batched-dispatch.md). Batching means messages are kept in memory and sent out when the handler is completed. So the I/O-bound work happens outside the execution scope of a handler (individual transports may apply optimizations). For a few outgoing message operations it makes sense, to reduce complexity, to sequentially await all the outgoing operations as shown below.
+By default, all outgoing message operations on the message handler contexts are [batched](/nservicebus/messaging/batched-dispatch.md). Batching means messages are kept in memory and sent out when the handler is completed. So, the I/O-bound work happens outside the execution scope of a handler (individual transports may apply optimizations). For a few outgoing message operations, it makes sense to sequentially await all the outgoing operations to reduce complexity, as shown below.

 snippet: BatchedDispatchHandler
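
For orientation (the documented snippet itself is not shown in this diff), a sketch of sequentially awaiting batched send operations, with hypothetical `OrderAccepted`, `BillCustomer`, and `ReserveStock` message types, could look roughly like this:

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical message types used for illustration only.
public class OrderAccepted : IEvent { }
public class BillCustomer : ICommand { }
public class ReserveStock : ICommand { }

public class BatchedDispatchHandler : IHandleMessages<OrderAccepted>
{
    public async Task Handle(OrderAccepted message, IMessageHandlerContext context)
    {
        // With batched dispatch (the default), these operations only enqueue the
        // outgoing messages in memory; the actual I/O happens after the handler
        // completes, so sequentially awaiting them keeps the code simple.
        await context.Send(new BillCustomer());
        await context.Send(new ReserveStock());
    }
}
```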

 #### Immediate dispatch

-[Immediate dispatch](/nservicebus/messaging/send-a-message.md#dispatching-a-message-immediately) means outgoing message operations will be immediately dispatched to the underlying transport. For immediate dispatch operations, it might make sense to execute them concurrently as shown below.
+[Immediate dispatch](/nservicebus/messaging/send-a-message.md#dispatching-a-message-immediately) means outgoing message operations will be immediately dispatched to the underlying transport. For immediate dispatch operations, it might make sense to execute them concurrently, as shown below.

 snippet: ImmediateDispatchHandler
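
A possible shape for the referenced snippet, sending several immediate-dispatch operations concurrently and awaiting them with `Task.WhenAll` (hypothetical `ExportRequested` and `ExportChunk` messages), is sketched below:

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical message types used for illustration only.
public class ExportRequested : ICommand { }
public class ExportChunk : ICommand { public int Number { get; set; } }

public class ImmediateDispatchHandler : IHandleMessages<ExportRequested>
{
    public Task Handle(ExportRequested message, IMessageHandlerContext context)
    {
        var tasks = new Task[10];
        for (var i = 0; i < tasks.Length; i++)
        {
            // RequireImmediateDispatch bypasses batching, so each send is
            // dispatched to the transport right away and can run concurrently.
            var options = new SendOptions();
            options.RequireImmediateDispatch();
            tasks[i] = context.Send(new ExportChunk { Number = i }, options);
        }

        return Task.WhenAll(tasks);
    }
}
```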
@@ -117,13 +118,13 @@ It is also possible to limit the concurrency by using [`SemaphoreSlim`](https://

 snippet: ConcurrencyLimittingImmediateDispatchHandler
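
The snippet name above is a placeholder resolved outside this diff; a rough sketch of limiting concurrent immediate-dispatch sends with `SemaphoreSlim`, reusing the hypothetical `ExportRequested` and `ExportChunk` messages from the previous sketch, might look like this:

```csharp
using System.Threading;
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical message types used for illustration only.
public class ExportRequested : ICommand { }
public class ExportChunk : ICommand { public int Number { get; set; } }

public class ConcurrencyLimitingImmediateDispatchHandler : IHandleMessages<ExportRequested>
{
    public async Task Handle(ExportRequested message, IMessageHandlerContext context)
    {
        // Allow at most four immediate-dispatch sends to be in flight at once.
        using (var semaphore = new SemaphoreSlim(4))
        {
            var tasks = new Task[10];
            for (var i = 0; i < tasks.Length; i++)
            {
                tasks[i] = SendLimited(context, i, semaphore);
            }

            await Task.WhenAll(tasks);
        }
    }

    static async Task SendLimited(IMessageHandlerContext context, int number, SemaphoreSlim semaphore)
    {
        await semaphore.WaitAsync();
        try
        {
            var options = new SendOptions();
            options.RequireImmediateDispatch();
            await context.Send(new ExportChunk { Number = number }, options);
        }
        finally
        {
            semaphore.Release();
        }
    }
}
```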

-In practice, packaging operations together has proven to be more effective both in regards to memory allocations and performance. The snippet is shown nonetheless for completeness reasons as well as because [`SemaphoreSlim`](https://msdn.microsoft.com/en-us/library/system.threading.semaphoreslim.aspx) is a useful concurrency primitive for various scenarios.
+In practice, packaging operations together has proven to be more effective in terms of both memory allocations and performance. The snippet is shown nonetheless for completeness, and because [`SemaphoreSlim`](https://msdn.microsoft.com/en-us/library/system.threading.semaphoreslim.aspx) is a helpful concurrency primitive for various scenarios.

-## Integration with non-tasked based APIs
+## Integration with non-task-based APIs

 ### Events

-Sometimes it is necessary to call APIs from an asynchronous handler that uses events as the trigger for completion. Before `async`/`await` was introduced, [`ManualResetEvent`](https://msdn.microsoft.com/en-us/library/system.threading.manualresetevent.aspx) or [`AutoResetEvent`](https://msdn.microsoft.com/en-us/library/system.threading.autoresetevent.aspx) were usually used to synchronize runtime code flow. Unfortunately, these synchronization primitives are of a blocking nature. For asynchronous one-time event synchronization, the [`TaskCompletionSource<TResult>`](https://msdn.microsoft.com/en-us/library/dd449174.aspx) can be used.
+Sometimes, it is necessary to call APIs that use events as the trigger for completion from within an asynchronous handler. Before `async`/`await` was introduced, [`ManualResetEvent`](https://msdn.microsoft.com/en-us/library/system.threading.manualresetevent.aspx) or [`AutoResetEvent`](https://msdn.microsoft.com/en-us/library/system.threading.autoresetevent.aspx) were usually used to synchronize runtime code flow. Unfortunately, these synchronization primitives are of a blocking nature. For asynchronous one-time event synchronization, the [`TaskCompletionSource<TResult>`](https://msdn.microsoft.com/en-us/library/dd449174.aspx) can be used.

 snippet: HandlerWhichIntegratesWithEvent
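
As a final illustration (again, not the actual documented snippet), a handler bridging an event-based API to a task via `TaskCompletionSource<TResult>`, with a hypothetical `StartUpload` message and a `LegacyUploadClient` stand-in, could be sketched as:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical message type used for illustration only.
public class StartUpload : ICommand
{
    public string FilePath { get; set; }
}

// Minimal stand-in for a non-task-based API that signals completion via an event.
public class LegacyUploadClient
{
    public event EventHandler UploadCompleted;

    public void StartUpload(string filePath)
    {
        // A real client would raise the event when the upload finishes;
        // here it completes immediately to keep the sketch self-contained.
        UploadCompleted?.Invoke(this, EventArgs.Empty);
    }
}

public class HandlerWhichIntegratesWithEvent : IHandleMessages<StartUpload>
{
    public async Task Handle(StartUpload message, IMessageHandlerContext context)
    {
        var completion = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);

        var client = new LegacyUploadClient();
        client.UploadCompleted += (sender, args) => completion.TrySetResult(true);

        client.StartUpload(message.FilePath);

        // Await the one-time completion event without blocking a thread.
        await completion.Task;
    }
}
```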