/*
Package workerpool queues work to a limited number of goroutines.

The purpose of the worker pool is to limit the concurrency of tasks executed by
the workers. This is useful when performing tasks that require sufficient
resources (CPU, memory, etc.), and running too many tasks at the same time
would exhaust resources.

|
Non-blocking task submission

A task is a function submitted to the worker pool for execution. Submitting
tasks to this worker pool will not block, regardless of the number of tasks.
Incoming tasks are immediately dispatched to an available worker. If no worker
is immediately available, or there are already tasks waiting for an available
worker, then the task is put on a waiting queue to wait for an available
worker.

The intent of the worker pool is to limit the concurrency of task execution,
not limit the number of tasks queued to be executed. Therefore, this unbounded
input of tasks is acceptable as the tasks cannot be discarded. If the number of
inbound tasks is too many to even queue for pending processing, then the
solution is outside the scope of workerpool. It should be solved by
distributing load over multiple systems, and/or storing input for pending
processing in intermediate storage such as a database, file system, distributed
message queue, etc.

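As a rough illustration of this behavior, the sketch below implements a minimal pool whose submit path never blocks on busy workers: a dispatcher goroutine keeps the input channel drained and parks overflow tasks in an unbounded slice until a worker frees up. All names here (`Pool`, `NewPool`, `Submit`, `StopWait`) and the structure are illustrative assumptions, not this package's actual API.

```go
package main

import (
	"fmt"
	"sync"
)

// Pool is a minimal sketch, not the package's real type: Submit hands a
// task to a dispatcher goroutine, which buffers overflow in an unbounded
// slice so submission never blocks waiting for a busy worker.
type Pool struct {
	tasks chan func()
	done  chan struct{}
}

// NewPool starts maxWorkers worker goroutines and one dispatcher.
func NewPool(maxWorkers int) *Pool {
	p := &Pool{tasks: make(chan func()), done: make(chan struct{})}
	work := make(chan func())
	var wg sync.WaitGroup
	for i := 0; i < maxWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range work {
				t()
			}
		}()
	}
	go func() { // dispatcher: keeps the input channel drained
		var waiting []func() // the unbounded waiting queue
		open := true
		for open || len(waiting) > 0 {
			if len(waiting) == 0 {
				t, ok := <-p.tasks
				if !ok {
					open = false
				} else {
					waiting = append(waiting, t)
				}
				continue
			}
			if !open { // input closed: drain the queue to the workers
				work <- waiting[0]
				waiting = waiting[1:]
				continue
			}
			select { // accept new tasks while handing out queued ones
			case t, ok := <-p.tasks:
				if !ok {
					open = false
				} else {
					waiting = append(waiting, t)
				}
			case work <- waiting[0]:
				waiting = waiting[1:]
			}
		}
		close(work)
		wg.Wait()
		close(p.done)
	}()
	return p
}

// Submit queues a task; it blocks only for the brief dispatcher handoff.
func (p *Pool) Submit(task func()) { p.tasks <- task }

// StopWait stops accepting tasks and waits for all queued tasks to run.
func (p *Pool) StopWait() {
	close(p.tasks)
	<-p.done
}

// runCounted submits n counting tasks to a pool of maxWorkers and
// returns how many actually ran.
func runCounted(n, maxWorkers int) int {
	var mu sync.Mutex
	count := 0
	p := NewPool(maxWorkers)
	for i := 0; i < n; i++ {
		p.Submit(func() { mu.Lock(); count++; mu.Unlock() })
	}
	p.StopWait()
	return count
}

func main() {
	fmt.Println(runCounted(10, 2)) // prints 10: every queued task ran
}
```

Because the dispatcher is always ready either to accept a new task or to hand out a queued one, the caller of Submit waits only for a channel handoff, never for a worker.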
Dispatcher

This worker pool uses a single dispatcher goroutine to read tasks from the
input task queue and dispatch them to worker goroutines. This allows for a
small input channel, and lets the dispatcher queue as many tasks as are
submitted when there are no available workers. Additionally, the dispatcher can
adjust the number of workers as appropriate for the workload, without having
to utilize locked counters and checks incurred on task submission.

When no tasks have been submitted for a period of time, a worker is removed by
the dispatcher. This is done until there are no more workers to remove. The
minimum number of workers is always zero, because the time to start new workers
is insignificant.

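The worker-removal idea can be sketched as follows: the dispatcher starts a worker whenever no idle one is available, and on each idle timeout it sends a nil task, which tells exactly one idle worker to exit. The function names, the nil-as-quit convention, and the timings are illustrative assumptions for this sketch, not this package's internals.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

var liveWorkers int64 // demo-only observable worker count

// dispatch starts a worker per task when none is idle, and after each
// quiet period of idleTimeout stops one idle worker by sending nil.
func dispatch(tasks <-chan func(), idleTimeout time.Duration, done chan<- struct{}) {
	work := make(chan func())
	timer := time.NewTimer(idleTimeout)
	for {
		select {
		case t, ok := <-tasks:
			if !ok {
				close(work) // remaining workers exit via closed channel
				close(done)
				return
			}
			select {
			case work <- t: // an idle worker took the task
			default: // none idle: start a worker, then hand it the task
				atomic.AddInt64(&liveWorkers, 1)
				go worker(work)
				work <- t
			}
			timer.Reset(idleTimeout)
		case <-timer.C:
			select {
			case work <- nil: // nil asks exactly one idle worker to exit
			default: // no idle worker left to remove
			}
			timer.Reset(idleTimeout)
		}
	}
}

func worker(work <-chan func()) {
	defer atomic.AddInt64(&liveWorkers, -1)
	for t := range work {
		if t == nil {
			return
		}
		t()
	}
}

// demo submits a burst of four blocking tasks, then goes idle and
// reports the worker count at the busy peak and after the idle period.
func demo() (busy, idle int64) {
	tasks := make(chan func())
	done := make(chan struct{})
	go dispatch(tasks, 50*time.Millisecond, done)

	started := make(chan struct{})
	release := make(chan struct{})
	for i := 0; i < 4; i++ { // each task blocks, forcing a new worker
		tasks <- func() { started <- struct{}{}; <-release }
	}
	for i := 0; i < 4; i++ {
		<-started
	}
	busy = atomic.LoadInt64(&liveWorkers)
	close(release)

	time.Sleep(400 * time.Millisecond) // several idle timeouts elapse
	idle = atomic.LoadInt64(&liveWorkers)
	close(tasks)
	<-done
	for atomic.LoadInt64(&liveWorkers) != 0 {
		time.Sleep(time.Millisecond) // let exiting workers finish
	}
	return busy, idle
}

func main() {
	busy, idle := demo()
	fmt.Println(busy == 4, idle < busy) // workers shrink when idle
}
```

Because only the dispatcher starts and stops workers, no locked counters are touched on the submission path, matching the rationale above.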
Usage note

It is advisable to use different worker pools for tasks that are bound by
different resources, or that have different resource use patterns. For example,
tasks that use X MB of memory may need different concurrency limits than tasks
that use Y MB of memory.

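One way to picture this advice: give each resource class its own concurrency limit. The sketch below uses two independently sized channel-based limiters as a stand-in for two worker pools (it spawns a goroutine per task purely for brevity, unlike the queue-based design described here); the names, sizes, and the heavy/light split are made up for illustration.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// limiter caps concurrent tasks with a buffered channel; a tiny stand-in
// for one worker pool, used only to illustrate per-resource limits.
type limiter struct {
	slots chan struct{}
	cur   int32
	peak  int32 // highest concurrency observed
}

func newLimiter(max int) *limiter {
	return &limiter{slots: make(chan struct{}, max)}
}

func (l *limiter) run(wg *sync.WaitGroup, task func()) {
	wg.Add(1)
	go func() {
		defer wg.Done()
		l.slots <- struct{}{} // acquire one of max slots
		n := atomic.AddInt32(&l.cur, 1)
		for { // record peak concurrency
			p := atomic.LoadInt32(&l.peak)
			if n <= p || atomic.CompareAndSwapInt32(&l.peak, p, n) {
				break
			}
		}
		task()
		atomic.AddInt32(&l.cur, -1)
		<-l.slots // release the slot
	}()
}

// runMixed drives 20 hypothetical heavy (large-memory) tasks and 20
// light tasks through separately sized limiters and returns each peak.
func runMixed() (heavyPeak, lightPeak int32) {
	heavy, light := newLimiter(2), newLimiter(8)
	var wg sync.WaitGroup
	for i := 0; i < 20; i++ {
		heavy.run(&wg, func() { time.Sleep(5 * time.Millisecond) })
		light.run(&wg, func() { time.Sleep(5 * time.Millisecond) })
	}
	wg.Wait()
	return heavy.peak, light.peak
}

func main() {
	h, l := runMixed()
	fmt.Println(h <= 2, l <= 8) // each class respects its own limit
}
```

With a single shared pool, twenty queued heavy tasks could starve the light ones, or a limit safe for light tasks could over-commit memory for heavy ones; separate limits avoid both.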
Waiting queue vs goroutines

When there are no available workers to handle incoming tasks, the tasks are put
on a waiting queue, in this implementation. In implementations mentioned in the
credits below, these tasks were passed to goroutines. Using a queue is faster
and has less memory overhead than creating a separate goroutine for each
waiting task, allowing a much higher number of waiting tasks. Also, using a
waiting queue ensures that tasks are given to workers in the order the tasks
were received.

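The ordering claim can be checked with a small sketch: a dispatcher that buffers pending task IDs in a FIFO slice delivers them to a single worker in exactly the order they were submitted. As before, the structure and names are illustrative, not this package's actual code.

```go
package main

import "fmt"

// submitAll pushes ids 1..n through a FIFO-queueing dispatcher to a
// single worker and returns the order in which the worker saw them.
// A sketch of the ordering property only, not the package's internals.
func submitAll(n int) []int {
	tasks := make(chan int)
	work := make(chan int)
	done := make(chan []int)

	go func() { // single worker: records arrival order
		var order []int
		for id := range work {
			order = append(order, id)
		}
		done <- order
	}()

	go func() { // dispatcher: FIFO slice between input and worker
		var waiting []int
		open := true
		for open || len(waiting) > 0 {
			if len(waiting) == 0 {
				id, ok := <-tasks
				if !ok {
					open = false
				} else {
					waiting = append(waiting, id)
				}
				continue
			}
			if !open {
				work <- waiting[0] // drain the remaining queue in order
				waiting = waiting[1:]
				continue
			}
			select {
			case id, ok := <-tasks:
				if !ok {
					open = false
				} else {
					waiting = append(waiting, id)
				}
			case work <- waiting[0]:
				waiting = waiting[1:]
			}
		}
		close(work)
	}()

	for i := 1; i <= n; i++ {
		tasks <- i
	}
	close(tasks)
	return <-done
}

func main() {
	fmt.Println(submitAll(5)) // [1 2 3 4 5]: submission order preserved
}
```

Handing each waiting task to its own goroutine would instead leave delivery order to the scheduler, besides paying a goroutine stack per waiting task; the slice costs a few words per entry and keeps FIFO order by construction.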