at any time unless interrupts have been masked. This applies to both
cooperative threads and preemptive threads.

The kernel can be built with one of several choices for the ready queue
implementation, offering different tradeoffs among code size, constant-factor
runtime overhead, and performance scaling as the number of runnable threads
grows; a build configuration sketch follows the list below.

* Simple linked-list ready queue (:option:`CONFIG_SCHED_DUMB`)

  The scheduler ready queue will be implemented as a simple unordered list,
  with very fast constant-time performance for a single thread and very low
  code size. This implementation should be selected on systems with
  constrained code size that will never see more than a small number (three,
  say) of runnable threads in the queue at any given time. On most platforms
  (that are not otherwise using the red/black tree) this yields a savings of
  ~2 kB of code size.

* Red/black tree ready queue (:option:`CONFIG_SCHED_SCALABLE`)

  The scheduler ready queue will be implemented as a red/black tree. Insertion
  and removal are O(log N) with a noticeably higher constant factor than the
  simple list, and on most platforms (that are not otherwise using the
  red/black tree somewhere) this requires an extra ~2 kB of code. In exchange,
  the resulting behavior scales cleanly and quickly into the many thousands
  of threads.

  Use this for applications needing many concurrent runnable threads (more
  than 20 or so). Most applications won't need this ready queue
  implementation.

* Traditional multi-queue ready queue (:option:`CONFIG_SCHED_MULTIQ`)

  When selected, the scheduler ready queue will be implemented as the
  classic/textbook array of lists, one per priority (max 32 priorities).

  This corresponds to the scheduler algorithm used in Zephyr versions prior
  to 1.12.

  It incurs only a tiny code size overhead vs. the "dumb" scheduler and runs
  in O(1) time in almost all circumstances with a very low constant factor.
  However, it requires a fairly large RAM budget to store the array of list
  heads, and it is incompatible with features that need to order threads more
  finely (such as deadline scheduling) or to traverse the full set of runnable
  threads (such as SMP CPU affinity).

  Typical applications with small numbers of runnable threads probably want
  the simple linked-list (:option:`CONFIG_SCHED_DUMB`) scheduler.
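
As a concrete illustration, the ready queue backend is selected at build time
through Kconfig. Below is a minimal sketch of an application configuration
fragment (e.g. a ``prj.conf``) that picks the red/black tree implementation;
the option names are the ones listed above, and the three backends are
mutually exclusive alternatives:

.. code-block:: cfg

   # prj.conf fragment (sketch): select the red/black tree ready queue for
   # an application expecting many (more than ~20) concurrent runnable
   # threads. CONFIG_SCHED_DUMB and CONFIG_SCHED_MULTIQ are the alternative
   # backends.
   CONFIG_SCHED_SCALABLE=y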

The wait_q abstraction used by IPC primitives to pend threads for later
wakeup shares the same backend data structure choices as the scheduler and
can use the same options; a configuration sketch follows the list below.

* Scalable wait_q implementation (:option:`CONFIG_WAITQ_SCALABLE`)

  When selected, the wait_q will be implemented with a balanced tree. Choose
  this if you expect to have many threads waiting on individual primitives.
  There is a ~2 kB code size increase over :option:`CONFIG_WAITQ_DUMB` if the
  red/black tree is not used elsewhere in the application (the cost may be
  shared with :option:`CONFIG_SCHED_SCALABLE`), and pend/unpend operations on
  "small" queues will be somewhat slower (though this is not generally a
  performance path).

* Simple linked-list wait_q (:option:`CONFIG_WAITQ_DUMB`)

  When selected, the wait_q will be implemented with a doubly-linked list.
  Choose this if you expect to have only a few threads blocked on any single
  IPC primitive.
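
As with the ready queue, the wait_q backend is chosen through Kconfig. A
minimal sketch of a ``prj.conf`` fragment enabling the balanced-tree
implementation for an application where many threads pend on the same
primitives might look like:

.. code-block:: cfg

   # prj.conf fragment (sketch): use the balanced-tree wait_q backend.
   # CONFIG_WAITQ_DUMB (the doubly-linked list) is the alternative; pairing
   # this with CONFIG_SCHED_SCALABLE lets the two features share the
   # red/black tree code.
   CONFIG_WAITQ_SCALABLE=y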

Cooperative Time Slicing
========================
