Commit cc26abb
sched_ext: Rename scx_bpf_dispatch[_vtime]() to scx_bpf_dsq_insert[_vtime]()
In sched_ext API, a repeatedly reported pain point is the overuse of the verb "dispatch" and confusion around "consume": - ops.dispatch() - scx_bpf_dispatch[_vtime]() - scx_bpf_consume() - scx_bpf_dispatch[_vtime]_from_dsq*() This overloading of the term is historical. Originally, there were only built-in DSQs and moving a task into a DSQ always dispatched it for execution. Using the verb "dispatch" for the kfuncs to move tasks into these DSQs made sense. Later, user DSQs were added and scx_bpf_dispatch[_vtime]() updated to be able to insert tasks into any DSQ. The only allowed DSQ to DSQ transfer was from a non-local DSQ to a local DSQ and this operation was named "consume". This was already confusing as a task could be dispatched to a user DSQ from ops.enqueue() and then the DSQ would have to be consumed in ops.dispatch(). Later addition of scx_bpf_dispatch_from_dsq*() made the confusion even worse as "dispatch" in this context meant moving a task to an arbitrary DSQ from a user DSQ. Clean up the API with the following renames: 1. scx_bpf_dispatch[_vtime]() -> scx_bpf_dsq_insert[_vtime]() 2. scx_bpf_consume() -> scx_bpf_dsq_move_to_local() 3. scx_bpf_dispatch[_vtime]_from_dsq*() -> scx_bpf_dsq_move[_vtime]*() This patch performs the first set of renames. Compatibility is maintained by: - The previous kfunc names are still provided by the kernel so that old binaries can run. Kernel generates a warning when the old names are used. - compat.bpf.h provides wrappers for the new names which automatically fall back to the old names when running on older kernels. They also trigger build error if old names are used for new builds. The compat features will be dropped after v6.15. v2: Documentation updates. Signed-off-by: Tejun Heo <[email protected]> Acked-by: Andrea Righi <[email protected]> Acked-by: Changwoo Min <[email protected]> Acked-by: Johannes Bechberger <[email protected]> Acked-by: Giovanni Gherdovich <[email protected]> Cc: Dan Schatzberg <[email protected]> Cc: Ming Yang <[email protected]>
1 parent 72b85bf commit cc26abb
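
Concretely, for rename #1 the change in a scheduler's enqueue path looks
like this (a minimal before/after sketch based on the scx_simple excerpt
in the documentation diff below):

    /* before: "dispatch" even though the task is only inserted into a DSQ */
    void BPF_STRUCT_OPS(simple_enqueue, struct task_struct *p, u64 enq_flags)
    {
            scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
    }

    /* after: the verb matches the operation; dispatch for execution only
     * happens later, when the DSQ is consumed into a local DSQ */
    void BPF_STRUCT_OPS(simple_enqueue, struct task_struct *p, u64 enq_flags)
    {
            scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
    }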

8 files changed: +144, -97 lines

Documentation/scheduler/sched-ext.rst

Lines changed: 25 additions & 25 deletions
@@ -130,7 +130,7 @@ optional. The following modified excerpt is from
      * Decide which CPU a task should be migrated to before being
      * enqueued (either at wakeup, fork time, or exec time). If an
      * idle core is found by the default ops.select_cpu() implementation,
-     * then dispatch the task directly to SCX_DSQ_LOCAL and skip the
+     * then insert the task directly into SCX_DSQ_LOCAL and skip the
      * ops.enqueue() callback.
      *
      * Note that this implementation has exactly the same behavior as the
@@ -148,15 +148,15 @@ optional. The following modified excerpt is from
         cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &direct);
 
         if (direct)
-            scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
+            scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
 
         return cpu;
     }
 
     /*
-     * Do a direct dispatch of a task to the global DSQ. This ops.enqueue()
-     * callback will only be invoked if we failed to find a core to dispatch
-     * to in ops.select_cpu() above.
+     * Do a direct insertion of a task to the global DSQ. This ops.enqueue()
+     * callback will only be invoked if we failed to find a core to insert
+     * into in ops.select_cpu() above.
      *
      * Note that this implementation has exactly the same behavior as the
      * default ops.enqueue implementation, which just dispatches the task
@@ -166,7 +166,7 @@ optional. The following modified excerpt is from
      */
     void BPF_STRUCT_OPS(simple_enqueue, struct task_struct *p, u64 enq_flags)
     {
-        scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
+        scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
     }
 
     s32 BPF_STRUCT_OPS_SLEEPABLE(simple_init)
@@ -202,7 +202,7 @@ and one local dsq per CPU (``SCX_DSQ_LOCAL``). The BPF scheduler can manage
 an arbitrary number of dsq's using ``scx_bpf_create_dsq()`` and
 ``scx_bpf_destroy_dsq()``.
 
-A CPU always executes a task from its local DSQ. A task is "dispatched" to a
+A CPU always executes a task from its local DSQ. A task is "inserted" into a
 DSQ. A non-local DSQ is "consumed" to transfer a task to the consuming CPU's
 local DSQ.
 
@@ -229,26 +229,26 @@ The following briefly shows how a waking task is scheduled and executed.
    scheduler can wake up any cpu using the ``scx_bpf_kick_cpu()`` helper,
    using ``ops.select_cpu()`` judiciously can be simpler and more efficient.
 
-   A task can be immediately dispatched to a DSQ from ``ops.select_cpu()`` by
-   calling ``scx_bpf_dispatch()``. If the task is dispatched to
-   ``SCX_DSQ_LOCAL`` from ``ops.select_cpu()``, it will be dispatched to the
+   A task can be immediately inserted into a DSQ from ``ops.select_cpu()``
+   by calling ``scx_bpf_dsq_insert()``. If the task is inserted into
+   ``SCX_DSQ_LOCAL`` from ``ops.select_cpu()``, it will be inserted into the
    local DSQ of whichever CPU is returned from ``ops.select_cpu()``.
-   Additionally, dispatching directly from ``ops.select_cpu()`` will cause the
+   Additionally, inserting directly from ``ops.select_cpu()`` will cause the
    ``ops.enqueue()`` callback to be skipped.
 
    Note that the scheduler core will ignore an invalid CPU selection, for
    example, if it's outside the allowed cpumask of the task.
 
 2. Once the target CPU is selected, ``ops.enqueue()`` is invoked (unless the
-   task was dispatched directly from ``ops.select_cpu()``). ``ops.enqueue()``
+   task was inserted directly from ``ops.select_cpu()``). ``ops.enqueue()``
    can make one of the following decisions:
 
-   * Immediately dispatch the task to either the global or local DSQ by
-     calling ``scx_bpf_dispatch()`` with ``SCX_DSQ_GLOBAL`` or
+   * Immediately insert the task into either the global or local DSQ by
+     calling ``scx_bpf_dsq_insert()`` with ``SCX_DSQ_GLOBAL`` or
      ``SCX_DSQ_LOCAL``, respectively.
 
-   * Immediately dispatch the task to a custom DSQ by calling
-     ``scx_bpf_dispatch()`` with a DSQ ID which is smaller than 2^63.
+   * Immediately insert the task into a custom DSQ by calling
+     ``scx_bpf_dsq_insert()`` with a DSQ ID which is smaller than 2^63.
 
    * Queue the task on the BPF side.
 
@@ -257,11 +257,11 @@ The following briefly shows how a waking task is scheduled and executed.
    run, ``ops.dispatch()`` is invoked which can use the following two
    functions to populate the local DSQ.
 
-   * ``scx_bpf_dispatch()`` dispatches a task to a DSQ. Any target DSQ can
-     be used - ``SCX_DSQ_LOCAL``, ``SCX_DSQ_LOCAL_ON | cpu``,
-     ``SCX_DSQ_GLOBAL`` or a custom DSQ. While ``scx_bpf_dispatch()``
+   * ``scx_bpf_dsq_insert()`` inserts a task to a DSQ. Any target DSQ can be
+     used - ``SCX_DSQ_LOCAL``, ``SCX_DSQ_LOCAL_ON | cpu``,
+     ``SCX_DSQ_GLOBAL`` or a custom DSQ. While ``scx_bpf_dsq_insert()``
      currently can't be called with BPF locks held, this is being worked on
-     and will be supported. ``scx_bpf_dispatch()`` schedules dispatching
+     and will be supported. ``scx_bpf_dsq_insert()`` schedules insertion
      rather than performing them immediately. There can be up to
      ``ops.dispatch_max_batch`` pending tasks.
 
@@ -288,12 +288,12 @@ built-in DSQs are used, there is no need to implement ``ops.dispatch()`` as
 a task is never queued on the BPF scheduler and both the local and global
 DSQs are consumed automatically.
 
-``scx_bpf_dispatch()`` queues the task on the FIFO of the target DSQ. Use
-``scx_bpf_dispatch_vtime()`` for the priority queue. Internal DSQs such as
+``scx_bpf_dsq_insert()`` inserts the task on the FIFO of the target DSQ. Use
+``scx_bpf_dsq_insert_vtime()`` for the priority queue. Internal DSQs such as
 ``SCX_DSQ_LOCAL`` and ``SCX_DSQ_GLOBAL`` do not support priority-queue
-dispatching, and must be dispatched to with ``scx_bpf_dispatch()``. See the
-function documentation and usage in ``tools/sched_ext/scx_simple.bpf.c`` for
-more information.
+dispatching, and must be dispatched to with ``scx_bpf_dsq_insert()``. See
+the function documentation and usage in ``tools/sched_ext/scx_simple.bpf.c``
+for more information.
 
 Where to Look
 =============
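
Since the documentation above distinguishes FIFO insertion from vtime
priority-queue insertion, here is a minimal sketch of the latter under the
new name. The custom DSQ ID and the use of p->scx.dsq_vtime are
illustrative, loosely following tools/sched_ext/scx_simple.bpf.c:

    #define MY_DSQ 0  /* hypothetical custom DSQ, created with scx_bpf_create_dsq() in ops.init() */

    void BPF_STRUCT_OPS(my_enqueue, struct task_struct *p, u64 enq_flags)
    {
            /*
             * Insert into the vtime-ordered priority queue of a custom DSQ.
             * Built-in DSQs such as SCX_DSQ_LOCAL and SCX_DSQ_GLOBAL don't
             * support priority-queue insertion and would need
             * scx_bpf_dsq_insert() instead, as noted above.
             */
            scx_bpf_dsq_insert_vtime(p, MY_DSQ, SCX_SLICE_DFL,
                                     p->scx.dsq_vtime, enq_flags);
    }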

kernel/sched/ext.c

Lines changed: 65 additions & 46 deletions
@@ -220,10 +220,10 @@ struct sched_ext_ops {
 	 * dispatch. While an explicit custom mechanism can be added,
 	 * select_cpu() serves as the default way to wake up idle CPUs.
 	 *
-	 * @p may be dispatched directly by calling scx_bpf_dispatch(). If @p
-	 * is dispatched, the ops.enqueue() callback will be skipped. Finally,
-	 * if @p is dispatched to SCX_DSQ_LOCAL, it will be dispatched to the
-	 * local DSQ of whatever CPU is returned by this callback.
+	 * @p may be inserted into a DSQ directly by calling
+	 * scx_bpf_dsq_insert(). If so, the ops.enqueue() will be skipped.
+	 * Directly inserting into %SCX_DSQ_LOCAL will put @p in the local DSQ
+	 * of the CPU returned by this operation.
 	 *
 	 * Note that select_cpu() is never called for tasks that can only run
 	 * on a single CPU or tasks with migration disabled, as they don't have
@@ -237,12 +237,12 @@ struct sched_ext_ops {
 	 * @p: task being enqueued
 	 * @enq_flags: %SCX_ENQ_*
 	 *
-	 * @p is ready to run. Dispatch directly by calling scx_bpf_dispatch()
-	 * or enqueue on the BPF scheduler. If not directly dispatched, the bpf
-	 * scheduler owns @p and if it fails to dispatch @p, the task will
-	 * stall.
+	 * @p is ready to run. Insert directly into a DSQ by calling
+	 * scx_bpf_dsq_insert() or enqueue on the BPF scheduler. If not directly
+	 * inserted, the bpf scheduler owns @p and if it fails to dispatch @p,
+	 * the task will stall.
 	 *
-	 * If @p was dispatched from ops.select_cpu(), this callback is
+	 * If @p was inserted into a DSQ from ops.select_cpu(), this callback is
 	 * skipped.
 	 */
 	void (*enqueue)(struct task_struct *p, u64 enq_flags);
@@ -270,11 +270,11 @@ struct sched_ext_ops {
 	 *
 	 * Called when a CPU's local dsq is empty. The operation should dispatch
 	 * one or more tasks from the BPF scheduler into the DSQs using
-	 * scx_bpf_dispatch() and/or consume user DSQs into the local DSQ using
-	 * scx_bpf_consume().
+	 * scx_bpf_dsq_insert() and/or consume user DSQs into the local DSQ
+	 * using scx_bpf_consume().
 	 *
-	 * The maximum number of times scx_bpf_dispatch() can be called without
-	 * an intervening scx_bpf_consume() is specified by
+	 * The maximum number of times scx_bpf_dsq_insert() can be called
+	 * without an intervening scx_bpf_consume() is specified by
 	 * ops.dispatch_max_batch. See the comments on top of the two functions
 	 * for more details.
 	 *
@@ -714,7 +714,7 @@ enum scx_enq_flags {
 
 	/*
 	 * Set the following to trigger preemption when calling
-	 * scx_bpf_dispatch() with a local dsq as the target. The slice of the
+	 * scx_bpf_dsq_insert() with a local dsq as the target. The slice of the
 	 * current task is cleared to zero and the CPU is kicked into the
 	 * scheduling path. Implies %SCX_ENQ_HEAD.
 	 */
@@ -2322,7 +2322,7 @@ static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq,
 	/*
 	 * We don't require the BPF scheduler to avoid dispatching to offline
	 * CPUs mostly for convenience but also because CPUs can go offline
-	 * between scx_bpf_dispatch() calls and here. Trigger error iff the
+	 * between scx_bpf_dsq_insert() calls and here. Trigger error iff the
 	 * picked CPU is outside the allowed mask.
 	 */
 	if (!task_allowed_on_cpu(p, cpu)) {
@@ -2658,7 +2658,7 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
  * Dispatching to local DSQs may need to wait for queueing to complete or
  * require rq lock dancing. As we don't wanna do either while inside
  * ops.dispatch() to avoid locking order inversion, we split dispatching into
- * two parts. scx_bpf_dispatch() which is called by ops.dispatch() records the
+ * two parts. scx_bpf_dsq_insert() which is called by ops.dispatch() records the
  * task and its qseq. Once ops.dispatch() returns, this function is called to
  * finish up.
 *
@@ -2690,7 +2690,7 @@ static void finish_dispatch(struct rq *rq, struct task_struct *p,
 	/*
 	 * If qseq doesn't match, @p has gone through at least one
 	 * dispatch/dequeue and re-enqueue cycle between
-	 * scx_bpf_dispatch() and here and we have no claim on it.
+	 * scx_bpf_dsq_insert() and here and we have no claim on it.
 	 */
 	if ((opss & SCX_OPSS_QSEQ_MASK) != qseq_at_dispatch)
 		return;
@@ -6258,7 +6258,7 @@ static const struct btf_kfunc_id_set scx_kfunc_set_select_cpu = {
 	.set = &scx_kfunc_ids_select_cpu,
 };
 
-static bool scx_dispatch_preamble(struct task_struct *p, u64 enq_flags)
+static bool scx_dsq_insert_preamble(struct task_struct *p, u64 enq_flags)
 {
 	if (!scx_kf_allowed(SCX_KF_ENQUEUE | SCX_KF_DISPATCH))
 		return false;
@@ -6278,7 +6278,8 @@ static bool scx_dispatch_preamble(struct task_struct *p, u64 enq_flags)
 	return true;
 }
 
-static void scx_dispatch_commit(struct task_struct *p, u64 dsq_id, u64 enq_flags)
+static void scx_dsq_insert_commit(struct task_struct *p, u64 dsq_id,
+				  u64 enq_flags)
 {
 	struct scx_dsp_ctx *dspc = this_cpu_ptr(scx_dsp_ctx);
 	struct task_struct *ddsp_task;
@@ -6305,14 +6306,14 @@ static void scx_dsq_insert_commit(struct task_struct *p, u64 dsq_id, u64 enq_flags
 __bpf_kfunc_start_defs();
 
 /**
- * scx_bpf_dispatch - Dispatch a task into the FIFO queue of a DSQ
- * @p: task_struct to dispatch
- * @dsq_id: DSQ to dispatch to
+ * scx_bpf_dsq_insert - Insert a task into the FIFO queue of a DSQ
+ * @p: task_struct to insert
+ * @dsq_id: DSQ to insert into
  * @slice: duration @p can run for in nsecs, 0 to keep the current value
  * @enq_flags: SCX_ENQ_*
  *
- * Dispatch @p into the FIFO queue of the DSQ identified by @dsq_id. It is safe
- * to call this function spuriously. Can be called from ops.enqueue(),
+ * Insert @p into the FIFO queue of the DSQ identified by @dsq_id. It is safe to
+ * call this function spuriously. Can be called from ops.enqueue(),
  * ops.select_cpu(), and ops.dispatch().
 *
 * When called from ops.select_cpu() or ops.enqueue(), it's for direct dispatch
@@ -6321,14 +6322,14 @@ __bpf_kfunc_start_defs();
 * ops.select_cpu() to be on the target CPU in the first place.
 *
 * When called from ops.select_cpu(), @enq_flags and @dsp_id are stored, and @p
- * will be directly dispatched to the corresponding dispatch queue after
- * ops.select_cpu() returns. If @p is dispatched to SCX_DSQ_LOCAL, it will be
- * dispatched to the local DSQ of the CPU returned by ops.select_cpu().
+ * will be directly inserted into the corresponding dispatch queue after
+ * ops.select_cpu() returns. If @p is inserted into SCX_DSQ_LOCAL, it will be
+ * inserted into the local DSQ of the CPU returned by ops.select_cpu().
 * @enq_flags are OR'd with the enqueue flags on the enqueue path before the
- * task is dispatched.
+ * task is inserted.
 *
 * When called from ops.dispatch(), there are no restrictions on @p or @dsq_id
- * and this function can be called upto ops.dispatch_max_batch times to dispatch
+ * and this function can be called upto ops.dispatch_max_batch times to insert
 * multiple tasks. scx_bpf_dispatch_nr_slots() returns the number of the
 * remaining slots. scx_bpf_consume() flushes the batch and resets the counter.
 *
@@ -6340,41 +6341,49 @@ __bpf_kfunc_start_defs();
 * %SCX_SLICE_INF, @p never expires and the BPF scheduler must kick the CPU with
 * scx_bpf_kick_cpu() to trigger scheduling.
 */
-__bpf_kfunc void scx_bpf_dispatch(struct task_struct *p, u64 dsq_id, u64 slice,
-				  u64 enq_flags)
+__bpf_kfunc void scx_bpf_dsq_insert(struct task_struct *p, u64 dsq_id, u64 slice,
+				    u64 enq_flags)
 {
-	if (!scx_dispatch_preamble(p, enq_flags))
+	if (!scx_dsq_insert_preamble(p, enq_flags))
 		return;
 
 	if (slice)
 		p->scx.slice = slice;
 	else
 		p->scx.slice = p->scx.slice ?: 1;
 
-	scx_dispatch_commit(p, dsq_id, enq_flags);
+	scx_dsq_insert_commit(p, dsq_id, enq_flags);
+}
+
+/* for backward compatibility, will be removed in v6.15 */
+__bpf_kfunc void scx_bpf_dispatch(struct task_struct *p, u64 dsq_id, u64 slice,
+				  u64 enq_flags)
+{
+	printk_deferred_once(KERN_WARNING "sched_ext: scx_bpf_dispatch() renamed to scx_bpf_dsq_insert()");
+	scx_bpf_dsq_insert(p, dsq_id, slice, enq_flags);
 }
 
 /**
- * scx_bpf_dispatch_vtime - Dispatch a task into the vtime priority queue of a DSQ
- * @p: task_struct to dispatch
- * @dsq_id: DSQ to dispatch to
+ * scx_bpf_dsq_insert_vtime - Insert a task into the vtime priority queue of a DSQ
+ * @p: task_struct to insert
+ * @dsq_id: DSQ to insert into
  * @slice: duration @p can run for in nsecs, 0 to keep the current value
  * @vtime: @p's ordering inside the vtime-sorted queue of the target DSQ
  * @enq_flags: SCX_ENQ_*
 *
- * Dispatch @p into the vtime priority queue of the DSQ identified by @dsq_id.
+ * Insert @p into the vtime priority queue of the DSQ identified by @dsq_id.
 * Tasks queued into the priority queue are ordered by @vtime and always
 * consumed after the tasks in the FIFO queue. All other aspects are identical
- * to scx_bpf_dispatch().
+ * to scx_bpf_dsq_insert().
 *
 * @vtime ordering is according to time_before64() which considers wrapping. A
 * numerically larger vtime may indicate an earlier position in the ordering and
 * vice-versa.
 */
-__bpf_kfunc void scx_bpf_dispatch_vtime(struct task_struct *p, u64 dsq_id,
-					u64 slice, u64 vtime, u64 enq_flags)
+__bpf_kfunc void scx_bpf_dsq_insert_vtime(struct task_struct *p, u64 dsq_id,
+					  u64 slice, u64 vtime, u64 enq_flags)
 {
-	if (!scx_dispatch_preamble(p, enq_flags))
+	if (!scx_dsq_insert_preamble(p, enq_flags))
 		return;
 
 	if (slice)
@@ -6384,12 +6393,22 @@ __bpf_kfunc void scx_bpf_dsq_insert_vtime(struct task_struct *p, u64 dsq_id,
 
 	p->scx.dsq_vtime = vtime;
 
-	scx_dispatch_commit(p, dsq_id, enq_flags | SCX_ENQ_DSQ_PRIQ);
+	scx_dsq_insert_commit(p, dsq_id, enq_flags | SCX_ENQ_DSQ_PRIQ);
+}
+
+/* for backward compatibility, will be removed in v6.15 */
+__bpf_kfunc void scx_bpf_dispatch_vtime(struct task_struct *p, u64 dsq_id,
+					u64 slice, u64 vtime, u64 enq_flags)
+{
+	printk_deferred_once(KERN_WARNING "sched_ext: scx_bpf_dispatch_vtime() renamed to scx_bpf_dsq_insert_vtime()");
+	scx_bpf_dsq_insert_vtime(p, dsq_id, slice, vtime, enq_flags);
 }
 
 __bpf_kfunc_end_defs();
 
 BTF_KFUNCS_START(scx_kfunc_ids_enqueue_dispatch)
+BTF_ID_FLAGS(func, scx_bpf_dsq_insert, KF_RCU)
+BTF_ID_FLAGS(func, scx_bpf_dsq_insert_vtime, KF_RCU)
 BTF_ID_FLAGS(func, scx_bpf_dispatch, KF_RCU)
 BTF_ID_FLAGS(func, scx_bpf_dispatch_vtime, KF_RCU)
 BTF_KFUNCS_END(scx_kfunc_ids_enqueue_dispatch)
@@ -6527,9 +6546,9 @@ __bpf_kfunc void scx_bpf_dispatch_cancel(void)
 * to the current CPU's local DSQ for execution. Can only be called from
 * ops.dispatch().
 *
- * This function flushes the in-flight dispatches from scx_bpf_dispatch() before
- * trying to consume the specified DSQ. It may also grab rq locks and thus can't
- * be called under any BPF locks.
+ * This function flushes the in-flight dispatches from scx_bpf_dsq_insert()
+ * before trying to consume the specified DSQ. It may also grab rq locks and
+ * thus can't be called under any BPF locks.
 *
 * Returns %true if a task has been consumed, %false if there isn't any task to
 * consume.
@@ -6650,7 +6669,7 @@ __bpf_kfunc bool scx_bpf_dispatch_from_dsq(struct bpf_iter_scx_dsq *it__iter,
 * scx_bpf_dispatch_from_dsq_set_vtime() to update.
 *
 * All other aspects are identical to scx_bpf_dispatch_from_dsq(). See
- * scx_bpf_dispatch_vtime() for more information on @vtime.
+ * scx_bpf_dsq_insert_vtime() for more information on @vtime.
 */
 __bpf_kfunc bool scx_bpf_dispatch_vtime_from_dsq(struct bpf_iter_scx_dsq *it__iter,
 						 struct task_struct *p, u64 dsq_id,
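
The scx_bpf_dsq_insert_vtime() comment above notes that @vtime ordering
follows time_before64(), which considers wrapping. As a quick illustration,
time_before64(a, b) reduces to a signed comparison of the difference, which
is what keeps the ordering correct across u64 wraparound (the helper name
below is illustrative; scx_simple defines the same one):

    /* what time_before64(a, b) expands to: true if @a orders before @b */
    static inline bool vtime_before(u64 a, u64 b)
    {
            return (s64)(a - b) < 0;
    }

    /*
     * e.g. vtime_before(U64_MAX - 10, 5) is true: across the wrap, the
     * numerically larger U64_MAX - 10 sorts earlier than 5, matching the
     * "numerically larger vtime may indicate an earlier position" note.
     */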

tools/sched_ext/include/scx/common.bpf.h

Lines changed: 2 additions & 2 deletions
@@ -36,8 +36,8 @@ static inline void ___vmlinux_h_sanity_check___(void)
 
 s32 scx_bpf_create_dsq(u64 dsq_id, s32 node) __ksym;
 s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool *is_idle) __ksym;
-void scx_bpf_dispatch(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym;
-void scx_bpf_dispatch_vtime(struct task_struct *p, u64 dsq_id, u64 slice, u64 vtime, u64 enq_flags) __ksym;
+void scx_bpf_dsq_insert(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak;
+void scx_bpf_dsq_insert_vtime(struct task_struct *p, u64 dsq_id, u64 slice, u64 vtime, u64 enq_flags) __ksym __weak;
 u32 scx_bpf_dispatch_nr_slots(void) __ksym;
 void scx_bpf_dispatch_cancel(void) __ksym;
 bool scx_bpf_consume(u64 dsq_id) __ksym;
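
The __ksym __weak annotation on the new declarations is what enables the
compat story from the commit message: a weak ksym resolves to NULL on
kernels that lack the kfunc, so a program can probe for it at load time. A
minimal sketch of the fallback pattern, assuming bpf_helpers.h's
bpf_ksym_exists() and that the old name is still declared somewhere (the
real wrappers live in compat.bpf.h and may differ in detail):

    /* hypothetical fallback helper; compat.bpf.h provides the real wrappers */
    static inline void dsq_insert_compat(struct task_struct *p, u64 dsq_id,
                                         u64 slice, u64 enq_flags)
    {
            if (bpf_ksym_exists(scx_bpf_dsq_insert))
                    /* new kfunc present on this kernel */
                    scx_bpf_dsq_insert(p, dsq_id, slice, enq_flags);
            else
                    /* pre-rename kernel: fall back to the old name */
                    scx_bpf_dispatch(p, dsq_id, slice, enq_flags);
    }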
