Implementation of the bpf_task_work_schedule kfuncs.
Main components:
* struct bpf_task_work_context – Metadata and state management per task
work.
* enum bpf_task_work_state – A state machine to serialize work
scheduling and execution.
* bpf_task_work_schedule() – The central helper that initiates
scheduling.
* bpf_task_work_callback() – Invoked when the actual task_work runs.
* bpf_task_work_irq() – An intermediate step (runs in softirq context)
to enqueue task work.
* bpf_task_work_cancel_and_free() – Cleanup for deleted BPF map entries.
Flow of task work scheduling
1) bpf_task_work_schedule_* is called from BPF code.
2) The state transitions from STANDBY to PENDING.
3) irq_work_queue() schedules bpf_task_work_irq().
4) The state transitions from PENDING to SCHEDULING.
5) bpf_task_work_irq() attempts task_work_add(). If successful, the
state transitions to SCHEDULED.
6) The task work calls bpf_task_work_callback(), which transitions the
state to RUNNING.
7) The BPF callback is executed.
8) The context is cleaned up, references are released, and the state is
set back to STANDBY.
Map value deletion
If a map value containing a bpf_task_work_context is deleted, the BPF
map implementation calls bpf_task_work_cancel_and_free().
Deletion is handled by atomically setting the state to FREED and either
releasing references directly or letting the scheduling path do so,
depending on the state at the moment of deletion:
* SCHEDULING: release references in bpf_task_work_cancel_and_free(),
expect bpf_task_work_irq() to cancel task work.
* SCHEDULED: release references and try to cancel task work in
bpf_task_work_cancel_and_free().
* other states: one of bpf_task_work_irq(), bpf_task_work_schedule(),
or bpf_task_work_callback() cleans up upon detecting that the state
has switched to FREED.
The state transitions are controlled with atomic_cmpxchg, ensuring:
* Only one thread can successfully enqueue work.
* Proper handling of concurrent deletes (BPF_TW_FREED).
* Safe rollback if task_work_add() fails.
Signed-off-by: Mykyta Yatsenko <[email protected]>