
Commit 7b3d8df

Fan Yu authored and Ingo Molnar committed
sched/psi: Update poll => rtpoll in relevant comments
The PSI trigger code is now making a distinction between privileged and
unprivileged triggers, after the following commit:

  65457b7 ("sched/psi: Rename existing poll members in preparation")

But some comments have not been modified along with the code, so they
need to be updated. This will help readers better understand the code.

Signed-off-by: Fan Yu <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Cc: Peter Ziljstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Parent: 1b8a955

1 file changed, 16 insertions(+), 16 deletions(-)

kernel/sched/psi.c

Lines changed: 16 additions & 16 deletions
@@ -596,7 +596,7 @@ static void init_rtpoll_triggers(struct psi_group *group, u64 now)
 	group->rtpoll_next_update = now + group->rtpoll_min_period;
 }
 
-/* Schedule polling if it's not already scheduled or forced. */
+/* Schedule rtpolling if it's not already scheduled or forced. */
 static void psi_schedule_rtpoll_work(struct psi_group *group, unsigned long delay,
 				     bool force)
 {
@@ -636,45 +636,45 @@ static void psi_rtpoll_work(struct psi_group *group)
 
 	if (now > group->rtpoll_until) {
 		/*
-		 * We are either about to start or might stop polling if no
-		 * state change was recorded. Resetting poll_scheduled leaves
+		 * We are either about to start or might stop rtpolling if no
+		 * state change was recorded. Resetting rtpoll_scheduled leaves
 		 * a small window for psi_group_change to sneak in and schedule
-		 * an immediate poll_work before we get to rescheduling. One
-		 * potential extra wakeup at the end of the polling window
-		 * should be negligible and polling_next_update still keeps
+		 * an immediate rtpoll_work before we get to rescheduling. One
+		 * potential extra wakeup at the end of the rtpolling window
+		 * should be negligible and rtpoll_next_update still keeps
 		 * updates correctly on schedule.
 		 */
 		atomic_set(&group->rtpoll_scheduled, 0);
 		/*
-		 * A task change can race with the poll worker that is supposed to
+		 * A task change can race with the rtpoll worker that is supposed to
 		 * report on it. To avoid missing events, ensure ordering between
-		 * poll_scheduled and the task state accesses, such that if the poll
-		 * worker misses the state update, the task change is guaranteed to
-		 * reschedule the poll worker:
+		 * rtpoll_scheduled and the task state accesses, such that if the
+		 * rtpoll worker misses the state update, the task change is
+		 * guaranteed to reschedule the rtpoll worker:
 		 *
-		 * poll worker:
-		 *       atomic_set(poll_scheduled, 0)
+		 * rtpoll worker:
+		 *       atomic_set(rtpoll_scheduled, 0)
 		 *       smp_mb()
 		 *       LOAD states
 		 *
 		 * task change:
 		 *       STORE states
-		 *       if atomic_xchg(poll_scheduled, 1) == 0:
-		 *               schedule poll worker
+		 *       if atomic_xchg(rtpoll_scheduled, 1) == 0:
+		 *               schedule rtpoll worker
 		 *
 		 * The atomic_xchg() implies a full barrier.
 		 */
 		smp_mb();
 	} else {
-		/* Polling window is not over, keep rescheduling */
+		/* The rtpolling window is not over, keep rescheduling */
 		force_reschedule = true;
 	}
 
 
 	collect_percpu_times(group, PSI_POLL, &changed_states);
 
 	if (changed_states & group->rtpoll_states) {
-		/* Initialize trigger windows when entering polling mode */
+		/* Initialize trigger windows when entering rtpolling mode */
 		if (now > group->rtpoll_until)
 			init_rtpoll_triggers(group, now);
 