Commit fc1892b

Author: Peter Zijlstra
sched/eevdf: Fixup PELT vs DELAYED_DEQUEUE
Note that tasks that are kept on the runqueue to burn off negative lag are not in fact runnable anymore; they'll get dequeued the moment they get picked. As such, don't count this time towards runnable.

Thanks to Valentin for spotting I had this backwards initially.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
Tested-by: Valentin Schneider <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
1 parent 54a58a7 commit fc1892b

2 files changed, 8 insertions(+), 0 deletions(-)

kernel/sched/fair.c

Lines changed: 2 additions & 0 deletions
@@ -5402,6 +5402,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		    !entity_eligible(cfs_rq, se)) {
 			if (cfs_rq->next == se)
 				cfs_rq->next = NULL;
+			update_load_avg(cfs_rq, se, 0);
 			se->sched_delayed = 1;
 			return false;
 		}
@@ -6841,6 +6842,7 @@ requeue_delayed_entity(struct sched_entity *se)
 		}
 	}
 
+	update_load_avg(cfs_rq, se, 0);
 	se->sched_delayed = 0;
 }

kernel/sched/sched.h

Lines changed: 6 additions & 0 deletions
@@ -820,6 +820,9 @@ static inline void se_update_runnable(struct sched_entity *se)
 
 static inline long se_runnable(struct sched_entity *se)
 {
+	if (se->sched_delayed)
+		return false;
+
 	if (entity_is_task(se))
 		return !!se->on_rq;
 	else
@@ -834,6 +837,9 @@ static inline void se_update_runnable(struct sched_entity *se) { }
 
 static inline long se_runnable(struct sched_entity *se)
 {
+	if (se->sched_delayed)
+		return false;
+
 	return !!se->on_rq;
 }
