
Commit 62347e2

Merge tag 'sched-urgent-2025-07-20' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler fix from Thomas Gleixner:
 "A single fix for the scheduler.

  A recent commit changed the runqueue counter nr_uninterruptible to an
  unsigned int. Because the counter is not updated when an uninterruptible
  task migrates to a different CPU, the per-CPU counters can exceed
  INT_MAX. The counter is cast to long in the load average calculation,
  so the cast expands into negative space and produces bogus load average
  values.

  Convert it back to unsigned long to fix this."

* tag 'sched-urgent-2025-07-20' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched: Change nr_uninterruptible type to unsigned long
2 parents: 5f054ef + 3656978
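
For context, the sign-extension effect described in the commit message can be reproduced outside the kernel. The sketch below is a hypothetical user-space illustration, not kernel code: the counter value is made up to stand in for a per-CPU nr_uninterruptible that has exceeded INT_MAX because its decrements ran on other CPUs. It shows how the old (int) cast yields a large negative contribution to nr_active, while an unsigned long counter cast to (long), as in the fix, keeps the value intact.

/*
 * Stand-alone user-space sketch (not kernel code) of the failure mode:
 * the counter value is a hypothetical per-CPU nr_uninterruptible that
 * grew past INT_MAX because the matching decrements ran on other CPUs.
 */
#include <stdio.h>
#include <limits.h>

int main(void)
{
	unsigned int  u32_count = (unsigned int)INT_MAX + 5u;   /* 2147483652 */
	unsigned long u64_count = (unsigned long)INT_MAX + 5ul; /* 2147483652 */
	long nr_active;

	/* Old code path: unsigned int -> (int) wraps negative, then sign-extends to long. */
	nr_active = (int)u32_count;
	printf("with (int) cast:  %ld\n", nr_active);  /* prints -2147483644 */

	/* Fixed code path: unsigned long -> (long) keeps the count intact. */
	nr_active = (long)u64_count;
	printf("with (long) cast: %ld\n", nr_active);  /* prints 2147483652 */

	return 0;
}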

File tree

2 files changed, +2 -2 lines changed


kernel/sched/loadavg.c

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ long calc_load_fold_active(struct rq *this_rq, long adjust)
 	long nr_active, delta = 0;
 
 	nr_active = this_rq->nr_running - adjust;
-	nr_active += (int)this_rq->nr_uninterruptible;
+	nr_active += (long)this_rq->nr_uninterruptible;
 
 	if (nr_active != this_rq->calc_load_active) {
 		delta = nr_active - this_rq->calc_load_active;

kernel/sched/sched.h

Lines changed: 1 addition & 1 deletion
@@ -1149,7 +1149,7 @@ struct rq {
 	 * one CPU and if it got migrated afterwards it may decrease
 	 * it on another CPU. Always updated under the runqueue lock:
 	 */
-	unsigned int		nr_uninterruptible;
+	unsigned long		nr_uninterruptible;
 
 	union {
 		struct task_struct __rcu	*donor;	/* Scheduler context */
