
Commit 317d359

Peter Zijlstra authored and Ingo Molnar committed
sched/core: Force proper alignment of 'struct util_est'
For some as yet not understood reason, Tony gets unaligned access traps
on IA64 because of:

  struct util_est ue = READ_ONCE(p->se.avg.util_est);

and:

  WRITE_ONCE(p->se.avg.util_est, ue);

introduced by commit:

  d519329 ("sched/fair: Update util_est only on util_avg updates")

Normally those two fields should end up on an 8-byte aligned location,
but UP and RANDSTRUCT can mess that up, so enforce the alignment
explicitly.

Also make the alignment on sched_avg unconditional, as it is really
about data locality, not false-sharing.

With or without this patch the layout for sched_avg on an ia64-defconfig
build looks like:

  $ pahole -EC sched_avg ia64-defconfig/kernel/sched/core.o
  die__process_function: tag not supported (INVALID)!
  struct sched_avg {
          /* typedef u64 */ long long unsigned int last_update_time;  /*  0  8 */
          /* typedef u64 */ long long unsigned int load_sum;          /*  8  8 */
          /* typedef u64 */ long long unsigned int runnable_load_sum; /* 16  8 */
          /* typedef u32 */ unsigned int util_sum;                    /* 24  4 */
          /* typedef u32 */ unsigned int period_contrib;              /* 28  4 */
          long unsigned int load_avg;                                 /* 32  8 */
          long unsigned int runnable_load_avg;                        /* 40  8 */
          long unsigned int util_avg;                                 /* 48  8 */
          struct util_est {
                  unsigned int enqueued;                              /* 56  4 */
                  unsigned int ewma;                                  /* 60  4 */
          } util_est;                                                 /* 56  8 */
          /* --- cacheline 1 boundary (64 bytes) --- */
          /* size: 64, cachelines: 1, members: 9 */
  };

Reported-and-Tested-by: Tony Luck <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Norbert Manthey <[email protected]>
Cc: Patrick Bellasi <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony <[email protected]>
Cc: Vincent Guittot <[email protected]>
Fixes: d519329 ("sched/fair: Update util_est only on util_avg updates")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
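To see the effect of the attribute outside the kernel, here is a minimal,
hypothetical userspace sketch (not part of the patch; it uses uint64_t in
place of the kernel's u64 and made-up struct names). It shows how
__attribute__((__aligned__(sizeof(u64)))) raises the alignment of a
two-u32 struct from its natural 4 bytes to 8 bytes, which is what lets an
8-byte READ_ONCE()/WRITE_ONCE() of util_est hit an aligned address on a
strict-alignment architecture such as IA64:

/* Hypothetical standalone demo, not kernel code. */
#include <stdio.h>
#include <stdint.h>

struct util_est_plain {
	unsigned int enqueued;
	unsigned int ewma;
};	/* natural alignment: that of unsigned int (typically 4) */

struct util_est_forced {
	unsigned int enqueued;
	unsigned int ewma;
} __attribute__((__aligned__(sizeof(uint64_t))));	/* forced to 8 */

int main(void)
{
	/* The forced variant must be placed on an 8-byte boundary wherever it is embedded. */
	printf("plain:  size %zu, align %zu\n",
	       sizeof(struct util_est_plain), _Alignof(struct util_est_plain));
	printf("forced: size %zu, align %zu\n",
	       sizeof(struct util_est_forced), _Alignof(struct util_est_forced));
	return 0;
}

On typical ABIs the first struct reports an alignment of 4 and the second an
alignment of 8, so any containing structure (here, sched_avg) has to place the
attributed struct on an 8-byte boundary regardless of how the surrounding
fields are laid out.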
1 parent adcc8da commit 317d359

1 file changed: +3 / -3 lines

include/linux/sched.h

Lines changed: 3 additions & 3 deletions
@@ -300,7 +300,7 @@ struct util_est {
 	unsigned int			enqueued;
 	unsigned int			ewma;
 #define UTIL_EST_WEIGHT_SHIFT		2
-};
+} __attribute__((__aligned__(sizeof(u64))));
 
 /*
  * The load_avg/util_avg accumulates an infinite geometric series
@@ -364,7 +364,7 @@ struct sched_avg {
 	unsigned long			runnable_load_avg;
 	unsigned long			util_avg;
 	struct util_est			util_est;
-};
+} ____cacheline_aligned;
 
 struct sched_statistics {
 #ifdef CONFIG_SCHEDSTATS
@@ -435,7 +435,7 @@ struct sched_entity {
 	 * Put into separate cache line so it does not
 	 * collide with read-mostly values above.
	 */
-	struct sched_avg		avg ____cacheline_aligned_in_smp;
+	struct sched_avg		avg;
 #endif
 };

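For context on why the annotation moved from the sched_entity member to the
sched_avg type itself: the *_in_smp variant of the macro expands to nothing on
uniprocessor (UP) builds, which the commit message cites (together with
RANDSTRUCT) as a way the expected alignment can be lost, while the plain
variant always applies. Roughly, paraphrasing include/linux/cache.h (exact
definitions vary by architecture and kernel version):

#define ____cacheline_aligned		__attribute__((__aligned__(SMP_CACHE_BYTES)))

#ifdef CONFIG_SMP
#define ____cacheline_aligned_in_smp	____cacheline_aligned
#else
#define ____cacheline_aligned_in_smp	/* no-op on UP */
#endif

Annotating struct sched_avg unconditionally therefore keeps the alignment and
data-locality padding on both SMP and UP builds, which matches the commit
message's point that this is about data locality rather than false sharing.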