Commit 7e90ffb

Browse files
ahunter6 authored and KAGA-KOKO committed
x86/vdso: Make delta calculation overflow safe
Kernel timekeeping is designed to keep the change in cycles (since the last timer interrupt) below max_cycles, which prevents multiplication overflow when converting cycles to nanoseconds. However, if timer interrupts stop, the calculation will eventually overflow.

Add protection against that. Select GENERIC_VDSO_OVERFLOW_PROTECT so that max_cycles is made available in the VDSO data page. Check against max_cycles, falling back to a slower, higher-precision calculation. Take advantage of the opportunity to move the masking and the negative motion check into the slow path.

The result is a calculation that has similar performance as before. Newer machines showed a performance benefit, whereas older Skylake-based hardware such as Intel Kaby Lake was seen <1% worse.

Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Adrian Hunter <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent 456e378 commit 7e90ffb

File tree

2 files changed

+23
-9
lines changed


arch/x86/Kconfig

Lines changed: 1 addition & 0 deletions
@@ -168,6 +168,7 @@ config X86
 	select GENERIC_TIME_VSYSCALL
 	select GENERIC_GETTIMEOFDAY
 	select GENERIC_VDSO_TIME_NS
+	select GENERIC_VDSO_OVERFLOW_PROTECT
 	select GUP_GET_PXX_LOW_HIGH if X86_PAE
 	select HARDIRQS_SW_RESEND
 	select HARDLOCKUP_CHECK_TIMESTAMP if X86_64

arch/x86/include/asm/vdso/gettimeofday.h

Lines changed: 22 additions & 9 deletions
@@ -319,18 +319,31 @@ static inline bool arch_vdso_cycles_ok(u64 cycles)
  */
 static __always_inline u64 vdso_calc_ns(const struct vdso_data *vd, u64 cycles, u64 base)
 {
-	/*
-	 * Due to the MSB/Sign-bit being used as invalid marker (see
-	 * arch_vdso_cycles_valid() above), the effective mask is S64_MAX.
-	 */
-	u64 delta = (cycles - vd->cycle_last) & S64_MAX;
+	u64 delta = cycles - vd->cycle_last;

 	/*
-	 * Due to the above mentioned TSC wobbles, filter out negative motion.
-	 * Per the above masking, the effective sign bit is now bit 62.
+	 * Negative motion and deltas which can cause multiplication
+	 * overflow require special treatment. This check covers both as
+	 * negative motion is guaranteed to be greater than @vd::max_cycles
+	 * due to unsigned comparison.
+	 *
+	 * Due to the MSB/Sign-bit being used as invalid marker (see
+	 * arch_vdso_cycles_valid() above), the effective mask is S64_MAX,
+	 * but that case is also unlikely and will also take the unlikely path
+	 * here.
 	 */
-	if (unlikely(delta & (1ULL << 62)))
-		return base >> vd->shift;
+	if (unlikely(delta > vd->max_cycles)) {
+		/*
+		 * Due to the above mentioned TSC wobbles, filter out
+		 * negative motion. Per the above masking, the effective
+		 * sign bit is now bit 62.
+		 */
+		if (delta & (1ULL << 62))
+			return base >> vd->shift;
+
+		/* Handle multiplication overflow gracefully */
+		return mul_u64_u32_add_u64_shr(delta & S64_MAX, vd->mult, base, vd->shift);
+	}

 	return ((delta * vd->mult) + base) >> vd->shift;
 }
