Commit bf0eff8

anna-marialx authored and KAGA-KOKO committed
x86/vdso: Prepare introduction of struct vdso_clock
To support multiple PTP clocks, the VDSO data structure needs to be reworked. All clock-specific data will end up in struct vdso_clock, and in struct vdso_time_data there will be an array of VDSO clocks. At the moment, vdso_clock is simply a define which maps vdso_clock to vdso_time_data.

To prepare for the rework of the data structures, replace the struct vdso_time_data pointer with a struct vdso_clock pointer where applicable.

No functional change.

Signed-off-by: Anna-Maria Behnsen <[email protected]>
Signed-off-by: Nam Cao <[email protected]>
Signed-off-by: Thomas Weißschuh <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/all/[email protected]
1 parent 5911e16 commit bf0eff8
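For orientation, here is a minimal sketch of the transition the commit message describes. The field names are taken from the diff below; the array name, its size, and everything else about the target layout are illustrative assumptions, not the final kernel definitions.

#include <stdint.h>

typedef uint64_t u64;   /* stand-ins for the kernel's fixed-width types */
typedef uint32_t u32;

/*
 * Transitional state (per the commit message): struct vdso_clock is
 * for now just a define mapping it to struct vdso_time_data, so both
 * pointer types name the same object:
 *
 *      #define vdso_clock vdso_time_data
 *
 * Below is a hypothetical sketch of the target layout once the rework
 * lands: per-clock data moves into struct vdso_clock, and struct
 * vdso_time_data carries an array of them.
 */
enum { VDSO_NR_CLOCKS = 2 };            /* illustrative count only */

struct vdso_clock {
        u64     cycle_last;             /* counter value at last update */
        u64     max_cycles;             /* guard for negative motion/overflow */
        u32     mult;                   /* cycles-to-ns multiplier */
        u32     shift;                  /* cycles-to-ns shift */
        /* ... further per-clock data ... */
};

struct vdso_time_data {
        struct vdso_clock clock_data[VDSO_NR_CLOCKS];
        /* ... data shared across all clocks ... */
};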

File tree

1 file changed: +8 −8 lines changed

arch/x86/include/asm/vdso/gettimeofday.h

Lines changed: 8 additions & 8 deletions
@@ -261,7 +261,7 @@ static inline u64 __arch_get_hw_counter(s32 clock_mode,
 		return U64_MAX;
 }
 
-static inline bool arch_vdso_clocksource_ok(const struct vdso_time_data *vd)
+static inline bool arch_vdso_clocksource_ok(const struct vdso_clock *vc)
 {
 	return true;
 }
@@ -300,34 +300,34 @@ static inline bool arch_vdso_cycles_ok(u64 cycles)
  * declares everything with the MSB/Sign-bit set as invalid. Therefore the
  * effective mask is S64_MAX.
  */
-static __always_inline u64 vdso_calc_ns(const struct vdso_time_data *vd, u64 cycles, u64 base)
+static __always_inline u64 vdso_calc_ns(const struct vdso_clock *vc, u64 cycles, u64 base)
 {
-	u64 delta = cycles - vd->cycle_last;
+	u64 delta = cycles - vc->cycle_last;
 
 	/*
 	 * Negative motion and deltas which can cause multiplication
 	 * overflow require special treatment. This check covers both as
-	 * negative motion is guaranteed to be greater than @vd::max_cycles
+	 * negative motion is guaranteed to be greater than @vc::max_cycles
 	 * due to unsigned comparison.
 	 *
 	 * Due to the MSB/Sign-bit being used as invalid marker (see
 	 * arch_vdso_cycles_ok() above), the effective mask is S64_MAX, but that
 	 * case is also unlikely and will also take the unlikely path here.
 	 */
-	if (unlikely(delta > vd->max_cycles)) {
+	if (unlikely(delta > vc->max_cycles)) {
 		/*
 		 * Due to the above mentioned TSC wobbles, filter out
 		 * negative motion. Per the above masking, the effective
 		 * sign bit is now bit 62.
 		 */
 		if (delta & (1ULL << 62))
-			return base >> vd->shift;
+			return base >> vc->shift;
 
 		/* Handle multiplication overflow gracefully */
-		return mul_u64_u32_add_u64_shr(delta & S64_MAX, vd->mult, base, vd->shift);
+		return mul_u64_u32_add_u64_shr(delta & S64_MAX, vc->mult, base, vc->shift);
 	}
 
-	return ((delta * vd->mult) + base) >> vd->shift;
+	return ((delta * vc->mult) + base) >> vc->shift;
 }
 #define vdso_calc_ns vdso_calc_ns
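The unsigned-comparison trick described in the hunk's comment can be demonstrated standalone. A minimal sketch with made-up values (not kernel code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Illustrative values only; the real ones come from the timekeeper. */
        uint64_t cycle_last = 1000;
        uint64_t max_cycles = 1ULL << 40;

        /*
         * A counter read slightly behind cycle_last ("negative motion")
         * wraps around to a huge unsigned delta, so a single unsigned
         * comparison catches both negative motion and deltas large enough
         * to overflow the multiplication in vdso_calc_ns().
         */
        uint64_t delta = 990 - cycle_last;

        printf("%d\n", delta > max_cycles);     /* prints 1 */
        return 0;
}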
