Commit ba2fe75

Authored by Stephane Eranian, committed by Peter Zijlstra
perf/x86/amd: Add AMD branch sampling period adjustment
Add code to adjust the sampling event period when used with the Branch Sampling feature (BRS). Given the depth of the BRS (16), the period is reduced by that depth such that, in the best-case scenario, BRS saturates at the desired sampling period. In practice, though, the processor may execute more branches.

Given a desired period P and a depth D, the kernel programs the actual period at P - D. After P - D occurrences of the sampling event, the counter overflows. It then may take X branches (the skid) before the NMI is caught and held by the hardware and BRS activates. After D further branches, BRS saturates and the NMI is delivered. With no skid, the effective period would be (P - D) + D = P. In practice, however, it is likely to be (P - D) + X + D. There is no way to eliminate X or predict X.

Signed-off-by: Stephane Eranian <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent: 8910075

File tree

2 files changed: +19 −0 lines changed


arch/x86/events/core.c

Lines changed: 7 additions & 0 deletions
@@ -1374,6 +1374,13 @@ int x86_perf_event_set_period(struct perf_event *event)
 	    x86_pmu.set_topdown_event_period)
 		return x86_pmu.set_topdown_event_period(event);
 
+	/*
+	 * decrease period by the depth of the BRS feature to get
+	 * the last N taken branches and approximate the desired period
+	 */
+	if (has_branch_stack(event))
+		period = amd_brs_adjust_period(period);
+
 	/*
 	 * If we are way outside a reasonable range then just skip forward:
 	 */

arch/x86/events/perf_event.h

Lines changed: 12 additions & 0 deletions
@@ -1263,6 +1263,14 @@ static inline bool amd_brs_active(void)
 	return cpuc->brs_active;
 }
 
+static inline s64 amd_brs_adjust_period(s64 period)
+{
+	if (period > x86_pmu.lbr_nr)
+		return period - x86_pmu.lbr_nr;
+
+	return period;
+}
+
 #else /* CONFIG_CPU_SUP_AMD */
 
 static inline int amd_pmu_init(void)
@@ -1287,6 +1295,10 @@ static inline void amd_brs_disable_all(void)
 {
 }
 
+static inline s64 amd_brs_adjust_period(s64 period)
+{
+	return period;
+}
 #endif /* CONFIG_CPU_SUP_AMD */
 
 static inline int is_pebs_pt(struct perf_event *event)
