content/learning-paths/cross-platform/topdown-compare/1-top-down.md: 7 additions & 4 deletions
@@ -14,17 +14,20 @@ Both Intel x86 and Arm Neoverse CPUs provide sophisticated Performance Monitorin
While the specific counter names and formulas differ between architectures, both Intel x86 and Arm Neoverse have converged on top-down performance analysis methodologies that categorize performance bottlenecks into four key areas:
-**Retiring** represents pipeline slots that successfully complete useful work, while **Bad Speculation** accounts for slots wasted on mispredicted branches. Additionally, **Frontend Bound** identifies slots stalled due to instruction fetch and decode limitations, and **Backend Bound** covers slots stalled by execution resource constraints.
+- Retiring
+- Bad Speculation
+- Frontend Bound
+- Backend Bound

-This Learning Path provides a comparison of how x86 processors implement four-level hierarchical top-down analysis compared to Arm Neoverse's two-stage methodology, highlighting the similarities in approach while explaining the architectural differences in PMU counter events and formulas.
+This Learning Path provides a comparison of how x86 processors implement multi-level hierarchical top-down analysis compared to Arm Neoverse's methodology, highlighting the similarities in approach while explaining the architectural differences in PMU counter events and formulas.
## Introduction to top-down performance analysis
-The top-down methodology makes performance analysis easier by shifting focus from individual PMU counters to pipeline slot utilization. Instead of trying to interpret dozens of seemingly unrelated metrics, you can systematically identify bottlenecks by attributing each CPU pipeline slot to one of four categories.
+The top-down methodology makes performance analysis easier by shifting focus from individual PMU counters to pipeline slot utilization. Instead of trying to interpret dozens of seemingly unrelated metrics, you can systematically identify bottlenecks by attributing each CPU pipeline slot to one of the four categories.

**Retiring** represents pipeline slots that successfully complete useful work, while **Bad Speculation** accounts for slots wasted on mispredicted branches and pipeline flushes. **Frontend Bound** identifies slots stalled due to instruction fetch and decode limitations, whereas **Backend Bound** covers slots stalled by execution resource constraints such as cache misses or arithmetic unit availability.

-The methodology uses a hierarchical approach that allows you to drill down only into the dominant bottleneck category, and avoid the complexity of analyzing all possible performance issues at the same time.
+The methodology allows you to drill down only into the dominant bottleneck category, avoiding the complexity of analyzing all possible performance issues at the same time.
The next sections compare the Intel x86 methodology with the Arm top-down methodology.
content/learning-paths/cross-platform/topdown-compare/1a-intel.md: 12 additions & 10 deletions
@@ -1,5 +1,5 @@
---
-title: "Implement Intel x86 4-level hierarchical top-down analysis"
+title: "Understand Intel x86 multi-level hierarchical top-down analysis"
weight: 4
### FIXED, DO NOT MODIFY
@@ -8,9 +8,9 @@ layout: learningpathall
## Configure slot-based accounting with Intel x86 PMU counters
-Intel uses a slot-based accounting model where each CPU cycle provides multiple issue slots. A slot is a hardware resource needed to process micro-operations (uops). More slots means more work can be done per cycle. The number of slots depends on the microarchitecture design but current Intel processor designs typically have four issue slots per cycle.
+Intel uses a slot-based accounting model where each CPU cycle provides multiple issue slots. A slot is a hardware resource needed to process micro-operations (uops). More slots means more work can be done per cycle. The number of slots depends on the microarchitecture design, but current Intel processor designs typically have four issue slots per cycle.

-Intel's methodology uses a multi-level hierarchy that extends to 4 levels of detail. Each level provides progressively more granular analysis, allowing you to drill down from high-level categories to specific microarchitecture events.
+Intel's methodology uses a multi-level hierarchy that typically extends to 3-4 levels of detail. Each level provides progressively more granular analysis, allowing you to drill down from high-level categories to specific microarchitecture events.
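As a rough illustration of this slot accounting, here is a minimal sketch of the Level 1 calculation from raw counter values. It assumes the four-slot model and the commonly published formulas, including an `INT_MISC.RECOVERY_CYCLES` recovery term that is not listed elsewhere in this Learning Path; the sample numbers are invented, and exact event names vary by microarchitecture.

```python
# Sketch of Intel Level 1 top-down fractions from raw PMU counts.
# Event names follow the published slot-based formulas; values are illustrative only.

def intel_topdown_level1(counters: dict, slots_per_cycle: int = 4) -> dict:
    slots = slots_per_cycle * counters["CPU_CLK_UNHALTED.THREAD"]
    retiring = counters["UOPS_RETIRED.RETIRE_SLOTS"] / slots
    frontend_bound = counters["IDQ_UOPS_NOT_DELIVERED.CORE"] / slots
    bad_speculation = (counters["UOPS_ISSUED.ANY"]
                       - counters["UOPS_RETIRED.RETIRE_SLOTS"]
                       + slots_per_cycle * counters["INT_MISC.RECOVERY_CYCLES"]) / slots
    backend_bound = 1.0 - (retiring + frontend_bound + bad_speculation)
    return {
        "Retiring": retiring,
        "Bad Speculation": bad_speculation,
        "Frontend Bound": frontend_bound,
        "Backend Bound": backend_bound,
    }

# Made-up counter values, as if collected with perf stat:
example = {
    "CPU_CLK_UNHALTED.THREAD": 1_000_000,
    "UOPS_RETIRED.RETIRE_SLOTS": 1_200_000,
    "UOPS_ISSUED.ANY": 1_400_000,
    "IDQ_UOPS_NOT_DELIVERED.CORE": 600_000,
    "INT_MISC.RECOVERY_CYCLES": 25_000,
}
for category, fraction in intel_topdown_level1(example).items():
    print(f"{category}: {fraction:.1%}")
```

In practice, `perf stat` with the `-M topdownl1` option reports these fractions directly; the sketch is only meant to show how the slot arithmetic fits together.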
@@ -27,18 +27,20 @@ Where `SLOTS = 4 * CPU_CLK_UNHALTED.THREAD` on most Intel cores.
Once you've identified the dominant Level 1 category, Level 2 drills into each area to identify broader causes. This level distinguishes between frontend latency and bandwidth limits, or between memory and core execution stalls in the backend.
-- Frontend Bound covers frontend latency in comparison with frontend bandwidth
-- Backend Bound covers memory bound in comparison with core bound
-- Bad Speculation covers branch mispredicts in comparison with machine clears
-- Retiring covers base in comparison with microcode sequencer
+- Frontend Bound covers frontend latency compared with frontend bandwidth
+- Backend Bound covers memory bound compared with core bound
+- Bad Speculation covers branch mispredicts compared with machine clears
+- Retiring covers base compared with microcode sequencer
## Level 3: Target specific microarchitecture bottlenecks
-After identifying broader cause categories in Level 2, Level 3 provides fine-grained attribution that pinpoints specific bottlenecks like DRAM latency, cache misses, or port contention. This precision makes it possible to identify the exact root cause and apply targeted optimizations. Memory Bound expands into detailed cache hierarchy analysis including L1 Bound, L2 Bound, L3 Bound, DRAM Bound, and Store Bound categories, while Core Bound breaks down into execution unit constraints such as Divider and Ports Utilization, along with many other specific microarchitecture-level categories that enable precise performance tuning.
+After identifying broader cause categories in Level 2, Level 3 provides fine-grained attribution that pinpoints specific bottlenecks like DRAM latency, cache misses, or port contention. This precision makes it possible to identify the exact root cause and apply targeted optimizations.
+
+Memory Bound expands into detailed cache hierarchy analysis including L1 Bound, L2 Bound, L3 Bound, DRAM Bound, and Store Bound categories. Core Bound breaks down into execution unit constraints such as Divider and Ports Utilization, along with many other specific microarchitecture-level categories that enable precise performance tuning.
## Level 4: Access specific PMU counter events
-The final level provides direct access to the specific microarchitecture events that cause the inefficiencies. At this level, you work directly with raw PMU counter values to understand the underlying hardware behavior causing performance bottlenecks. This enables precise tuning by identifying exactly which execution units, cache levels, or pipeline stages are limiting performance, allowing you to apply targeted code optimizations or hardware configuration changes.
+Level 4 provides direct access to the specific microarchitecture events that cause the inefficiencies. At this level, you work directly with raw PMU counter values to understand the underlying hardware behavior causing performance bottlenecks. This enables precise tuning by identifying exactly which execution units, cache levels, or pipeline stages are limiting performance, allowing you to apply targeted code optimizations or hardware configuration changes.
## Apply essential Intel x86 PMU counters for analysis
@@ -63,5 +65,5 @@ Intel processors expose hundreds of performance events, but top-down analysis re
|`OFFCORE_RESPONSE.*`| Detailed classification of off-core responses (L3 vs. DRAM, local vs. remote socket) |
-Using the above levels of metrics you can find out which of the four top-level categories are causing bottlenecks.
+Using the above levels of metrics, you can determine which of the four top-level categories are causing bottlenecks.
content/learning-paths/cross-platform/topdown-compare/1b-arm.md: 7 additions & 5 deletions
@@ -1,5 +1,5 @@
---
-title: "Implement Arm Neoverse 2-stage top-down analysis"
+title: "Understand Arm Neoverse top-down analysis"
weight: 5
### FIXED, DO NOT MODIFY
@@ -9,15 +9,15 @@ layout: learningpathall
After understanding Intel's comprehensive 4-level hierarchy, you can explore how Arm approached the same performance analysis challenge with a different philosophy. Arm developed a complementary top-down methodology specifically for Neoverse server cores that prioritizes practical usability while maintaining analysis effectiveness.
-The Arm Neoverse architecture uses an 8-slot rename unit for pipeline bandwidth accounting, differing from Intel's issue-slot model. Unlike Intel's hierarchical model, Arm employs a streamlined two-stage methodology that balances analysis depth with practical usability.
+The Arm Neoverse architecture uses an 8-slot rename unit for pipeline bandwidth accounting, which differs from Intel's issue-slot model. Unlike Intel's hierarchical model, Arm employs a streamlined two-stage methodology that balances analysis depth with practical usability.

-Stage 1 identifies high-level bottlenecks using the same four categories as Intel but with Arm-specific PMU events and formulas. This stage uses slot-based accounting similar to Intel's approach while employing Arm event names and calculations tailored to the Neoverse architecture.
+Stage 1 identifies high-level bottlenecks using the same four categories as Intel, but with Arm-specific PMU events and formulas. This stage uses slot-based accounting similar to Intel's approach while employing Arm event names and calculations tailored to the Neoverse architecture.
#### Configure Arm-specific PMU counter formulas
-Arm uses different top-down metrics based on different events but the concept remains similar to Intel's approach. The key difference lies in the formula calculations and slot accounting methodology:
+Arm uses different top-down metrics based on different events, but the concept remains similar to Intel's approach. The key difference lies in the formula calculations and slot accounting methodology:
| Metric | Formula | Purpose |
| :-- | :-- | :-- |
@@ -32,7 +32,9 @@ Stage 2 focuses on resource-specific effectiveness metrics grouped by CPU compon
#### Navigate resource groups without hierarchical constraints
-Instead of Intel's hierarchical levels, Arm organizes detailed metrics into effectiveness groups that can be explored independently. **Branch Effectiveness** provides misprediction rates and MPKI, while **ITLB/DTLB Effectiveness** measures translation lookaside buffer efficiency. **Cache Effectiveness** groups (L1I/L1D/L2/LL) deliver cache hit ratios and MPKI across the memory hierarchy. Additionally, **Operation Mix** breaks down instruction types (SIMD, integer, load/store), and **Cycle Accounting** tracks frontend versus backend stall percentages.
+Instead of Intel's hierarchical levels, Arm organizes detailed metrics into effectiveness groups that can be explored independently.
+
+**Branch Effectiveness** provides misprediction rates and MPKI, while **ITLB/DTLB Effectiveness** measures translation lookaside buffer efficiency. **Cache Effectiveness** groups (L1I/L1D/L2/LL) deliver cache hit ratios and MPKI across the memory hierarchy. Additionally, **Operation Mix** breaks down instruction types (SIMD, integer, load/store), and **Cycle Accounting** tracks frontend versus backend stall percentages.
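As a concrete example of one effectiveness group, the sketch below derives branch MPKI and misprediction ratio in the way the Branch Effectiveness group describes them. The Neoverse event names used here (`BR_RETIRED`, `BR_MIS_PRED_RETIRED`, `INST_RETIRED`) and the sample values are assumptions for illustration; `topdown-tool` computes these metrics for you.

```python
# Sketch of Branch Effectiveness style metrics from raw PMU counts.
# Event names mirror common Neoverse PMU events; sample values are illustrative only.

def branch_effectiveness(counters: dict) -> dict:
    # MPKI = mispredicted branches per 1000 retired instructions
    mpki = 1000.0 * counters["BR_MIS_PRED_RETIRED"] / counters["INST_RETIRED"]
    misprediction_ratio = counters["BR_MIS_PRED_RETIRED"] / counters["BR_RETIRED"]
    return {"Branch MPKI": mpki, "Branch Misprediction Ratio": misprediction_ratio}

example = {
    "INST_RETIRED": 50_000_000,
    "BR_RETIRED": 9_000_000,
    "BR_MIS_PRED_RETIRED": 180_000,
}
metrics = branch_effectiveness(example)
print(f"Branch MPKI: {metrics['Branch MPKI']:.2f}")
print(f"Misprediction ratio: {metrics['Branch Misprediction Ratio']:.2%}")
```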
## Apply essential Arm Neoverse PMU counters for analysis
@@ -129,21 +129,21 @@ Done. Final result: 0.000056
6.029283206 seconds time elapsed
```
-Again, showing `Backend_Bound` value very high (0.96). Notice the x86-specific PMU counters:
+Again, showing a `Backend_Bound` value that is very high (0.96). Notice the x86-specific PMU counters:

- `uops_issued.any` and `uops_retired.retire_slots` for micro-operation accounting
- `idq_uops_not_delivered.core` for frontend delivery failures
- `cpu_clk_unhalted.thread` for cycle normalization
If you want to learn more, you can continue with the Level 2 and Level 3 hierarchical analysis.
-## Use the Arm Neoverse 2-stage top-down methodology
+## Use the Arm Neoverse top-down methodology

-Arm's approach uses a 2-stage methodology with PMU counters like `STALL_SLOT_BACKEND`, `STALL_SLOT_FRONTEND`, `OP_RETIRED`, and `OP_SPEC` for Stage 1 analysis, followed by resource effectiveness groups in Stage 2.
+Arm's approach uses a methodology with PMU counters like `STALL_SLOT_BACKEND`, `STALL_SLOT_FRONTEND`, `OP_RETIRED`, and `OP_SPEC` for Stage 1 analysis, followed by resource effectiveness groups in Stage 2.
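To show how those Stage 1 counters might combine, here is a minimal sketch assuming the 8-slot rename model described earlier. The formulas follow the general Neoverse top-down approach but omit per-core correction terms, approximate total stalled slots as the sum of frontend and backend stall slots, and use invented sample values, so treat `topdown-tool` output as authoritative.

```python
# Sketch of Arm Neoverse Stage 1 top-down fractions from raw PMU counts.
# Assumes 8 rename slots per cycle; formulas are simplified and values are illustrative.

SLOTS_PER_CYCLE = 8

def neoverse_stage1(counters: dict) -> dict:
    total_slots = SLOTS_PER_CYCLE * counters["CPU_CYCLES"]
    # Approximate total stalled slots as frontend plus backend stall slots.
    stall_slots = counters["STALL_SLOT_FRONTEND"] + counters["STALL_SLOT_BACKEND"]
    used_fraction = 1.0 - stall_slots / total_slots
    retiring = (counters["OP_RETIRED"] / counters["OP_SPEC"]) * used_fraction
    bad_speculation = (1.0 - counters["OP_RETIRED"] / counters["OP_SPEC"]) * used_fraction
    frontend_bound = counters["STALL_SLOT_FRONTEND"] / total_slots
    backend_bound = counters["STALL_SLOT_BACKEND"] / total_slots
    return {
        "Retiring": retiring,
        "Bad Speculation": bad_speculation,
        "Frontend Bound": frontend_bound,
        "Backend Bound": backend_bound,
    }

example = {
    "CPU_CYCLES": 1_000_000,
    "STALL_SLOT_FRONTEND": 1_500_000,
    "STALL_SLOT_BACKEND": 4_500_000,
    "OP_RETIRED": 1_800_000,
    "OP_SPEC": 2_000_000,
}
for category, fraction in neoverse_stage1(example).items():
    print(f"{category}: {fraction:.1%}")
```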
Make sure you install the Arm topdown-tool using the [Telemetry Solution install guide](/install-guides/topdown-tool/).
-Collect Stage 2 general metrics including Instructions Per Cycle (IPC):
+Collect general metrics including Instructions Per Cycle (IPC):
```console
taskset -c 1 topdown-tool -m General ./test 1000000000
-This confirms the example has high backend stalls equivalent to x86's Backend_Bound category. Notice how Arm's Stage 1 uses percentage of cycles rather than Intel's slot-based accounting.
+This confirms the example has high backend stalls, equivalent to x86's Backend_Bound category. Notice how Arm's Stage 1 uses percentage of cycles rather than Intel's slot-based accounting.
You can continue to use the `topdown-tool` for additional microarchitecture exploration.
-Both Arm Neoverse and modern x86 cores expose hardware PMU events that enable equivalent top-down analysis, despite different counter names and calculation methods. Intel x86 processors use a four-level hierarchical methodology based on slot-based pipeline accounting, relying on PMU counters such as `UOPS_RETIRED.RETIRE_SLOTS`, `IDQ_UOPS_NOT_DELIVERED.CORE`, and `CPU_CLK_UNHALTED.THREAD` to break down performance into retiring, bad speculation, frontend bound, and backend bound categories. Linux Perf serves as the standard collection tool, using commands like `perf stat --topdown` and the `-M topdownl1` option for detailed breakdowns.
+Both Arm Neoverse and modern x86 cores expose hardware PMU events that enable equivalent top-down analysis, despite different counter names and calculation methods.
+
+Intel x86 processors use a four-level hierarchical methodology based on slot-based pipeline accounting, relying on PMU counters such as `UOPS_RETIRED.RETIRE_SLOTS`, `IDQ_UOPS_NOT_DELIVERED.CORE`, and `CPU_CLK_UNHALTED.THREAD` to break down performance into retiring, bad speculation, frontend bound, and backend bound categories. Linux Perf serves as the standard collection tool, using commands like `perf stat --topdown` and the `-M topdownl1` option for detailed breakdowns.
Arm Neoverse platforms implement a complementary two-stage methodology where Stage 1 focuses on topdown categories using counters such as `STALL_SLOT_BACKEND`, `STALL_SLOT_FRONTEND`, `OP_RETIRED`, and `OP_SPEC` to analyze pipeline stalls and instruction retirement. Stage 2 evaluates resource effectiveness, including cache and operation mix metrics through `topdown-tool`, which accepts the desired metric group via the `-m` argument.
-Both architectures identify the same performance bottleneck categories, enabling similar optimization strategies across Intel and Arm platforms while accounting for methodological differences in measurement depth and analysis approach.
+Both architectures identify the same performance bottleneck categories, enabling similar optimization strategies across Intel and Arm platforms while accounting for methodological differences in measurement depth and analysis approach.
content/learning-paths/cross-platform/topdown-compare/_index.md: 1 addition & 5 deletions
@@ -1,16 +1,12 @@
---
title: Compare Arm Neoverse and Intel x86 top-down performance analysis with PMU counters
-draft: true
-cascade:
-  draft: true
-
minutes_to_complete: 30
who_is_this_for: This is an advanced topic for software developers and performance engineers who want to understand the similarities and differences between Arm Neoverse and Intel x86 top-down performance analysis using PMU counters, Linux Perf, and the topdown-tool.
learning_objectives:
-- Compare Intel x86 4-level hierarchical top-down methodology with Arm Neoverse 2-stage approach using PMU counters
+- Compare Intel x86 multi-level hierarchical methodology with Arm Neoverse micro-architecture exploration methodology
- Execute performance analysis using Linux Perf on x86 and topdown-tool on Arm systems
- Analyze Backend Bound, Frontend Bound, Bad Speculation, and Retiring categories across both architectures