.github/copilot-instructions.md (+23 −6)
@@ -45,6 +45,8 @@ Read the files in the directory `content/learning-paths/cross-platform/_example-
 Each Learning Path must have an _index.md file and a _next-steps.md file. The _index.md file contains the main content of the Learning Path. The _next-steps.md file contains links to related content and is included at the end of the Learning Path.
 
+Additional resources and 'next steps' content should be placed in the `further_reading` section of `_index.md`, NOT in `_next-steps.md`. The `_next-steps.md` file should remain minimal and unmodified, as indicated by the "FIXED, DO NOT MODIFY" comments in the template.
+
 The _index.md file should contain the following front matter and content sections:
 
 Front Matter (YAML format):
@@ -60,6 +62,16 @@ Front Matter (YAML format):
 -`skilllevels`: Skill levels allowed are only Introductory and Advanced
 -`operatingsystems`: Operating systems used, must match the closed list on https://learn.arm.com/learning-paths/cross-platform/_example-learning-path/write-2-metadata/
+
+### Further Reading Curation
+
+Limit further_reading resources to 4-6 essential links. Prioritize:
+- Direct relevance to the topic
+- Arm-specific Learning Paths over generic external resources
+- Foundation knowledge for target audience
+- Required tools (install guides)
+- Logical progression from basic to advanced
+
+Avoid overwhelming readers with too many links, which can cause them to leave the platform.
@@ -205,18 +217,23 @@ Some links are useful in content, but too many links can be distracting and read
 
 ### Internal links
 
-Use a relative path format for internal links that are on learn.arm.com.
-For example, use: descriptive link text pointing to a relative path like learning-paths/category/path-name/
+Use the full path format for internal links: `/learning-paths/category/path-name/` (e.g., `/learning-paths/cross-platform/docker/`). Do NOT use relative paths like `../path-name/`.
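For illustration, a curated `further_reading` block in `_index.md` front matter might look like the sketch below. The `resource`/`title`/`link`/`type` field names are an assumption based on the `_example-learning-path` template, and both links are placeholders, not taken from this PR; verify against the template before use.

```yaml
further_reading:
    - resource:
        title: Arm Neoverse documentation          # placeholder title and link
        link: https://developer.arm.com/documentation/
        type: documentation
    - resource:
        title: Install Linux perf                  # required-tool install guide
        link: /install-guides/perf/                # placeholder internal path
        type: website
```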
content/learning-paths/cross-platform/topdown-compare/1-top-down.md (+7 −4)
@@ -14,17 +14,20 @@ Both Intel x86 and Arm Neoverse CPUs provide sophisticated Performance Monitorin
 
 While the specific counter names and formulas differ between architectures, both Intel x86 and Arm Neoverse have converged on top-down performance analysis methodologies that categorize performance bottlenecks into four key areas:
 
-**Retiring** represents pipeline slots that successfully complete useful work, while **Bad Speculation** accounts for slots wasted on mispredicted branches. Additionally, **Frontend Bound** identifies slots stalled due to instruction fetch and decode limitations, and **Backend Bound** covers slots stalled by execution resource constraints.
+- Retiring
+- Bad Speculation
+- Frontend Bound
+- Backend Bound
 
-This Learning Path provides a comparison of how x86 processors implement four-level hierarchical top-down analysis compared to Arm Neoverse's two-stage methodology, highlighting the similarities in approach while explaining the architectural differences in PMU counter events and formulas.
+This Learning Path compares how x86 processors implement multi-level hierarchical top-down analysis with Arm Neoverse's methodology, highlighting the similarities in approach while explaining the architectural differences in PMU counter events and formulas.
 
 ## Introduction to top-down performance analysis
 
-The top-down methodology makes performance analysis easier by shifting focus from individual PMU counters to pipeline slot utilization. Instead of trying to interpret dozens of seemingly unrelated metrics, you can systematically identify bottlenecks by attributing each CPU pipeline slot to one of four categories.
+The top-down methodology makes performance analysis easier by shifting focus from individual PMU counters to pipeline slot utilization. Instead of trying to interpret dozens of seemingly unrelated metrics, you can systematically identify bottlenecks by attributing each CPU pipeline slot to one of the four categories.
 
 **Retiring** represents pipeline slots that successfully complete useful work, while **Bad Speculation** accounts for slots wasted on mispredicted branches and pipeline flushes. **Frontend Bound** identifies slots stalled due to instruction fetch and decode limitations, whereas **Backend Bound** covers slots stalled by execution resource constraints such as cache misses or arithmetic unit availability.
 
-The methodology uses a hierarchical approach that allows you to drill down only into the dominant bottleneck category, and avoid the complexity of analyzing all possible performance issues at the same time.
+The methodology allows you to drill down only into the dominant bottleneck category, avoiding the complexity of analyzing all possible performance issues at the same time.
 
 The next sections compare the Intel x86 methodology with the Arm top-down methodology.
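The drill-down idea in this file can be sketched in a few lines of Python: given the Level 1 fraction of pipeline slots in each of the four categories, look only at the largest one. This is a toy illustration, not part of any vendor tool.

```python
def dominant_category(slots: dict) -> str:
    """Return the top-down category that consumed the most pipeline slots.

    `slots` maps each of the four Level 1 categories to its fraction of
    total pipeline slots; the fractions should sum to ~1.0.
    """
    total = sum(slots.values())
    # Sanity check: the four categories must account for all slots.
    if not 0.99 <= total <= 1.01:
        raise ValueError(f"fractions sum to {total:.3f}, expected ~1.0")
    return max(slots, key=slots.get)

# Example: a backend-bound workload, so drill into Backend Bound first.
fractions = {"Retiring": 0.30, "Bad Speculation": 0.05,
             "Frontend Bound": 0.15, "Backend Bound": 0.50}
print(dominant_category(fractions))  # -> Backend Bound
```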
content/learning-paths/cross-platform/topdown-compare/1a-intel.md (+12 −10)
@@ -1,5 +1,5 @@
 ---
-title: "Implement Intel x86 4-level hierarchical top-down analysis"
+title: "Understand Intel x86 multi-level hierarchical top-down analysis"
 weight: 4
 
 ### FIXED, DO NOT MODIFY
@@ -8,9 +8,9 @@ layout: learningpathall
 
 ## Configure slot-based accounting with Intel x86 PMU counters
 
-Intel uses a slot-based accounting model where each CPU cycle provides multiple issue slots. A slot is a hardware resource needed to process micro-operations (uops). More slots means more work can be done per cycle. The number of slots depends on the microarchitecture design but current Intel processor designs typically have four issue slots per cycle.
+Intel uses a slot-based accounting model where each CPU cycle provides multiple issue slots. A slot is a hardware resource needed to process micro-operations (uops). More slots means more work can be done per cycle. The number of slots depends on the microarchitecture design, but current Intel processor designs typically have four issue slots per cycle.
 
-Intel's methodology uses a multi-level hierarchy that extends to 4 levels of detail. Each level provides progressively more granular analysis, allowing you to drill down from high-level categories to specific microarchitecture events.
+Intel's methodology uses a multi-level hierarchy that typically extends to 3-4 levels of detail. Each level provides progressively more granular analysis, allowing you to drill down from high-level categories to specific microarchitecture events.
@@ -27,18 +27,20 @@ Where `SLOTS = 4 * CPU_CLK_UNHALTED.THREAD` on most Intel cores.
 
 Once you've identified the dominant Level 1 category, Level 2 drills into each area to identify broader causes. This level distinguishes between frontend latency and bandwidth limits, or between memory and core execution stalls in the backend.
 
-- Frontend Bound covers frontend latency in comparison with frontend bandwidth
-- Backend Bound covers memory bound in comparison with core bound
-- Bad Speculation covers branch mispredicts in comparison with machine clears
-- Retiring covers base in comparison with microcode sequencer
+- Frontend Bound covers frontend latency compared with frontend bandwidth
+- Backend Bound covers memory bound compared with core bound
+- Bad Speculation covers branch mispredicts compared with machine clears
+- Retiring covers base compared with microcode sequencer
 
 ## Level 3: Target specific microarchitecture bottlenecks
 
-After identifying broader cause categories in Level 2, Level 3 provides fine-grained attribution that pinpoints specific bottlenecks like DRAM latency, cache misses, or port contention. This precision makes it possible to identify the exact root cause and apply targeted optimizations. Memory Bound expands into detailed cache hierarchy analysis including L1 Bound, L2 Bound, L3 Bound, DRAM Bound, and Store Bound categories, while Core Bound breaks down into execution unit constraints such as Divider and Ports Utilization, along with many other specific microarchitecture-level categories that enable precise performance tuning.
+After identifying broader cause categories in Level 2, Level 3 provides fine-grained attribution that pinpoints specific bottlenecks like DRAM latency, cache misses, or port contention. This precision makes it possible to identify the exact root cause and apply targeted optimizations.
+
+Memory Bound expands into detailed cache hierarchy analysis including L1 Bound, L2 Bound, L3 Bound, DRAM Bound, and Store Bound categories. Core Bound breaks down into execution unit constraints such as Divider and Ports Utilization, along with many other specific microarchitecture-level categories that enable precise performance tuning.
 
 ## Level 4: Access specific PMU counter events
 
-The final level provides direct access to the specific microarchitecture events that cause the inefficiencies. At this level, you work directly with raw PMU counter values to understand the underlying hardware behavior causing performance bottlenecks. This enables precise tuning by identifying exactly which execution units, cache levels, or pipeline stages are limiting performance, allowing you to apply targeted code optimizations or hardware configuration changes.
+Level 4 provides direct access to the specific microarchitecture events that cause the inefficiencies. At this level, you work directly with raw PMU counter values to understand the underlying hardware behavior causing performance bottlenecks. This enables precise tuning by identifying exactly which execution units, cache levels, or pipeline stages are limiting performance, allowing you to apply targeted code optimizations or hardware configuration changes.
 
 ## Apply essential Intel x86 PMU counters for analysis
@@ -63,5 +65,5 @@ Intel processors expose hundreds of performance events, but top-down analysis re
 |`OFFCORE_RESPONSE.*`| Detailed classification of off-core responses (L3 vs. DRAM, local vs. remote socket) |
 
-Using the above levels of metrics you can find out which of the four top-level categories are causing bottlenecks.
+Using the above levels of metrics, you can determine which of the four top-level categories are causing bottlenecks.
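As a sketch of how the `SLOTS = 4 * CPU_CLK_UNHALTED.THREAD` formula quoted in this file combines with the other counters, the commonly published TMA Level 1 fractions can be computed from raw event counts as below. The event names follow Intel's widely published top-down formulas rather than anything stated in this PR; verify them against your core's event list before use.

```python
def tma_level1(clk, uops_issued, uops_retired_slots, idq_uops_not_delivered,
               recovery_cycles, width=4):
    """Compute Intel top-down Level 1 fractions from raw PMU counts.

    Follows the commonly published TMA Level 1 definitions:
      SLOTS           = width * CPU_CLK_UNHALTED.THREAD
      Frontend Bound  = IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS
      Bad Speculation = (UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS
                         + width * INT_MISC.RECOVERY_CYCLES) / SLOTS
      Retiring        = UOPS_RETIRED.RETIRE_SLOTS / SLOTS
      Backend Bound   = 1 - (Frontend Bound + Bad Speculation + Retiring)
    """
    slots = width * clk
    frontend = idq_uops_not_delivered / slots
    bad_spec = (uops_issued - uops_retired_slots + width * recovery_cycles) / slots
    retiring = uops_retired_slots / slots
    backend = 1.0 - (frontend + bad_spec + retiring)
    return {"Frontend Bound": frontend, "Bad Speculation": bad_spec,
            "Retiring": retiring, "Backend Bound": backend}

# Toy counts: 1000 cycles -> 4000 slots on a 4-wide core.
m = tma_level1(clk=1000, uops_issued=2200, uops_retired_slots=2000,
               idq_uops_not_delivered=400, recovery_cycles=25)
print({k: round(v, 3) for k, v in m.items()})
# -> {'Frontend Bound': 0.1, 'Bad Speculation': 0.075, 'Retiring': 0.5, 'Backend Bound': 0.325}
```

Note that Backend Bound is derived by subtraction, which is why the four fractions always sum to 1.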
content/learning-paths/cross-platform/topdown-compare/1b-arm.md (+7 −5)
@@ -1,5 +1,5 @@
 ---
-title: "Implement Arm Neoverse 2-stage top-down analysis"
+title: "Understand Arm Neoverse top-down analysis"
 weight: 5
 
 ### FIXED, DO NOT MODIFY
@@ -9,15 +9,15 @@ layout: learningpathall
 
 After understanding Intel's comprehensive 4-level hierarchy, you can explore how Arm approached the same performance analysis challenge with a different philosophy. Arm developed a complementary top-down methodology specifically for Neoverse server cores that prioritizes practical usability while maintaining analysis effectiveness.
 
-The Arm Neoverse architecture uses an 8-slot rename unit for pipeline bandwidth accounting, differing from Intel's issue-slot model. Unlike Intel's hierarchical model, Arm employs a streamlined two-stage methodology that balances analysis depth with practical usability.
+The Arm Neoverse architecture uses an 8-slot rename unit for pipeline bandwidth accounting, which differs from Intel's issue-slot model. Unlike Intel's hierarchical model, Arm employs a streamlined two-stage methodology that balances analysis depth with practical usability.
 
-Stage 1 identifies high-level bottlenecks using the same four categories as Intel but with Arm-specific PMU events and formulas. This stage uses slot-based accounting similar to Intel's approach while employing Arm event names and calculations tailored to the Neoverse architecture.
+Stage 1 identifies high-level bottlenecks using the same four categories as Intel, but with Arm-specific PMU events and formulas. This stage uses slot-based accounting similar to Intel's approach while employing Arm event names and calculations tailored to the Neoverse architecture.
 
 #### Configure Arm-specific PMU counter formulas
 
-Arm uses different top-down metrics based on different events but the concept remains similar to Intel's approach. The key difference lies in the formula calculations and slot accounting methodology:
+Arm uses different top-down metrics based on different events, but the concept remains similar to Intel's approach. The key difference lies in the formula calculations and slot accounting methodology:
 
 | Metric | Formula | Purpose |
 | :-- | :-- | :-- |
@@ -32,7 +32,9 @@ Stage 2 focuses on resource-specific effectiveness metrics grouped by CPU compon
 
 #### Navigate resource groups without hierarchical constraints
 
-Instead of Intel's hierarchical levels, Arm organizes detailed metrics into effectiveness groups that can be explored independently. **Branch Effectiveness** provides misprediction rates and MPKI, while **ITLB/DTLB Effectiveness** measures translation lookaside buffer efficiency. **Cache Effectiveness** groups (L1I/L1D/L2/LL) deliver cache hit ratios and MPKI across the memory hierarchy. Additionally, **Operation Mix** breaks down instruction types (SIMD, integer, load/store), and **Cycle Accounting** tracks frontend versus backend stall percentages.
+Instead of Intel's hierarchical levels, Arm organizes detailed metrics into effectiveness groups that can be explored independently.
+
+**Branch Effectiveness** provides misprediction rates and MPKI, while **ITLB/DTLB Effectiveness** measures translation lookaside buffer efficiency. **Cache Effectiveness** groups (L1I/L1D/L2/LL) deliver cache hit ratios and MPKI across the memory hierarchy. Additionally, **Operation Mix** breaks down instruction types (SIMD, integer, load/store), and **Cycle Accounting** tracks frontend versus backend stall percentages.
 
 ## Apply essential Arm Neoverse PMU counters for analysis
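The effectiveness groups described in this file mostly reduce to ratios of a miss event to an instruction or access count. The small sketch below illustrates the arithmetic only; it is not Arm's tooling, and the counter values are made up. MPKI here is the standard misses-per-kilo-instructions definition.

```python
def mpki(misses: int, instructions: int) -> float:
    """Misses per kilo instructions, the unit used by the effectiveness groups."""
    return 1000.0 * misses / instructions

def hit_ratio(accesses: int, misses: int) -> float:
    """Cache hit ratio for a cache-effectiveness group."""
    return (accesses - misses) / accesses

# Example: branch effectiveness from a mispredict count and instruction count.
print(round(mpki(misses=12_000, instructions=3_000_000), 2))  # -> 4.0 (MPKI)
# Example: an L1D-style cache hit ratio.
print(round(hit_ratio(accesses=500_000, misses=25_000), 3))   # -> 0.95
```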