
Commit 6f7d319

Merge pull request #2407 from ArmDeveloperEcosystem/main
Production update
2 parents d89814c + 175144f commit 6f7d319

37 files changed: +1213 -416 lines changed


.github/copilot-instructions.md

Lines changed: 23 additions & 6 deletions
@@ -45,6 +45,8 @@ Read the files in the directory `content/learning-paths/cross-platform/_example-

 Each Learning Path must have an _index.md file and a _next-steps.md file. The _index.md file contains the main content of the Learning Path. The _next-steps.md file contains links to related content and is included at the end of the Learning Path.

+Additional resources and 'next steps' content should be placed in the `further_reading` section of `_index.md`, NOT in `_next-steps.md`. The `_next-steps.md` file should remain minimal and unmodified as indicated by "FIXED, DO NOT MODIFY" comments in the template.
+
 The _index.md file should contain the following front matter and content sections:

 Front Matter (YAML format):
@@ -60,6 +62,16 @@ Front Matter (YAML format):
 - `skilllevels`: Skill levels allowed are only Introductory and Advanced
 - `operatingsystems`: Operating systems used, must match the closed list on https://learn.arm.com/learning-paths/cross-platform/_example-learning-path/write-2-metadata/

+### Further Reading Curation
+
+Limit further_reading resources to 4-6 essential links. Prioritize:
+- Direct relevance to the topic
+- Arm-specific Learning Paths over generic external resources
+- Foundation knowledge for target audience
+- Required tools (install guides)
+- Logical progression from basic to advanced
+
+Avoid overwhelming readers with too many links, which can cause them to leave the platform.

 All Learning Paths should generally include:
 Title: [Imperative verb] + [technology/tool] + [outcome]
@@ -205,18 +217,23 @@ Some links are useful in content, but too many links can be distracting and read

 ### Internal links

-Use a relative path format for internal links that are on learn.arm.com.
-For example, use: descriptive link text pointing to a relative path like learning-paths/category/path-name/
+Use the full path format for internal links: `/learning-paths/category/path-name/` (e.g., `/learning-paths/cross-platform/docker/`). Do NOT use relative paths like `../path-name/`.

 Examples:
-- learning-paths/servers-and-cloud-computing/csp/ (Arm-based instance)
-- learning-paths/cross-platform/docker/ (Docker learning path)
+- /learning-paths/servers-and-cloud-computing/csp/ (Arm-based instance)
+- /learning-paths/cross-platform/docker/ (Docker learning path)

 ### External links

 Use the full URL for external links that are not on learn.arm.com, these open in a new tab.

-This instruction set enables high-quality Arm Learning Paths content while maintaining consistency and technical accuracy.
-
+### Link Verification Process

+When creating Learning Path content:
+- Verify internal links exist before adding them
+- Use semantic search or website browsing to confirm Learning Path availability
+- Prefer verified external authoritative sources over speculative internal links
+- Test link formats against existing Learning Path examples
+- Never assume Learning Paths exist without verification

+This instruction set enables high-quality Arm Learning Paths content while maintaining consistency and technical accuracy.

.wordlist.txt

Lines changed: 2 additions & 1 deletion
@@ -4976,4 +4976,5 @@ StatefulSets
 codemia
 multidisks
 testsh
-uops
+uops
+subgraph

assets/contributors.csv

Lines changed: 2 additions & 1 deletion
@@ -102,5 +102,6 @@ Ker Liu,,,,,
 Rui Chang,,,,,
 Alejandro Martinez Vicente,Arm,,,,
 Mohamad Najem,Arm,,,,
+Ruifeng Wang,Arm,,,,
 Zenon Zhilong Xiu,Arm,,zenon-zhilong-xiu-491bb398,,
-Zbynek Roubalik,Kedify,,,,
+Zbynek Roubalik,Kedify,,,,

content/learning-paths/cross-platform/topdown-compare/1-top-down.md

Lines changed: 7 additions & 4 deletions
@@ -14,17 +14,20 @@ Both Intel x86 and Arm Neoverse CPUs provide sophisticated Performance Monitorin

 While the specific counter names and formulas differ between architectures, both Intel x86 and Arm Neoverse have converged on top-down performance analysis methodologies that categorize performance bottlenecks into four key areas:

-**Retiring** represents pipeline slots that successfully complete useful work, while **Bad Speculation** accounts for slots wasted on mispredicted branches. Additionally, **Frontend Bound** identifies slots stalled due to instruction fetch and decode limitations, and **Backend Bound** covers slots stalled by execution resource constraints.
+- Retiring
+- Bad Speculation
+- Frontend Bound
+- Backend Bound

-This Learning Path provides a comparison of how x86 processors implement four-level hierarchical top-down analysis compared to Arm Neoverse's two-stage methodology, highlighting the similarities in approach while explaining the architectural differences in PMU counter events and formulas.
+This Learning Path provides a comparison of how x86 processors implement multi-level hierarchical top-down analysis compared to Arm Neoverse's methodology, highlighting the similarities in approach while explaining the architectural differences in PMU counter events and formulas.

 ## Introduction to top-down performance analysis

-The top-down methodology makes performance analysis easier by shifting focus from individual PMU counters to pipeline slot utilization. Instead of trying to interpret dozens of seemingly unrelated metrics, you can systematically identify bottlenecks by attributing each CPU pipeline slot to one of four categories.
+The top-down methodology makes performance analysis easier by shifting focus from individual PMU counters to pipeline slot utilization. Instead of trying to interpret dozens of seemingly unrelated metrics, you can systematically identify bottlenecks by attributing each CPU pipeline slot to one of the four categories.

 **Retiring** represents pipeline slots that successfully complete useful work, while **Bad Speculation** accounts for slots wasted on mispredicted branches and pipeline flushes. **Frontend Bound** identifies slots stalled due to instruction fetch and decode limitations, whereas **Backend Bound** covers slots stalled by execution resource constraints such as cache misses or arithmetic unit availability.

-The methodology uses a hierarchical approach that allows you to drill down only into the dominant bottleneck category, and avoid the complexity of analyzing all possible performance issues at the same time.
+The methodology allows you to drill down only into the dominant bottleneck category, avoiding the complexity of analyzing all possible performance issues at the same time.

 The next sections compare the Intel x86 methodology with the Arm top-down methodology.
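To make the slot-attribution idea concrete, here is a minimal Python sketch. The slot counts are invented example values, not real PMU measurements; it only shows how the four categories partition the available pipeline slots before you drill into the dominant one.

```python
# Minimal sketch of four-category pipeline slot attribution.
# The slot counts are invented example values, not real PMU measurements.
slots = {
    "Retiring": 420_000_000,
    "Bad Speculation": 35_000_000,
    "Frontend Bound": 120_000_000,
    "Backend Bound": 225_000_000,
}

total = sum(slots.values())

# Report each category as a share of all pipeline slots.
for category, count in slots.items():
    print(f"{category:16s} {100 * count / total:5.1f}%")

# Drill down only into the dominant category, as the methodology recommends.
dominant = max(slots, key=slots.get)
print(f"Investigate further: {dominant}")
```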

content/learning-paths/cross-platform/topdown-compare/1a-intel.md

Lines changed: 12 additions & 10 deletions
@@ -1,5 +1,5 @@
 ---
-title: "Implement Intel x86 4-level hierarchical top-down analysis"
+title: "Understand Intel x86 multi-level hierarchical top-down analysis"
 weight: 4

 ### FIXED, DO NOT MODIFY
@@ -8,9 +8,9 @@ layout: learningpathall

 ## Configure slot-based accounting with Intel x86 PMU counters

-Intel uses a slot-based accounting model where each CPU cycle provides multiple issue slots. A slot is a hardware resource needed to process micro-operations (uops). More slots means more work can be done per cycle. The number of slots depends on the microarchitecture design but current Intel processor designs typically have four issue slots per cycle.
+Intel uses a slot-based accounting model where each CPU cycle provides multiple issue slots. A slot is a hardware resource needed to process micro-operations (uops). More slots means more work can be done per cycle. The number of slots depends on the microarchitecture design, but current Intel processor designs typically have four issue slots per cycle.

-Intel's methodology uses a multi-level hierarchy that extends to 4 levels of detail. Each level provides progressively more granular analysis, allowing you to drill down from high-level categories to specific microarchitecture events.
+Intel's methodology uses a multi-level hierarchy that typically extends to 3-4 levels of detail. Each level provides progressively more granular analysis, allowing you to drill down from high-level categories to specific microarchitecture events.

 ## Level 1: Identify top-level performance categories

@@ -27,18 +27,20 @@ Where `SLOTS = 4 * CPU_CLK_UNHALTED.THREAD` on most Intel cores.

 Once you've identified the dominant Level 1 category, Level 2 drills into each area to identify broader causes. This level distinguishes between frontend latency and bandwidth limits, or between memory and core execution stalls in the backend.

-- Frontend Bound covers frontend latency in comparison with frontend bandwidth
-- Backend Bound covers memory bound in comparison with core bound
-- Bad Speculation covers branch mispredicts in comparison with machine clears
-- Retiring covers base in comparison with microcode sequencer
+- Frontend Bound covers frontend latency compared with frontend bandwidth
+- Backend Bound covers memory bound compared with core bound
+- Bad Speculation covers branch mispredicts compared with machine clears
+- Retiring covers base compared with microcode sequencer

 ## Level 3: Target specific microarchitecture bottlenecks

-After identifying broader cause categories in Level 2, Level 3 provides fine-grained attribution that pinpoints specific bottlenecks like DRAM latency, cache misses, or port contention. This precision makes it possible to identify the exact root cause and apply targeted optimizations. Memory Bound expands into detailed cache hierarchy analysis including L1 Bound, L2 Bound, L3 Bound, DRAM Bound, and Store Bound categories, while Core Bound breaks down into execution unit constraints such as Divider and Ports Utilization, along with many other specific microarchitecture-level categories that enable precise performance tuning.
+After identifying broader cause categories in Level 2, Level 3 provides fine-grained attribution that pinpoints specific bottlenecks like DRAM latency, cache misses, or port contention. This precision makes it possible to identify the exact root cause and apply targeted optimizations.
+
+Memory Bound expands into detailed cache hierarchy analysis including L1 Bound, L2 Bound, L3 Bound, DRAM Bound, and Store Bound categories. Core Bound breaks down into execution unit constraints such as Divider and Ports Utilization, along with many other specific microarchitecture-level categories that enable precise performance tuning.

 ## Level 4: Access specific PMU counter events

-The final level provides direct access to the specific microarchitecture events that cause the inefficiencies. At this level, you work directly with raw PMU counter values to understand the underlying hardware behavior causing performance bottlenecks. This enables precise tuning by identifying exactly which execution units, cache levels, or pipeline stages are limiting performance, allowing you to apply targeted code optimizations or hardware configuration changes.
+Level 4 provides direct access to the specific microarchitecture events that cause the inefficiencies. At this level, you work directly with raw PMU counter values to understand the underlying hardware behavior causing performance bottlenecks. This enables precise tuning by identifying exactly which execution units, cache levels, or pipeline stages are limiting performance, allowing you to apply targeted code optimizations or hardware configuration changes.

 ## Apply essential Intel x86 PMU counters for analysis

@@ -63,5 +65,5 @@ Intel processors expose hundreds of performance events, but top-down analysis re
 | `OFFCORE_RESPONSE.*` | Detailed classification of off-core responses (L3 vs. DRAM, local vs. remote socket) |


-Using the above levels of metrics you can find out which of the four top-level categories are causing bottlenecks.
+Using the above levels of metrics, you can determine which of the four top-level categories are causing bottlenecks.
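To see how the Level 1 slot accounting works in practice, here is a minimal Python sketch that computes the four categories from raw counter values. The formulas follow Intel's commonly published Level 1 top-down equations built on `SLOTS = 4 * CPU_CLK_UNHALTED.THREAD`; the counter values are invented for illustration and are not taken from this Learning Path.

```python
# Sketch of Intel Level 1 top-down metrics, assuming the widely documented
# formulas; counter values are invented for illustration.
cpu_clk_unhalted_thread = 1_000_000_000
uops_retired_retire_slots = 1_800_000_000
uops_issued_any = 2_000_000_000
idq_uops_not_delivered_core = 900_000_000
int_misc_recovery_cycles = 25_000_000

# Four issue slots per cycle on most Intel cores.
slots = 4 * cpu_clk_unhalted_thread

retiring = uops_retired_retire_slots / slots
bad_speculation = (uops_issued_any - uops_retired_retire_slots
                   + 4 * int_misc_recovery_cycles) / slots
frontend_bound = idq_uops_not_delivered_core / slots
backend_bound = 1 - (retiring + bad_speculation + frontend_bound)

for name, value in [("Retiring", retiring), ("Bad Speculation", bad_speculation),
                    ("Frontend Bound", frontend_bound), ("Backend Bound", backend_bound)]:
    print(f"{name:16s} {100 * value:5.1f}%")
```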

content/learning-paths/cross-platform/topdown-compare/1b-arm.md

Lines changed: 7 additions & 5 deletions
@@ -1,5 +1,5 @@
 ---
-title: "Implement Arm Neoverse 2-stage top-down analysis"
+title: "Understand Arm Neoverse top-down analysis"
 weight: 5

 ### FIXED, DO NOT MODIFY
@@ -9,15 +9,15 @@ layout: learningpathall

 After understanding Intel's comprehensive 4-level hierarchy, you can explore how Arm approached the same performance analysis challenge with a different philosophy. Arm developed a complementary top-down methodology specifically for Neoverse server cores that prioritizes practical usability while maintaining analysis effectiveness.

-The Arm Neoverse architecture uses an 8-slot rename unit for pipeline bandwidth accounting, differing from Intel's issue-slot model. Unlike Intel's hierarchical model, Arm employs a streamlined two-stage methodology that balances analysis depth with practical usability.
+The Arm Neoverse architecture uses an 8-slot rename unit for pipeline bandwidth accounting, which differs from Intel's issue-slot model. Unlike Intel's hierarchical model, Arm employs a streamlined two-stage methodology that balances analysis depth with practical usability.

 ### Execute Stage 1: Calculate top-down performance categories

-Stage 1 identifies high-level bottlenecks using the same four categories as Intel but with Arm-specific PMU events and formulas. This stage uses slot-based accounting similar to Intel's approach while employing Arm event names and calculations tailored to the Neoverse architecture.
+Stage 1 identifies high-level bottlenecks using the same four categories as Intel, but with Arm-specific PMU events and formulas. This stage uses slot-based accounting similar to Intel's approach while employing Arm event names and calculations tailored to the Neoverse architecture.

 #### Configure Arm-specific PMU counter formulas

-Arm uses different top-down metrics based on different events but the concept remains similar to Intel's approach. The key difference lies in the formula calculations and slot accounting methodology:
+Arm uses different top-down metrics based on different events, but the concept remains similar to Intel's approach. The key difference lies in the formula calculations and slot accounting methodology:

 | Metric | Formula | Purpose |
 | :-- | :-- | :-- |
@@ -32,7 +32,9 @@ Stage 2 focuses on resource-specific effectiveness metrics grouped by CPU compon

 #### Navigate resource groups without hierarchical constraints

-Instead of Intel's hierarchical levels, Arm organizes detailed metrics into effectiveness groups that can be explored independently. **Branch Effectiveness** provides misprediction rates and MPKI, while **ITLB/DTLB Effectiveness** measures translation lookaside buffer efficiency. **Cache Effectiveness** groups (L1I/L1D/L2/LL) deliver cache hit ratios and MPKI across the memory hierarchy. Additionally, **Operation Mix** breaks down instruction types (SIMD, integer, load/store), and **Cycle Accounting** tracks frontend versus backend stall percentages.
+Instead of Intel's hierarchical levels, Arm organizes detailed metrics into effectiveness groups that can be explored independently.
+
+**Branch Effectiveness** provides misprediction rates and MPKI, while **ITLB/DTLB Effectiveness** measures translation lookaside buffer efficiency. **Cache Effectiveness** groups (L1I/L1D/L2/LL) deliver cache hit ratios and MPKI across the memory hierarchy. Additionally, **Operation Mix** breaks down instruction types (SIMD, integer, load/store), and **Cycle Accounting** tracks frontend versus backend stall percentages.

 ## Apply essential Arm Neoverse PMU counters for analysis
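For a concrete feel of Stage 1, here is a minimal Python sketch of the 8-slot rename-unit accounting. The event names and formulas follow the general pattern of Arm's published Neoverse top-down material and are assumptions here, not the exact formula table from this Learning Path; the counter values are invented.

```python
# Sketch of Arm Neoverse Stage 1 top-down categories, assuming 8-slot
# rename-unit accounting. Event names and formulas are assumptions based on
# the general Neoverse pattern; counter values are invented for illustration.
cpu_cycles = 1_000_000_000
stall_slot_frontend = 1_600_000_000
stall_slot_backend = 2_400_000_000
op_retired = 3_200_000_000
op_spec = 3_600_000_000

# Eight rename slots per cycle.
slots = 8 * cpu_cycles
unstalled_fraction = 1 - (stall_slot_frontend + stall_slot_backend) / slots

frontend_bound = stall_slot_frontend / slots
backend_bound = stall_slot_backend / slots
retiring = (op_retired / op_spec) * unstalled_fraction
bad_speculation = (1 - op_retired / op_spec) * unstalled_fraction

for name, value in [("Frontend Bound", frontend_bound), ("Backend Bound", backend_bound),
                    ("Retiring", retiring), ("Bad Speculation", bad_speculation)]:
    print(f"{name:16s} {100 * value:5.1f}%")
```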

content/learning-paths/cross-platform/topdown-compare/1c-compare-arch.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ After understanding each architecture's methodology individually, you can now ex
 - Hierarchical analysis: broad classification followed by drill-down into dominant bottlenecks
 - Resource attribution: map performance issues to specific CPU micro-architectural components

-## Compare 4-level hierarchical and 2-stage methodologies
+## Compare multi-level hierarchical and resource groups methodologies

 | Aspect | Intel x86 | Arm Neoverse |
 | :-- | :-- | :-- |
