content/learning-paths/servers-and-cloud-computing/false-sharing-arm-spe/_index.md (3 additions, 7 deletions)
@@ -1,16 +1,12 @@
 ---
-title: Analyze cache behavior with Perf C2C on Arm
-
-draft: true
-cascade:
-  draft: true
+title: Analyze cache behavior with perf c2c on Arm

 minutes_to_complete: 15

-who_is_this_for: This topic is for developers who want to optimize cache access patterns on Arm servers using Perf C2C.
+who_is_this_for: This topic is for performance-oriented developers working on Arm-based cloud or server systems who want to optimize memory access patterns and investigate cache inefficiencies using Perf C2C and Arm SPE.

 learning_objectives:
-    - Avoid false sharing in C++ using memory alignment.
+    - Identify and fix false sharing issues using Perf C2C, a cache line analysis tool.
     - Enable and use the Arm Statistical Profiling Extension (SPE) on Linux systems.
     - Investigate cache line performance with Perf C2C.
content/learning-paths/servers-and-cloud-computing/false-sharing-arm-spe/how-to-1.md (47 additions, 16 deletions)
@@ -1,36 +1,61 @@
 ---
-title: Introduction to Arm SPE and false sharing
+title: Arm Statistical Profiling Extension and false sharing
 weight: 2

 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---

-## Introduction to the Arm Statistical Profiling Extension (SPE)
+## What is the Arm Statistical Profiling Extension (SPE), and what does it do?

-Standard performance tracing relies on counting completed instructions, capturing only architectural instructions without revealing the actual memory addresses, pipeline latencies, or considering micro-operations in flight. Moreover, the “skid” phenomenon where events are falsely attributed to later instructions can mislead developers.
+{{% notice Learning goal%}}
+In this section, you’ll learn how to use SPE to gain low-level insight into how your applications interact with the CPU. You’ll explore how to detect and resolve false sharing. By combining cache line alignment techniques with `perf c2c`, you can identify inefficient memory access patterns and significantly boost CPU performance on Arm-based systems.
+{{% /notice %}}

-SPE integrates sampling directly into the CPU pipeline, triggering on individual micro-operations rather than retired instructions, thereby eliminating skid and blind spots. Each SPE sample record includes relevant metadata, such as data addresses, per-µop pipeline latency, triggered PMU event masks, and the memory hierarchy source, enabling fine-grained and precise cache analysis.
+Arm’s Statistical Profiling Extension (SPE) gives you a powerful way to understand what’s really happening inside your applications at the microarchitecture level.

-This enables software developers to tune user-space software for characteristics such as memory latency and cache accesses. Importantly, cache statistics are enabled with the Linux Perf cache-to-cache (C2C) utility.
+Introduced in Armv8.2, SPE captures a statistical view of how instructions move through the CPU, which allows you to dig into issues like memory access latency, cache misses, and pipeline behavior.

-Please refer to the [Arm SPE white paper](https://developer.arm.com/documentation/109429/latest/) for more details.
+Most Linux profiling tools focus on retired instruction counts, which means they miss key details like memory addresses, cache latency, and micro-operation behavior. This can lead to misleading results, especially due to a phenomenon called “skid,” where events are falsely attributed to later instructions.

-In this Learning Path, you will use SPE and Perf C2C to diagnose a cache issue for an application running on a Neoverse server.
+SPE integrates sampling directly into the CPU pipeline, triggering on individual micro-operations instead of retired instructions. This approach eliminates skid and blind spots. Each SPE sample record includes relevant metadata, such as:

-## False sharing within the cache
+* Data addresses
+* Per-µop pipeline latency
+* Triggered PMU event masks
+* Memory hierarchy source

-Even when two threads touch entirely separate variables, modern processors move data in fixed-size cache lines (nominally 64-bytes). If those distinct variables happen to occupy bytes within the same line, every time one thread writes its variable the core’s cache must gain exclusive ownership of the whole line, forcing the other core’s copy to be invalidated. The second thread, still working on its own variable, then triggers a coherence miss to fetch the line back, and the ping-pong pattern repeats. Please see the illustration below, taken from the Arm SPE white paper, for a visual explanation.
+This enables fine-grained, precise cache analysis.
+SPE helps developers optimize user-space applications by showing where cache latency or memory access delays are happening. Importantly, cache statistics are enabled with the Linux `perf` cache-to-cache (C2C) utility.
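
For orientation while reading this diff, here is a minimal sketch of how SPE-backed cache-line profiling is typically driven from the command line; the workload name is a placeholder, and the exact steps appear later in the Learning Path.

```bash
# Sketch only: record SPE-backed memory samples for a workload, then open the
# cache-to-cache report. Replace ./your_app with the program you want to profile.
sudo perf c2c record -- ./your_app
sudo perf c2c report --stdio
```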

-Because false sharing hides behind ordinary writes, the easiest time to eliminate it is while reading or refactoring the source code by padding or realigning the offending variables before compilation. In large, highly concurrent codebases, however, data structures are often accessed through several layers of abstraction, and many threads touch memory via indirection, so the subtle cache-line overlap may not surface until profiling or performance counters reveal unexpected coherence misses.
+For more information, see the [*Arm Statistical Profiling Extension: Performance Analysis Methodology White Paper*](https://developer.arm.com/documentation/109429/latest/).
+
+In this Learning Path, you will use SPE and `perf c2c` to diagnose a cache issue for an application running on a Neoverse server.
+
+## What is false sharing and why should I care about it?
+
+In large-scale, multithreaded applications, false sharing can degrade performance by introducing hundreds of unnecessary cache line invalidations per second - often with no visible red flags in the source code.
+
+Even when two threads touch entirely separate variables, modern processors move data in fixed-size cache lines, which is typically 64 bytes. If those distinct variables happen to occupy bytes within the same line, every time one thread writes its variable the core’s cache must gain exclusive ownership of the whole line, forcing the other core’s copy to be invalidated.
+
+The second thread, still working on its own variable, then triggers a coherence miss to fetch the line back, and the ping-pong pattern repeats.
+
+The diagram below, taken from the Arm SPE white paper, provides a visual representation of two threads on separate cores alternately gaining exclusive access to the same cache line.
+
+
+
+## Why false sharing is hard to spot and fix
+
+False sharing often hides behind seemingly ordinary writes, making it tricky to catch without tooling. The best time to eliminate it is early, while reading or refactoring code, by padding or realigning variables before compilation. But in large, highly concurrent C++ codebases, memory is frequently accessed through multiple layers of abstraction. Threads may interact with shared data indirectly, causing subtle cache line overlaps that don’t become obvious until performance profiling reveals unexpected coherence misses. Tools like `perf c2c` can help uncover these issues by tracing cache-to-cache transfers and identifying hot memory locations affected by false sharing.

 From a source-code perspective nothing is “shared,” but at the hardware level both variables are implicitly coupled by their physical location.

 ## Alignment to cache lines

-In C++11, you can manually specify the alignment of an object with the `alignas` specifier. For example, the C++11 source code below manually aligns the the `struct` every 64 bytes (typical cache line size on a modern processor). This ensures that each instance of `AlignedType` is on a separate cache line.
+In C++11, you can manually specify the alignment of an object with the `alignas` specifier.
+
+For example, the C++11 source code below manually aligns the `struct` every 64 bytes (typical cache line size on a modern processor). This ensures that each instance of `AlignedType` is on a separate cache line.
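
Since the diff only shows an excerpt of that source, here is a minimal, illustrative sketch of an `alignas`-based type; the `AlignedType` definition in the full Learning Path example may differ in its members.

```cpp
#include <atomic>

// Illustrative sketch: alignas(64) forces each instance onto its own
// 64-byte cache line, so two adjacent instances never falsely share a line.
struct alignas(64) AlignedType {
    std::atomic<int> val{0};
};
```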
-// If we create four atomic integers like this, there's a high probability
+// If you create four atomic integers like this, there's a high probability
 // they'll wind up next to each other in memory
 std::atomic<int> a;
 std::atomic<int> b;
@@ -74,9 +99,9 @@ int main() {
 }
 ```

-The example output below shows the variables e, f, g and h occur at least 64-bytes apart in the byte-addressable architecture. Whereas variables a, b, c and d occur 8 bytes apart, occupying the same cache line.
+The output below shows that the variables e, f, g, and h occur at least 64 bytes apart in the byte-addressable architecture, whereas variables a, b, c, and d occur 8 bytes apart, occupying the same cache line.

-Although this is a contrived example, in a production workload there may be several layers of indirection that unintentionally result in false sharing. For these complex cases, to understand the root cause you will use Perf C2C.
+Although this is a simplified example, in a production workload there might be several layers of indirection that unintentionally result in false sharing. For these complex cases, use `perf c2c` to trace cache line interactions and pinpoint the root cause of performance issues.

 ```output
 Without Alignment can occupy same cache line
@@ -96,4 +121,10 @@ Address of AlignedType g - 0xffffeb6c60c0
 Address of AlignedType h - 0xffffeb6c6080
 ```

-Continue to the next section to learn how to set up a system to run Perf C2C.
+## Summary
+
+In this section, you explored what Arm SPE is and why it offers a deeper, more accurate view of application performance. You also examined how a subtle issue like false sharing can impact multithreaded code, and how to mitigate it using data alignment techniques in C++.
+
+Next, you'll set up your environment and use `perf c2c` to capture and analyze real-world cache behavior on an Arm Neoverse system.
content/learning-paths/servers-and-cloud-computing/false-sharing-arm-spe/how-to-2.md (24 additions, 19 deletions)
@@ -1,20 +1,23 @@
 ---
-title: Configure your environment for Arm SPE profiling
+title: Set up your environment for Arm SPE and perf c2c profiling
 weight: 3

 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
 ## Select a system with SPE support

-SPE requires both hardware and operating system support. Many cloud instances running Linux do not enable SPE-based profiling.
+{{% notice Learning goal%}}
+Before you can start profiling cache behavior with Arm SPE and `perf c2c`, your system needs to meet a few requirements. In this section, you’ll learn how to check whether your hardware and kernel support Arm SPE, install the necessary tools, and validate that Linux perf can access the correct performance monitoring events. By the end, your environment will be ready to record and analyze memory access patterns using `perf c2c` on an Arm Neoverse system.
+{{% /notice %}}
+
+SPE requires support from both your hardware and the operating system. Many cloud instances running Linux do not enable SPE-based profiling.

 You need to identify a system that supports SPE using the information below.

 If you are looking for an AWS system, you can use a `c6g.metal` instance running Amazon Linux 2023 (AL2023).

-Check the underlying Neoverse processor and operating system kernel version with the following commands.
+Check the underlying Neoverse processor and operating system kernel version with the following commands:

 ```bash
 lscpu | grep -i "model name"
@@ -23,7 +26,7 @@ uname -r
 The output includes the CPU type and kernel release version:

-Linux Perf is a userspace process and SPE is a hardware feature. The Linux kernel must be compiled with SPE support or the kernel module named `arm_spe_pmu` must be loaded.
+Linux perf is a userspace process and SPE is a hardware feature. The Linux kernel must be compiled with SPE support or the kernel module named `arm_spe_pmu` must be loaded.

 Run the following command to confirm if the SPE kernel module is loaded:

 ```bash
 sudo modprobe arm_spe_pmu
 ```

-If the module is not loaded (blank output), SPE may still be available.
+If the module is not loaded (and there is blank output), SPE might still be available.

 Run this command to check if SPE is included in the kernel:

 ```bash
 ls /sys/bus/event_source/devices/ | grep arm_spe
 ```

-If SPE is available, the output is:
+If SPE is available, the output you will see is:

 ```output
 arm_spe_0
@@ -63,11 +66,11 @@ If the output is blank then SPE is not available.

 You can install and run a Python script named Sysreport to summarize your system's performance profiling capabilities.

-Refer to[Get ready for performance analysis with Sysreport](https://learn.arm.com/learning-paths/servers-and-cloud-computing/sysreport/) to learn how to install and run it.
+See the Learning Path [Get ready for performance analysis with Sysreport](https://learn.arm.com/learning-paths/servers-and-cloud-computing/sysreport/) to learn how to install and run it.

 Look at the Sysreport output and confirm SPE is available by checking the `perf sampling` field.

-If the printed value is SPE then SPE is available.
+If the printed value is SPE, then SPE is available.

 ```output
 ...
@@ -83,9 +86,9 @@ Performance features:
 perf in userspace: disabled
 ```

-## Confirm Arm SPE is available to Perf
+## Confirm Arm SPE is available to perf

-Run the following command to confirm SPE is available to Perf:
+Run the following command to confirm SPE is available to `perf`:

 ```bash
 sudo perf list "arm_spe*"
@@ -99,32 +102,34 @@ List of pre-defined events (to be used in -e or -M):
-If `arm_spe` is not available because of your system configuration or if you don't have PMU permission, the `perf c2c` command will fail.
+If `arm_spe` isn’t available due to your system configuration or limited PMU access, the `perf c2c` command will fail.
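
As a side note that is not part of the original steps: if the failure stems from PMU permissions rather than missing hardware support, one setting worth checking (subject to your site's security policy) is the kernel's `perf_event_paranoid` level.

```bash
# Illustrative only: show the current perf event paranoia level, then relax it.
# Running perf with sudo is an alternative to changing this system-wide setting.
cat /proc/sys/kernel/perf_event_paranoid
sudo sysctl -w kernel.perf_event_paranoid=0
```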

-To confirm Perf can access SPE run:
+To confirm `perf` can access SPE, run:

 ```bash
 perf c2c record
 ```

-The output showing the failure is:
+If SPE access is blocked, you’ll see output like this:

 ```output
 failed: memory events not supported
 ```

 {{% notice Note %}}
-If you are unable to use SPE it may be a restriction based on your cloud instance size or operating system.
+If you are unable to use SPE, it might be a restriction based on your cloud instance size or operating system.

-Generally, access to a full server (also known as metal instances) with a relatively new kernel is needed for Arm SPE support.
+Generally, access to a full server (also known as a metal instance) with a relatively new kernel is required for Arm SPE support.

 For more information about enabling SPE, see the [perf-arm-spe manual page](https://man7.org/linux/man-pages/man1/perf-arm-spe.1.html).
 {{% /notice %}}

-Continue to learn how to use Perf C2C on an example application.
+## Summary
+
+You've confirmed that your system supports Arm SPE, installed the necessary tools, and verified that `perf` can access SPE events. You're now ready to start collecting detailed performance data using `perf c2c`. In the next section, you’ll run a real application and use `perf c2c` to capture cache sharing behavior and uncover memory performance issues.
content/learning-paths/servers-and-cloud-computing/false-sharing-arm-spe/how-to-3.md (9 additions, 4 deletions)
@@ -1,5 +1,5 @@
 ---
-title: False Sharing Example
+title: False sharing example
 weight: 4

 ### FIXED, DO NOT MODIFY
@@ -8,6 +8,10 @@ layout: learningpathall

 ## Example code

+{{% notice Learning Goal%}}
+The example code in this section demonstrates how false sharing affects performance by comparing two multithreaded programs: one with cache-aligned data structures, and one without. You’ll compile and run both versions, observe the runtime difference, and learn how memory layout affects cache behavior. This sets the stage for analyzing performance with `perf c2c` in the next section.
+{{% /notice %}}
+
 Use a text editor to copy and paste the C example code below into a file named `false_sharing_example.c`.

 The code is adapted from [Joe Mario](https://github.com/joemario/perf-c2c-usage-files) and is discussed thoroughly in the Arm Statistical Profiling Extension Whitepaper.
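
For readers following along from this diff, here is a sketch of how the two variants might be built and timed; the exact compiler flags used in the Learning Path may differ, and only the file name and the `NO_FALSE_SHARING` macro come from the text above.

```bash
# Illustrative build-and-run commands (not taken verbatim from the Learning Path).
gcc -O2 -pthread false_sharing_example.c -o false_sharing
gcc -O2 -pthread -DNO_FALSE_SHARING false_sharing_example.c -o no_false_sharing

time ./false_sharing
time ./no_false_sharing
```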
@@ -285,7 +289,7 @@ int main ( int argc, char *argv[] )

 ### Code explanation

-The key data structure that occupies the cache is `struct Buf`. With a 64-byte cache line size, each line can hold 8, 8-byte `long` integers.
+The key data structure that occupies the cache is `struct _buf`. With a 64-byte cache line size, each line can hold eight 8-byte `long` integers.

 If you do not pass in the `NO_FALSE_SHARING` macro during compilation, the `Buf` data structure will contain the elements below. Each structure neatly occupies the entire 64-byte cache line.

@@ -306,7 +310,7 @@ typedef struct _buf {

 Alternatively, if you pass in the `NO_FALSE_SHARING` macro during compilation, the `Buf` structure has a different shape.

-The 40 bytes of padding pushes the reader variables onto a different cache line. However, notice that this is with the tradeoff the new `Buf` structures occupies multiple cache lines (12 long integers). Therefore it leaves unused cache space of 25% per `Buf` structure.
+The 40 bytes of padding pushes the reader variables onto a different cache line. However, this comes with a trade-off: the new `Buf` structure occupies multiple cache lines (12 long integers), leaving about 25% of the cache space per `Buf` structure unused. This trade-off uses more memory but eliminates false sharing, improving performance by reducing cache line contention.
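
As an aside before the example's own (truncated) listing below: this kind of layout can also be checked at compile time with a static assertion. The sketch that follows is illustrative only; it is not the Learning Path's `struct _buf`, whose layout uses 40 bytes of padding and differs from this simplified version.

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_LINE_SIZE 64

/* Illustrative sketch, not the example's struct _buf: padding the writer-owned
 * fields out to a full cache line keeps the reader-owned fields on a separate
 * line, trading extra memory for the absence of false sharing. */
typedef struct {
    long lock0;                                   /* written by one thread    */
    long lock1;                                   /* written by another       */
    char pad[CACHE_LINE_SIZE - 2 * sizeof(long)]; /* fill out the first line  */
    long reader1;                                 /* readers start on the     */
    long reader2;                                 /* next cache line          */
    long reader3;
    long reader4;
} padded_buf;

static_assert(offsetof(padded_buf, reader1) == CACHE_LINE_SIZE,
              "reader fields must begin on their own cache line");
```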

 ```output
 typedef struct _buf {
@@ -345,5 +349,6 @@ user 0m8.869s
 sys 0m0.000s
 ```

-Continue to the next section to learn how to use Perf C2C to analyze the example code.
+## Summary
+
+In this section, you ran a hands-on C example to see how false sharing can significantly degrade performance in multithreaded applications. By comparing two versions of the same program, one with aligned memory access and one without, you saw how something as subtle as cache line layout can result in a 2x difference in runtime. This practical example sets the foundation for using `perf c2c` to capture and analyze real cache line sharing behavior in the next section.