content/learning-paths/servers-and-cloud-computing/false-sharing-arm-spe/how-to-1.md
---
title: Introduction to Arm SPE and false sharing
weight: 2
### FIXED, DO NOT MODIFY
layout: learningpathall
---
## Introduction to the Arm Statistical Profiling Extension (SPE)
Standard performance tracing relies on counting completed instructions, capturing only architectural instructions without revealing the actual memory addresses, pipeline latencies, or considering micro-operations in flight. Moreover, the “skid” phenomenon where events are falsely attributed to later instructions can mislead developers.
SPE integrates sampling directly into the CPU pipeline, triggering on individual micro-operations rather than retired instructions, thereby eliminating skid and blind spots. Each SPE sample record includes relevant metadata, such as data addresses, per-µop pipeline latency, triggered PMU event masks, and the memory hierarchy source, enabling fine-grained and precise cache analysis.
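For example, a raw SPE capture with `perf record` looks like the following sketch. The `arm_spe_0` event and its `ts_enable`, `load_filter`, and `store_filter` config fields are documented in the perf-arm-spe manual page; the target workload and duration are placeholders:

```shell
# Capture SPE load/store samples system-wide for 5 seconds
# (requires SPE hardware, driver support, and root privileges)
sudo perf record -e arm_spe_0/ts_enable=1,load_filter=1,store_filter=1/ -a -- sleep 5

# Decode the per-sample records, including data addresses and latencies
sudo perf report --stdio
```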
This enables software developers to tune user-space software for characteristics such as memory latency and cache accesses. Importantly, cache statistics are enabled with the Linux Perf cache-to-cache (C2C) utility.
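As a preview of the workflow used later in this Learning Path, a C2C capture and report look like this (`./your_app` is a placeholder for your own binary):

```shell
# Record memory-access samples (backed by SPE on Arm), then summarize
# contended cache lines and the accesses that hit them
sudo perf c2c record -- ./your_app
sudo perf c2c report
```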
Refer to the [Arm SPE whitepaper](https://developer.arm.com/documentation/109429/latest/) for more details.
In this Learning Path, you will use SPE and Perf C2C to diagnose a cache issue for an application running on a Neoverse server.
## False sharing within the cache
Even when two threads touch entirely separate variables, modern processors move data in fixed-size cache lines (typically 64 bytes). If those distinct variables happen to occupy bytes within the same line, every time one thread writes its variable the core’s cache must gain exclusive ownership of the whole line, forcing the other core’s copy to be invalidated. The second thread, still working on its own variable, then triggers a coherence miss to fetch the line back, and the ping-pong pattern repeats. See the illustration below, taken from the Arm SPE whitepaper, for a visual explanation.
Because false sharing hides behind ordinary writes, the easiest time to eliminate it is while reading or refactoring the source code by padding or realigning the offending variables before compilation. In large, highly concurrent codebases, however, data structures are often accessed through several layers of abstraction, and many threads touch memory via indirection, so the subtle cache-line overlap may not surface until profiling or performance counters reveal unexpected coherence misses.
From a source-code perspective nothing is “shared,” but at the hardware level both variables are implicitly coupled by their physical co-location on the same cache line.
## Alignment to cache lines
In C++11, you can manually specify the alignment of an object with the `alignas` specifier. For example, the C++11 source code below manually aligns the `struct` to 64 bytes (the typical cache line size on a modern processor). This ensures that each instance of `AlignedType` is on a separate cache line.
```cpp
#include <atomic>
#include <iostream>

// ...

int main() {
    // ...
    std::atomic<int> c;
    std::atomic<int> d;

    std::cout << "\n\nWithout alignment, variables can occupy the same cache line\n\n";

    // Print out the addresses
    std::cout << "Address of atomic<int> a - " << &a << '\n';
    std::cout << "Address of atomic<int> b - " << &b << '\n';
    // ...
}
```
The example output below shows that the variables `e`, `f`, `g`, and `h` are at least 64 bytes apart in the byte-addressable architecture, whereas the variables `a`, `b`, `c`, and `d` are only 8 bytes apart, occupying the same cache line.
Although this is a contrived example, in a production workload there may be several layers of indirection that unintentionally result in false sharing. For these complex cases, to understand the root cause you will use Perf C2C.
```output
Without alignment, variables can occupy the same cache line
...
Address of AlignedType e - 0xffffeb6c6140
Address of AlignedType f - 0xffffeb6c6100
Address of AlignedType g - 0xffffeb6c60c0
Address of AlignedType h - 0xffffeb6c6080
```
Continue to the next section to learn how to set up a system to run Perf C2C.

---
title: Configure your environment for Arm SPE profiling
weight: 3
### FIXED, DO NOT MODIFY
layout: learningpathall
---
## Select a system with SPE support
SPE requires both hardware and operating system support. Many cloud instances running Linux do not enable SPE-based profiling.
You need to identify a system that supports SPE using the information below.
If you are looking for an AWS system, you can use a `c6g.metal` instance running Amazon Linux 2023 (AL2023).
Check the underlying Neoverse processor and operating system kernel version with the following commands.
```bash
lscpu | grep -i "model name"
uname -r
```
The output includes the CPU type and kernel release version:
```output
Model name: Neoverse-N1
6.1.134-152.225.amzn2023.aarch64
```
Next, install the prerequisite packages using the package manager:
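On Amazon Linux 2023, for example, `perf` is available through `dnf` (the package name is an assumption for AL2023; use your distribution's package manager and package names otherwise):

```shell
# Install the Linux perf tool (Amazon Linux 2023)
sudo dnf install -y perf
```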
Linux Perf is a userspace process and SPE is a hardware feature. The Linux kernel must be compiled with SPE support or the kernel module named `arm_spe_pmu` must be loaded.
Run the following command to load the SPE kernel module:
```bash
sudo modprobe arm_spe_pmu
```
If the module is not available, SPE may still be built into the kernel.
Run this command to check if SPE is included in the kernel:
```bash
ls /sys/bus/event_source/devices/ | grep arm_spe
```
If SPE is available, the output is:
```output
arm_spe_0
```
If the output is blank, then SPE is not available.
## Run Sysreport
You can install and run a Python script named Sysreport to summarize your system's performance profiling capabilities.
Refer to [Get ready for performance analysis with Sysreport](https://learn.arm.com/learning-paths/servers-and-cloud-computing/sysreport/) to learn how to install and run it.
Look at the Sysreport output and confirm SPE is available by checking the `perf sampling` field.
If the printed value is `SPE`, then SPE is available.
```output
...
Performance features:
...
  perf in userspace: disabled
```
## Confirm Arm SPE is available to Perf
Run the following command to confirm SPE is available to Perf:
```bash
sudo perf list "arm_spe*"
```
You should see the output below indicating the PMU event is available.
```output
List of pre-defined events (to be used in -e or -M):
  arm_spe_0//                                        [Kernel PMU event]
```
If `arm_spe` is not available because of your system configuration or if you don't have PMU permission, the `perf c2c` command will fail.
To confirm Perf can access SPE, run:
```bash
perf c2c record
```
The output showing the failure is:
```output
failed: memory events not supported
```
{{% notice Note %}}
If you are unable to use SPE, it may be a restriction based on your cloud instance size or operating system.
Generally, access to a full server (also known as metal instances) with a relatively new kernel is needed for Arm SPE support.
For more information about enabling SPE, see the [perf-arm-spe manual page](https://man7.org/linux/man-pages/man1/perf-arm-spe.1.html).
{{% /notice %}}
Continue to learn how to use Perf C2C on an example application.
content/learning-paths/servers-and-cloud-computing/false-sharing-arm-spe/how-to-3.md
weight: 4
layout: learningpathall
---
## Example code
Use a text editor to copy and paste the C example code below into a file named `false_sharing_example.c`.
The code is adapted from [Joe Mario](https://github.com/joemario/perf-c2c-usage-files) and is discussed thoroughly in the Arm Statistical Profiling Extension Whitepaper.
```c
/*
...
*/

// ...

int main ( int argc, char *argv[] )
{
    // ...
}
```
### Code explanation
The key data structure that occupies the cache is `struct Buf`. With a 64-byte cache line size, each line can hold eight 8-byte `long` integers.
If you do not pass in the `NO_FALSE_SHARING` macro during compilation, the `Buf` data structure contains the elements below. Each structure neatly occupies an entire 64-byte cache line.
However, the 4 readers and 2 locks are now accessing the same cache line.
```output
typedef struct _buf {
...
} buf __attribute__((aligned (64)));
```
Alternatively, if you pass in the `NO_FALSE_SHARING` macro during compilation, the `Buf` structure has a different shape.
The 40 bytes of padding push the reader variables onto a different cache line. However, this comes with a tradeoff: the new `Buf` structure holds one and a half cache lines of fields (12 `long` integers), leaving 25% of the cache space per `Buf` structure unused.
Run both binaries with the command line argument of 1. Both binaries successfully return a 0 exit status, but the binary with false sharing runs almost 2x slower!
```bash
time ./false_sharing 1
```

The timing output looks like:

```output
...
user	0m8.869s
sys	0m0.000s
```
Continue to the next section to learn how to use Perf C2C to analyze the example code.