`content/learning-paths/servers-and-cloud-computing/Optimised-libraries-on-Arm/1.md` (16 additions, 6 deletions)
@@ -8,26 +8,36 @@ layout: learningpathall
## Introduction to Performance Libraries
-Performance libraries for Arm CPUs, such as the Arm Performance Libraries (APL), provide highly optimized mathematical functions for scientific computing, similar to how cuBLAS serves GPUs and Intel's MKL serves x86 architectures. These libraries can be linked dynamically at runtime or statically during compilation, offering flexibility in deployment. Generally, minimal source code changes are required to support these libraries, making them easy to integrate. They are designed to support multiple versions of the Arm architecture, including those with NEON and SVE extensions. Performance libraries are crafted through extensive benchmarking and optimization, and can be domain-specific, such as genomics libraries, or produced by Arm for general-purpose computing.
+The C++ Standard Library provides a collection of classes and functions that are essential for everyday programming tasks, such as data structures, algorithms, and input/output operations. It is designed to be versatile and easy to use, ensuring compatibility and portability across different platforms. However, as a result of this portability, the standard library imposes some limitations. Performance-sensitive applications may wish to take maximum advantage of the hardware's capabilities. This is where performance libraries come in.
-ILP64 use 64 bits for representing integers, which are often used for indexing large arrays in scentific computing. In C++ source code we use the `long long` type to specify 64-bit integers. Alternatively, LP64 use 32 bits to present integers which are more common in general purpose applications.
+Performance libraries like OpenRNG are specialized for high-performance computing tasks and are often tailored to the microarchitecture of a specific processor. These libraries are optimized for speed and efficiency, often leveraging hardware-specific features such as vector units to achieve maximum performance. Performance libraries are crafted through extensive benchmarking and optimization, and can be domain-specific, such as genomics libraries, or produced by Arm for general-purpose computing. For example, OpenRNG focuses on generating random numbers quickly and efficiently, which is crucial for simulations and scientific computations, whereas the C++ Standard Library offers a more general-purpose approach with functions like `std::mt19937` for random number generation.
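To make the contrast concrete, the short sketch below uses only the C++ Standard Library's `std::mt19937` mentioned above; it is portable everywhere but is not tuned to any particular microarchitecture. The seed and distribution are arbitrary choices for illustration.

```c++
#include <iostream>
#include <random>

int main() {
    std::mt19937 engine(42);                                // Mersenne Twister, fixed seed for reproducibility
    std::uniform_real_distribution<double> dist(0.0, 1.0);  // uniform doubles in [0, 1)

    for (int i = 0; i < 4; ++i) {
        std::cout << dist(engine) << "\n";                  // portable, general-purpose random numbers
    }
    return 0;
}
```

A library such as OpenRNG targets the same kind of workload, but with generators tuned for the underlying CPU.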
-Open Multi-process is a programming interface for paralleling workloads across many CPU cores on shared memory across multiple platforms (i.e. x86, AArch64 etc.). Programmers would interact primarily through compiler directives, such as `#pragma omp parallel` indicating which section of source code can be run on parallel and which require synchronisation. This learning path does not serve to teach you about OpenMP but presumes the reader is familiar.
+Performance libraries for Arm CPUs, such as the Arm Performance Libraries (APL), provide highly optimized mathematical functions for scientific computing, similar to how cuBLAS is a set of optimised libraries built specifically for NVIDIA GPUs. These libraries can be linked dynamically at runtime or statically during compilation, offering flexibility in deployment. They are designed to support multiple versions of the Arm architecture, including those with NEON and SVE extensions. Generally, minimal source code changes are required to support these libraries, making them easy to integrate.
+### Common Versions of Performance Libraries
+
+Performance libraries are often distributed in the following formats to support various use cases.
+- **ILP64** uses 64 bits for representing integers, which are often used for indexing large arrays in scientific computing. In C++ source code, the `long long` type specifies a 64-bit integer.
+- **LP64** uses 32 bits to represent integers, which is more common in general-purpose applications.
+- **Open Multi-Processing** (OpenMP) is a programming interface for parallelizing workloads across many CPU cores with shared memory, across multiple platforms (for example x86 and AArch64). Programmers interact primarily through compiler directives, such as `#pragma omp parallel`, which indicate which sections of source code can run in parallel and which require synchronisation. This learning path does not teach OpenMP; it presumes the reader is already familiar with it. A brief sketch combining OpenMP with ILP64-style 64-bit indexing follows this list.
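The sketch below is illustrative only: it combines a 64-bit `long long` loop index, the integer width an ILP64-style interface expects, with an OpenMP directive that spreads the loop across the available cores. The array size and variable names are arbitrary.

```c++
#include <omp.h>
#include <iostream>
#include <vector>

int main() {
    const long long n = 10'000'000;            // 64-bit index, as used by ILP64 interfaces
    std::vector<double> data(n, 1.0);

    double sum = 0.0;
    // Distribute the loop iterations across the available CPU cores.
    #pragma omp parallel for reduction(+:sum)
    for (long long i = 0; i < n; ++i) {
        sum += data[i] * 2.0;
    }

    std::cout << "sum = " << sum << " using up to "
              << omp_get_max_threads() << " threads\n";
    return 0;
}
```

With GCC this would be compiled with the `-fopenmp` flag so that the pragma is honoured and the OpenMP runtime is linked.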
Arm Performance Libraries, like the x86 equivalent Intel Math Kernel Library (MKL), provide optimised functions for both ILP64 and LP64, as well as OpenMP or single-threaded implementations. Further, the interface libraries are available as shared libraries for dynamic linking (`*.so`) or as static libraries for static linking (`*.a`).
## Why Multiple Performance Libraries Exist
-A natural source of confusion stems from the plethora of similar seeming performance libraries, for example OpenBLAS, NVIDIA Performance Libraries (NVPL) which have their own implementations for specific functions, for example basic linear algebra subprograms (BLAS). This begs the question which one should a developer use.
+A natural source of confusion stems from the plethora of similar-seeming performance libraries, for example OpenBLAS and the NVIDIA Performance Libraries (NVPL), each of which provides its own implementation of specific functions such as the basic linear algebra subprograms (BLAS). This raises the question: which one should a developer use?
-Multiple performance libraries exist to cater to the diverse needs of different hardware architectures and applications. For instance, Arm performance libraries are optimized for Arm CPUs, leveraging their unique instruction sets and power efficiency. On the other hand, NVIDIA performance libraries for Grace CPU are tailored to maximize the performance of NVIDIA's Grace hardware features specific to their own Neoverse implementation.
+Multiple performance libraries coexist to cater to the diverse needs of different hardware architectures and applications. For instance, the Arm Performance Libraries are optimized for Arm CPUs, leveraging their unique instruction sets and power efficiency. On the other hand, the NVIDIA Performance Libraries for the Grace CPU are tailored to maximize the performance of hardware features specific to NVIDIA's own Neoverse implementation.
- **Hardware Specialization**: Some libraries are designed to be cross-platform, supporting multiple hardware architectures to provide flexibility and broader usability. For example, the OpenBLAS library supports both Arm and x86 architectures, allowing developers to use the same library across different systems.
- **Domain-Specific Libraries**: Libraries are often created to handle specific domains or types of computations more efficiently. For instance, libraries like cuDNN are optimized for deep learning tasks, providing specialized functions that significantly speed up neural network training and inference.
These factors contribute to the existence of multiple performance libraries, each tailored to meet the specific demands of various hardware and applications.
-- **Commercial Libraries**: Alternatively, highly performant libraries require a license to use. This is more common in domain specific libraries such as computations chemistry or fluid dynamics.
+- **Commercial Libraries**: Alternatively, some highly performant libraries require a license to use. This is more common in domain-specific libraries, such as those for computational chemistry or fluid dynamics.
For a directory of optimised libraries produced externally, we recommend looking at the [Arm Ecosystem Dashboard](https://www.arm.com/developer-hub/ecosystem-dashboard/?utm_source=google&utm_medium=cpc&utm_content=text_txt_na_ecodash&utm_term=ecodash&utm_campaign=mk24_developer_devhub_keyword_traffic_na&utm_term=arm%20software&gad_source=1&gclid=Cj0KCQiAwOe8BhCCARIsAGKeD56NbfrF3zq4fw5inKdGQMUZFgPqpfLjupj3KVgBsYu4ko7abMI0ePMaAkHNEALw_wcB). There are useful filters for open-source and commercial implementations.
`content/learning-paths/servers-and-cloud-computing/Optimised-libraries-on-Arm/2.md` (24 additions, 4 deletions)
@@ -8,39 +8,59 @@ layout: learningpathall
## Setting Up Your Environment
+In this initial example we will use an Arm-based AWS `t4g.2xlarge` instance along with the Arm Performance Libraries. For instructions to connect to an AWS instance, please see our [getting started guide](https://learn.arm.com/learning-paths/servers-and-cloud-computing/intro/).
-- Run on Arm CPUs,
+Once connected via `ssh`, install the required packages with the following commands:
```bash
sudo apt update
sudo apt install gcc make
```
-Install Arm performance libraries using the following [installation guide](https://learn.arm.com/install-guides/armpl/)
+Next, install the Arm Performance Libraries using the following [installation guide](https://learn.arm.com/install-guides/armpl/). Alternatively, use the commands below.
Navigate to the `lp64` C source code examples and compile.
```bash
cd $ARMPL_DIR
cd examples_lp64/
sudo -E make c_examples  # -E preserves environment variables such as ARMPL_DIR
```
+Your terminal output should show the examples being compiled, ending with:
+```output
+...
+Test passed OK
+```
+For more information on all the available functions, please refer to the [Arm Performance Libraries Reference Guide](https://developer.arm.com/documentation/101004/latest/).
`content/learning-paths/servers-and-cloud-computing/Optimised-libraries-on-Arm/3.md` (27 additions, 47 deletions)
@@ -8,9 +8,9 @@ layout: learningpathall
## Example Using the Optimised Math Library
-The libamath library from Arm is an optimized subset of the standard library math functions, providing both scalar and vector functions at different levels of precision. It includes vectorized versions (Neon and SVE) of common math functions found in the standard library, such as those in the <cmath> header.
+The libamath library from Arm is an optimized subset of the standard library math functions, providing both scalar and vector functions at different levels of precision. It includes vectorized versions (Neon and SVE) of common math functions found in the standard library, such as those in the `<cmath>` header.
-The trivial snippet below uses the `<cmath>` standard cmath header. Copy and paste the code sample below into a file named `basic_math.cpp`.
+The trivial snippet below uses the standard `<cmath>` header to calculate the exponential of a scalar value. Copy and paste the code sample below into a file named `basic_math.cpp`.
```c++
#include <iostream>
@@ -20,17 +20,21 @@ The trivial snippet below uses the `<cmath>` standard cmath header. Copy and pas
-    double result = exp(random_number); // Use the optimized exp function from libamath
+    double result = exp(random_number); // Use the standard exponential function
    std::cout << "Exponential of " << random_number << " is " << result << std::endl;
    return 0;
}
```
Compile using the following g++ command. We can use the `ldd` command to print the shared objects used for dynamic linking; here we observe that the superset library `libm.so` is linked.
-To use the optimised math library `libamath` requires minimal source code changes, just modifying the include statements to point to the correct header file and additional compiler flags.
+To use the optimised math library `libamath` for our scalar example, only minimal source code changes are required: modify the include statements to point to the correct header file and add extra compiler flags.
+Libamath routines have maximum errors below 4 ULPs, where ULP stands for Unit in the Last Place, the smallest difference between two consecutive floating-point numbers at a given precision. These routines only support the default rounding mode (round-to-nearest, ties-to-even). Therefore, switching from libm to libamath results in a small accuracy loss across a range of routines, similar to other vectorized implementations of these functions.
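To get a feel for what a 4 ULP bound means, the standalone snippet below (not part of this learning path's examples) uses `std::nextafter` to measure the size of one ULP around `exp(1.0)`.

```c++
#include <cmath>
#include <iostream>

int main() {
    double x   = std::exp(1.0);                    // reference value computed by libm
    double ulp = std::nextafter(x, INFINITY) - x;  // distance to the next representable double

    std::cout << "exp(1.0)          = " << x << "\n";
    std::cout << "1 ULP at exp(1.0) = " << ulp << "\n";
    std::cout << "4 ULP error bound = " << 4.0 * ulp << "\n";
    return 0;
}
```

At this magnitude a 4 ULP error is on the order of 1e-15, which is negligible for most simulations but worth checking for numerically sensitive code.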
Copy and paste the following C++ snippet into a file named `optimised_math.cpp`.
@@ -61,9 +67,14 @@ int main() {
Compile using the following g++ command. Again, we can use the `ldd` command to print the shared objects used for dynamic linking; now we can observe that the `libamath.so` shared object is linked.
The naming convention of the Arm Performance Libraries for scalar operations follows that of `libm`. Hence, we are able to simply update the header file and recompile. For vector operations, we can either rely on compiler autovectorisation, whereby the compiler generates the vector code for us; this is the approach used by the Arm Compiler for Linux (ACfL). Alternatively, we can call the vector routines directly, which relies on name mangling. Mangling is a technique used in computer programming to modify the names of vector functions to ensure uniqueness and avoid conflicts. This is particularly important in compiled languages like C++ and in environments where multiple libraries or modules may be used together.
-    std::cout << "Exponential of " << random_number << " is " << result << std::endl;
-    return 0;
-}
-```
-
-```bash
-g++ x.cpp -o x -lamath -lm
-```
+In the context of Arm's AArch64 architecture, vector name mangling follows the specific convention below to differentiate between scalar and vector versions of functions:
+- **Mask**: 'M' for the masked/predicated version, 'N' for unmasked. Only masked routines are defined for SVE, and only unmasked for Neon.
+- **vlen**: an integer representing the vector length, expressed as a number of lanes. For Neon, `<vlen>`='2' in double precision and `<vlen>`='4' in single precision. For SVE, `<vlen>`='x'.
+- **signature**: 'v' for one input floating-point or integer argument, 'vv' for two. More details are in the AArch64 vector function ABI. An illustrative example of calling a routine by its mangled name follows this list.
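As an illustration only, the sketch below declares and calls a Neon vector exponential through a name built from this convention. The specific symbol `_ZGVnN2v_exp` (Neon, unmasked, two double-precision lanes, one vector argument) is an assumption based on the convention above rather than something taken from this learning path; check the symbols your libamath installation actually exports, for example with `nm`, before relying on it.

```c++
#include <arm_neon.h>
#include <iostream>

// Assumed mangled name for the Neon vector version of exp();
// verify against your libamath build before using it in real code.
extern "C" float64x2_t _ZGVnN2v_exp(float64x2_t x);

int main() {
    float64x2_t input  = {1.0, 2.0};            // two double-precision lanes
    float64x2_t result = _ZGVnN2v_exp(input);   // e^1.0 and e^2.0 computed in one call

    std::cout << vgetq_lane_f64(result, 0) << " "
              << vgetq_lane_f64(result, 1) << std::endl;
    return 0;
}
```

If the symbol exists, this would link in the same way as the scalar example, for instance `g++ vector_exp.cpp -o vector_exp -lamath -lm`; with compiler autovectorisation these calls are generated for you instead of being written by hand.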
+Please refer to the [Arm Performance Libraries Reference Guide](https://developer.arm.com/documentation/101004/latest/) for more information.