content/learning-paths/cross-platform/floating-point-behavior/_index.md
6 additions, 5 deletions
@@ -3,22 +3,23 @@ title: Understand floating-point behavior across x86 and Arm architectures
minutes_to_complete: 30

-who_is_this_for: This is an introductory topic for developers who are porting applications from x86 to Arm and want to understand floating-point behavior across these architectures. Both architectures provide reliable and consistent floating-point computation following the IEEE 754 standard.
+who_is_this_for: This is a topic for developers who are porting applications from x86 to Arm and want to understand floating-point behavior across these architectures. Both architectures provide reliable and consistent floating-point computation following the IEEE 754 standard.

learning_objectives:
- Understand that Arm and x86 produce identical results for all well-defined floating-point operations.
- Recognize that differences only occur in special undefined cases permitted by IEEE 754.
-- Learn best practices for writing portable floating-point code across architectures.
-- Apply appropriate precision levels for portable results.
+- Learn to recognize floating-point differences and make your code portable across architectures.

title: Precision and floating-point instruction considerations
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

-## Understanding numerical precision differences in single vs double precision
+When moving from x86 to Arm you may see differences in floating-point behavior. Understanding these differences may require digging deeper into the details, including the precision and the floating-point instructions.

-This section explores how different levels of floating-point precision can affect numerical results. The differences shown here are not architecture-specific issues, but demonstrate the importance of choosing appropriate precision levels for numerical computations.
+This section explores an example with minor differences in floating-point results, particularly focused on Fused Multiply-Add (FMAC) operations. You can run the example to learn more about how the same C code can produce different results on different platforms.

-###Single precision limitations
+## Single precision and FMAC differences

-Consider two mathematically equivalent functions, `f1()` and `f2()`. While they should theoretically produce the same result, small differences can arise due to the limited precision of floating-point arithmetic.
+Consider two mathematically equivalent functions, `f1()` and `f2()`. While they should theoretically produce the same result, small differences can arise due to the limited precision of floating-point arithmetic and the instructions used.

-The differences shown in this example are due to using single precision (float) arithmetic, not due to architectural differences between Arm and x86. Both architectures handle single precision arithmetic according to IEEE 754.
+When these small differences are amplified, you can observe how Arm and x86 architectures handle floating-point operations differently, particularly with respect to FMAC (Fused Multiply-Add) operations. The example shows the Clang compiler on Arm using FMAC instructions by default, which can lead to slightly different results compared to x86, which is not using FMAC instructions.

Functions `f1()` and `f2()` are mathematically equivalent. You would expect them to return the same value given the same input.

-Use an editor to copy and paste the C++ code below into a file named `single-precision.cpp`
+Use an editor to copy and paste the C code below into a file named `example.c`

-```cpp
+```c
#include<stdio.h>
#include<math.h>
@@ -42,74 +42,109 @@ int main() {

// Theoretically, result1 and result2 should be the same
float difference = result1 - result2;
-// Multiply by a large number to amplify the error
+
+// Multiply by a large number to amplify the error - using single precision (float)
+// This is where architecture differences occur due to FMAC instructions
-printf("Final result after magnification: %.10f\n", final_result);
+printf("Final result after magnification (float): %.10f\n", final_result);
+printf("Final result after magnification (double): %.10f\n", final_result_double);

return 0;
}
```

+You need access to an Arm and x86 Linux computer to compare the results. The output below is from Ubuntu 24.04 using Clang. The Clang version is 18.1.3.
+
Compile and run the code on both x86 and Arm with the following command:

```bash
-g++ -g single-precision.cpp -o single-precision
-./single-precision
+clang -g example.c -o example -lm
+./example
```
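
To see whether fused multiply-add instructions were actually emitted, one option (not part of the Learning Path, and assuming a Linux system with binutils installed) is to disassemble the binary and search for fused multiply-add mnemonics such as `fmadd`:

```bash
# Sketch only: list fused multiply-add instructions in the Arm binary, if any.
# An empty result suggests the compiler did not fuse the multiply and add.
objdump -d example | grep -iE 'fmadd|fmla'
```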

-Output running on x86:
+The output running on x86:

```output
f1(1.000000e-08) = 0.0000000000
f2(1.000000e-08) = 0.0000000050
Difference (f1 - f2) = -4.9999999696e-09
-Final result after magnification: -0.4999000132
+Final result after magnification (float): -0.4999000132
+Final result after magnification (double): -0.4998999970
```

-Output running on Arm:
+The output running on Arm:

```output
f1(1.000000e-08) = 0.0000000000
f2(1.000000e-08) = 0.0000000050
Difference (f1 - f2) = -4.9999999696e-09
-Final result after magnification: -0.4998999834
+Final result after magnification (float): -0.4998999834
+Final result after magnification (double): -0.4998999970
```

-Depending on your compiler and library versions, you may get the same output on both systems. You can also use the `clang` compiler and see if the output matches.
+Notice that the double precision results are identical across platforms, while the single precision results differ.
+
+You can disable the fused multiply-add on Arm with a compiler flag:
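
The exact command is elided from this diff; a minimal sketch, assuming the same `example.c` and the Clang toolchain used earlier, would be:

```bash
# Sketch only: disable floating-point contraction so the multiply and the add
# are rounded separately, matching the default x86 behavior shown above.
clang -g -ffp-contract=off example.c -o example -lm
./example
```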
-1. Different square root algorithms: x86 and Arm use different hardware and library implementations for `sqrtf(1 + 1e-8)`
+{{% notice Note %}}
+On Ubuntu 24.04 the GNU Compiler, `gcc`, produces the same result as x86 and does not use the `fmadd` instruction. Be aware that corner case examples like this may change in future compiler versions.
+{{% /notice %}}

-2. Tiny implementation differences get amplified. The difference between the two `sqrtf()`results is only about 3e-10, but this gets multiplied by 100,000,000, making it visible in the final result.
+## Techniques for consistent results

-3. Both `f1()` and `f2()` use `sqrtf()`. Even though `f2()` is more numerically stable, both functions call `sqrtf()` with the same input, so they both inherit the same architecture-specific square root result.
+You can make the results consistent across platforms in several ways:

-4. Compiler and library versions may produce different output due to different implementations of library functions such as `sqrtf()`.
+- Use double precision for critical calculations by changing `100000000.0f` to `100000000.0` (double precision).

-The final result is that x86 and Arm libraries compute `sqrtf(1.00000001)` with tiny differences in the least significant bits. This is normal and expected behavior and IEEE 754 allows for implementation variations in transcendental functions like square root, as long as they stay within specified error bounds.
+- Disable fused multiply-add operations using the `-ffp-contract=off` compiler flag.

-The very small difference you see is within acceptable floating-point precision limits.
+- Use the compiler flag `-ffp-contract=fast` to enable fused multiply-add on x86.
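
As a minimal sketch of the first technique in the list above (not the Learning Path's full program; the starting value of `difference` is hard-coded here purely for illustration), changing the literal from `100000000.0f` to `100000000.0` promotes the magnification step to double precision:

```c
#include <stdio.h>

int main(void) {
    /* Hard-coded for illustration only; in the full example this value
       comes from f1() - f2(). */
    float difference = -4.9999999696e-09f;

    /* Single-precision magnification: float multiplied by a float literal */
    float final_result = difference * 100000000.0f;

    /* Double-precision magnification: the double literal promotes the
       multiplication to double precision */
    double final_result_double = difference * 100000000.0;

    printf("float magnification:  %.10f\n", final_result);
    printf("double magnification: %.10f\n", final_result_double);
    return 0;
}
```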

-###Key takeaways
+## Key takeaways

-- The small differences shown are due to library implementations in single-precision mode, not fundamental architectural differences.
-- Single-precision arithmetic has inherent limitations that can cause small numerical differences.
-- Using numerically stable algorithms, like `f2()`, can minimize error propagation.
-- Understanding [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) is important for writing portable code.
+- Different floating-point behavior between architectures can often be traced to specific hardware features or instructions such as Fused Multiply-Add (FMAC) operations.
+- FMAC performs multiplication and addition with a single rounding step, which can lead to different results compared to separate multiply and add operations.
+- Compilers may use FMAC instructions on Arm by default, but not on x86.
+- To ensure consistent results across platforms, consider using double precision for critical calculations and controlling compiler optimizations with flags like `-ffp-contract=off` and `-ffp-contract=fast`.
+- Understanding [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) remains important for writing portable code.
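
As a hedged illustration of the single-rounding behavior described in the takeaways above (this example is not part of the Learning Path; the values are chosen so the effect is visible in single precision), the standard `fmaf()` function from `<math.h>` computes `a*b + c` with one rounding, while a separate multiply and add rounds twice:

```c
/* Build with contraction disabled so the separate multiply and add is not
   itself fused, for example:
   clang -ffp-contract=off fma-demo.c -o fma-demo -lm
   (the file name is illustrative). */
#include <stdio.h>
#include <math.h>

int main(void) {
    float a = 1.0f + 0x1p-12f;      /* 1 + 2^-12, exactly representable */
    float c = -(1.0f + 0x1p-11f);   /* cancels the rounded product */

    /* Separate operations: a*a = 1 + 2^-11 + 2^-24 is rounded to 1 + 2^-11,
       so the 2^-24 term is lost before the addition. */
    float p = a * a;
    float separate = p + c;          /* 0.0 */

    /* Fused multiply-add: a*a + c is computed exactly, then rounded once. */
    float fused = fmaf(a, a, c);     /* 2^-24, about 5.96e-08 */

    printf("separate multiply-add: %.10e\n", separate);
    printf("fused multiply-add:    %.10e\n", fused);
    return 0;
}
```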

-By adopting best practices and appropriate precision levels, developers can ensure consistent results across platforms.
+If you see differences in floating-point results, it typically means you need to look a little deeper to find the causes.

-Continue to the next section to see how precision impacts the results.
+These situations are not common, but it is good to be aware of them as a software developer migrating to the Arm architecture. You can be confident that floating-point on Arm behaves predictably and that you can get consistent results across multiple architectures.
content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/1-overview.md
9 additions, 7 deletions
@@ -6,32 +6,34 @@ weight: 2
layout: learningpathall
---

-## TinyML
+## Overview

This Learning Path is about TinyML. It is a starting point for learning how innovative AI technologies can be used on even the smallest of devices, making Edge AI more accessible and efficient. You will learn how to set up your host machine to facilitate compilation and ensure smooth integration across devices.

This section provides an overview of the domain with real-life use cases and available devices.
+## What is TinyML?
+

TinyML represents a significant shift in Machine Learning deployment. Unlike traditional Machine Learning, which typically depends on cloud-based servers or high-performance hardware, TinyML is tailored to function on devices with limited resources, constrained memory, low power, and fewer processing capabilities.

TinyML has gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems.

-###Benefits and applications
+## Benefits and applications

The benefits of TinyML align well with the Arm architecture, which is widely used in IoT, mobile devices, and edge AI deployments.

Here are some of the key benefits of TinyML on Arm:

--**Power Efficiency**: TinyML models are designed to be extremely power-efficient, making them ideal for battery-operated devices like sensors, wearables, and drones.
+- Power efficiency: TinyML models are designed to be extremely power-efficient, making them ideal for battery-operated devices like sensors, wearables, and drones.

--**Low Latency**: AI processing happens on-device, so there is no need to send data to the cloud, which reduces latency and enables real-time decision-making.
+- Low latency: AI processing happens on-device, so there is no need to send data to the cloud, which reduces latency and enables real-time decision-making.

--**Data Privacy**: With on-device computation, sensitive data remains local, providing enhanced privacy and security. This is a priority in healthcare and personal devices.
+- Data privacy: with on-device computation, sensitive data remains local, providing enhanced privacy and security. This is a priority in healthcare and personal devices.

--**Cost-Effective**: Arm devices, which are cost-effective and scalable, can now handle sophisticated Machine Learning tasks, reducing the need for expensive hardware or cloud services.
+- Cost-effective: Arm devices, which are cost-effective and scalable, can now handle sophisticated machine learning tasks, reducing the need for expensive hardware or cloud services.

--**Scalability**: With billions of Arm devices in the market, TinyML is well-suited for scaling across industries, enabling widespread adoption of AI at the edge.
+- Scalability: with billions of Arm devices in the market, TinyML is well-suited for scaling across industries, enabling widespread adoption of AI at the edge.

TinyML is being deployed across multiple industries, enhancing everyday experiences and enabling groundbreaking solutions. The table below shows some examples of TinyML applications.