From 9c1b605e1bb1a58862058c77f0f41463b807af67 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 1 Apr 2025 21:15:52 +0000 Subject: [PATCH 01/14] Starting content review --- .../floating-point-rounding-errors/_index.md | 8 ++++---- .../floating-point-rounding-errors/how-to-1.md | 3 +++ 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md index f27cf6146b..15142de6fd 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md @@ -1,5 +1,5 @@ --- -title: Learn about floating point rounding on Arm +title: Debugging floating point differences between x86 and Arm draft: true cascade: @@ -10,9 +10,9 @@ minutes_to_complete: 30 who_is_this_for: Developers porting applications from x86 to Arm who observe different floating point values on each platform. learning_objectives: - - Understand the differences between floating point numbers on x86 and Arm. - - Understand factors that affect floating point behavior. - - How to use compiler flags to produce predictable behavior. + - Identify the key differences in floating point behavior between x86 and Arm. + - Recognize the impact of compiler optimizations and instruction sets on floating point results. + - Add compiler flags to make floating point behavior more predictable across platforms. prerequisites: - Access to an x86 and an Arm Linux machine. 
diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md index 3d847e785a..8e774a948e 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md @@ -5,6 +5,9 @@ weight: 2 ### FIXED, DO NOT MODIFY layout: learningpathall --- +When porting applications from x86 to Arm, developers often run into unexpected floating point differences. These aren’t bugs — they’re side effects of platform-specific optimizations, precision handling, and compiler behavior. In this Learning Path, you’ll learn why this happens and how to control it using compiler flags and best practices. + + ## Review of floating point numbers From 3e1818ac1ddce734fe20dbfaf9d74428f05b6420 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 2 Apr 2025 10:09:22 +0000 Subject: [PATCH 02/14] Content review --- .../floating-point-rounding-errors/_index.md | 14 +++++++------- .../floating-point-rounding-errors/how-to-1.md | 6 ++++++ 2 files changed, 13 insertions(+), 7 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md index 15142de6fd..20d18eb446 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md @@ -1,5 +1,5 @@ --- -title: Debugging floating point differences between x86 and Arm +title: Explore floating-point differences between x86-64 and AArch64 draft: true cascade: @@ -7,16 +7,16 @@ cascade: minutes_to_complete: 30 -who_is_this_for: Developers porting applications from x86 to Arm who observe different floating point values on each platform. 
+who_is_this_for: This is an introductory topic for developers who are porting applications from x86-64 (also known as "x86") to AArch64 (also known as "Arm64; the Arm 64-bit architecture) and want to understand how floating-point behavior can differ between these architectures — particularly in the context of numerical consistency, performance, and debugging subtle bugs. learning_objectives: - - Identify the key differences in floating point behavior between x86 and Arm. - - Recognize the impact of compiler optimizations and instruction sets on floating point results. - - Add compiler flags to make floating point behavior more predictable across platforms. + - Identify key differences in floating-point behavior between x86-64 and AArch64. + - Recognize the impact of compiler optimizations and instruction sets on floating-point results. + - Apply compiler flags to ensure consistent floating-point behavior across platforms. prerequisites: - - Access to an x86 and an Arm Linux machine. - - Basic understanding of floating point numbers. + - Access to an x86-64 and an AArch64 Linux machine. + - Familiarity with floating-point numbers. author: Kieran Hejmadi diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md index 8e774a948e..6cd7c27b21 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md @@ -5,6 +5,10 @@ weight: 2 ### FIXED, DO NOT MODIFY layout: learningpathall --- + +Key summary: Understand the basics of IEEE 754 floating-point formats and rounding behavior to avoid surprises when moving code between architectures. + + When porting applications from x86 to Arm, developers often run into unexpected floating point differences. 
These aren’t bugs — they’re side effects of platform-specific optimizations, precision handling, and compiler behavior. In this Learning Path, you’ll learn why this happens and how to control it using compiler flags and best practices. @@ -42,3 +46,5 @@ Key takeaways: - Larger numbers have bigger ULPs due to wider spacing between values. - Smaller numbers have smaller ULPs, reducing quantization error. - ULP behavior impacts numerical stability and precision in computations. + +In the next section, you will explore how x86 and Arm differ in how they implement and optimize floating-point operations — and why this matters for portable, accurate software. From 478c3b3dfa183647e09b5a38ec24c6596796b640 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 2 Apr 2025 10:26:13 +0000 Subject: [PATCH 03/14] Content updates --- .../floating-point-rounding-errors/_index.md | 2 +- .../floating-point-rounding-errors/how-to-1.md | 7 ++++--- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md index 20d18eb446..6ef840c48e 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md @@ -7,7 +7,7 @@ cascade: minutes_to_complete: 30 -who_is_this_for: This is an introductory topic for developers who are porting applications from x86-64 (also known as "x86") to AArch64 (also known as "Arm64; the Arm 64-bit architecture) and want to understand how floating-point behavior can differ between these architectures — particularly in the context of numerical consistency, performance, and debugging subtle bugs. 
+who_is_this_for: This is an introductory topic for developers who are porting applications from x86-64 (also known as "x86") to AArch64 (also known as "Arm64; the Arm 64-bit architecture) and want to understand how floating-point behavior can differ between these architectures - particularly in the context of numerical consistency, performance, and debugging subtle bugs. learning_objectives: - Identify key differences in floating-point behavior between x86-64 and AArch64. diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md index 6cd7c27b21..3f6d0386e4 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md @@ -1,13 +1,14 @@ --- -title: Floating Point Representations +title: Floating Point Representation weight: 2 ### FIXED, DO NOT MODIFY layout: learningpathall --- -Key summary: Understand the basics of IEEE 754 floating-point formats and rounding behavior to avoid surprises when moving code between architectures. - +{{% notice Key Summary%}} +Learn the basics of IEEE 754 floating-point formats and rounding behavior to avoid surprises when moving code between architectures. +{{% /notice %}} When porting applications from x86 to Arm, developers often run into unexpected floating point differences. These aren’t bugs — they’re side effects of platform-specific optimizations, precision handling, and compiler behavior. In this Learning Path, you’ll learn why this happens and how to control it using compiler flags and best practices. 
From e279d1cb737d94595bf3aac30f7b2aa54a6bc355 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 2 Apr 2025 10:29:57 +0000 Subject: [PATCH 04/14] fixing yaml error --- .../cross-platform/floating-point-rounding-errors/how-to-1.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md index 3f6d0386e4..161236a274 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md @@ -1,5 +1,5 @@ --- -title: Floating Point Representation +title: "Floating Point Representation" weight: 2 ### FIXED, DO NOT MODIFY From 674114219b58dcb540c4470533574038c20b0121 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 2 Apr 2025 15:18:44 +0000 Subject: [PATCH 05/14] Content review --- .../floating-point-rounding-errors/_index.md | 8 +++++--- .../floating-point-rounding-errors/how-to-1.md | 2 +- 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md index 6ef840c48e..0f8feebfbd 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md @@ -7,12 +7,14 @@ cascade: minutes_to_complete: 30 -who_is_this_for: This is an introductory topic for developers who are porting applications from x86-64 (also known as "x86") to AArch64 (also known as "Arm64; the Arm 64-bit architecture) and want to understand how floating-point behavior can differ between these architectures - particularly in the context of 
numerical consistency, performance, and debugging subtle bugs. +who_is_this_for: This is an introductory topic for developers who are porting applications from x86-64 (also known as x86) to AArch64 (also known as Arm64) and want to understand how floating-point behavior can differ between these architectures - particularly in the context of numerical consistency, performance, and debugging subtle bugs. learning_objectives: - - Identify key differences in floating-point behavior between x86-64 and AArch64. + - Identify key differences in floating-point behavior between the x86-64 and AArch64 + architectures. - Recognize the impact of compiler optimizations and instruction sets on floating-point results. - - Apply compiler flags to ensure consistent floating-point behavior across platforms. + - Apply compiler flags and best practices to ensure consistent floating-point behavior across + platforms. prerequisites: - Access to an x86-64 and an AArch64 Linux machine. diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md index 161236a274..b127dde669 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md @@ -7,7 +7,7 @@ layout: learningpathall --- {{% notice Key Summary%}} -Learn the basics of IEEE 754 floating-point formats and rounding behavior to avoid surprises when moving code between architectures. +This section describes the basics of IEEE 754 floating-point formats and rounding behavior so that you can avoid surprises when moving code between architectures. {{% /notice %}} When porting applications from x86 to Arm, developers often run into unexpected floating point differences. These aren’t bugs — they’re side effects of platform-specific optimizations, precision handling, and compiler behavior. 
In this Learning Path, you’ll learn why this happens and how to control it using compiler flags and best practices. From 91268d9976831a5225d2911e5b869ed61ad17ccb Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 2 Apr 2025 15:25:12 +0000 Subject: [PATCH 06/14] Content review --- .../cross-platform/floating-point-rounding-errors/_index.md | 3 +-- .../cross-platform/floating-point-rounding-errors/how-to-1.md | 4 ---- 2 files changed, 1 insertion(+), 6 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md index 0f8feebfbd..cb29024008 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md @@ -10,8 +10,7 @@ minutes_to_complete: 30 who_is_this_for: This is an introductory topic for developers who are porting applications from x86-64 (also known as x86) to AArch64 (also known as Arm64) and want to understand how floating-point behavior can differ between these architectures - particularly in the context of numerical consistency, performance, and debugging subtle bugs. learning_objectives: - - Identify key differences in floating-point behavior between the x86-64 and AArch64 - architectures. + - Identify key differences in floating-point behavior between the x86-64 and AArch64 architectures. - Recognize the impact of compiler optimizations and instruction sets on floating-point results. - Apply compiler flags and best practices to ensure consistent floating-point behavior across platforms. 
diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md index b127dde669..b1e28f1a2d 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md @@ -10,10 +10,6 @@ layout: learningpathall This section describes the basics of IEEE 754 floating-point formats and rounding behavior so that you can avoid surprises when moving code between architectures. {{% /notice %}} -When porting applications from x86 to Arm, developers often run into unexpected floating point differences. These aren’t bugs — they’re side effects of platform-specific optimizations, precision handling, and compiler behavior. In this Learning Path, you’ll learn why this happens and how to control it using compiler flags and best practices. - - - ## Review of floating point numbers If you are unfamiliar with floating point number representation, you can review [Learn about integer and floating-point conversions](/learning-paths/cross-platform/integer-vs-floats/introduction-integer-float-types/). It covers different data types and explains data type conversions. 
From eda2bd1c808ca8d3ea9110d2204dba0be34edc56 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 2 Apr 2025 15:57:04 +0000 Subject: [PATCH 07/14] Further updates --- .../how-to-1.md | 28 ++++++++++--------- .../how-to-2.md | 6 ++-- .../how-to-4.md | 2 +- 3 files changed, 19 insertions(+), 17 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md index b1e28f1a2d..6e6de8c453 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md @@ -1,31 +1,28 @@ --- -title: "Floating Point Representation" +title: "Floating-Point Representation" weight: 2 ### FIXED, DO NOT MODIFY layout: learningpathall --- -{{% notice Key Summary%}} -This section describes the basics of IEEE 754 floating-point formats and rounding behavior so that you can avoid surprises when moving code between architectures. -{{% /notice %}} - -## Review of floating point numbers +## Review of floating-point numbers -If you are unfamiliar with floating point number representation, you can review [Learn about integer and floating-point conversions](/learning-paths/cross-platform/integer-vs-floats/introduction-integer-float-types/). It covers different data types and explains data type conversions. +If you are unfamiliar with floating-point number representation, you can review [Learn about integer and floating-point conversions](/learning-paths/cross-platform/integer-vs-floats/introduction-integer-float-types/). It covers different data types and explains data type conversions. -Floating-point numbers are a fundamental representation of real numbers in computer systems, enabling efficient storage and computation of decimal values with varying degrees of precision. 
In C/C++, floating point variables are created with keywords such as `float` or `double`. The IEEE 754 standard, established in 1985, is the most widely used format for floating-point arithmetic, ensuring consistency across different hardware and software implementations. IEEE 754 defines two primary formats: single-precision (32-bit) and double-precision (64-bit). Each floating-point number consists of three components: -- **sign bit**. (Determining positive or negative value) -- **exponent** (defining the scale or magnitude) -- **significand** (also called the mantissa, representing the significant digits of the number). + +- **Sign bit**: Determines the sign (positive or negative). +- **Exponent**: Sets the scale or magnitude of the number. +- **Significand** (or mantissa): Holds the significant digits in binary form. The standard uses a biased exponent to handle both large and small numbers efficiently, and it incorporates special values such as NaN (Not a Number), infinity, and subnormal numbers for robust numerical computation. A key feature of IEEE 754 is its support for rounding modes and exception handling, ensuring predictable behavior in mathematical operations. However, floating-point arithmetic is inherently imprecise due to limited precision, leading to small rounding errors. -The graphic below illustrates various forms of floating point representation supported by Arm, each with a varying number of bits assigned to the exponent and mantissa. +The graphic below illustrates various forms of floating-point representation supported by Arm, each with a varying number of bits assigned to the exponent and mantissa.
+The graphic below illustrates various forms of floating-point representation supported by Arm, each with varying number of bits assigned to the exponent and mantissa. ![floating-point](./floating-point-numbers.png) @@ -33,7 +30,7 @@ The graphic below illustrates various forms of floating point representation sup Since computers use a finite number of bits to store a continuous range of numbers, rounding errors are introduced. The unit in last place (ULP) is the smallest difference between two consecutive floating-point numbers. It measures floating-point rounding error, which arises because not all real numbers can be exactly represented. -When an operation is performed, the result is rounded to the nearest representable value, introducing a small error. This error, often measured in ULPs, indicates how close the computed value is to the exact result. For a simple example, if a floating-point schema with 3 bits for the mantissa (precision) and an exponent in the range of -1 to 2 is used, the possible values are represented in the graph below. +When an operation is performed, the result is rounded to the nearest representable value, introducing a small error. This rounding error, often measured in ULPs, reflects how far the computed value may deviate from the exact mathematical result. For a simple example, if a floating-point schema with 3 bits for the mantissa (precision) and an exponent in the range of -1 to 2 is used, the possible values are represented in the graph below. ![ulp](./ulp.png) @@ -44,4 +41,9 @@ Key takeaways: - Smaller numbers have smaller ULPs, reducing quantization error. - ULP behavior impacts numerical stability and precision in computations. +{{% notice Learning Tip %}} +Keep in mind that rounding and representation issues aren't bugs — they’re a consequence of how floating-point math works at the hardware level. Understanding these fundamentals is critical when porting numerical code across architectures like x86-64 and AArch64. 
+{{% /notice %}} + + In the next section, you will explore how x86 and Arm differ in how they implement and optimize floating-point operations — and why this matters for portable, accurate software. diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md index fcaff6195f..58bef8202c 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md @@ -8,14 +8,14 @@ layout: learningpathall ## What are the differences in behavior between x86 and Arm floating point? -Architecture and standards define floating point overflows and truncations in different ways. +Architecture and standards define floating-point overflows and truncations in different ways. You can see this by comparing an example application on an x86 and an Arm Linux system. You can use any Linux systems for this example. If you are using AWS, you can use EC2 instance types `t3.micro` and `t4g.small` running Ubuntu 24.04. -To learn about floating point differences, use an editor to copy and paste the C++ code below into a new file named `converting-float.cpp`. +To learn about floating-point differences, use an editor to copy and paste the C++ code below into a new file named `converting-float.cpp`. ```cpp #include @@ -86,7 +86,7 @@ As you can see, there are several cases where different behavior is observed. Fo The above differences show that explicitly checking for specific values will lead to unportable code. -For example, consider the function below. The code checks if the value is 0. The value an x86 machine will convert a floating point number that exceeds the maximum 32-bit float value. This is different from Arm behavior leading to unportable code. +For example, consider the function below. The code checks if the value is 0. 
The value an x86 machine will convert a floating-point number that exceeds the maximum 32-bit float value. This is different from Arm behavior leading to unportable code. ```cpp void checkFloatToUint32(float num) { diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-4.md index bd89fe8dbd..4e942f1e2c 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-4.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-4.md @@ -83,7 +83,7 @@ Final result after magnification: 0.0000999982 G++ provides several compiler flags to help balance accuracy and performance, such as `-ffp-contract`, which is useful when lossy fused operations, such as fused multiply-add, are generated. -Another example is `-ffloat-store` which prevents floating point variables from being stored in registers which can have different levels of precision and rounding. +Another example is `-ffloat-store` which prevents floating-point variables from being stored in registers which can have different levels of precision and rounding. You can refer to compiler documentation for more information about the available flags.
From 556018dfcc3f10d7f76d5d070c160087f9668a2b Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 2 Apr 2025 16:15:57 +0000 Subject: [PATCH 08/14] Content dev --- .../floating-point-rounding-errors/how-to-2.md | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md index 58bef8202c..55f6e40f88 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md @@ -8,12 +8,11 @@ layout: learningpathall ## What are the differences in behavior between x86 and Arm floating point? -Architecture and standards define floating-point overflows and truncations in different ways. +Although both x86 and Arm generally follow the IEEE 754 standard for floating-point representation, their behavior in edge cases — like overflow and truncation — can differ due to implementation details and instruction sets. -You can see this by comparing an example application on an x86 and an Arm Linux system. +You can see this by comparing an example application on both an x86-64 and an AArch64 (Arm64) Linux system. -You can use any Linux systems for this example. If you are using AWS, you can use EC2 instance types -`t3.micro` and `t4g.small` running Ubuntu 24.04. +You can run this example on any pair of x86-64 and AArch64 Linux systems. If you are using AWS, you can use EC2 instance types `t3.micro` and `t4g.small` running Ubuntu 24.04. To learn about floating-point differences, use an editor to copy and paste the C++ code below into a new file named `converting-float.cpp`.
From b7cc9510d4f2c1d91e64ab50e9a56816e3d81d3f Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 2 Apr 2025 20:10:04 +0000 Subject: [PATCH 09/14] Content updates --- .../floating-point-rounding-errors/how-to-1.md | 7 +++---- .../floating-point-rounding-errors/how-to-2.md | 4 +++- .../floating-point-rounding-errors/how-to-4.md | 4 ++-- 3 files changed, 8 insertions(+), 7 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md index 6e6de8c453..643ade2a8f 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md @@ -8,12 +8,11 @@ layout: learningpathall ## Review of floating-point numbers -If you are unfamiliar with floating-point number representation, you can review [Learn about integer and floating-point conversions](/learning-paths/cross-platform/integer-vs-floats/introduction-integer-float-types/). It covers different data types and explains data type conversions. +If you are unfamiliar with floating-point number representation, you can review [Learn about integer and floating-point conversions](/learning-paths/cross-platform/integer-vs-floats/introduction-integer-float-types/). It covers data types and conversions. Floating-point numbers represent real numbers in computer systems, enabling efficient storage and computation of decimal values with varying degrees of precision. In C/C++, floating-point variables are created with keywords such as `float` or `double`. The IEEE 754 standard, established in 1985, is the most widely used format for floating-point arithmetic, ensuring consistency across different hardware and software implementations. IEEE 754 defines two primary formats: single-precision (32-bit) and double-precision (64-bit). 
- Each floating-point number consists of three components: - **Sign bit**: Determines the sign (positive or negative). @@ -28,7 +27,7 @@ The graphic below illustrates various forms of floating-point representation sup ## Rounding errors -Since computers use a finite number of bits to store a continuous range of numbers, rounding errors are introduced. The unit in last place (ULP) is the smallest difference between two consecutive floating-point numbers. It measures floating-point rounding error, which arises because not all real numbers can be exactly represented. +Because computers use a finite number of bits to store a continuous range of numbers, rounding errors are introduced. The unit in last place (ULP) is the smallest difference between two consecutive floating-point numbers. It measures floating-point rounding error, which arises because not all real numbers can be exactly represented. When an operation is performed, the result is rounded to the nearest representable value, introducing a small error. This rounding error, often measured in ULPs, reflects how far the computed value may deviate from the exact mathematical result. For a simple example, if a floating-point schema with 3 bits for the mantissa (precision) and an exponent in the range of -1 to 2 is used, the possible values are represented in the graph below. @@ -42,7 +41,7 @@ Key takeaways: - ULP behavior impacts numerical stability and precision in computations. {{% notice Learning Tip %}} -Keep in mind that rounding and representation issues aren't bugs — they’re a consequence of how floating-point math works at the hardware level. Understanding these fundamentals is critical when porting numerical code across architectures like x86-64 and AArch64. +Keep in mind that rounding and representation issues aren't bugs — they’re a consequence of how floating-point math works at the hardware level. 
Understanding these fundamentals is critical when porting numerical code across architectures like x86 and Arm. {{% /notice %}} diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md index 55f6e40f88..bee6d83188 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md @@ -85,7 +85,9 @@ As you can see, there are several cases where different behavior is observed. Fo The above differences show that explicitly checking for specific values will lead to unportable code. -For example, consider the function below. The code checks if the value is 0. The value an x86 machine will convert a floating-point number that exceeds the maximum 32-bit float value. This is different from Arm behavior leading to unportable code. +For example, consider the function below. The code checks whether the cast result is `0`. This can be misleading — on x86, casting an out-of-range floating-point value to `uint32_t` may wrap to `0`, while on Arm it may behave differently. Relying on these results makes the code unportable. + + ```cpp void checkFloatToUint32(float num) { diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-4.md index 4e942f1e2c..19b9a7d023 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-4.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-4.md @@ -1,12 +1,12 @@ --- -title: Minimizing variability across platforms +title: Minimizing floating-point variability across platforms weight: 5 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## How can I minimize variability across x86 and Arm?
+## How can I minimize floating-point variability across x86 and Arm? The line `#pragma STDC FENV_ACCESS ON` is a directive that informs the compiler to enable access to the floating-point environment. From 972bd6582a390609a238feb935eed95ffd3f8332 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Thu, 3 Apr 2025 10:03:43 +0000 Subject: [PATCH 10/14] Checked index and 1.md --- .../floating-point-rounding-errors/_index.md | 8 ++--- .../how-to-1.md | 33 ++++++++++--------- 2 files changed, 22 insertions(+), 19 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md index cb29024008..cbb6fcadf9 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md @@ -1,5 +1,5 @@ --- -title: Explore floating-point differences between x86-64 and AArch64 +title: Explore floating-point differences between x86 and Arm draft: true cascade: @@ -7,16 +7,16 @@ cascade: minutes_to_complete: 30 -who_is_this_for: This is an introductory topic for developers who are porting applications from x86-64 (also known as x86) to AArch64 (also known as Arm64) and want to understand how floating-point behavior can differ between these architectures - particularly in the context of numerical consistency, performance, and debugging subtle bugs. +who_is_this_for: This is an introductory topic for developers who are porting applications from x86 to Arm and want to understand how floating-point behavior differs between these architectures - particularly in the context of numerical consistency, performance, and debugging subtle bugs. learning_objectives: - - Identify key differences in floating-point behavior between the x86-64 and AArch64 architectures. 
+ - Identify key differences in floating-point behavior between the x86 and Arm architectures. - Recognize the impact of compiler optimizations and instruction sets on floating-point results. - Apply compiler flags and best practices to ensure consistent floating-point behavior across platforms. prerequisites: - - Access to an x86-64 and an AArch64 Linux machine. + - Access to an x86 and an Arm Linux machine. - Familiarity with floating-point numbers. author: Kieran Hejmadi diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md index 643ade2a8f..a5af4325a0 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md @@ -8,41 +8,44 @@ layout: learningpathall ## Review of floating-point numbers -If you are unfamiliar with floating-point number representation, you can review [Learn about integer and floating-point conversions](/learning-paths/cross-platform/integer-vs-floats/introduction-integer-float-types/). It covers data types and conversions. +If you are unfamiliar with floating-point number representation, you can review the Learning Path [Learn about integer and floating-point conversions](/learning-paths/cross-platform/integer-vs-floats/introduction-integer-float-types/). It covers data types and conversions. -Floating-point numbers represent real numbers in computer systems, enabling efficient storage and computation of decimal values with varying degrees of precision. In C/C++, floating-point variables are created with keywords such as `float` or `double`. The IEEE 754 standard, established in 1985, is the most widely used format for floating-point arithmetic, ensuring consistency across different hardware and software implementations. 
+Floating-point numbers represent real numbers using limited precision, enabling efficient storage and computation of decimal values with varying degrees of precision. In C/C++, floating-point variables are created with keywords such as `float` or `double`. The IEEE 754 standard, established in 1985, defines the most widely used format for floating-point arithmetic, ensuring consistency across hardware and software. + +IEEE 754 specifies two primary formats: single-precision (32-bit) and double-precision (64-bit). -IEEE 754 defines two primary formats: single-precision (32-bit) and double-precision (64-bit). Each floating-point number consists of three components: - **Sign bit**: Determines the sign (positive or negative). -- **Exponent**: Sets the scale or magnitude of the number. -- **Significand** (or mantissa): Holds the significant digits in binary form. +- **Exponent**: Sets the scale or magnitude. +- **Significand** (or mantissa): Holds the significant digits in binary. -The standard uses a biased exponent to handle both large and small numbers efficiently, and it incorporates special values such as NaN (Not a Number), infinity, and subnormal numbers for robust numerical computation. A key feature of IEEE 754 is its support for rounding modes and exception handling, ensuring predictable behavior in mathematical operations. However, floating-point arithmetic is inherently imprecise due to limited precision, leading to small rounding errors. +The standard uses a biased exponent to handle both large and small numbers efficiently, and it incorporates special values such as NaN (Not a Number), infinity, and subnormal numbers. It supports rounding modes and exception handling, which help ensure predictable results. However, floating-point arithmetic is inherently imprecise, leading to small rounding errors. 
-The graphic below illustrates various forms of floating-point representation supported by Arm, each with varying number of bits assigned to the exponent and mantissa. +The graphic below shows various forms of floating-point representation supported by Arm, each with varying number of bits assigned to the exponent and mantissa. ![floating-point](./floating-point-numbers.png) ## Rounding errors -Because computers use a finite number of bits to store a continuous range of numbers, rounding errors are introduced. The unit in last place (ULP) is the smallest difference between two consecutive floating-point numbers. It measures floating-point rounding error, which arises because not all real numbers can be exactly represented. +Because computers use a finite number of bits to store a continuous range of numbers, rounding errors are introduced. The unit in last place (ULP) is the smallest difference between two consecutive floating-point numbers. It quantifies the rounding error, which arises because not all real values can be exactly represented. + +Operations round results to the nearest representable value, introducing small discrepancies. This rounding error, often measured in ULPs, reflects how far the computed value may deviate from the exact mathematical result. -When an operation is performed, the result is rounded to the nearest representable value, introducing a small error. This rounding error, often measured in ULPs, reflects how far the computed value may deviate from the exact mathematical result. For a simple example, if a floating-point schema with 3 bits for the mantissa (precision) and an exponent in the range of -1 to 2 is used, the possible values are represented in the graph below. +For example, with 3 bits for the significand (mantissa) and an exponent range of -1 to 2, only a limited set of values can be represented.The diagram below illustrates these values. ![ulp](./ulp.png) Key takeaways: -- ULP size varies with the number’s magnitude. 
-- Larger numbers have bigger ULPs due to wider spacing between values. -- Smaller numbers have smaller ULPs, reducing quantization error. -- ULP behavior impacts numerical stability and precision in computations. +- ULP size increases with magnitude. +- Larger numbers have wider spacing between values (larger ULPs). +- Smaller numbers have tighter spacing (smaller ULPs), reducing quantization error. +- ULP behavior impacts numerical stability and precision. {{% notice Learning Tip %}} -Keep in mind that rounding and representation issues aren't bugs — they’re a consequence of how floating-point math works at the hardware level. Understanding these fundamentals is critical when porting numerical code across architectures like x86 and Arm. +Keep in mind that rounding and representation issues aren't bugs — they’re a consequence of how floating-point math works at the hardware level. Understanding these fundamentals is essential when porting numerical code across architectures like x86 and Arm. {{% /notice %}} -In the next section, you will explore how x86 and Arm differ in how they implement and optimize floating-point operations — and why this matters for portable, accurate software. +In the next section, you'll explore how x86 and Arm differ in how they implement and optimize floating-point operations — and why this matters for writing portable, accurate software. 
From 1fc1f3fbaf0a8e64462e116bf693a7abef433bbb Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Thu, 3 Apr 2025 10:40:33 +0000 Subject: [PATCH 11/14] content updates --- .../cross-platform/floating-point-rounding-errors/how-to-2.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md index bee6d83188..f0377331ca 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md @@ -12,7 +12,7 @@ Although both x86 and Arm generally follow the IEEE 754 standard for floating-po You can see this by comparing an example application on both an x86-64 and an AArch64 (Arm64) Linux system. -You can run this example on any Linux system with x86-64 and AArch64 architecture. If you are using AWS, you can use EC2 instance types `t3.micro` and `t4g.small` running Ubuntu 24.04. +Run this example on any Linux system with x86 and Arm architecture; on AWS, use EC2 instance types `t3.micro` and `t4g.small` with Ubuntu 24.04. To learn about floating-point differences, use an editor to copy and paste the C++ code below into a new file named `converting-float.cpp`. 
From b143c257ea645d8e52b22b246168c5e824b53c36 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Thu, 3 Apr 2025 12:20:29 +0000 Subject: [PATCH 12/14] Content dev --- .../cross-platform/floating-point-rounding-errors/how-to-1.md | 2 +- .../cross-platform/floating-point-rounding-errors/how-to-2.md | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md index a5af4325a0..068f8374a5 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md @@ -8,7 +8,7 @@ layout: learningpathall ## Review of floating-point numbers -If you are unfamiliar with floating-point number representation, you can review the Learning Path [Learn about integer and floating-point conversions](/learning-paths/cross-platform/integer-vs-floats/introduction-integer-float-types/). It covers data types and conversions. +If you are new to floating-point numbers, for some background information, see the Learning Path [Learn about integer and floating-point conversions](/learning-paths/cross-platform/integer-vs-floats/introduction-integer-float-types/). It covers data types and conversions. Floating-point numbers represent real numbers using limited precision, enabling efficient storage and computation of decimal values with varying degrees of precision. In C/C++, floating-point variables are created with keywords such as `float` or `double`. The IEEE 754 standard, established in 1985, defines the most widely used format for floating-point arithmetic, ensuring consistency across hardware and software. 
diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md index f0377331ca..6cdd2d6c6a 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md @@ -10,7 +10,7 @@ layout: learningpathall Although both x86 and Arm generally follow the IEEE 754 standard for floating-point representation, their behavior in edge cases — like overflow and truncation — can differ due to implementation details and instruction sets. -You can see this by comparing an example application on both an x86-64 and an AArch64 (Arm64) Linux system. +You can see this by comparing an example application on both an x86 and an Arm Linux system. Run this example on any Linux system with x86 and Arm architecture; on AWS, use EC2 instance types `t3.micro` and `t4g.small` with Ubuntu 24.04. @@ -85,7 +85,7 @@ As you can see, there are several cases where different behavior is observed. Fo The above differences show that explicitly checking for specific values will lead to unportable code. -For example, consider the function below. In the example below, the code checks whether the casted result is `0`. This can be misleading — on x86, casting an out-of-range floating-point value to `uint32_t` may wrap to `0`, while on Arm it may behave differently. Relying on these results makes the code unportable. +For example, the function below checks if the casted result is `0`. This can be misleading — on x86, casting an out-of-range floating-point value to `uint32_t` may wrap to `0`, while on Arm it may behave differently. Relying on these results makes the code unportable. 
From 3e26f9411b854aa86487f9310ea0c6549adb81a7 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Thu, 3 Apr 2025 14:11:46 +0000 Subject: [PATCH 13/14] Final content dev changes --- .../floating-point-rounding-errors/_index.md | 2 +- .../floating-point-rounding-errors/how-to-1.md | 15 +++++++++------ .../floating-point-rounding-errors/how-to-2.md | 4 ++-- .../floating-point-rounding-errors/how-to-3.md | 10 +++++----- .../floating-point-rounding-errors/how-to-4.md | 16 +++++++--------- 5 files changed, 24 insertions(+), 23 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md index cbb6fcadf9..923b028e69 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/_index.md @@ -39,7 +39,7 @@ shared_between: further_reading: - resource: - title: G++ Optimisation Flags + title: G++ Optimization Flags link: https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html type: documentation - resource: diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md index 068f8374a5..c2855ae849 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-1.md @@ -8,9 +8,12 @@ layout: learningpathall ## Review of floating-point numbers -If you are new to floating-point numbers, for some background information, see the Learning Path [Learn about integer and floating-point conversions](/learning-paths/cross-platform/integer-vs-floats/introduction-integer-float-types/). It covers data types and conversions. 
+{{% notice Learning tip%}} +If you are new to floating-point numbers, and would like some further information, see +the Learning Path [Learn about integer and floating-point conversions](/learning-paths/cross-platform/integer-vs-floats/introduction-integer-float-types/). It covers data types and conversions. +{{% /notice %}} -Floating-point numbers represent real numbers using limited precision, enabling efficient storage and computation of decimal values with varying degrees of precision. In C/C++, floating-point variables are created with keywords such as `float` or `double`. The IEEE 754 standard, established in 1985, defines the most widely used format for floating-point arithmetic, ensuring consistency across hardware and software. +Floating-point numbers represent real numbers using limited precision, enabling efficient storage and computation of decimal values. In C/C++, floating-point variables are created with keywords such as `float` or `double`. The IEEE 754 standard, established in 1985, defines the most widely used format for floating-point arithmetic, ensuring consistency across hardware and software. IEEE 754 specifies two primary formats: single-precision (32-bit) and double-precision (64-bit). @@ -18,11 +21,11 @@ Each floating-point number consists of three components: - **Sign bit**: Determines the sign (positive or negative). - **Exponent**: Sets the scale or magnitude. -- **Significand** (or mantissa): Holds the significant digits in binary. +- **Significand**: Holds the significant digits in binary. The standard uses a biased exponent to handle both large and small numbers efficiently, and it incorporates special values such as NaN (Not a Number), infinity, and subnormal numbers. It supports rounding modes and exception handling, which help ensure predictable results. However, floating-point arithmetic is inherently imprecise, leading to small rounding errors. 
-The graphic below shows various forms of floating-point representation supported by Arm, each with varying number of bits assigned to the exponent and mantissa.
+The graphic below shows various forms of floating-point representation supported by Arm, each with a varying number of bits assigned to the exponent and significand.
 
 ![floating-point](./floating-point-numbers.png)
 
@@ -32,7 +35,7 @@ Because computers use a finite number of bits to store a continuous range of num
 
 Operations round results to the nearest representable value, introducing small discrepancies. This rounding error, often measured in ULPs, reflects how far the computed value may deviate from the exact mathematical result.
 
-For example, with 3 bits for the significand (mantissa) and an exponent range of -1 to 2, only a limited set of values can be represented.The diagram below illustrates these values.
+For example, with 3 bits for the significand and an exponent range of -1 to 2, only a limited set of values can be represented. The diagram below illustrates these values.
 
 ![ulp](./ulp.png)
 
@@ -43,7 +46,7 @@ Key takeaways:
 - Smaller numbers have tighter spacing (smaller ULPs), reducing quantization error.
 - ULP behavior impacts numerical stability.
 
-{{% notice Learning Tip %}}
+{{% notice Learning tip %}}
 Keep in mind that rounding and representation issues aren't bugs — they’re a consequence of how floating-point math works at the hardware level. Understanding these fundamentals is essential when porting numerical code across architectures like x86 and Arm.
{{% /notice %}}
 
diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md
index 6cdd2d6c6a..744a904fc2 100644
--- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md
+++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md
@@ -79,7 +79,7 @@ For easy comparison, the image below shows the x86 output (left) and Arm output 
 
 ![differences](./differences.png)
 
-As you can see, there are several cases where different behavior is observed. For example when trying to convert a signed number to a unsigned number or dealing with out-of-bounds numbers.
+As you can see, there are several cases where different behavior is observed. For example, when trying to convert a signed number to an unsigned number or dealing with out-of-bounds numbers.
 
 ## Removing hardcoded values with macros
 
@@ -93,7 +93,7 @@ For example, the function below checks if the casted result is `0`. This can be 
 void checkFloatToUint32(float num) {
 uint32_t castedNum = static_cast<uint32_t>(num);
 if (castedNum == 0) {
- std::cout << "The casted number is 0, indicating the float could out of bounds for uint32_t." << std::endl;
+ std::cout << "The casted number is 0, indicating that the float is out of bounds for uint32_t."
<< std::endl;
 } else {
 std::cout << "The casted number is: " << castedNum << std::endl;
 }
diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-3.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-3.md
index 3b031656fa..40f4f964ce 100644
--- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-3.md
+++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-3.md
@@ -10,13 +10,13 @@ layout: learningpathall
 
 One cause of different outputs between x86 and Arm stems from the order of instructions and how errors are propagated. As a hypothetical example, an Arm system may decide to reorder the instructions that each have a different rounding error so that subtle changes are observed.
 
-It is possible that 2 functions that are mathematically equivalent will propagate errors differently on a computer.
+It is possible that two functions that are mathematically equivalent will propagate errors differently on a computer.
 
 Functions `f1` and `f2` are mathematically equivalent. You would expect them to return the same value given the same input.
 
- If the input is a very small number, `1e-8`, the error is different due to the loss in precision caused by different operations. Specifically, `f2` avoids the subtraction of nearly equal number. For a full description look into the topic of [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability).
+ If the input is a very small number, `1e-8`, the error is different due to the loss in precision caused by different operations. Specifically, `f2` avoids subtracting nearly equal numbers. For a full description, look into the topic of [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability).
 
-Use an editor to copy and paste the C++ code below into a file named `error-propagation.cpp`.
+Use an editor to copy and paste the C++ code below into a file named `error-propagation.cpp`: ```cpp #include @@ -53,13 +53,13 @@ int main() { } ``` -Compile the code on both x86 and Arm with the following command. +Compile the code on both x86 and Arm with the following command: ```bash g++ -g error-propagation.cpp -o error-propagation ``` -Running the 2 binaries shows that the second function, `f2`, has a small rounding error on both architectures. Additionally, there is a further rounding difference when run on x86 compared to Arm. +Running the two binaries shows that the second function, `f2`, has a small rounding error on both architectures. Additionally, there is a further rounding difference when run on x86 compared to Arm. Running on x86: diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-4.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-4.md index 19b9a7d023..12a625f261 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-4.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-4.md @@ -8,15 +8,13 @@ layout: learningpathall ## How can I minimize floating-point variability across x86 and Arm? -The line `#pragma STDC FENV_ACCESS ON` is a directive that informs the compiler to enable access to the floating-point environment. +The line `#pragma STDC FENV_ACCESS ON` is a directive that informs the compiler to enable access to the floating-point environment. This is part of the C++11 standard and ensures that the program can properly handle floating-point exceptions and rounding modes, enabling your program to continue running if an exception is thrown. -This is part of the C++11 standard and is used to ensure that the program can properly handle floating-point exceptions and rounding modes enabling your program to continue running if an exception is thrown. 
- -In the context below, enabling floating-point environment access is crucial because the functions you are working with involve floating-point arithmetic, which can be prone to precision errors and exceptions such as overflow, underflow, division by zero, and invalid operations. This is not necessary for this example, but is included because it may be relevant for your own application. +In the context below, enabling floating-point environment access is crucial because the functions in this example involve floating-point arithmetic, which can be prone to precision errors and exceptions such as overflow, underflow, division by zero, and invalid operations. Although not strictly necessary for this example, the directive is included because it may be relevant for your own applications. This directive is particularly important when performing operations that require high numerical stability and precision, such as the square root calculations in functions below. It allows the program to manage the floating-point state and handle any anomalies that might occur during these calculations, thereby improving the robustness and reliability of your numerical computations. -Use an editor to copy and paste the C++ file below into a file named `error-propagation-min.cpp`. +Use an editor to copy and paste the C++ file below into a file named `error-propagation-min.cpp`: ```cpp #include @@ -63,13 +61,13 @@ int main() { Compile on both computers, using the C++ flag, `-frounding-math`. -You should use this flat when your program dynamically changes the floating-point rounding mode or needs to run correctly under different rounding modes. In this example, it results in a predictable rounding mode on function `f1` across x86 and Arm. +You should use this flag when your program dynamically changes the floating-point rounding mode or needs to run correctly under different rounding modes. In this example, it ensures that `f1` uses a predictable rounding mode across both x86 and Arm. 
```bash
g++ -o error-propagation-min error-propagation-min.cpp -frounding-math
```
 
-Running the new binary on both systems leads to function, `f1` having a similar value to `f2`. Further the difference is now identical across both Arm64 and x86.
+Running the new binary on both systems shows that function `f1` produces a value nearly identical to `f2`, and the difference between them is now identical across both Arm64 and x86.
 
 Here is the output on both systems:
 
```
Difference (f1 - f2) = -1.7887354748e-17
Final result after magnification: 0.0000999982
```
 
-G++ provides several compiler flags to help balance accuracy and performance such as`-ffp-contract` which is useful when lossy, fused operations are used, such as fused-multiple.
+G++ provides several compiler flags to help balance accuracy and performance. For example, `-ffp-contract` is useful when lossy, fused operations are used, such as fused multiply-add.
 
 Another example is `-ffloat-store`, which prevents floating-point variables from being stored in registers, which can have different levels of precision and rounding.
 
-You can refer to compiler documentation for more information about the available flags.
+You can refer to compiler documentation for more information on the flags available.
From 5ae8a884e522c57397fc64ea43d7d1cdcad0e565 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Thu, 3 Apr 2025 14:13:46 +0000 Subject: [PATCH 14/14] Final tweaks --- .../floating-point-rounding-errors/how-to-2.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md index 744a904fc2..e165e911f6 100644 --- a/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md +++ b/content/learning-paths/cross-platform/floating-point-rounding-errors/how-to-2.md @@ -14,7 +14,7 @@ You can see this by comparing an example application on both an x86 and an Arm L Run this example on any Linux system with x86 and Arm architecture; on AWS, use EC2 instance types `t3.micro` and `t4g.small` with Ubuntu 24.04. -To learn about floating-point differences, use an editor to copy and paste the C++ code below into a new file named `converting-float.cpp`. +To learn about floating-point differences, use an editor to copy and paste the C++ code below into a new file named `converting-float.cpp`: ```cpp #include @@ -60,7 +60,7 @@ int main() { } ``` -If you need to install the `g++` compiler, run the commands below. +If you need to install the `g++` compiler, run the commands below: ```bash sudo apt update @@ -75,7 +75,7 @@ The compile command is the same on both systems. g++ converting-float.cpp -o converting-float ``` -For easy comparison, the image below shows the x86 output (left) and Arm output (right). The highlighted lines show the difference in output. +For easy comparison, the image below shows the x86 output (left) and Arm output (right). The highlighted lines show the difference in output: ![differences](./differences.png)