65 changes: 65 additions & 0 deletions explain.txt
@@ -0,0 +1,65 @@
The unit tests for the `add.cc` operator are located in `tensorflow/lite/micro/kernels/add_test.cc`. This file contains several test cases that verify the correctness of the Add operator across data types, shapes, and activation functions.

Here is a summary of the tests covered:

1. **FloatAddNoActivation**:
- Tests floating-point addition.
- Inputs are 4D tensors with shape {1, 2, 2, 1}.
- No activation function is applied.

2. **FloatAddActivationRelu1**:
- Tests floating-point addition with ReLU1 activation.
- The output is clamped between -1 and 1.

3. **FloatAddActivationRelu**:
- Tests floating-point addition with ReLU activation.
- The output is clamped below at zero (output >= 0); a standalone sketch of this add-then-clamp pattern follows the list.

4. **FloatAddVariousInputShapes**:
- Tests floating-point addition with various input shapes.
- It iterates through a set of predefined shapes (1D, 2D, 3D, 4D) to ensure the operator handles different dimensions correctly.

5. **FloatAddWithScalarBroadcast**:
- Tests floating-point addition where one input is a scalar.
- This verifies the broadcasting capability where a scalar value is added to every element of a tensor (a broadcast variant appears in the sketch after this list).

6. **Int32AddNoActivation**:
- Tests 32-bit integer addition.
- It checks correct handling of large positive and negative values near the extremes of the `int32_t` range, where standard (non-saturating) addition is expected.

7. **QuantizedAddNoActivationInt8**:
- Tests 8-bit quantized integer addition.
- It involves quantization parameters (scales and zero points) for the inputs and output; the quantize/dequantize mapping is sketched after this list.
- No activation function.

8. **QuantizedAddActivationRelu1Int8**:
- Tests 8-bit quantized integer addition with ReLU1 activation.
- Verifies that the quantized output is correctly clamped.

9. **QuantizedAddVariousInputShapesInt8**:
- Tests 8-bit quantized integer addition across various input shapes, similar to the float version.

10. **QuantizedAddWithScalarBroadcastFloat**:
- Despite its name, this test exercises the float broadcasting path: it calls `TestAddFloat` with the broadcast golden data and iterates through a set of broadcast-compatible shape pairs.

11. **QuantizedAddWithScalarBroadcastInt8**:
- Tests 8-bit quantized integer addition with scalar broadcasting.

12. **QuantizedAddWithMixedBroadcastInt8**:
- Tests 8-bit quantized integer addition with mixed broadcasting (where inputs have different shapes but are broadcast-compatible).

13. **QuantizedAddNoActivationInt16**:
- Tests 16-bit quantized integer addition.
- Similar to Int8 tests but with `int16_t` data type.

14. **QuantizedAddActivationRelu1Int16**:
- Tests 16-bit quantized integer addition with ReLU1 activation.

15. **QuantizedAddVariousInputShapesInt16**:
- Tests 16-bit quantized integer addition with various input shapes.

16. **QuantizedAddWithScalarBroadcastInt16**:
- Tests 16-bit quantized integer addition with scalar broadcasting.

17. **QuantizedAddWithMixedBroadcastInt16**:
- Tests 16-bit quantized integer addition with mixed broadcasting.
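
For reference, the computation pattern exercised by the float tests can be sketched in standalone C++. This is a minimal illustration, not the TFLM kernel implementation; the function names are ours:

    #include <algorithm>
    #include <cstddef>

    // Elementwise add followed by an activation clamp. For the tests above,
    // kTfLiteActRelu corresponds to the range [0, +inf) and
    // kTfLiteActReluN1To1 to [-1, 1]; kTfLiteActNone applies no clamp.
    void AddWithClamp(const float* in1, const float* in2, float* out,
                      std::size_t n, float act_min, float act_max) {
      for (std::size_t i = 0; i < n; ++i) {
        out[i] = std::min(std::max(in1[i] + in2[i], act_min), act_max);
      }
    }

    // Scalar broadcast variant: the scalar operand is added to every
    // element of the tensor, as in FloatAddWithScalarBroadcast.
    void AddScalarBroadcast(const float* in, float scalar, float* out,
                            std::size_t n) {
      for (std::size_t i = 0; i < n; ++i) {
        out[i] = in[i] + scalar;
      }
    }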
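
The quantized tests map real values to integers via per-tensor scale and zero-point parameters. A minimal sketch of that mapping for the int8 case (helper names are illustrative, not the TFLM API):

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Real -> int8: q = round(x / scale) + zero_point, clamped to [-128, 127].
    int8_t QuantizeInt8(float x, float scale, int zero_point) {
      int q = static_cast<int>(std::round(x / scale)) + zero_point;
      return static_cast<int8_t>(std::min(127, std::max(-128, q)));
    }

    // int8 -> real: x = scale * (q - zero_point).
    float DequantizeInt8(int8_t q, float scale, int zero_point) {
      return scale * static_cast<float>(q - zero_point);
    }
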
36 changes: 36 additions & 0 deletions golden_data_calculation.txt
@@ -0,0 +1,36 @@
Explanation of Golden Data Calculation for FloatAddActivationRelu

The `FloatAddActivationRelu` test case verifies the addition operator with a ReLU activation function.
The expected "golden" values were calculated manually based on the mathematical definition of the operation.

**Operation:**
Output = ReLU(Input1 + Input2)
ReLU(x) = max(0, x)

**Inputs:**
Input 1: {-2.0, 0.2, 0.7, 0.8}
Input 2: { 0.1, 0.2, 0.3, 0.5}

**Step-by-Step Calculation:**

1. **Index 0:**
* Addition: -2.0 + 0.1 = -1.9
* Activation: max(0, -1.9) = 0.0

2. **Index 1:**
* Addition: 0.2 + 0.2 = 0.4
* Activation: max(0, 0.4) = 0.4

3. **Index 2:**
* Addition: 0.7 + 0.3 = 1.0
* Activation: max(0, 1.0) = 1.0

4. **Index 3:**
* Addition: 0.8 + 0.5 = 1.3
* Activation: max(0, 1.3) = 1.3

**Resulting Golden Values:**
{0.0, 0.4, 1.0, 1.3}

These are the values used in the test case:
`const float golden_values[] = {0.0, 0.4, 1.0, 1.3};`
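
The same golden values can be reproduced with a small standalone program; this is an independent arithmetic check, not part of the test harness:

    #include <cstdio>

    int main() {
      const float input1[] = {-2.0f, 0.2f, 0.7f, 0.8f};
      const float input2[] = {0.1f, 0.2f, 0.3f, 0.5f};
      for (int i = 0; i < 4; ++i) {
        const float sum = input1[i] + input2[i];
        const float relu = sum > 0.0f ? sum : 0.0f;  // ReLU(x) = max(0, x)
        std::printf("%.1f\n", relu);  // prints 0.0, 0.4, 1.0, 1.3
      }
      return 0;
    }
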
18 changes: 18 additions & 0 deletions result/result_add.txt
@@ -0,0 +1,18 @@
Testing FloatAddNoActivation
Testing FloatAddActivationRelu1
Testing FloatAddVariousInputShapes
Testing FloatAddWithScalarBroadcast
Testing Int32AddNoActivation
Testing QuantizedAddNoActivationInt8
Testing QuantizedAddActivationRelu1Int8
Testing QuantizedAddVariousInputShapesInt8
Testing QuantizedAddWithScalarBroadcastFloat
Testing QuantizedAddWithScalarBroadcastInt8
Testing QuantizedAddWithMixedBroadcastInt8
Testing QuantizedAddNoActivationInt16
Testing QuantizedAddActivationRelu1Int16
Testing QuantizedAddVariousInputShapesInt16
Testing QuantizedAddWithScalarBroadcastInt16
Testing QuantizedAddWithMixedBroadcastInt16
16/16 tests passed
~~~ALL TESTS PASSED~~~
19 changes: 19 additions & 0 deletions result/result_add_1.txt
@@ -0,0 +1,19 @@
Testing FloatAddNoActivation
Testing FloatAddActivationRelu1
Testing FloatAddActivationRelu
Testing FloatAddVariousInputShapes
Testing FloatAddWithScalarBroadcast
Testing Int32AddNoActivation
Testing QuantizedAddNoActivationInt8
Testing QuantizedAddActivationRelu1Int8
Testing QuantizedAddVariousInputShapesInt8
Testing QuantizedAddWithScalarBroadcastFloat
Testing QuantizedAddWithScalarBroadcastInt8
Testing QuantizedAddWithMixedBroadcastInt8
Testing QuantizedAddNoActivationInt16
Testing QuantizedAddActivationRelu1Int16
Testing QuantizedAddVariousInputShapesInt16
Testing QuantizedAddWithScalarBroadcastInt16
Testing QuantizedAddWithMixedBroadcastInt16
17/17 tests passed
~~~ALL TESTS PASSED~~~
19 changes: 19 additions & 0 deletions result/result_add_1_01.txt
@@ -0,0 +1,19 @@
Testing FloatAddNoActivation
Testing FloatAddActivationRelu1
Testing FloatAddActivationRelu
Testing FloatAddVariousInputShapes
Testing FloatAddWithScalarBroadcast
Testing Int32AddNoActivation
Testing QuantizedAddNoActivationInt8
Testing QuantizedAddActivationRelu1Int8
Testing QuantizedAddVariousInputShapesInt8
Testing QuantizedAddWithScalarBroadcastFloat
Testing QuantizedAddWithScalarBroadcastInt8
Testing QuantizedAddWithMixedBroadcastInt8
Testing QuantizedAddNoActivationInt16
Testing QuantizedAddActivationRelu1Int16
Testing QuantizedAddVariousInputShapesInt16
Testing QuantizedAddWithScalarBroadcastInt16
Testing QuantizedAddWithMixedBroadcastInt16
17/17 tests passed
~~~ALL TESTS PASSED~~~
13 changes: 13 additions & 0 deletions tensorflow/lite/micro/kernels/add_test.cc
@@ -192,6 +192,19 @@ TF_LITE_MICRO_TEST(FloatAddActivationRelu1) {
                                kTfLiteActReluN1To1, output_data);
}

TF_LITE_MICRO_TEST(FloatAddActivationRelu) {
  int inout_shape[] = {4, 1, 2, 2, 1};
  const float input1_values[] = {-2.0, 0.2, 0.7, 0.8};
  const float input2_values[] = {0.1, 0.2, 0.3, 0.5};
  const float golden_values[] = {0.0, 0.4, 1.0, 1.3};

  constexpr int kOutputDimsCount = 4;
  float output_data[kOutputDimsCount];
  tflite::testing::TestAddFloat(inout_shape, input1_values, inout_shape,
                                input2_values, inout_shape, golden_values,
                                kTfLiteActRelu, output_data);
}

TF_LITE_MICRO_TEST(FloatAddVariousInputShapes) {
  constexpr int kOutputDimsCount = 6;
  float output_data[kOutputDimsCount];