
Added new test cases covering all edge scenarios for calc_std_and_verify #386

Open
weknowthecalmat wants to merge 4 commits into seafloor-geodesy:main from weknowthecalmat:add-quality-control-pytest-coverage

Conversation

@weknowthecalmat
Contributor

weknowthecalmat commented Oct 31, 2025

Added 15 test cases:

Zero values - test_zero_values
Negative values - test_negative_values_variance_mode
NaN/Inf handling - test_nan_values, test_inf_values
Boundary testing - test_exact_sigma_limit_matches, test_extreme_limits
Error scenarios - test_invalid_data_types, test_different_column_counts, test_empty_series
Parameter matrix - test_parameter_matrix_combinations
Data validation - test_empty_series, test_different_column_counts
Numerical precision - test_numerical_precision_small_values, test_numerical_precision_large_values, test_floating_point_edge_cases
Additional robustness - test_single_axis_dominance, test_variance_vs_std_dev_consistency

Relates to issue #196.
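For context, a minimal sketch of the 3D combination these tests exercise. The column names and the squaring behavior are assumptions inferred from the test snippets in this PR, not the project's actual implementation:

```python
import math

import pandas as pd


def combined_3d_std(gps: pd.Series) -> float:
    # Hypothetical stand-in for the core of calc_std_and_verify:
    # combine per-axis standard deviations into a single 3D value.
    return math.sqrt(
        gps["ant_cov_XX1"] ** 2 + gps["ant_cov_YY1"] ** 2 + gps["ant_cov_ZZ1"] ** 2
    )


gps = pd.Series({"ant_cov_XX1": 0.01, "ant_cov_YY1": 0.01, "ant_cov_ZZ1": 0.01})
print(combined_3d_std(gps))  # ~0.01732, i.e. sqrt(3) * 0.01
```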

@codecov

codecov bot commented Oct 31, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 76.35%. Comparing base (b04866b) to head (8f7d89e).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #386   +/-   ##
=======================================
  Coverage   76.35%   76.35%           
=======================================
  Files          29       29           
  Lines        1793     1793           
=======================================
  Hits         1369     1369           
  Misses        424      424           


Comment on lines 249 to 267
# Should fail when slightly over limit
over_limit_gps = pd.Series({
    "ant_cov_XX1": individual_std * 1.001,
    "ant_cov_YY1": individual_std * 1.001,
    "ant_cov_ZZ1": individual_std * 1.001,
})

with pytest.raises(ValueError, match=r"3D Standard Deviation.*exceeds GPS Sigma Limit"):
    calc_std_and_verify(over_limit_gps, std_dev=True, sigma_limit=0.05, verify=True)

# Should pass when slightly under limit
under_limit_gps = pd.Series({
    "ant_cov_XX1": individual_std * 0.999,
    "ant_cov_YY1": individual_std * 0.999,
    "ant_cov_ZZ1": individual_std * 0.999,
})

result = calc_std_and_verify(under_limit_gps, std_dev=True, sigma_limit=0.05, verify=True)
assert result < 0.05
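The `individual_std` setup isn't shown in this excerpt; presumably it is chosen so that three equal per-axis values combine to exactly the 0.05 limit. A hedged sketch of that derivation:

```python
import math

sigma_limit = 0.05
# Per-axis value such that sqrt(3 * individual_std**2) == sigma_limit
individual_std = sigma_limit / math.sqrt(3)

combined = math.sqrt(3 * individual_std ** 2)
# The combined value sits essentially on the limit, so scaling by
# 1.001 / 0.999 nudges it just over / just under the boundary.
print(abs(combined - sigma_limit) < 1e-12)
```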
Collaborator

Redundant, remove.

Comment on lines 269 to 283
def test_extreme_limits(self):
    """Test with extremely small and large sigma limits."""
    normal_gps = pd.Series({
        "ant_cov_XX1": 0.01,
        "ant_cov_YY1": 0.01,
        "ant_cov_ZZ1": 0.01,
    })

    # Extremely small limit should fail
    with pytest.raises(ValueError):
        calc_std_and_verify(normal_gps, std_dev=True, sigma_limit=1e-10, verify=True)

    # Extremely large limit should pass
    result = calc_std_and_verify(normal_gps, std_dev=True, sigma_limit=1e10, verify=True)
    assert result < 1e10
Collaborator

Redundant, remove.

Comment on lines 324 to 333
# Test with all numeric data (should work)
numeric_gps = pd.Series({
    "ant_cov_XX1": 0.01,
    "ant_cov_YY1": 0.01,
    "ant_cov_ZZ1": 0.01,
})

result = calc_std_and_verify(numeric_gps, std_dev=True, sigma_limit=0.05, verify=True)
expected = np.sqrt(0.01**2 + 0.01**2 + 0.01**2)
assert abs(result - expected) < 1e-10
Collaborator

Redundant, remove.

Contributor Author

The intention was to test that invalid data fails and valid data works within the same test. But as rightly said, this is already covered in the first test case, so I will remove it.

Comment on lines 343 to 376
def test_parameter_matrix_combinations(self):
    """Test all combinations of std_dev × verify × sigma_limit parameters."""
    test_gps = pd.Series({
        "ant_cov_XX1": 0.01,
        "ant_cov_YY1": 0.01,
        "ant_cov_ZZ1": 0.01,
    })

    variance_gps = pd.Series({
        "ant_cov_XX1": 0.0001,  # 0.01²
        "ant_cov_YY1": 0.0001,
        "ant_cov_ZZ1": 0.0001,
    })

    # Test matrix: std_dev=[True, False] × verify=[True, False] × sigma_limit=[strict, lenient]
    combinations = [
        (True, True, 0.02),    # std_dev=True, verify=True, strict limit (should pass)
        (True, True, 0.01),    # std_dev=True, verify=True, very strict (should fail)
        (True, False, 0.01),   # std_dev=True, verify=False, strict (should return value)
        (False, True, 0.02),   # std_dev=False, verify=True, strict (should pass)
        (False, True, 0.01),   # std_dev=False, verify=True, very strict (should fail)
        (False, False, 0.01),  # std_dev=False, verify=False, strict (should return value)
    ]

    for std_dev, verify, sigma_limit in combinations:
        data = test_gps if std_dev else variance_gps

        if verify and sigma_limit < 0.015:  # Will fail verification
            with pytest.raises(ValueError):
                calc_std_and_verify(data, std_dev=std_dev, sigma_limit=sigma_limit, verify=verify)
        else:  # Should succeed
            result = calc_std_and_verify(data, std_dev=std_dev, sigma_limit=sigma_limit, verify=verify)
            assert isinstance(result, float)
            assert result > 0
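A simplified stand-in (the function below is a hypothetical stub with assumed behavior, not the project's actual `calc_std_and_verify`) illustrates what this matrix exercises: inputs are treated as standard deviations when `std_dev=True` and as variances otherwise, and `verify=True` raises when the combined value exceeds the limit.

```python
import math


def calc_std_stub(values, std_dev=True, sigma_limit=0.05, verify=True):
    # Hypothetical stub mirroring the assumed contract of calc_std_and_verify.
    squares = [v ** 2 for v in values] if std_dev else list(values)
    result = math.sqrt(sum(squares))
    if verify and result > sigma_limit:
        raise ValueError(
            f"3D Standard Deviation {result} exceeds GPS Sigma Limit {sigma_limit}"
        )
    return result


# sqrt(3) * 0.01 ~ 0.0173: passes a 0.02 limit, fails a 0.01 limit
assert calc_std_stub([0.01] * 3, sigma_limit=0.02) < 0.02
try:
    calc_std_stub([0.01] * 3, sigma_limit=0.01)
except ValueError:
    print("strict limit rejected as expected")
```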
Collaborator

This was already more or less done in #384, remove.

Contributor Author

Yes, I think I over-engineered a bit here.

Comment on lines 378 to 388
def test_numerical_precision_small_values(self):
    """Test numerical precision with very small values."""
    tiny_gps = pd.Series({
        "ant_cov_XX1": 1e-10,
        "ant_cov_YY1": 1e-10,
        "ant_cov_ZZ1": 1e-10,
    })

    result = calc_std_and_verify(tiny_gps, std_dev=True, sigma_limit=1e-8, verify=True)
    expected = np.sqrt(3 * (1e-10)**2)
    assert abs(result - expected) < 1e-15
Collaborator

This is addressed with the floating point edge cases below, remove.

Comment on lines 390 to 400
def test_numerical_precision_large_values(self):
    """Test numerical precision with very large values."""
    large_gps = pd.Series({
        "ant_cov_XX1": 1e6,
        "ant_cov_YY1": 1e6,
        "ant_cov_ZZ1": 1e6,
    })

    result = calc_std_and_verify(large_gps, std_dev=True, sigma_limit=2e6, verify=True)
    expected = np.sqrt(3 * (1e6)**2)
    assert abs(result - expected) < 1e3  # Allow for some floating point error
Collaborator

This test is unnecessary, remove.

Comment on lines 406 to 407
"ant_cov_XX1": 0.1 + 0.2 - 0.3, # Should be 0, but floating point...
"ant_cov_YY1": 1.0 / 3.0 * 3.0 - 1.0, # Should be 0
Collaborator

Split these up into separate tests so that if one fails we know which one.
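One way to split them (a sketch; the residual values are properties of IEEE-754 doubles, and the test names are illustrative):

```python
def test_float_residual_add_sub():
    # 0.1 + 0.2 - 0.3 leaves a tiny nonzero residual (~5.55e-17)
    assert abs(0.1 + 0.2 - 0.3) < 1e-12


def test_float_residual_mul_div():
    # 1.0 / 3.0 * 3.0 rounds back to exactly 1.0 in double precision
    assert 1.0 / 3.0 * 3.0 - 1.0 == 0.0


test_float_residual_add_sub()
test_float_residual_mul_div()
print("each residual case now fails independently if broken")
```

With separate tests, a failure report names the offending expression directly instead of pointing at a combined Series.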

Contributor Author

Hi @johnbdesanto, I was wondering whether this is actually required, since we are testing it implicitly.

Comment on lines 416 to 426
def test_single_axis_dominance(self):
    """Test cases where one axis dominates the 3D calculation."""
    dominant_axis_gps = pd.Series({
        "ant_cov_XX1": 0.05,   # Large value
        "ant_cov_YY1": 1e-10,  # Tiny value
        "ant_cov_ZZ1": 1e-10,  # Tiny value
    })

    result = calc_std_and_verify(dominant_axis_gps, std_dev=True, sigma_limit=0.1, verify=True)
    # Result should be dominated by the X component
    assert abs(result - 0.05) < 1e-6
Collaborator

This test seems unnecessary, remove.

Collaborator

@johnbdesanto left a comment

I've left a number of comments. Many of the tests are similar to the point of testing the same things, so I have requested that those which are redundant be removed.

@lsetiawan
Member

@weknowthecalmat Were these tests AI Generated?

@weknowthecalmat
Contributor Author

weknowthecalmat commented Nov 10, 2025

@lsetiawan Partly. I wanted to verify that we cover all the possible combinations and don't miss any. The four new test cases I added per AI suggestion are test_parameter_matrix_combinations, test_numerical_precision_small_values / large_values, test_single_axis_dominance, and test_extreme_limits; for the others I validated my own approach. I used a GPT model to brainstorm a bit, and in the process this got a bit over-engineered, which I will avoid.

@weknowthecalmat
Contributor Author

Hi @johnbdesanto, thanks for reviewing it. I think I over-engineered it a bit; thanks for all the comments. Looking through a simpler lens now, I was wondering whether "test_floating_point_edge_cases" is required, since numpy handles this case.

@weknowthecalmat weknowthecalmat changed the title Added 15 new test cases covering all edge scenarios for calc_std_and_verify Added new test cases covering all edge scenarios for calc_std_and_verify Nov 11, 2025
weknowthecalmat and others added 3 commits November 13, 2025 11:02
…o values, negative values, NaN/Inf handling - Boundary testing: exact sigma_limit matches, extreme limits - Error scenarios: invalid data types, missing columns, empty data - Parameter matrix: all combinations of std_dev × verify × sigma_limit - Numerical precision: small/large values, floating-point edge cases - Additional robustness tests for single axis dominance and consistency
… (parameter matrix, numerical precision, floating-point) - Simplify boundary and invalid data types
@weknowthecalmat weknowthecalmat force-pushed the add-quality-control-pytest-coverage branch from 2600c80 to 374ab8d on November 13, 2025 19:04
@weknowthecalmat
Contributor Author

@johnbdesanto I have made the changes per the review feedback.
