Description
On two different machines with exactly the same virtual environment, the results from the same method differ significantly.
Steps to reproduce
python salib_demo.py
(with salib_demo.py as in salib_demo.py.txt)
produces one of the following outputs, each consistent on its respective machine:
Machine 1 (my computer, and apparently also the GitHub Actions runners):
{'S1': [0.1089541115226481, 0.7049211232268789, 0.015075727433210298, 0.07117904961243285], 'S1_conf': [0.15260859689652703, 0.07956660882184943, 0.07877370534246698, 0.13270860395201967], 'names': ['x_exp', 'x_paa', 'x_mdd', 'x_haz']}
Machine 2 (Jenkins server):
{'S1': [0.12359332237072101, 0.7077530937494074, -0.009256312995937502, 0.05828787358778384], 'S1_conf': [0.14814694067816694, 0.08041428188470545, 0.08202268215002247, 0.12927511001940797], 'names': ['x_exp', 'x_paa', 'x_mdd', 'x_haz']}
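The attached salib_demo.py is not reproduced inline here; as a rough sketch, a script of this kind might look like the following. The sampler, analyzer, toy model, and sample size are assumptions for illustration only; just the parameter names are taken from the reported output, and the real demo presumably evaluates a CLIMADA impact calculation instead of the placeholder model.
```python
# Hypothetical stand-in for the attached salib_demo.py.
# Assumes SALib's latin-hypercube sampler and RBD-FAST analyzer.
from SALib.sample import latin
from SALib.analyze import rbd_fast

# Parameter space with the four names from the reported results.
problem = {
    "num_vars": 4,
    "names": ["x_exp", "x_paa", "x_mdd", "x_haz"],
    "bounds": [[0.5, 1.5]] * 4,
}

# Fixed seed: if the two machines still disagree, the discrepancy is not
# caused by the pseudo-random sampling alone.
X = latin.sample(problem, 1000, seed=42)

# Placeholder model; the real demo evaluates a CLIMADA impact function.
Y = X[:, 0] + 2.0 * X[:, 1] + 0.1 * X[:, 2] * X[:, 3]

Si = rbd_fast.analyze(problem, X, Y, seed=42)
print(dict(Si))
```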
On both machines the conda packages are exactly the same (see env-spec.txt).
The consequence of this discrepancy is that the unit test climada.engine.unsequa.test.test_unsequa.TestCalcImpact.test_calc_sensitivity_all_pass started to fail around July 24, 2025.
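For reference, the failing test can be run on its own with the standard unittest runner, for example:
python -m unittest climada.engine.unsequa.test.test_unsequa.TestCalcImpact.test_calc_sensitivity_all_pass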
It is suspicious that the problem on Machine 2 started at the same time as a whole batch of other issues, possibly caused by a bad system update. However, everything else now runs fine again.