Suppress 1e-5 discrepancy warnings for detection probability #286

chiarabellenghi merged 4 commits into master from
Conversation
Pull request overview
This PR addresses issue #274 by adjusting how detection probabilities derived from the public-data effective area are normalized, aiming to avoid small numerical deviations from unity that were triggering warnings in the signal energy PDF construction.
Changes:
- Update `PDAeff.get_detection_prob_for_decnu` to normalize detection probabilities by renormalizing the per-bin integrals to sum to 1.
- Remove the warning in `PDSignalEnergyPDFSet` initialization that logged when detection probabilities didn't sum exactly to unity.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| `skyllh/analyses/i3/publicdata_ps/signalpdf.py` | Removes the warning about `det_prob` not summing to 1 during signal energy PDF setup. |
| `skyllh/analyses/i3/publicdata_ps/aeff.py` | Changes the normalization approach for detection probabilities to enforce unity by construction. |
```python
self._logger.debug(f'det_prob = {det_prob}, sum = {np.sum(det_prob)}')
```
Now that the “sum not unity” warning is removed, it would be good to explicitly guard against degenerate det_prob results (e.g., all zeros / non-finite). Downstream true_e_prob = p / np.sum(p) will produce NaNs if np.sum(p) == 0, so consider raising a clear error (or at least logging) when np.sum(det_prob) <= 0 or ~np.isfinite(det_prob).all() after computing det_prob.
```diff
-self._logger.debug(f'det_prob = {det_prob}, sum = {np.sum(det_prob)}')
+# Guard against degenerate or non-finite detection probabilities.
+if not np.isfinite(det_prob).all():
+    msg = (
+        f'Non-finite detection probabilities encountered for src_dec={src_dec}. '
+        f'det_prob={det_prob}'
+    )
+    self._logger.error(msg)
+    raise ValueError(msg)
+total_det_prob = np.sum(det_prob)
+if total_det_prob <= 0:
+    msg = (
+        f'Non-positive total detection probability (sum={total_det_prob}) '
+        f'for src_dec={src_dec}. det_prob={det_prob}'
+    )
+    self._logger.error(msg)
+    raise ValueError(msg)
+self._logger.debug(f'det_prob = {det_prob}, sum = {total_det_prob}')
```
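The downstream failure mode described above can be reproduced in a few lines. This is a minimal sketch: the array `p` stands in for an all-zero `det_prob` (e.g. a source with no effective-area coverage), and `true_e_prob` follows the name used in the review comment.

```python
import numpy as np

# Hypothetical all-zero detection probabilities; the downstream
# normalization p / np.sum(p) then divides zero by zero.
p = np.zeros(4)
with np.errstate(invalid='ignore'):  # silence the 0/0 RuntimeWarning
    true_e_prob = p / np.sum(p)
print(true_e_prob)  # [nan nan nan nan]
```

An explicit check on `np.sum(det_prob)` before this division turns a silent NaN propagation into an immediate, diagnosable error.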
This reverts commit a50a2e4.
I realized that it makes sense to compute the detection probability normalized in each bin; it is more general. I am still not sure whether it makes sense to have arbitrary limits for the normalization range, as this should probably be the minimum and maximum energy given in the tabulated effA. In any case, for the moment I will simply loosen the tolerance of the `np.isclose` check.
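The per-bin normalization idea can be sketched as follows. This is a hypothetical illustration, not the actual `PDAeff` implementation: the function name `detection_prob_per_bin` and both argument names are made up, and the per-bin integral is approximated as effective area times bin width.

```python
import numpy as np

def detection_prob_per_bin(aeff, e_bin_edges):
    """Sketch: turn tabulated effective-area values into detection
    probabilities that sum to 1 by construction.

    aeff        : effective area per true-energy bin (arbitrary units)
    e_bin_edges : energy bin edges, len(aeff) + 1
    """
    # Per-bin integral ~ Aeff * bin width (piecewise-constant approximation).
    bin_integrals = aeff * np.diff(e_bin_edges)
    total = np.sum(bin_integrals)
    if total <= 0:
        raise ValueError('Total detection probability is non-positive.')
    # Renormalizing by the total makes the sum exactly 1, so no
    # "sum not unity" warning can trigger on rounding noise.
    return bin_integrals / total
```

Because the normalization range here is just the span of `e_bin_edges`, this matches the suggestion that the limits should be the minimum and maximum energy of the tabulated effective area.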
Pull request overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 2 comments.
```diff
 self._logger.debug(f'det_prob = {det_prob}, sum = {np.sum(det_prob)}')

-if not np.isclose(np.sum(det_prob), 1):
+if not np.isclose(np.sum(det_prob), 1, rtol=1e-3, atol=0):
```
The new tolerance is very loose: with rtol=1e-3 this check will no longer warn even for a 0.1% normalization error (e.g. sum=0.999). If the intent is only to suppress ~1e-5 rounding discrepancies, consider tightening this to something like rtol=1e-4 (as suggested in #274) so genuinely mis-normalized inputs still trigger the warning.
```diff
-if not np.isclose(np.sum(det_prob), 1, rtol=1e-3, atol=0):
+if not np.isclose(np.sum(det_prob), 1, rtol=1e-4, atol=0):
```
I think numerical drifts smaller than 0.1% are fine for the analysis.
```diff
 self._logger.debug(f'det_prob = {det_prob}, sum = {np.sum(det_prob)}')

-if not np.isclose(np.sum(det_prob), 1):
+if not np.isclose(np.sum(det_prob), 1, rtol=1e-3, atol=0):
     self._logger.warning(f'The sum of the detection probabilities is not unity! It is {np.sum(det_prob)}.')
```
np.sum(det_prob) is computed multiple times (debug log, isclose check, and warning). Consider storing the sum in a local variable and reusing it to keep the code DRY and ensure the logged value is exactly the one being checked.
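The DRY suggestion could look like the following sketch. The helper name `check_det_prob` and its signature are illustrative, not part of the skyllh codebase; only the log/check logic mirrors the diff above.

```python
import logging

import numpy as np

def check_det_prob(det_prob, logger):
    # Compute the sum once and reuse it, so the debug log, the closeness
    # check, and the warning all refer to exactly the same value.
    det_prob_sum = np.sum(det_prob)
    logger.debug(f'det_prob = {det_prob}, sum = {det_prob_sum}')
    if not np.isclose(det_prob_sum, 1, rtol=1e-3, atol=0):
        logger.warning(
            f'The sum of the detection probabilities is not unity! '
            f'It is {det_prob_sum}.')
    return det_prob_sum
```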
Fixes #274