
Commit 72edd10

eschibli, brunnedu, and dennisbader authored
Fix/smape (#2984)
* Eliminated spurious error on sape/smape when pred and actual are 0
* Removed test for s(m)ape value error when pred and actual are zero; test for 0 s(m)ape for identical series including zeros
* Removed identical lines in test_ape_zero
* removed unnecessary commented-out code
* updated smape docstring
* Updated changelog
* formatting
* Update darts/metrics/metrics.py typo (Co-authored-by: Dustin Brunner <[email protected]>)
* Clarify docstrings as suggested (Co-authored-by: Dustin Brunner <[email protected]>)
* lint
* Fixed docstring
* Fixed updated docstring
* Formatting
* Fixed doc build?
* minor updates
* update changelog

---------

Co-authored-by: eschibli <[email protected]>
Co-authored-by: Dustin Brunner <[email protected]>
Co-authored-by: dennisbader <[email protected]>
1 parent fe51221 commit 72edd10

File tree

3 files changed (+29 -28 lines)

CHANGELOG.md

Lines changed: 2 additions & 0 deletions
@@ -13,6 +13,8 @@ but cannot always guarantee backwards compatibility. Changes that may **break co
 
 **Fixed**
 
+- Updated s(m)ape to not raise a ValueError when actuals and predictions are zero for the same timestep. [#2984](https://github.com/unit8co/darts/pull/2984) by [eschibli](https://github.com/eschibli).
+
 **Dependencies**
 
 - We set an upper version cap on `pandas<3.0.0` until we officially support it. [#2995](https://github.com/unit8co/darts/pull/2995) by [Dennis Bader](https://github.com/dennisbader).

darts/metrics/metrics.py

Lines changed: 12 additions & 23 deletions
@@ -1744,8 +1744,8 @@ def sape(
     .. math::
         200 \\cdot \\frac{\\left| y_t - \\hat{y}_t \\right|}{\\left| y_t \\right| + \\left| \\hat{y}_t \\right|}
 
-    Note that it will raise a `ValueError` if :math:`\\left| y_t \\right| + \\left| \\hat{y}_t \\right| = 0` for some
-    :math:`t`. Consider using the Absolute Scaled Error (:func:`~darts.metrics.metrics.ase`) in these cases.
+    When :math:`\\left| y_t \\right| + \\left| \\hat{y}_t \\right| = 0` for some :math:`t` (i.e., both actual and
+    prediction are zero), the error for that time step is defined as 0.
 
     If :math:`\\hat{y}_t` are stochastic (contains several samples) or quantile predictions, use parameter `q` to
     specify on which quantile(s) to compute the metric on. By default, it uses the median 0.5 quantile

@@ -1785,11 +1785,6 @@ def sape(
     verbose
         Optionally, whether to print operations progress.
 
-    Raises
-    ------
-    ValueError
-        If `actual_series` and `pred_series` contain some zeros at the same time index.
-
     Returns
     -------
     float

@@ -1822,14 +1817,14 @@ def sape(
         remove_nan_union=True,
         q=q,
     )
-    if not np.logical_or(y_true != 0, y_pred != 0).all():
-        raise_log(
-            ValueError(
-                "`actual_series` must be strictly positive to compute the sMAPE."
-            ),
-            logger=logger,
-        )
-    return 200.0 * np.abs(y_true - y_pred) / (np.abs(y_true) + np.abs(y_pred))
+    numerator = 200 * np.abs(y_true - y_pred)
+    denominator = np.abs(y_true) + np.abs(y_pred)
+    return np.divide(
+        numerator,
+        denominator,
+        out=np.zeros_like(numerator, dtype=y_true.dtype),
+        where=denominator != 0,
+    )
 
 
 @multi_ts_support

@@ -1854,9 +1849,8 @@ def smape(
         200 \\cdot \\frac{1}{T}
         \\sum_{t=1}^{T}{\\frac{\\left| y_t - \\hat{y}_t \\right|}{\\left| y_t \\right| + \\left| \\hat{y}_t \\right|} }
 
-    Note that it will raise a `ValueError` if :math:`\\left| y_t \\right| + \\left| \\hat{y}_t \\right| = 0`
-    for some :math:`t`. Consider using the Mean Absolute Scaled Error (:func:`~darts.metrics.metrics.mase`) in these
-    cases.
+    When :math:`\\left| y_t \\right| + \\left| \\hat{y}_t \\right| = 0` for some :math:`t` (i.e., both actual and
+    prediction are zero), the error for that time step is 0.
 
     If :math:`\\hat{y}_t` are stochastic (contains several samples) or quantile predictions, use parameter `q` to
     specify on which quantile(s) to compute the metric on. By default, it uses the median 0.5 quantile

@@ -1891,11 +1885,6 @@ def smape(
     verbose
         Optionally, whether to print operations progress.
 
-    Raises
-    ------
-    ValueError
-        If the `actual_series` and the `pred_series` contain some zeros at the same time index.
-
    Returns
    -------
    float
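Note: the new implementation handles the zero denominator with NumPy's `np.divide` using the `out` and `where` arguments, so time steps where both actual and prediction are zero keep the pre-filled value of 0 instead of producing a warning or NaN. A minimal standalone sketch of that pattern (the array values below are illustrative, not taken from the darts code or tests):

import numpy as np

# Illustrative values: the last time step has both actual and prediction equal to zero.
y_true = np.array([10.0, 5.0, 0.0])
y_pred = np.array([8.0, 5.0, 0.0])

numerator = 200 * np.abs(y_true - y_pred)
denominator = np.abs(y_true) + np.abs(y_pred)

# Where the denominator is zero, np.divide skips the division and keeps the
# pre-filled 0.0 from `out`, so a 0/0 time step contributes an error of 0.
sape = np.divide(
    numerator,
    denominator,
    out=np.zeros_like(numerator, dtype=y_true.dtype),
    where=denominator != 0,
)
print(sape)  # approximately [22.22, 0.0, 0.0]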

darts/tests/metrics/test_metrics.py

Lines changed: 15 additions & 5 deletions
@@ -207,25 +207,35 @@ class TestMetrics:
         "metric",
         [
             metrics.ape,
-            metrics.sape,
             metrics.mape,
-            metrics.smape,
         ],
     )
     def test_ape_zero(self, metric):
         with pytest.raises(ValueError):
             metric(self.series1, self.series1)
 
-        with pytest.raises(ValueError):
-            metric(self.series1, self.series1)
-
     def test_ope_zero(self):
         with pytest.raises(ValueError):
             metrics.ope(
                 self.series1 - self.series1.to_series().mean(),
                 self.series1 - self.series1.to_series().mean(),
             )
 
+    @pytest.mark.parametrize(
+        "metric",
+        [
+            metrics.sape,
+            metrics.smape,
+        ],
+    )
+    def test_sape_zero_denom(self, metric):
+        assert np.allclose(metric(self.series0, self.series0), 0.0), (
+            "Expected SAPE to be 0.0 when both series are identical"
+        )
+        assert np.allclose(metric(self.series1, self.series1), 0.0), (
+            "Expected SAPE to be 0.0 when both series are identical"
+        )
+
     @pytest.mark.parametrize(
         "config",
         [
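As a usage sketch of the changed behavior (assuming a darts version that includes this fix; the series values here are made up for illustration), s(m)ape now returns 0 for identical series even when they contain zeros, instead of raising a ValueError:

import numpy as np
from darts import TimeSeries
from darts.metrics import smape

# A series containing zeros; before this fix, smape(series, series) raised a ValueError.
series = TimeSeries.from_values(np.array([0.0, 1.0, 2.0, 0.0]))
print(smape(series, series))  # 0.0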
