1 change: 1 addition & 0 deletions monai/metrics/__init__.py
@@ -28,6 +28,7 @@
from .panoptic_quality import PanopticQualityMetric, compute_panoptic_quality
from .regression import (
    MAEMetric,
    MAPEMetric,
    MSEMetric,
    MultiScaleSSIMMetric,
    PSNRMetric,
49 changes: 49 additions & 0 deletions monai/metrics/regression.py
@@ -143,6 +143,38 @@ def _compute_metric(self, y_pred: torch.Tensor, y: torch.Tensor) -> torch.Tensor
        return compute_mean_error_metrics(y_pred, y, func=self.abs_func)


class MAPEMetric(RegressionMetric):
    r"""Compute Mean Absolute Percentage Error between two tensors using function:

    .. math::
        \operatorname{MAPE}\left(Y, \hat{Y}\right) = \frac{100}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|.

    More info: https://en.wikipedia.org/wiki/Mean_absolute_percentage_error

    Input `y_pred` is compared with ground truth `y`.
    Both `y_pred` and `y` are expected to be real-valued, where `y_pred` is output from a regression model.
    Note: to avoid an undefined result when `y` contains zeros, a tiny epsilon value is applied to the denominator.

    Example of the typical execution steps of this metric class follows :py:class:`monai.metrics.metric.Cumulative`.

    Args:
        reduction: define the mode to reduce metrics, will only execute reduction on `not-nan` values,
            available reduction modes: {``"none"``, ``"mean"``, ``"sum"``, ``"mean_batch"``, ``"sum_batch"``,
            ``"mean_channel"``, ``"sum_channel"``}, default to ``"mean"``. if "none", will not do reduction.
        get_not_nans: whether to return the `not_nans` count, if True, aggregate() returns (metric, not_nans).
        epsilonDefaults to 1e-7.

    """

    def __init__(
        self, reduction: MetricReduction | str = MetricReduction.MEAN, get_not_nans: bool = False, epsilon: float = 1e-7
    ) -> None:
        super().__init__(reduction=reduction, get_not_nans=get_not_nans)
        self.epsilon = epsilon

    def _compute_metric(self, y_pred: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return compute_mape_metric(y_pred, y, epsilon=self.epsilon)
Comment on lines +146 to +175
Contributor
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if MAPE tests exist anywhere in the repository
rg "MAPEMetric|compute_mape" --type py

Repository: Project-MONAI/MONAI

Length of output: 395


🏁 Script executed:

# Find test files for regression metrics
fd -type f -name "*test*regression*" -o -name "*regression*test*" | head -20

Repository: Project-MONAI/MONAI

Length of output: 233


🏁 Script executed:

# Check if there are tests for other metrics like MAEMetric or MSEMetric
rg "MAEMetric|MSEMetric|test.*metric" --type py -l | grep -i test | head -10

Repository: Project-MONAI/MONAI

Length of output: 485


🏁 Script executed:

cat tests/metrics/test_compute_regression_metrics.py

Repository: Project-MONAI/MONAI

Length of output: 8197


🏁 Script executed:

# Search for any test files that specifically test MAPE
rg "MAPEMetric" --type py tests/

Repository: Project-MONAI/MONAI

Length of output: 45


🏁 Script executed:

# Check if there are any other test files that might import MAPE
rg "compute_mape" --type py tests/

Repository: Project-MONAI/MONAI

Length of output: 45


🏁 Script executed:

# Verify that MAPEMetric is exported as a public API
rg "from monai.metrics import" --type py tests/ | head -5

Repository: Project-MONAI/MONAI

Length of output: 582


🏁 Script executed:

# Check the docstring format and completeness in the regression.py file
sed -n '146,175p' monai/metrics/regression.py | head -40

Repository: Project-MONAI/MONAI

Length of output: 1670


🏁 Script executed:

# Check the full implementation of compute_mape_metric function
rg "def compute_mape_metric" -A 15 monai/metrics/regression.py

Repository: Project-MONAI/MONAI

Length of output: 610


Add unit tests for MAPEMetric.

No tests exist for the new MAPEMetric class or compute_mape_metric function. Per coding guidelines, new definitions must be covered by unit tests. Extend tests/metrics/test_compute_regression_metrics.py to include MAPEMetric alongside the existing MSEMetric, MAEMetric, RMSEMetric, and PSNRMetric tests.

Also fix the docstring malformation on line 167: "epsilonDefaults to 1e-7." should be "epsilon: float. Defaults to 1e-7."

🤖 Prompt for AI Agents
In @monai/metrics/regression.py around lines 146-175, add unit tests for
MAPEMetric and compute_mape_metric and fix the docstring typo: change
"epsilonDefaults to 1e-7." to "epsilon: float. Defaults to 1e-7." For tests,
extend the existing regression metrics test module to include MAPEMetric by: (1)
adding direct tests of compute_mape_metric with simple tensors (including cases
with zeros in y to verify epsilon is applied), (2) adding Cumulative-style tests
that instantiate MAPEMetric (exercise reduction modes like "mean" and "none" and
get_not_nans=True) and compare results to expected scalar/tensor values, and (3)
ensuring behavior matches other metrics' patterns (MSEMetric, MAEMetric) in that
test file so CI picks it up.
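
The review above names the tests to add but stops short of code. Below is a minimal sketch of what that extension could look like, assuming the file layout the review names (tests/metrics/test_compute_regression_metrics.py) and that the diff above is applied; the class name TestMAPEMetric and the expected values are illustrative, not part of the PR:

import unittest

import torch

from monai.metrics import MAPEMetric
from monai.metrics.regression import compute_mape_metric


class TestMAPEMetric(unittest.TestCase):
    def test_function_value(self):
        # each element is off by 50% of its ground truth: |y - y_pred| / |y| = 0.5
        y_pred = torch.tensor([[1.0, 2.0]])
        y = torch.tensor([[2.0, 4.0]])
        result = compute_mape_metric(y_pred, y)
        # per-batch-item mean of (50%, 50%) -> [[50.0]]
        torch.testing.assert_close(result, torch.tensor([[50.0]]))

    def test_zero_ground_truth_is_finite(self):
        # a zero in y would divide by zero; the epsilon clamp must keep the result finite
        result = compute_mape_metric(torch.tensor([[1.0]]), torch.tensor([[0.0]]))
        self.assertTrue(torch.isfinite(result).all())

    def test_cumulative_aggregation(self):
        # exercise the Cumulative-style workflow with get_not_nans=True
        metric = MAPEMetric(reduction="mean", get_not_nans=True)
        metric(y_pred=torch.tensor([[1.0, 2.0]]), y=torch.tensor([[2.0, 4.0]]))
        value, not_nans = metric.aggregate()
        torch.testing.assert_close(value, torch.tensor(50.0))
        self.assertEqual(int(not_nans), 1)


if __name__ == "__main__":
    unittest.main()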



class RMSEMetric(RegressionMetric):
    r"""Compute Root Mean Squared Error between two tensors using function:

@@ -220,6 +252,23 @@ def compute_mean_error_metrics(y_pred: torch.Tensor, y: torch.Tensor, func: Call
    return torch.mean(flt(func(y - y_pred)), dim=-1, keepdim=True)


def compute_mape_metric(y_pred: torch.Tensor, y: torch.Tensor, epsilon: float = 1e-7) -> torch.Tensor:
    """
    Compute Mean Absolute Percentage Error.

    Args:
        y_pred: predicted values
        y: ground truth values
        epsilon: small value to avoid division by zero

    Returns:
        MAPE value as percentage
    """
    flt = partial(torch.flatten, start_dim=1)
    percentage_error = torch.abs(y - y_pred) / torch.clamp(torch.abs(y), min=epsilon) * 100.0
    return torch.mean(flt(percentage_error), dim=-1, keepdim=True)


class KernelType(StrEnum):
    GAUSSIAN = "gaussian"
    UNIFORM = "uniform"
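
For readers following the diff, here is a short usage sketch of the new MAPEMetric / compute_mape_metric API, assuming the changes above are merged; shapes and numbers are illustrative only:

import torch

from monai.metrics import MAPEMetric
from monai.metrics.regression import compute_mape_metric

y_pred = torch.tensor([[0.9, 2.2], [3.0, 0.5]])  # predictions, shape (batch=2, 2 values)
y = torch.tensor([[1.0, 2.0], [3.0, 1.0]])       # ground truth, same shape

# per-item percentages: [(10% + 10%) / 2, (0% + 50%) / 2] -> [[10.0], [25.0]]
print(compute_mape_metric(y_pred, y))

# Cumulative-style usage, averaged over the batch -> 17.5
metric = MAPEMetric(reduction="mean")
metric(y_pred=y_pred, y=y)
print(metric.aggregate())

# a zero in the ground truth stays finite thanks to the epsilon clamp,
# though the resulting percentage is enormous (1.0 / 1e-7 * 100 = 1e9)
print(compute_mape_metric(torch.tensor([[1.0]]), torch.tensor([[0.0]])))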