feat(metrics): Add MAPEMetric for regression evaluation. #8686
base: dev
```diff
@@ -143,6 +143,38 @@ def _compute_metric(self, y_pred: torch.Tensor, y: torch.Tensor) -> torch.Tensor
         return compute_mean_error_metrics(y_pred, y, func=self.abs_func)


+class MAPEMetric(RegressionMetric):
+    r"""Compute Mean Absolute Percentage Error between two tensors using function:
+
+    .. math::
+        \operatorname{MAPE}\left(Y, \hat{Y}\right) = \frac{100}{n}\sum_{i=1}^{n}\left|\frac{y_i-\hat{y_i}}{y_i}\right|.
+
+    More info: https://en.wikipedia.org/wiki/Mean_absolute_percentage_error
+
+    Input `y_pred` is compared with ground truth `y`.
+    Both `y_pred` and `y` are expected to be real-valued, where `y_pred` is output from a regression model.
+    Note: Tackling the undefined error, a tiny epsilon value is added to the denominator part.
+
+    Example of the typical execution steps of this metric class follows :py:class:`monai.metrics.metric.Cumulative`.
+
+    Args:
+        reduction: define the mode to reduce metrics, will only execute reduction on `not-nan` values,
+            available reduction modes: {``"none"``, ``"mean"``, ``"sum"``, ``"mean_batch"``, ``"sum_batch"``,
+            ``"mean_channel"``, ``"sum_channel"``}, default to ``"mean"``. if "none", will not do reduction.
+        get_not_nans: whether to return the `not_nans` count, if True, aggregate() returns (metric, not_nans).
+        epsilonDefaults to 1e-7.
+    """
+
+    def __init__(
+        self, reduction: MetricReduction | str = MetricReduction.MEAN, get_not_nans: bool = False, epsilon: float = 1e-7
+    ) -> None:
+        super().__init__(reduction=reduction, get_not_nans=get_not_nans)
+        self.epsilon = epsilon
+
+    def _compute_metric(self, y_pred: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
+        return compute_mape_metric(y_pred, y, epsilon=self.epsilon)
```
Comment on lines +146 to +175

**Contributor (automated review):**

Add unit tests for `MAPEMetric`. A repository search (`rg "MAPEMetric|compute_mape" --type py tests/`) finds no tests for the new `MAPEMetric` class or the `compute_mape_metric` function, while comparable metrics such as `MAEMetric` and `MSEMetric` are exercised in `tests/metrics/test_compute_regression_metrics.py`. Please add matching test cases.

Also fix the docstring malformation on line 167: "epsilonDefaults to 1e-7." should be "epsilon: float. Defaults to 1e-7."
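The requested tests could start from a minimal sketch like the one below. It checks only the arithmetic the diff implements (absolute error over the epsilon-clamped target, scaled to percent); the `mape` helper and the sample values are illustrative assumptions, and an actual MONAI test would instead call `MAPEMetric` / `compute_mape_metric` on `torch` tensors in the repository's test harness.

```python
def mape(y_pred, y, epsilon=1e-7):
    # Same elementwise rule as the diff: |y - y_pred| / clamp(|y|, epsilon) * 100,
    # averaged over all elements. Pure-Python stand-in for the tensor version.
    errs = [abs(t - p) / max(abs(t), epsilon) * 100.0 for p, t in zip(y_pred, y)]
    return sum(errs) / len(errs)

# Perfect predictions give exactly 0% error.
assert mape([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0

# Predictions that are each 10% off give a 10% MAPE (up to float rounding).
assert abs(mape([110.0, 220.0], [100.0, 200.0]) - 10.0) < 1e-9

# A zero target does not raise ZeroDivisionError, thanks to the epsilon clamp.
assert mape([1.0], [0.0]) > 0.0
```

The zero-target case is worth keeping: it is exactly the "undefined error" the PR's epsilon is meant to handle, and it would fail loudly if the clamp were removed.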
```diff
 class RMSEMetric(RegressionMetric):
     r"""Compute Root Mean Squared Error between two tensors using function:
```
```diff
@@ -220,6 +252,23 @@ def compute_mean_error_metrics(y_pred: torch.Tensor, y: torch.Tensor, func: Call
     return torch.mean(flt(func(y - y_pred)), dim=-1, keepdim=True)


+def compute_mape_metric(y_pred: torch.Tensor, y: torch.Tensor, epsilon: float = 1e-7) -> torch.Tensor:
+    """
+    Compute Mean Absolute Percentage Error.
+
+    Args:
+        y_pred: predicted values
+        y: ground truth values
+        epsilon: small value to avoid division by zero
+
+    Returns:
+        MAPE value as percentage
+    """
+    flt = partial(torch.flatten, start_dim=1)
+    percentage_error = torch.abs(y - y_pred) / torch.clamp(torch.abs(y), min=epsilon) * 100.0
+    return torch.mean(flt(percentage_error), dim=-1, keepdim=True)


 class KernelType(StrEnum):
     GAUSSIAN = "gaussian"
     UNIFORM = "uniform"
```
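For reference, the per-sample reduction that `compute_mape_metric` performs (flatten each sample, clamp the denominator at `epsilon`, average over the flattened elements, keep one value per batch entry) can be sketched in plain Python. `mape_per_sample` is a hypothetical name for this sketch, not part of MONAI's API.

```python
def mape_per_sample(y_pred, y, epsilon=1e-7):
    # y_pred and y are batches: one flat list of values per sample, mirroring
    # torch.flatten(..., start_dim=1) followed by mean(dim=-1, keepdim=True).
    out = []
    for pred_row, true_row in zip(y_pred, y):
        errs = [abs(t - p) / max(abs(t), epsilon) * 100.0
                for p, t in zip(pred_row, true_row)]
        out.append(sum(errs) / len(errs))
    return out  # one MAPE value (in percent) per sample in the batch

# One sample whose two predictions are each 10% off its targets:
print(mape_per_sample([[110.0, 180.0]], [[100.0, 200.0]]))
```

Returning one value per sample (rather than a single scalar) is what lets the class-level `reduction` modes such as `"mean_batch"` and `"none"` operate on the per-sample results afterwards.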