
Conversation

@ytl0623 ytl0623 (Contributor) commented Dec 19, 2025

Fixes #8601

Description

Support alpha as a list, tuple, or tensor of floats, in addition to the existing scalar support.
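Illustratively, the sequence form supplies one weight per class channel, broadcast against the loss term. A minimal PyTorch sketch (shapes and values are invented for the example; this is not MONAI's implementation):

```python
import torch

# One weight per class; values chosen to be exactly representable floats.
alpha = torch.tensor([0.25, 0.5, 0.75])

# Stand-in for the per-voxel focal term: (batch, classes, H, W).
focal_term = torch.ones(2, 3, 4, 4)

# Align alpha with the class axis, then broadcast over batch and space.
weighted = alpha.view(1, -1, 1, 1) * focal_term

print(weighted.shape)  # torch.Size([2, 3, 4, 4])
```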

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

@coderabbitai coderabbitai bot commented Dec 19, 2025

📝 Walkthrough

FocalLoss now accepts per-class alpha as float | Sequence[float] | None; sequences are converted to tensors and validated against the number of classes. The softmax_focal_loss and sigmoid_focal_loss signatures accept float | torch.Tensor | None and implement per-class weighting with proper broadcasting; the sigmoid path applies a sequence alpha only to positive targets. Forward uses a local alpha_arg to avoid mutating self.alpha, and a scalar alpha is warned about and ignored for softmax when include_background=False. Tests add a parameterized test_alpha_sequence_broadcasting across devices and settings.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Title check — Passed: the title clearly summarizes the main change, adding per-class weight support via the alpha parameter in FocalLoss.
  • Description check — Passed: the description covers objectives, types of changes, and new tests; some optional sections are marked incomplete but the core content is present.
  • Linked Issues check — Passed: the PR fully implements issue #8601; alpha now accepts float/Sequence/Tensor with per-class weighting for both softmax and sigmoid focal loss.
  • Out of Scope Changes check — Passed: all changes directly support the per-class alpha weighting objective; no out-of-scope modifications detected.
  • Docstring Coverage — Passed: docstring coverage is 83.33%, above the required threshold of 80.00%.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
monai/losses/focal_loss.py (1)

68-120: Add tests for sequence alpha feature.

The new alpha parameter now accepts sequences but has no test coverage. Add tests for:

  • Sequence alpha with correct length (both softmax and sigmoid modes)
  • Sequence alpha with incorrect length (should raise ValueError)
  • Sequence alpha with include_background=False (should work)
  • Broadcasting behavior across spatial dimensions
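The length-mismatch case above can be sketched without MONAI; check_alpha below is a hypothetical helper mirroring the validation described in the PR, not the actual code:

```python
def check_alpha(alpha, num_classes):
    """Raise if a sequence alpha does not provide one weight per class."""
    if isinstance(alpha, (list, tuple)) and len(alpha) != num_classes:
        raise ValueError(
            f"alpha has {len(alpha)} entries but input has {num_classes} classes"
        )

# Correct length passes silently; a mismatch is the error a test should expect.
check_alpha([0.3, 0.7], 2)
try:
    check_alpha([0.3, 0.7], 3)
    raised = False
except ValueError:
    raised = True
print(raised)  # True
```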
🧹 Nitpick comments (5)
monai/losses/focal_loss.py (5)

81-87: Clarify docstring with example.

The interaction between include_background, use_softmax, and the alpha type is complex. Consider adding a brief example showing sequence alpha usage, e.g., alpha=[0.25, 0.35, 0.4] for a 3-class case.


167-167: Add stacklevel to warning.

Per static analysis and best practice, specify stacklevel=2 so the warning points to the user's code, not this internal method.

🔎 Proposed fix
-                    warnings.warn("`include_background=False`, scalar `alpha` ignored when using softmax.")
+                    warnings.warn("`include_background=False`, scalar `alpha` ignored when using softmax.", stacklevel=2)

222-237: Sequence length validation deferred to runtime.

The check that alpha sequence length matches class count (lines 229-232) occurs inside the loss function, not at initialization or start of forward. This means the error surfaces during training rather than at model construction. Consider validating alpha length earlier if class count can be inferred.
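One way to surface the error at construction time, sketched here with a hypothetical num_classes argument that the real FocalLoss does not take:

```python
class EarlyCheckedLoss:
    """Sketch only: validates a sequence alpha in __init__ when the class
    count is known up front, so a mismatch fails before training starts."""

    def __init__(self, alpha=None, num_classes=None):
        if (
            isinstance(alpha, (list, tuple))
            and num_classes is not None
            and len(alpha) != num_classes
        ):
            raise ValueError("length of alpha must match the number of classes")
        self.alpha = alpha

ok = EarlyCheckedLoss(alpha=[0.2, 0.8], num_classes=2)
try:
    EarlyCheckedLoss(alpha=[0.2, 0.8], num_classes=3)
    failed_early = False
except ValueError:
    failed_early = True
```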


230-232: Simplify exception message.

Per static analysis (TRY003), extract long messages into a constant or use shorter inline text.


272-274: Simplify exception message.

Per static analysis (TRY003), extract long messages into a constant or use shorter inline text.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 15fd428 and 88b1182.

📒 Files selected for processing (1)
  • monai/losses/focal_loss.py (6 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/focal_loss.py
🧬 Code graph analysis (1)
monai/losses/focal_loss.py (1)
monai/utils/enums.py (1)
  • LossReduction (253-264)
🪛 Ruff (0.14.8)
monai/losses/focal_loss.py

166-166: Local variable alpha_arg is assigned to but never used

Remove assignment to unused variable alpha_arg

(F841)


167-167: No explicit stacklevel keyword argument found

Set stacklevel=2

(B028)


230-232: Avoid specifying long messages outside the exception class

(TRY003)


272-274: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: min-dep-py3 (3.11)
  • GitHub Check: min-dep-py3 (3.9)
  • GitHub Check: min-dep-pytorch (2.8.0)
  • GitHub Check: min-dep-py3 (3.10)
  • GitHub Check: min-dep-py3 (3.12)
  • GitHub Check: min-dep-pytorch (2.5.1)
  • GitHub Check: min-dep-pytorch (2.7.1)
  • GitHub Check: min-dep-pytorch (2.6.0)
  • GitHub Check: min-dep-os (ubuntu-latest)
  • GitHub Check: min-dep-os (macOS-latest)
  • GitHub Check: min-dep-os (windows-latest)
  • GitHub Check: quick-py3 (macOS-latest)
  • GitHub Check: flake8-py3 (pytype)
  • GitHub Check: quick-py3 (ubuntu-latest)
  • GitHub Check: flake8-py3 (codeformat)
  • GitHub Check: flake8-py3 (mypy)
  • GitHub Check: build-docs
  • GitHub Check: quick-py3 (windows-latest)
  • GitHub Check: packaging
🔇 Additional comments (2)
monai/losses/focal_loss.py (2)

73-73: Type hint correctly extended.

The signature now accepts scalar or sequence for per-class alpha weighting.


265-281: Sequence alpha logic correct; validation deferred.

The per-class alpha handling properly validates length (lines 271-274) and broadcasts (lines 276-277). However, like softmax_focal_loss, validation occurs at runtime rather than earlier. The broadcasting and alpha_factor computation are correct.

Similar to softmax_focal_loss, consider validating alpha sequence length earlier in the lifecycle.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
monai/losses/focal_loss.py (4)

161-170: Critical bug from previous review is fixed.

The code now correctly passes the local variable alpha_arg instead of the undefined self.alpha_arg. The logic properly handles include_background=False by nullifying scalar alpha (which assumes background weighting) while preserving sequence alpha.

Add stacklevel=2 to the warning.

Line 167 should include stacklevel=2 for proper warning attribution.

🔎 Fix for warning stacklevel
-                    warnings.warn("`include_background=False`, scalar `alpha` ignored when using softmax.")
+                    warnings.warn("`include_background=False`, scalar `alpha` ignored when using softmax.", stacklevel=2)

210-239: Softmax focal loss correctly handles scalar and sequence alpha.

The scalar path implements standard focal loss weighting (1-alpha for background, alpha for foreground). Sequence validation ensures length matches the number of classes, and broadcasting is correct.

Minor: Long error message inside ValueError.

Lines 230-232 define a long message inline. Consider extracting to a constant if this pattern recurs, but acceptable as-is.
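The scalar convention described above can be written out directly; the list construction below is illustrative, not the PR's exact code:

```python
# Scalar alpha in softmax mode: 1 - alpha for background (class 0),
# alpha for every foreground class. Values here are exact binary floats.
alpha = 0.25
num_classes = 4
alpha_factor = [1 - alpha] + [alpha] * (num_classes - 1)
print(alpha_factor)  # [0.75, 0.25, 0.25, 0.25]
```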


242-283: Sigmoid focal loss correctly handles scalar and sequence alpha.

The scalar path applies alpha when target=1 and 1-alpha when target=0. Sequence validation and per-channel broadcasting are implemented correctly.

Minor: Long error message inside ValueError.

Lines 272-274 define a long message inline. Consider extracting to a constant if this pattern recurs, but acceptable as-is.
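The scalar sigmoid rule described above reduces to a one-line blend of alpha and 1 - alpha driven by the target; a standalone sketch (not the MONAI source):

```python
import torch

alpha = 0.25
target = torch.tensor([[1.0, 0.0], [0.0, 1.0]])

# alpha where target == 1, (1 - alpha) where target == 0.
alpha_factor = target * alpha + (1 - target) * (1 - alpha)
```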


73-73: Verify test coverage for new sequence alpha functionality.

The PR adds per-class alpha support, but test checklist items are unchecked. Ensure tests cover:

  • Sequence alpha with include_background=True and False
  • Length validation errors
  • Both softmax and sigmoid variants
  • Edge cases (single class, empty sequence, etc.)

Do you want me to generate a shell script to check for existing test files that cover FocalLoss, or open an issue to track test coverage?

Also applies to: 211-211, 243-243

📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 88b1182 and 820ce94.

📒 Files selected for processing (1)
  • monai/losses/focal_loss.py (6 hunks)
🔇 Additional comments (1)
monai/losses/focal_loss.py (1)

73-73: Type signature and documentation updated correctly.

The alpha parameter now accepts sequences, and the docstring clearly specifies the length requirement relative to include_background.

Also applies to: 81-81, 85-87

Signed-off-by: ytl0623 <[email protected]>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
monai/losses/focal_loss.py (3)

81-87: Clarify softmax scalar alpha convention in docstring.

The docstring describes sequence alpha but doesn't explain that scalar alpha in softmax mode applies 1-alpha to the background class (index 0) and alpha to all other classes. This differs from sigmoid mode where alpha is applied symmetrically based on target value. Document this convention for user clarity.


223-238: Softmax alpha implementation correct.

The implementation properly handles both scalar (asymmetric background weighting) and sequence (explicit per-class) alpha. Length validation and broadcasting are correct.

Optional: The docstring states alpha "should be in [0, 1]" but there's no runtime validation. Consider adding a check if strict enforcement is desired, though this may be left as a user responsibility.


267-281: Sigmoid alpha implementation correct.

The implementation properly handles scalar and sequence alpha with standard focal loss formulation. Length validation and per-class broadcasting are correct.

Optional: Same as softmax—consider adding [0, 1] range validation if strict enforcement is desired.

📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 820ce94 and 50cc7e9.

📒 Files selected for processing (1)
  • monai/losses/focal_loss.py (6 hunks)
🔇 Additional comments (2)
monai/losses/focal_loss.py (2)

73-73: Type hint correctly updated.

The signature now accepts scalar, sequence, or None for per-class alpha weighting.


162-170: Forward logic correct.

The alpha_arg local variable correctly handles the special case where scalar alpha is nulled for softmax with include_background=False, while preserving sequence alpha for explicit per-class control.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
monai/losses/focal_loss.py (1)

167-167: Add stacklevel=2 to warning.

Set explicit stacklevel=2 so the warning points to the caller.

🔎 Proposed fix
-                    warnings.warn("`include_background=False`, scalar `alpha` ignored when using softmax.")
+                    warnings.warn("`include_background=False`, scalar `alpha` ignored when using softmax.", stacklevel=2)
🧹 Nitpick comments (2)
monai/losses/focal_loss.py (2)

223-238: LGTM.

Per-class alpha logic is correct: scalar creates standard background/foreground weighting; sequence validates length and broadcasts properly.

Optional: Consider validating that alpha values are in [0,1] as documented in the docstring.


267-281: LGTM.

Per-class alpha logic is correct: scalar preserves original behavior; sequence validates length and applies per-class weighting with proper broadcasting.

Optional: Consider validating that alpha values are in [0,1] as documented in the docstring.

📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 50cc7e9 and 6d41595.

📒 Files selected for processing (1)
  • monai/losses/focal_loss.py (7 hunks)
🔇 Additional comments (4)
monai/losses/focal_loss.py (4)

16-16: LGTM.

Import correctly added for sequence type hints.


73-73: LGTM.

Signature correctly extends alpha to accept per-class sequences while preserving backward compatibility.


81-87: LGTM.

Docstring accurately describes new per-class alpha behavior and interaction with include_background.


162-170: LGTM.

Logic correctly handles scalar alpha with include_background=False by nullifying it and warning. The alpha_arg intermediate variable prevents mutation of self.alpha.
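The alpha_arg handling described here can be sketched as a small standalone function (resolve_alpha is a name invented for this illustration, not part of the PR):

```python
import warnings

def resolve_alpha(alpha, use_softmax, include_background):
    """Return the alpha to use for this call, leaving the stored value untouched.

    A scalar alpha encodes a background/foreground split, which is meaningless
    once the background channel is dropped, so it is warned about and ignored;
    an explicit per-class sequence is kept as-is.
    """
    alpha_arg = alpha
    if use_softmax and not include_background and isinstance(alpha, float):
        warnings.warn(
            "`include_background=False`, scalar `alpha` ignored when using softmax.",
            stacklevel=2,
        )
        alpha_arg = None
    return alpha_arg
```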

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
monai/losses/focal_loss.py (1)

167-167: Add stacklevel=2 to warning.

Still missing from previous review.

🧹 Nitpick comments (1)
monai/losses/focal_loss.py (1)

210-218: Add docstrings for alpha parameter.

Both softmax_focal_loss and sigmoid_focal_loss lack documentation for the alpha parameter, including its new sequence support.

Also applies to: 243-250

📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 6d41595 and 4211ca5.

📒 Files selected for processing (1)
  • monai/losses/focal_loss.py (7 hunks)
🪛 Ruff (0.14.8)
monai/losses/focal_loss.py

73-73: Undefined name Sequence

(F821)


74-74: Undefined name Sequence

(F821)
🔇 Additional comments (4)
monai/losses/focal_loss.py (4)

84-87: LGTM!

Docstring clearly describes the new per-class alpha behavior and validation requirements.


162-170: LGTM!

The alpha_arg logic correctly preserves original alpha while handling the scalar edge case for include_background=False.


222-238: LGTM!

Scalar vs sequence handling is clean. Validation and broadcasting are correct.


266-282: LGTM!

Per-channel alpha handling correctly extends the scalar semantics to sequence form.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
monai/losses/focal_loss.py (1)

175-175: Add stacklevel=2 to warning.

The warning should specify stacklevel=2 so it points to the caller rather than this internal line.

🔎 Proposed fix
-                    warnings.warn("`include_background=False`, scalar `alpha` ignored when using softmax.")
+                    warnings.warn("`include_background=False`, scalar `alpha` ignored when using softmax.", stacklevel=2)
🧹 Nitpick comments (5)
monai/losses/focal_loss.py (5)

81-87: Docstring is accurate but validation is missing.

Line 85 states alpha values should be in [0, 1], but the code doesn't enforce this. Consider adding validation in __init__.

Optionally, adding a brief example of per-class alpha usage (e.g., alpha=[0.1, 0.3, 0.6] for 3 classes) would improve clarity.

🔎 Suggested validation
         if isinstance(alpha, (list, tuple)):
             self.alpha = torch.tensor(alpha)
+            if (self.alpha < 0).any() or (self.alpha > 1).any():
+                raise ValueError("All alpha values must be in the range [0, 1].")
         else:
             self.alpha = alpha
+            if isinstance(alpha, (float, int)) and not (0 <= alpha <= 1):
+                raise ValueError("Alpha must be in the range [0, 1].")

167-170: Alpha device handling is correct.

Properly transfers tensor alpha to the input device. Minor optimization: could skip device transfer for scalar alpha, but current implementation is safe and correct.
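The device handling this comment describes amounts to one transfer; a minimal sketch with invented shapes (a scalar alpha needs no such move):

```python
import torch

# A tensor alpha is moved to the input's device before it is used,
# so CPU-constructed weights work with GPU inputs too.
alpha = torch.tensor([0.25, 0.5, 0.75])
inp = torch.zeros(2, 3, 4, 4)      # stand-in for the network output
alpha = alpha.to(inp.device)
```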


230-246: Alpha handling logic is correct.

Properly distinguishes scalar (background/foreground weighting) from sequence (per-class weighting). Validation on line 238 ensures sequence length matches number of classes.

Minor: Static analysis suggests shorter exception messages (TRY003), but this is stylistic and the descriptive message is helpful.


274-289: Per-channel alpha implementation is correct.

Scalar alpha applies standard focal loss weighting, while sequence alpha provides per-channel control. Broadcasting on lines 284-286 properly handles multi-dimensional targets.

Minor: Consider shorter exception message per TRY003, though current message is clear.


73-73: Verify test coverage for per-class alpha.

Ensure tests cover:

  • Sequence alpha for both softmax and sigmoid modes
  • Validation error when sequence length mismatches number of classes
  • Behavior with include_background=False + sequence alpha
  • Edge cases: empty sequence, values outside [0,1] if validation is added

Do you want me to generate test cases for these scenarios?

📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 4211ca5 and 1b24834.

📒 Files selected for processing (1)
  • monai/losses/focal_loss.py (7 hunks)
🔇 Additional comments (4)
monai/losses/focal_loss.py (4)

14-16: Imports are correct.

Sequence from collections.abc is properly imported and used consistently throughout the file.


73-74: Signature extension looks good.

The alpha parameter now correctly accepts per-class weights via Sequence[float].


120-125: Conversion logic is correct.

Properly converts sequence types to torch.Tensor while preserving scalar floats.
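As a minimal sketch of the conversion behavior described here (the helper name `normalize_alpha` is hypothetical, not the actual MONAI code):

```python
import torch
from collections.abc import Sequence

def normalize_alpha(alpha):
    """Keep None as-is, cast scalars to float, and convert sequences
    of per-class weights to a 1-D float tensor."""
    if alpha is None:
        return None
    if isinstance(alpha, Sequence) and not isinstance(alpha, str):
        return torch.as_tensor(alpha, dtype=torch.float32)
    return float(alpha)

print(normalize_alpha(0.25))             # a plain float
print(normalize_alpha([0.1, 0.3, 0.6]))  # a 1-D tensor of per-class weights
```

Storing sequences as tensors up front lets the loss helpers move them to the prediction's device and dtype in one call.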


176-178: Loss function calls are correct.

Both softmax and sigmoid paths properly forward the alpha_arg parameter.

Signed-off-by: ytl0623 <[email protected]>
Signed-off-by: ytl0623 <[email protected]>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
monai/losses/focal_loss.py (1)

176-176: Add stacklevel=2 to warning.

The warning should include stacklevel=2 to point to the caller.

🔎 Proposed fix
-                    warnings.warn("`include_background=False`, scalar `alpha` ignored when using softmax.")
+                    warnings.warn("`include_background=False`, scalar `alpha` ignored when using softmax.", stacklevel=2)
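A small self-contained illustration of why `stacklevel=2` matters (the helper names here are hypothetical):

```python
import warnings

def emit_warning():
    # stacklevel=2 attributes the warning to the caller of this function,
    # which is where the user actually invoked the loss.
    warnings.warn(
        "`include_background=False`, scalar `alpha` ignored when using softmax.",
        stacklevel=2,
    )

def user_code():
    emit_warning()

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    user_code()

print(caught[0].message)
```

With `stacklevel=2`, the reported file and line point at `user_code()` rather than the `warnings.warn` call inside the library.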
🧹 Nitpick comments (3)
monai/losses/focal_loss.py (3)

85-87: Clarify that range applies to each alpha element.

The docstring states "The value should be in [0, 1]" but doesn't explicitly say this applies to each element when alpha is a sequence. Consider: "Each value should be in [0, 1]."


237-246: Document scalar alpha behavior for softmax.

When alpha is scalar, lines 239-240 construct [1-alpha, alpha, alpha, ...] treating alpha as the weight for all foreground classes vs. 1-alpha for background. This design choice isn't documented in the function docstring or main class docstring.

Consider adding a note explaining this convention for multi-class softmax with scalar alpha.
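The convention can be sketched as follows (the function name is illustrative, assuming class 0 is the background channel):

```python
import torch

def scalar_alpha_to_class_weights(alpha: float, num_classes: int) -> torch.Tensor:
    # Background (class 0) is weighted 1 - alpha; all foreground classes get alpha.
    weights = torch.full((num_classes,), alpha)
    weights[0] = 1.0 - alpha
    return weights

print(scalar_alpha_to_class_weights(0.25, 4))
# tensor([0.7500, 0.2500, 0.2500, 0.2500])
```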


73-73: Verify test coverage for sequence alpha.

The PR adds significant new functionality (per-class alpha via sequences). Ensure test coverage includes:

  • Valid sequence alpha for both softmax and sigmoid paths
  • include_background=False with sequence alpha
  • Length mismatch errors (lines 242-245, 288-291)
  • Device and dtype handling for tensor alpha

Do you want me to help generate test cases for these scenarios?

Also applies to: 220-220, 256-256

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 1b24834 and a574d7b.

📒 Files selected for processing (1)
  • monai/losses/focal_loss.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to the PEP8 style guide and are sensible and informative with regard to their function, though simple names are permitted for loop and comprehension variables. Ensure routine names are meaningful with regard to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each variable, return value, and raised exception in the appropriate section of Google-style docstrings. Examine the code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements that improve code efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/focal_loss.py
🧬 Code graph analysis (1)
monai/losses/focal_loss.py (1)
monai/utils/enums.py (1)
  • LossReduction (253-264)
🪛 Ruff (0.14.8)
monai/losses/focal_loss.py

176-176: No explicit stacklevel keyword argument found

Set stacklevel=2

(B028)


243-245: Avoid specifying long messages outside the exception class

(TRY003)


289-291: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (1)
monai/losses/focal_loss.py (1)

279-296: Sigmoid sequence alpha implementation looks correct.

The per-class alpha handling properly extends the binary focal loss formula to multi-class multi-label scenarios. Device/dtype handling and broadcasting are correct.
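A sketch of the per-class weighting being described, under the assumption that `alpha` has one entry per channel and `target` is a multi-label `(N, C, H, W)` tensor (names are illustrative, not MONAI's internals):

```python
import torch

def per_class_alpha_weight(target: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
    # Reshape alpha to (1, C, 1, 1) so it broadcasts over batch and spatial dims;
    # positives are weighted by alpha_c, negatives by 1 - alpha_c.
    a = alpha.to(device=target.device, dtype=target.dtype).view(1, -1, 1, 1)
    return torch.where(target == 1, a, 1.0 - a)

target = torch.tensor([[[[1.0]], [[0.0]]]])  # shape (1, 2, 1, 1)
alpha = torch.tensor([0.25, 0.75])
print(per_class_alpha_weight(target, alpha))
```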

@ytl0623
Contributor Author

ytl0623 commented Dec 19, 2025

Hi @ericspod @KumoLiu @Nic-Ma,

Sorry to bother you. The CI is failing with [Errno 28] No space left on device during the build-docs and packaging steps. This seems to be a server-side issue. Could you please re-trigger the workflows?

Thanks in advance!

@ericspod
Member

ericspod commented Jan 6, 2026

Hi @ytl0623 thanks for this, we should have the CICD issues sorted now. It looks good to me though I think we need to add tests for the class with a sequence alpha value to ensure this works correctly. Please add this then update the branch to incorporate the CICD fixes.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/losses/test_focal_loss.py (1)

377-416: Test adequately covers per-class alpha broadcasting.

The test validates both softmax and sigmoid paths with alpha sequences, checks scalar output, and verifies error handling for length mismatches. Consider adding a test case with include_background=False to ensure per-class alpha sequences work correctly after background removal, as this is a key use case mentioned in the PR objectives.

Optional: Test with include_background=False

Add a test case to verify per-class alpha with background exclusion:

# Case 4: include_background=False with per-class alpha
if device == devices[0]:
    loss_func_no_bg = FocalLoss(
        to_onehot_y=True, 
        gamma=2.0, 
        alpha=alpha_seq[1:],  # Exclude background alpha
        use_softmax=True, 
        include_background=False,
        reduction="mean"
    )
    loss_no_bg = loss_func_no_bg(logits, target)
    self.assertTrue(torch.is_tensor(loss_no_bg))
    self.assertEqual(loss_no_bg.ndim, 0)
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between a574d7b and 015a894.

📒 Files selected for processing (2)
  • monai/losses/focal_loss.py
  • tests/losses/test_focal_loss.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to the PEP8 style guide and are sensible and informative with regard to their function, though simple names are permitted for loop and comprehension variables. Ensure routine names are meaningful with regard to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each variable, return value, and raised exception in the appropriate section of Google-style docstrings. Examine the code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements that improve code efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • tests/losses/test_focal_loss.py
  • monai/losses/focal_loss.py
🧬 Code graph analysis (1)
monai/losses/focal_loss.py (1)
monai/utils/enums.py (1)
  • LossReduction (253-264)
🪛 Ruff (0.14.10)
monai/losses/focal_loss.py

174-174: No explicit stacklevel keyword argument found

Set stacklevel=2

(B028)


241-243: Avoid specifying long messages outside the exception class

(TRY003)


287-289: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (4)
monai/losses/focal_loss.py (4)

73-73: Signature and docstring correctly document per-class alpha.

Type annotation and documentation clearly explain that alpha accepts sequences and specify length requirements based on include_background.

Also applies to: 81-87


117-123: Alpha conversion logic is correct.

Properly handles scalar, sequence, and None cases. Converting sequences to tensor enables device transfer in helper functions.


229-250: Alpha handling correctly implements per-class weighting for softmax.

Logic properly distinguishes scalar (asymmetric background weighting) from sequence (explicit per-class weights). Length validation ensures alpha matches classes after background exclusion. Broadcasting is correct.
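A minimal sketch of the length validation being described (the helper is hypothetical; MONAI's actual message and call site may differ):

```python
import torch

def check_alpha_length(alpha: torch.Tensor, num_classes: int) -> None:
    # Fail fast when the per-class alpha sequence does not match the number
    # of classes remaining after any background exclusion.
    if alpha.numel() != num_classes:
        raise ValueError(
            f"the length of alpha ({alpha.numel()}) must match the number of classes ({num_classes})."
        )

check_alpha_length(torch.tensor([0.2, 0.3, 0.5]), 3)  # passes silently
try:
    check_alpha_length(torch.tensor([0.2, 0.3]), 3)
except ValueError as exc:
    print(exc)
```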


276-296: Alpha handling correctly implements per-class weighting for sigmoid.

Scalar and sequence cases properly apply class-specific alpha weights. Length validation and broadcasting logic are correct.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI Agents
In @monai/losses/focal_loss.py:
- Line 171: Modify the warnings.warn call that emits
"`include_background=False`, scalar `alpha` ignored when using softmax." to pass
stacklevel=2 (e.g., warnings.warn("...message...", stacklevel=2)) so the warning
points to the caller rather than this internal line; update the specific
warnings.warn invocation in focal_loss.py accordingly.
- Line 73: Update the test suite by adding a case in
test_alpha_sequence_broadcasting that exercises FocalLoss (or the test helper
using focal_loss) with include_background=False and a sequence alpha whose
length equals num_classes-1; construct logits and target that include the
background class, instantiate FocalLoss with include_background=False and alpha
as a list/tuple, compute the loss, and assert it matches the expected value
computed by applying the alpha sequence to the non-background channels (i.e.,
remove channel 0 before broadcasting). Use the existing test's pattern for
softmax/sigmoid branching and length-mismatch checks to validate correct
broadcasting when background is excluded.
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 015a894 and 1f37d0d.

📒 Files selected for processing (2)
  • monai/losses/focal_loss.py
  • tests/losses/test_focal_loss.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/losses/test_focal_loss.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to the PEP8 style guide and are sensible and informative with regard to their function, though simple names are permitted for loop and comprehension variables. Ensure routine names are meaningful with regard to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each variable, return value, and raised exception in the appropriate section of Google-style docstrings. Examine the code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements that improve code efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/focal_loss.py
🪛 Ruff (0.14.10)
monai/losses/focal_loss.py

171-171: No explicit stacklevel keyword argument found

Set stacklevel=2

(B028)


238-240: Avoid specifying long messages outside the exception class

(TRY003)


284-286: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (5)
monai/losses/focal_loss.py (5)

73-73: Type hint correctly updated for per-class alpha.

The signature now accepts scalar or sequence alpha as intended.


81-81: Docstring accurately reflects new sequence alpha support.

The clarifications about include_background=False and sequence requirements are clear.

Also applies to: 85-87


117-123: Alpha normalization is clean and correct.

Storing alpha as None, float, or tensor simplifies downstream handling.


227-245: Sequence alpha logic is correct; scalar alpha depends on forward() fix.

The sequence path properly validates alpha length against target.shape[1] and broadcasts correctly. The scalar path creates [1-alpha, alpha, alpha, ...], which is correct when class 0 is background, but forward() should prevent scalar alpha when include_background=False (see earlier comment).


273-293: Sigmoid focal loss alpha handling is correct for both scalar and sequence.

The scalar path applies per-pixel weighting (alpha for positive, 1-alpha for negative), and the sequence path applies per-channel weighting with proper length validation and broadcasting.

@ytl0623
Contributor Author

ytl0623 commented Jan 6, 2026

Hi @ytl0623 thanks for this, we should have the CICD issues sorted now. It looks good to me though I think we need to add tests for the class with a sequence alpha value to ensure this works correctly. Please add this then update the branch to incorporate the CICD fixes.

Hi @ericspod, thanks for the feedback. I have removed the redundant conversion logic and added a new test case. It should be ready for another look. Thanks in advance!

@ericspod
Member

ericspod commented Jan 7, 2026

Just about there; let's add a test for include_background=False and the stacklevel change for the warning, and we'll be good to go.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
tests/losses/test_focal_loss.py (1)

384-417: Test validates basic functionality but lacks deeper checks.

The test correctly adjusts alpha_seq length based on include_background (3 vs 2 classes) and verifies a positive scalar loss is produced. However:

  • No validation of correct alpha weighting (e.g., comparing computed loss against manual calculation)
  • No coverage of error cases (e.g., wrong alpha sequence length, which is validated at lines 239-242 and 285-288 in the implementation)

Consider adding a test case that triggers the length-mismatch ValueError to confirm validation works.

Example: Test wrong alpha length
def test_alpha_sequence_wrong_length(self):
    """Test that wrong alpha sequence length raises ValueError."""
    num_classes = 3
    logits = torch.randn(2, num_classes, 4, 4)
    target = torch.randint(0, num_classes, (2, 1, 4, 4))
    
    # Wrong length: 2 values for 3 classes
    wrong_alpha = [0.1, 0.5]
    loss_func = FocalLoss(to_onehot_y=True, alpha=wrong_alpha, use_softmax=True)
    
    with self.assertRaisesRegex(ValueError, "length of alpha"):
        loss_func(logits, target)
monai/losses/focal_loss.py (3)

81-87: Clarify docstring wording on alpha validity.

Line 81 states "alpha is invalid when using softmax unless alpha is a sequence" but the code issues a warning and semantics suggest "scalar alpha is ignored" rather than "invalid." Consider rephrasing for precision:

-                If False, `alpha` is invalid when using softmax unless `alpha` is a sequence (explicit class weights).
+                If False, scalar `alpha` is ignored when using softmax (use a sequence for explicit per-class weights).

240-242: Optional: Extract exception message to constant.

Static analysis flags long inline messages (TRY003). Consider extracting if the message is reused or if you prefer constants for all validation messages, but this is purely stylistic.


286-288: Optional: Extract exception message to constant.

Same static analysis hint (TRY003) as softmax version. Purely stylistic; current code is clear.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 1f37d0d and 50dc037.

📒 Files selected for processing (2)
  • monai/losses/focal_loss.py
  • tests/losses/test_focal_loss.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to the PEP8 style guide and are sensible and informative with regard to their function, though simple names are permitted for loop and comprehension variables. Ensure routine names are meaningful with regard to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each variable, return value, and raised exception in the appropriate section of Google-style docstrings. Examine the code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements that improve code efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • tests/losses/test_focal_loss.py
  • monai/losses/focal_loss.py
🧬 Code graph analysis (2)
tests/losses/test_focal_loss.py (2)
tests/test_utils.py (1)
  • test_script_save (750-769)
monai/losses/focal_loss.py (1)
  • FocalLoss (26-213)
monai/losses/focal_loss.py (2)
monai/utils/enums.py (1)
  • LossReduction (253-264)
monai/data/meta_tensor.py (1)
  • as_tensor (349-354)
🪛 Ruff (0.14.10)
monai/losses/focal_loss.py

240-242: Avoid specifying long messages outside the exception class

(TRY003)


286-288: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (7)
tests/losses/test_focal_loss.py (2)

24-24: LGTM: Using TEST_DEVICES as suggested.

Follows the recommendation from past review to use the standard test utility definition.


80-86: LGTM: Test matrix covers all include_background × use_softmax combinations.

Generates 4 test cases per device (2 include_background values × 2 use_softmax values), addressing the past review request for background=False coverage.

monai/losses/focal_loss.py (5)

73-73: LGTM: Signature correctly extends alpha to support sequences.

Type hint float | Sequence[float] | None properly captures scalar, per-class sequence, and no-weighting cases.


117-123: LGTM: Alpha conversion logic is correct.

Stores None as-is, converts scalars to float, and converts sequences to tensors. Clean and straightforward.


170-173: LGTM: Warning now includes stacklevel=2.

Addresses past review feedback. The warning correctly identifies that scalar alpha behavior is undefined when background is excluded with softmax.


229-247: LGTM: Sequence alpha handling in softmax is correct.

Logic distinguishes scalar (builds [1-α, α, α, ...]) from sequence (validates length, uses directly), then broadcasts across spatial dimensions. Length validation at lines 239-242 ensures sequences match class count.


276-293: LGTM: Sequence alpha handling in sigmoid is correct.

Mirrors softmax logic: scalar applies α to positive class and 1-α to negative; sequence broadcasts per-class weights. Length validation ensures correctness.

@ytl0623
Contributor Author

ytl0623 commented Jan 7, 2026

Thank you for the feedback. I have refactored the tests to use @parameterized.expand.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 50dc037 and 7ef77d1.

📒 Files selected for processing (2)
  • monai/losses/focal_loss.py
  • tests/losses/test_focal_loss.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/losses/test_focal_loss.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to the PEP8 style guide and are sensible and informative with regard to their function, though simple names are permitted for loop and comprehension variables. Ensure routine names are meaningful with regard to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each variable, return value, and raised exception in the appropriate section of Google-style docstrings. Examine the code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements that improve code efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/focal_loss.py
🧬 Code graph analysis (1)
monai/losses/focal_loss.py (2)
monai/utils/enums.py (1)
  • LossReduction (253-264)
monai/data/meta_tensor.py (1)
  • as_tensor (349-354)
🪛 Ruff (0.14.10)
monai/losses/focal_loss.py

240-242: Avoid specifying long messages outside the exception class

(TRY003)


286-288: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (5)
monai/losses/focal_loss.py (5)

73-74: LGTM!

Type hints correctly reflect the new per-class alpha support.


81-87: LGTM!

Docstring accurately describes per-class alpha support and validation.


117-123: LGTM!

Clean initialization logic normalizing alpha to float, tensor, or None.


228-247: LGTM!

Alpha handling correctly distinguishes scalar vs sequence, with proper length validation and broadcasting.


275-295: LGTM!

Sigmoid path correctly handles both scalar and sequence alpha with proper broadcasting for per-class weighting.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
monai/losses/focal_loss.py (3)

81-81: Consider clarifying the docstring.

"alpha is invalid" suggests an error, but scalar alpha is merely ignored with a warning when include_background=False with softmax. Consider rephrasing to: "If False, scalar alpha is ignored when using softmax (a warning is issued); use a sequence for explicit per-class weights."


230-248: LGTM — robust per-class alpha for softmax.

Device/dtype conversion, scalar vs. sequence branching, length validation, and broadcasting are all correct. The scalar path applies (1−α) to background and α to remaining classes as expected.

Optional: Ruff flags the error message (lines 241-243) under TRY003 for being inline. Extracting to a constant or shortening is a style preference.


277-295: LGTM — robust per-class alpha for sigmoid.

Device/dtype conversion, scalar vs. sequence branching, length validation, and per-channel broadcasting are correct. The alpha weighting logic (α for positive, 1−α for negative) is properly applied per class in sequence mode.

Optional: Ruff flags the error message (lines 287-289) under TRY003. Extracting or shortening is a style preference.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 7ef77d1 and 239d37f.

📒 Files selected for processing (1)
  • monai/losses/focal_loss.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to the PEP8 style guide and are sensible and informative with regard to their function, though simple names are permitted for loop and comprehension variables. Ensure routine names are meaningful with regard to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each variable, return value, and raised exception in the appropriate section of Google-style docstrings. Examine the code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements that improve code efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/focal_loss.py
🧬 Code graph analysis (1)
monai/losses/focal_loss.py (2)
monai/utils/enums.py (1)
  • LossReduction (253-264)
monai/data/meta_tensor.py (1)
  • as_tensor (349-354)
🪛 Ruff (0.14.10)
monai/losses/focal_loss.py

241-243: Avoid specifying long messages outside the exception class

(TRY003)


287-289: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (2)
monai/losses/focal_loss.py (2)

117-123: LGTM — clean alpha type handling.

Correctly converts scalar to float, sequence to tensor, and preserves None. Maintains backward compatibility while enabling per-class weights.


167-177: LGTM — correct alpha routing.

The alpha_arg logic correctly ignores scalar alpha for softmax with include_background=False, issues an appropriate warning with stacklevel=2, and passes sequence alpha through unchanged. No mutation of self.alpha.

Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 0

🧹 Nitpick comments (2)
monai/losses/focal_loss.py (2)

229-250: LGTM: softmax focal loss correctly handles scalar and per-class alpha.

Scalar creates standard [1-α, α, ...] weighting. Sequence validates length against number of classes and broadcasts correctly. Device/dtype handling ensures compatibility.

Optional: Ruff TRY003 style hint

Static analysis suggests extracting the long error message at lines 241-243 to a class-level constant or exception class. This is purely a style preference; the current inline message is clear and maintainable.


276-298: LGTM: sigmoid focal loss correctly handles scalar and per-class alpha.

Scalar applies standard α weighting. Sequence validates length, broadcasts per-class weights correctly. Device/dtype conversion ensures tensor compatibility.

Optional: Ruff TRY003 style hint

Static analysis suggests extracting the long error message at lines 287-289 to a class-level constant or exception class. Current inline message is clear and specific, so this is purely stylistic.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 239d37f and b547410.

📒 Files selected for processing (2)
  • monai/losses/focal_loss.py
  • tests/losses/test_focal_loss.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/losses/test_focal_loss.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to the PEP8 style guide and are sensible and informative with regard to their function, though simple names are permitted for loop and comprehension variables. Ensure routine names are meaningful with regard to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each variable, return value, and raised exception in the appropriate section of Google-style docstrings. Examine the code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements that improve code efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/focal_loss.py
🧬 Code graph analysis (1)
monai/losses/focal_loss.py (1)
monai/utils/enums.py (1)
  • LossReduction (253-264)
🪛 Ruff (0.14.10)
monai/losses/focal_loss.py

241-243: Avoid specifying long messages outside the exception class

(TRY003)


287-289: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (3)
monai/losses/focal_loss.py (3)

73-87: LGTM: parameter signature and docstring updated correctly.

Type hints properly use Sequence from collections.abc. Docstring accurately documents scalar vs sequence behavior and the include_background constraint for softmax mode.


117-123: LGTM: alpha storage correctly handles scalar and sequence.

Conversion logic is sound: scalars stored as float, sequences as tensor. Type annotation matches stored types.


167-177: LGTM: alpha_arg correctly handles include_background logic.

Scalar alpha is appropriately set to None with a warning when include_background=False in softmax mode; sequence alpha is passed through for length validation in the loss functions. The logic is sound.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
monai/losses/focal_loss.py (1)

84-86: Clarify "after excluding background if set" in docstring.

The phrase "after excluding background if set" is slightly ambiguous. Consider rephrasing to "If include_background=False, the sequence length should match the number of non-background classes."

📝 Suggested docstring improvement
             alpha: value of the alpha in the definition of the alpha-balanced Focal loss.
                 The value should be in [0, 1].
-                If a sequence is provided, it must match the number of classes (after excluding background if set).
+                If a sequence is provided, its length must match the number of classes
+                (excluding the background class if `include_background=False`).
                 Defaults to None.
📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between b547410 and dc19ec0.

📒 Files selected for processing (1)
  • monai/losses/focal_loss.py
🪛 Ruff (0.14.10)
monai/losses/focal_loss.py

240-242: Avoid specifying long messages outside the exception class

(TRY003)


286-288: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (11)
monai/losses/focal_loss.py (11)

72-73: Type hints correctly reflect the extended API.

The union types accurately describe the new alpha parameter flexibility (scalar, sequence, or None). The Sequence import from collections.abc is correct.


80-80: Docstring clarifies scalar vs sequence alpha behavior with background exclusion.

The updated text correctly notes that scalar alpha is invalid with include_background=False when using softmax, but sequences remain valid.


116-122: Alpha initialization logic is correct.

The conditional conversion (scalar to float, sequence to tensor, None preserved) is appropriate. Type annotation matches the stored types.


166-176: Forward method correctly handles alpha parameter delegation.

The alpha_arg local variable preserves self.alpha immutability. Scalar alpha is appropriately nullified and warned for softmax with include_background=False, while sequence alpha is preserved. The stacklevel=2 ensures the warning points to the caller.


217-217: Function signature correctly extended to support tensor alpha.

The type hint float | torch.Tensor | None matches the implementation and enables per-class weighting.


229-243: Alpha handling logic is correct for both scalar and sequence cases.

Scalar alpha constructs [1-alpha, alpha, alpha, ...] for background + foreground classes. Sequence alpha validates length against target.shape[1] and applies per-class weights. The length validation correctly occurs after any background removal in the forward method.


245-247: Broadcasting correctly applies per-class alpha weights.

The reshape to [-1] + [1] * len(target.shape[2:]) properly broadcasts alpha across spatial dimensions.
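The per-class weighting described in the two comments above can be sketched outside the library. The helper below (`alpha_factor_softmax` is a hypothetical name, not MONAI's actual code) illustrates the scalar `[1-α, α, α, ...]` construction, the length check, and the reshape that broadcasts the factor over spatial dimensions:

```python
import torch


def alpha_factor_softmax(alpha, target):
    """Sketch of the per-class alpha factor the review describes.

    target is one-hot with shape (B, C, *spatial); alpha is a scalar
    or a per-class sequence of length C.
    """
    n_classes = target.shape[1]
    if isinstance(alpha, float):
        # scalar alpha: background weighted by 1 - alpha, foreground by alpha
        fac = torch.tensor([1.0 - alpha] + [alpha] * (n_classes - 1))
    else:
        fac = torch.as_tensor(alpha, dtype=torch.float32)
        if fac.numel() != n_classes:
            raise ValueError("alpha sequence length must match the number of classes")
    # reshape to [-1] + [1] * len(target.shape[2:]) so the factor
    # broadcasts across the spatial dimensions
    return fac.reshape([-1] + [1] * len(target.shape[2:]))


target = torch.zeros(2, 3, 4, 4)  # batch of 2, 3 classes, 4x4 spatial
w_scalar = alpha_factor_softmax(0.25, target)  # [0.75, 0.25, 0.25] per class
w_seq = alpha_factor_softmax([0.1, 0.3, 0.6], target)
```

With two spatial dimensions the factor comes out shaped `(C, 1, 1)`, which multiplies cleanly against a `(B, C, H, W)` loss map.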


252-253: Function signature correctly extended to support tensor alpha.

Consistent with softmax_focal_loss, the type hint enables per-class weighting for sigmoid mode.


275-294: Alpha handling correctly implements per-class weighting for sigmoid mode.

Scalar alpha applies classic binary formulation: target * alpha + (1-target) * (1-alpha). Sequence alpha validates length, broadcasts, and applies per-class weights: alpha_c when target=1, (1-alpha_c) when target=0 for each class c.
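As a small illustration of the scalar formulation quoted above, the classic binary alpha-balancing reduces to one broadcasting expression (the tensor values here are illustrative, not from the PR):

```python
import torch


def sigmoid_alpha_weight(alpha: float, target: torch.Tensor) -> torch.Tensor:
    # classic binary alpha-balancing: positives weighted by alpha,
    # negatives weighted by (1 - alpha)
    return target * alpha + (1 - target) * (1 - alpha)


t = torch.tensor([[1.0, 0.0, 1.0]])
w = sigmoid_alpha_weight(0.25, t)  # positives -> 0.25, negatives -> 0.75
```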


240-242: Error message is clear and contextual.

Static analysis suggests extracting the message, but inlining is acceptable here for clarity. The validation correctly occurs after background removal.


286-288: Error message is clear and contextual.

Consistent with softmax_focal_loss, the validation and message are appropriate.

… the weight for positive samples of class c. Negative samples should have a default weight of 1.0

Signed-off-by: ytl0623 <[email protected]>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
monai/losses/focal_loss.py (2)

229-247: Softmax alpha handling implemented correctly.

Scalar and tensor paths both work. Scalar creates standard [1-α, α, ...] weighting, tensor validates length and broadcasts properly.

Minor: Line 237 creates a Python list before tensor conversion. Could construct tensor directly for marginal efficiency gain.


240-242: Optional: Extract error messages.

Static analysis flags inline error messages (TRY003). Extracting to constants would satisfy the linter but is purely stylistic.

Also applies to: 286-288

📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between dc19ec0 and 945abfe.

📒 Files selected for processing (2)
  • monai/losses/focal_loss.py
  • tests/losses/test_focal_loss.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/losses/test_focal_loss.py
🪛 Ruff (0.14.10)
monai/losses/focal_loss.py

240-242: Avoid specifying long messages outside the exception class

(TRY003)


286-288: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (5)
monai/losses/focal_loss.py (5)

72-73: Type hints look good.

Correctly broadened alpha to accept scalar, sequence, or None. Sequence import confirmed on line 15.


80-80: Documentation correctly updated.

Clearly explains alpha sequence constraints and include_background interaction.

Also applies to: 84-86


116-122: Alpha initialization logic is sound.

Correctly converts scalar to float, sequence to tensor, or preserves None. Type annotation matches implementation.


166-176: Forward method alpha handling correct.

Properly uses local alpha_arg variable, warns for invalid scalar alpha with softmax when include_background=False, and passes correct value to loss functions.


281-295: Verify tensor alpha behavior for negative samples.

Scalar alpha: negatives weighted by (1-alpha) (line 283).
Tensor alpha: negatives weighted by 1.0 (line 295).

This asymmetry means per-class alpha only affects positive samples, while negatives remain uniformly weighted. This may be intentional for multi-label scenarios, but it's undocumented and differs from scalar behavior. Confirm this design aligns with the "original focal loss formulation" objective.
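To make the asymmetry concrete, here is a hedged sketch (a hypothetical helper, not the library's code) of the tensor-alpha behavior the review describes, where only positive samples receive their per-class `alpha_c` and negatives keep weight 1.0:

```python
import torch


def per_class_positive_weight(alpha, target):
    # per-class alpha_c applied where target == 1; negatives keep weight 1.0
    a = torch.as_tensor(alpha, dtype=target.dtype)
    a = a.reshape([-1] + [1] * len(target.shape[2:]))  # broadcast over spatial dims
    return torch.where(target == 1, a, torch.ones_like(target))


# one sample, two classes, a 1x2 spatial grid
t = torch.tensor([[[[1.0, 0.0]],
                   [[0.0, 1.0]]]])
w = per_class_positive_weight([0.2, 0.8], t)
```

Under a scalar alpha the negatives in `t` would instead be weighted by `1 - alpha`, which is exactly the divergence the comment asks the authors to confirm.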

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
monai/losses/focal_loss.py (1)

217-225: Consider adding full docstrings.

softmax_focal_loss and sigmoid_focal_loss lack documented Args/Returns sections. Optional improvement for maintainability.

tests/losses/test_focal_loss.py (1)

385-417: Consider adding negative test for alpha length mismatch.

Current test validates happy path. Adding a case that expects ValueError when len(alpha) != num_classes would improve coverage.

Example addition
def test_alpha_sequence_length_mismatch(self):
    logits = torch.randn(2, 3, 4, 4)
    target = torch.randint(0, 3, (2, 1, 4, 4))
    loss_func = FocalLoss(to_onehot_y=True, alpha=[0.1, 0.5])  # wrong length
    with self.assertRaises(ValueError):
        loss_func(logits, target)
📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 945abfe and 65e1c36.

📒 Files selected for processing (2)
  • monai/losses/focal_loss.py
  • tests/losses/test_focal_loss.py
🧬 Code graph analysis (1)
tests/losses/test_focal_loss.py (1)
monai/losses/focal_loss.py (1)
  • FocalLoss (25-214)
🪛 Ruff (0.14.10)
monai/losses/focal_loss.py

241-243: Avoid specifying long messages outside the exception class

(TRY003)


287-289: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (7)
monai/losses/focal_loss.py (5)

72-73: LGTM on type hints.

Sequence is properly imported from collections.abc (line 15) and used consistently.


117-123: LGTM on alpha initialization.

Clean handling of scalar vs. sequence alpha with proper type coercion.


167-177: LGTM on forward path.

The alpha_arg pattern correctly avoids mutating self.alpha, and the warning with stacklevel=2 points to the caller.


229-250: LGTM on softmax alpha weighting.

Per-class weighting with proper validation and broadcasting. The inline ValueError message (flagged by TRY003) is acceptable for clarity.


285-296: Verify: sequence alpha only weights positive samples.

For sigmoid with sequence alpha, negatives get weight 1.0 (line 296). This differs from scalar alpha which weights both. Confirm this asymmetry is intentional for the multi-class use case.

tests/losses/test_focal_loss.py (2)

24-28: LGTM on device-aware test setup.

Using TEST_DEVICES from test utilities aligns with project patterns.


81-86: LGTM on test parameterization.

Covers all combinations of include_background and use_softmax across devices.


Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Weights in alpha for FocalLoss

2 participants