add Tree-Path KL Divergence loss for hier classification + unit test #4706
Open: Jyc323 wants to merge 19 commits into open-edge-platform:develop from Jyc323:hier_loss
Commits (19, all by Jyc323)

9d52d9f  add Tree-Path KL Divergence loss for hier classification + unit test
1253587  fix code review comments
70b03a5  fix tox errors
a6c70ad  integrate KL loss via recipe YAML; add new H-label model & classifier
b7af7b4  merge
26e2b74  Merge branch 'develop' into hier_loss
3ce8d3b  move files since renaming
86f4017  refactor to base.py, add unit test
442193f  Merge branch 'develop' into hier_loss
79f8612  fix errors from tox
c7bf05d  Merge branch 'develop' into hier_loss
b520b7e  delete unnecessary class, replace **kwargs with kl_weight (add docstr…)
c465bdc  modify the mock of head_idx_to_logits_range
395bd13  update list models pattern
bea688c  update list models pattern
0b87652  Merge branch 'develop' into hier_loss
4741836  ruff fix
eae07d7  fix ruff errors
3784847  Merge branch 'develop' into hier_loss
library/src/otx/backend/native/models/classification/losses/tree_path_kl_divergence_loss.py (new file, 52 additions, 0 deletions)
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

"""Module for defining TreePathKLDivergenceLoss."""

from __future__ import annotations

import torch
from torch import nn
from torch.nn import functional


class TreePathKLDivergenceLoss(nn.Module):
    """KL divergence between model distribution over concatenated heads and a target distribution.

    Inputs:
        logits_list: list of tensors [B, C_l], ordered from root -> leaf
        targets: LongTensor [B, L] with per-level GT indices (L == len(logits_list))

    The target distribution places 1/L probability on the GT index for each level,
    and 0 elsewhere, then uses KLDivLoss(log_softmax(logits), target_probs).
    """

    def __init__(self, reduction: str | None = "batchmean", loss_weight: float = 1.0):
        super().__init__()
        self.reduction = reduction
        self.loss_weight = loss_weight
        self.kl_div = nn.KLDivLoss(reduction=self.reduction)

    def forward(self, logits_list: list[torch.Tensor], targets: torch.Tensor) -> torch.Tensor:
        """Calculate tree_path KL Divergence loss."""
        if not (isinstance(logits_list, (list, tuple)) and len(logits_list) > 0):
            msg = "logits_list must be non-empty"
            raise ValueError(msg)
        num_levels = len(logits_list)

        # concat logits across all levels
        dims = [t.size(1) for t in logits_list]
        logits_concat = torch.cat(logits_list, dim=1)  # [B, sum(C_l)]
        log_probs = functional.log_softmax(logits_concat, dim=1)  # [B, sum(C_l)]

        # build sparse target distribution with 1/L at each GT index
        batch = log_probs.size(0)
        tgt = torch.zeros_like(log_probs)  # [B, sum(C_l)]
        offset = 0
        for num_c, tgt_l in zip(dims, targets.T):  # level-by-level
            idx_rows = torch.arange(batch, device=log_probs.device)
            tgt[idx_rows, offset + tgt_l] = 1.0 / num_levels
            offset += num_c

        kl = self.kl_div(log_probs, tgt)
        return self.loss_weight * kl
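For reviewers, here is a minimal usage sketch (not part of the diff) showing how the loss can be exercised on its own. The module path is inferred from the file location above, and the two hierarchy levels with 3 and 5 classes, the batch size, and the target indices are illustrative assumptions only:

import torch

# Module path inferred from the new file's location; adjust if packaging differs.
from otx.backend.native.models.classification.losses.tree_path_kl_divergence_loss import TreePathKLDivergenceLoss

# Two hierarchy levels (root -> leaf): a 3-class head and a 5-class head; batch of 4.
batch_size = 4
logits_list = [torch.randn(batch_size, 3), torch.randn(batch_size, 5)]

# Per-level ground-truth indices, shape [B, L] with L == len(logits_list).
targets = torch.tensor([[0, 2], [1, 4], [2, 0], [0, 3]])

criterion = TreePathKLDivergenceLoss(reduction="batchmean", loss_weight=1.0)
loss = criterion(logits_list, targets)  # scalar tensor

# Each row of the implicit target distribution puts 1/L (= 0.5 here) on the GT
# index of each level, so it sums to 1 over the concatenated 3 + 5 bins.
print(loss)

With reduction="batchmean" the KL divergence is averaged over the batch, which matches the mathematical definition and is PyTorch's recommended setting for nn.KLDivLoss.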