Log KL Divergence in GRPO Loss function #323

base: main
Changes from 1 commit
@@ -7,23 +7,26 @@
 import torch
 from torch import nn

+from forge.data_models.loss_metrics import LossMetrics
Review comment: I'm not sure this is a fully fleshed-out data model we want to use. For now, could we just define a loose type in this file and shove the metrics in that?

Reply: I've done it with the data model because we might want to log other things from different losses in the future (margins from DPO loss, for instance).
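For context, a minimal sketch of what such a metrics container could look like. The field names kl_divergence and policy_entropy are taken from the return statement in this diff; the dataclass layout itself is an assumption, not the actual forge.data_models.loss_metrics definition.

# Hypothetical sketch only; field names follow the diff, everything else is assumed.
from dataclasses import dataclass

import torch


@dataclass
class LossMetrics:
    # Per-token KL divergence between the policy and the reference model.
    kl_divergence: torch.Tensor
    # Entropy of the policy distribution (set to a zero tensor in this loss).
    policy_entropy: torch.Tensor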
 class SimpleGRPOLoss(nn.Module):
     """Simplified GRPO Loss for simplified single step updates
     Inspired by the Hugging Face TRL implementation:
     https://github.com/huggingface/trl/blob/417915a3e4d3e3bc8d7b196594308b8eabf928be/trl/trainer/grpo_trainer.py#L1624.
     """

-    def __init__(self, beta: float = 0.1):
+    def __init__(self, beta: float = 0.1) -> torch.Tensor | LossMetrics:
         super().__init__()
         self.beta = beta

     def forward(self, logprobs, ref_logprobs, advantages, padding_mask):
         kl = torch.exp(ref_logprobs - logprobs) - (ref_logprobs - logprobs) - 1
Review comment: Can we log the KL divergence minus the padding tokens? We may have to move that op up in the loss function.

Reply: Yep, good idea.

Reply: Addressed.
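A minimal sketch of what masking the logged KL could look like, assuming the same padding_mask convention as the loss below (1 for real tokens, 0 for padding). The helper name masked_mean_kl is hypothetical; this illustrates the suggestion and is not the commit that addressed it.

import torch


def masked_mean_kl(kl: torch.Tensor, padding_mask: torch.Tensor) -> torch.Tensor:
    # Illustration only: zero out KL at padding positions, average per sample
    # over the number of real tokens, then average over the batch.
    masked_kl = kl * padding_mask
    per_sample = masked_kl.sum(dim=1) / padding_mask.sum(dim=1).clamp(min=1.0)
    return per_sample.mean()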
         per_token_policy_loss = torch.exp(logprobs - logprobs.detach()) * advantages
         per_token_loss = -(per_token_policy_loss - self.beta * kl)
         loss = (
             ((per_token_loss * padding_mask).sum(dim=1))
             / (padding_mask.sum(dim=1).clamp(min=1.0))
         ).mean()
-        return loss
+        return loss, LossMetrics(kl_divergence=kl, policy_entropy=torch.tensor(0))
Review comment: Shouldn't this be tuple[..,..]?

Reply: It should!
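For reference, a sketch of what the corrected annotations could look like once this is addressed: the tuple return annotation belongs on forward (which returns the loss and the metrics), while __init__ returns None. This is an assumed follow-up, not code from the PR.

import torch
from torch import nn

from forge.data_models.loss_metrics import LossMetrics


class SimpleGRPOLoss(nn.Module):
    # Assumed follow-up sketch: the tuple annotation sits on forward, not __init__.
    def __init__(self, beta: float = 0.1) -> None:
        super().__init__()
        self.beta = beta

    def forward(
        self, logprobs, ref_logprobs, advantages, padding_mask
    ) -> tuple[torch.Tensor, LossMetrics]:
        # Body as in the diff above; only the annotations change.
        ...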