Description
Hi, I’ve been using the `cosine_similarity_loss` function from your repository, and I’d like to suggest a small modification to make its values more intuitive and consistent.
Current Implementation:

```python
import torch.nn.functional as F

def cosine_similarity_loss(f1, f2):
    # L2-normalize both feature tensors along the last dimension.
    f1 = F.normalize(f1, p=2., dim=-1, eps=1e-5)
    f2 = F.normalize(f2, p=2., dim=-1, eps=1e-5)
    # Negative cosine similarity: ranges from -1 (aligned) to 1 (opposite).
    return -(f1 * f2).sum(dim=1)
```
Proposed Modification:

```python
import torch.nn.functional as F

def cosine_similarity_loss(f1, f2):
    f1 = F.normalize(f1, p=2., dim=-1, eps=1e-5)
    f2 = F.normalize(f2, p=2., dim=-1, eps=1e-5)
    # 1 - cosine similarity: ranges from 0 (aligned) to 2 (opposite).
    return 1.0 - (f1 * f2).sum(dim=1)
```
Rationale: The proposed modification changes the loss calculation to `1.0 - (f1 * f2).sum(dim=1)`. Since cosine similarity lies in [-1, 1], the current loss lies in [-1, 1] as well, whereas the modified loss lies in [0, 2] and strives toward zero, making it more intuitive and consistent for gradient-based optimization. When f1 and f2 are perfectly aligned, the cosine similarity is 1, so the loss is exactly 0, which is desirable for monitoring convergence. Because the change only adds a constant, the gradients are identical to the current implementation.
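As a quick sanity check (a minimal sketch using random tensors, not code from the repository), the modified loss is zero for identical inputs and two for opposite inputs:

```python
import torch
import torch.nn.functional as F

def cosine_similarity_loss(f1, f2):
    # Proposed version: 1 - cosine similarity, so a perfect match yields 0.
    f1 = F.normalize(f1, p=2., dim=-1, eps=1e-5)
    f2 = F.normalize(f2, p=2., dim=-1, eps=1e-5)
    return 1.0 - (f1 * f2).sum(dim=1)

f = torch.randn(4, 8)
aligned = cosine_similarity_loss(f, f.clone())   # ≈ 0 for every row
opposite = cosine_similarity_loss(f, -f)         # ≈ 2 for every row
print(aligned, opposite)
```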
I believe this adjustment will make the loss easier to interpret and monitor during training.
Thank you for considering this suggestion. I’m happy to discuss further if needed.