Hi authors,
While training the model, I noticed that the Gram loss remains extremely small throughout training. Even when I change the weight assigned to the Gram loss, its magnitude is still negligible compared to the other loss terms. I was wondering if you have observed a similar phenomenon during your experiments.
This makes me wonder whether the Gram loss actually contributes meaningfully during optimization. In practice, does the Gram loss significantly improve the reconstruction performance, or is its effect typically minimal?
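For reference, here is roughly how I understand a Gram-matrix loss to be computed (a minimal NumPy sketch; the function names and the division by C·H·W are my assumptions, not necessarily this repo's implementation). With that normalization, the loss on unit-variance features comes out tiny, which might explain the small values I'm seeing:

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H, W) feature map -> (C, C) Gram matrix.
    # Dividing by C*H*W (a common convention) keeps entries bounded,
    # but also shrinks them substantially.
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def gram_loss(feat_a, feat_b):
    # Mean squared difference between the two Gram matrices.
    return float(np.mean((gram_matrix(feat_a) - gram_matrix(feat_b)) ** 2))

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 32, 32))  # hypothetical feature maps
b = rng.standard_normal((64, 32, 32))
print(gram_loss(a, b))  # on the order of 1e-6 or smaller here
```

So even before any loss weighting, the raw magnitude can be several orders of magnitude below a pixel-wise reconstruction term.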
Thanks for your help!