The linear method minimizer has an issue where the regularization can spike from 0.001 to >1 in a single iteration, which freezes the optimization even though the norm of the gradient is still on the order of ~1e-5 to ~1e-4. To judge whether the optimizer has converged, we would typically expect the gradient norm to fall to ~1e-8. As you can see, the gradient also freezes and flatlines when the regularization spikes.
I don't know whether this is intentional or due to ill-conditioning of the tangent space. Is there any way to mitigate it?
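One possible workaround, sketched below, is to cap how fast the regularization may grow between iterations, so a single bad step cannot jump it from 1e-3 to >1 at once. This is purely illustrative: `capped_diag_shift` and `max_growth` are hypothetical names, not part of the library, and the right place to hook this in depends on how the minimizer updates its regularization internally.

```python
def capped_diag_shift(prev_shift, proposed_shift, max_growth=10.0):
    """Limit the per-iteration multiplicative growth of the regularization
    (diagonal shift), so one spike cannot freeze the optimization.
    Hypothetical helper, not a library API."""
    return min(proposed_shift, prev_shift * max_growth)

# Simulated trajectory: the minimizer proposes a spike from 1e-3 to 2.0.
proposed = [1e-3, 1e-3, 2.0, 2.0]
shift = proposed[0]
history = [shift]
for p in proposed[1:]:
    shift = capped_diag_shift(shift, p)
    history.append(shift)

# The spike is spread over several iterations (1e-3 -> 1e-2 -> 1e-1)
# instead of jumping straight to 2.0, giving the gradient a chance
# to keep decreasing.
print(history)
```

Whether this actually helps depends on why the regularization heuristic is spiking in the first place; if it is reacting to a genuinely singular tangent-space metric, capping the growth may just slow convergence instead.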