question about d4 dispersion #5
Description
In `modules/d4_dispersion_energy.py`, we have the following:

```python
self.register_parameter(
    "_scaleq", nn.Parameter(softplus_inverse(1.0), requires_grad=True)
)  # for scaling charges of reference systems
```
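For context, `softplus_inverse` here is presumably the inverse of the softplus function, so that the effective charge-scaling factor `softplus(_scaleq)` starts at exactly 1.0 and stays positive during training. A minimal, framework-free sketch of that parameterization (these function definitions are my assumption, not copied from the repo):

```python
import math

def softplus(x: float) -> float:
    """softplus(x) = log(1 + exp(x)); always positive."""
    return math.log1p(math.exp(x))

def softplus_inverse(y: float) -> float:
    """Inverse of softplus: softplus(softplus_inverse(y)) == y for y > 0."""
    return math.log(math.expm1(y))

raw = softplus_inverse(1.0)  # stored value, analogous to _scaleq
effective = softplus(raw)    # effective scale, approximately 1.0
```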
Is `requires_grad` supposed to be True here? I ask because when it is, the value of `_scaleq` updates with each backward pass. There is a note about this in the code:
```python
def _compute_refc6(self) -> None:
    """
    Function to compute the refc6 tensor. Important: If the charges of
    reference systems are scaled and the scaleq parameter changes (e.g.
    during training), then the refc6 tensor must be recomputed for correct
    results.
    """
```
But the wording makes it sound like this would not be the default behavior. For me to get correct results with this term, I have to manually add

```python
loss.backward()
model.d4_dispersion_energy._compute_refc6()
```

to my training loop. Alternatively, if the refc6 tensor were itself a parameter with `requires_grad=True`, I imagine it would update itself properly. Is there a bug here, or is this the expected implementation?
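To make the failure mode concrete, here is a minimal, PyTorch-free sketch (all names hypothetical, not the repo's actual API) of a cached quantity that goes stale when its source parameter is updated, and the manual recompute that keeps it consistent:

```python
import math

class DispersionSketch:
    """Toy stand-in for a module that caches a quantity (refc6-like)
    derived from a trainable scale parameter."""

    def __init__(self):
        # stored in softplus_inverse space, as in the issue's snippet
        self._scaleq = math.log(math.expm1(1.0))
        self._refc6 = None
        self._compute_refc6()

    def scaleq(self) -> float:
        # softplus keeps the effective scale positive
        return math.log1p(math.exp(self._scaleq))

    def _compute_refc6(self) -> None:
        # placeholder for the expensive derived tensor
        self._refc6 = self.scaleq() ** 2

    def step(self, grad: float, lr: float = 0.1) -> None:
        # what an optimizer step would do to the parameter
        self._scaleq -= lr * grad
        # without this call, self._refc6 would no longer match scaleq()
        self._compute_refc6()
```

The point of the sketch: after `step()` changes `_scaleq`, the cached value is only correct because `_compute_refc6()` is re-run, which mirrors the manual call I currently add to my training loop.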