KeyError 'stress' when using GemNet or EquiformerV2 in ts.optimize
#218
-
Hi Team, I am trying to use GemNet / EquiformerV2 checkpoints with `ts.optimize`. Below is my code:

```python
import torch
import torch_sim as ts
from torch_sim.models.fairchem import FairChemModel
from fairchem.core.models.model_registry import model_name_to_local_file

# checkpoint_path = model_name_to_local_file('GemNet-OC-S2EFS-OC20+OC22', local_cache='mlip_checkpoints/')
checkpoint_path = model_name_to_local_file('EquiformerV2-lE4-lF100-S2EFS-OC22', local_cache='mlip_checkpoints/')

# run natively on GPUs
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
foundational_potential = FairChemModel(model=checkpoint_path, cpu=False, seed=42, pbc=True, compute_stress=False)

# bulk_systems_map is built earlier in the script (not shown)
trajectory_files = [f"bulk_traj_{i}.h5md" for i in range(len(bulk_systems_map))]

# relax all of the high-temperature states
relaxed_state = ts.optimize(
    system=list(bulk_systems_map.values()),
    model=foundational_potential,
    optimizer=ts.frechet_cell_fire,
    autobatcher=False,
)
print(relaxed_state.energy)
```

The run fails with the `KeyError: 'stress'` shown in the title.
Based on the above error I tried using

The above error aligns with the GemNet models, where the model outputs are:

Does this mean they are not compatible with the torch-sim FIRE optimizer? I was able to run it with

Package Details:
Replies: 2 comments 1 reply
-
Thank you for pointing this out! I've forwarded this to someone on our team who has had a similar issue. Hopefully, when they're less busy, we'll get back to you on this!
-
Judging by the GemNet output, your model is not returning the stress tensor, but the stress tensor is required when running `ts.frechet_cell_fire`. You'll either need to turn on the stress calculation or use a different optimizer that doesn't optimize the cell.
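For concreteness, here is a minimal sketch of both options, reusing `checkpoint_path`, `foundational_potential`, and `bulk_systems_map` from the snippet in the question. It assumes the positions-only FIRE optimizer is exposed as `ts.fire`, and option 2 only helps if the underlying checkpoint can actually predict stress:

```python
import torch_sim as ts
from torch_sim.models.fairchem import FairChemModel

# Option 1: relax atomic positions only, so no stress tensor is needed
# (assumes the positions-only FIRE optimizer is exposed as ts.fire).
relaxed_state = ts.optimize(
    system=list(bulk_systems_map.values()),
    model=foundational_potential,
    optimizer=ts.fire,  # cell stays fixed, so "stress" is never requested
    autobatcher=False,
)

# Option 2: keep ts.frechet_cell_fire, but turn on the stress calculation.
# This only works if the checkpoint actually supports stress prediction.
foundational_potential = FairChemModel(
    model=checkpoint_path,
    cpu=False,
    seed=42,
    pbc=True,
    compute_stress=True,  # cell optimizers need "stress" in the model output
)
relaxed_state = ts.optimize(
    system=list(bulk_systems_map.values()),
    model=foundational_potential,
    optimizer=ts.frechet_cell_fire,
    autobatcher=False,
)
```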