Hi @kuntalg97! I'm Dosu and I'm helping the deepmd-kit team. Yes, DeePMD models (.pb files) trained and frozen on CPU are directly compatible with GPU-accelerated MD in LAMMPS. There are no hardware-specific caveats: compatibility depends on the model file format and the DeePMD-kit version, not on whether the model was trained or frozen on CPU or GPU. Inference speed on GPU is determined by the model architecture, backend, and software version, so models trained or frozen on GPU are not inherently faster for GPU MD than those trained or frozen on CPU. For best performance, consider model compression and tabulated inference, which benefit both CPU and GPU runs and are likewise independent of the training hardware. More details are in the official documentation and user discussions.
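As a concrete illustration of the compression suggestion above, the commands below sketch a typical freeze-and-compress workflow and the corresponding LAMMPS input fragment. The file names (`graph.pb`, `graph-compressed.pb`) are placeholders, and the exact `dp compress` options can vary with your DeePMD-kit version, so check `dp compress --help` for your install.

```shell
# Freeze the trained model; the resulting .pb is portable
# between CPU and GPU builds of DeePMD-kit.
dp freeze -o graph.pb

# Compress the frozen model to enable tabulated inference,
# which typically speeds up both CPU and GPU MD runs.
# (Run in the training directory so the input script is found.)
dp compress -i graph.pb -o graph-compressed.pb

# LAMMPS input fragment (not shell): point the DeePMD pair style
# at the compressed model.
#   pair_style deepmd graph-compressed.pb
#   pair_coeff * *
```

Either the original or the compressed .pb can be used in LAMMPS regardless of the hardware it was trained on; compression only changes how fast inference runs, not compatibility.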
Hi,
I have a couple of DeePMD models (.pb files) that I trained and froze using the CPU version of DeePMD-kit. I would like to deploy them in GPU-accelerated MD with LAMMPS patched with DeePMD. I can see that they run successfully, but I wanted to know whether there are any caveats to doing this. For example, would a DeePMD model trained and frozen directly on GPUs be faster for MD on GPU than the CPU-trained models?
Thanks!