-
In this case, you need to export the directory of the cuDNN library to …
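The reply above is truncated, so the exact target variable is missing; a common way to make a locally unpacked cuDNN visible at runtime is to add its library directory to `LD_LIBRARY_PATH`. A minimal sketch, assuming cuDNN was unpacked under `$HOME/cudnn` (a placeholder path, adjust to your actual install):

```shell
# Assumption: cuDNN was unpacked to $HOME/cudnn (adjust to your actual path).
export CUDNN_ROOT="$HOME/cudnn"
# Prepend the cuDNN library directory to the dynamic linker's search path,
# keeping any existing entries.
export LD_LIBRARY_PATH="$CUDNN_ROOT/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

On an HPC system, the same effect is often achieved with `module load cudnn` if such a module is provided.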
-
I am trying to install DeePMD-kit and LAMMPS with GPU support on an HPC cluster. I have gone through the documentation for both "Easy Install" and "Build from source" over and over again and keep getting the same result:
(1) When I "Easy Install" via conda, everything works as intended, BUT I need the LAMMPS VORONOI package, which cannot be enabled when LAMMPS is installed through conda.
(2) I finally got LAMMPS to work after building from source, BUT it is INSANELY slow even on a GPU (10,000 steps for a 96k-atom system took FIVE DAYS). I have made sure that TensorFlow "sees" the GPU it is running on, but I notice that the conda-installed TensorFlow is built with cuDNN, while the built-from-source TensorFlow is not. Could this be the problem?
My question is: what is the best way to install DeePMD-kit and LAMMPS so they are optimized to run on GPUs while also enabling custom LAMMPS packages?
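For reference, optional LAMMPS packages such as VORONOI are toggled with `PKG_<NAME>` options in a from-source CMake build. A build-configuration sketch, assuming the standard LAMMPS CMake workflow (the out-of-tree `build` directory and option set here are illustrative, not the poster's exact configuration):

```shell
# Sketch: enabling the VORONOI package in a from-source LAMMPS CMake build.
# Run from the top of the LAMMPS source tree.
mkdir build && cd build
# PKG_VORONOI=yes enables the VORONOI package (it requires the voro++ library);
# the DeePMD-kit pair style is added separately per DeePMD-kit's own instructions.
cmake ../cmake -D PKG_VORONOI=yes -D BUILD_MPI=yes
make -j 4
```

This is why building from source is the usual route when a conda-packaged LAMMPS lacks a package you need.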