Running lammps with GPU #1769
4 comments · 7 replies
Dear Dr. Wang,

Thank you for your response. I just ran some tests and found that the parallel simulation does not seem to scale across multiple nodes. How can I optimize the simulation for multiple nodes? I also found that, for some reason, one node (4 GPUs) with 12 MPI tasks (`mpirun -np 12 ...`) gives the best performance.

About our HPC nodes: each node contains four NVIDIA V100 GPUs and two 24-core Intel Xeon Scalable "Cascade Lake" processors.

Please see the attached log files:

- 1 node + 4 MPI tasks
- 2 nodes + 8 MPI tasks
- 1 node + 12 MPI tasks

Any suggestion is much appreciated.

Best regards
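One thing worth ruling out on multi-node runs is the rank-to-GPU mapping: if it is not set explicitly, every rank on a node may end up on GPU 0. A minimal wrapper sketch (the script name and the fixed `NGPUS=4` are my own; `OMPI_COMM_WORLD_LOCAL_RANK` assumes Open MPI, while Slurm's `srun` exposes `SLURM_LOCALID` instead) that round-robins each node-local rank onto one of the four V100s:

```shell
#!/bin/bash
# bind_gpu.sh -- hypothetical per-rank GPU binding wrapper (a sketch, not
# a tested recipe for this cluster). Open MPI exports
# OMPI_COMM_WORLD_LOCAL_RANK for each rank; under Slurm, use SLURM_LOCALID.
NGPUS=4                                        # V100s per node
LOCAL_RANK=${OMPI_COMM_WORLD_LOCAL_RANK:-0}    # node-local rank, default 0
export CUDA_VISIBLE_DEVICES=$(( LOCAL_RANK % NGPUS ))  # round-robin mapping
exec "$@"                                      # run the wrapped command
```

Usage would then look like `mpirun -np 8 ./bind_gpu.sh lmp -i input.lammps`, so that each of the 8 ranks across 2 nodes sees exactly one GPU.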
I only see stderr in your log files. Could you please provide stdout, or …
You did not set …
Hi everyone,

I installed the GPU version of DeePMD-kit using conda:

```
conda create -n deepmd deepmd-kit=*=*gpu libdeepmd=*=*gpu lammps-dp cudatoolkit=11.3 horovod -c https://conda.deepmodeling.org
```

Is the installed version of LAMMPS also GPU-enabled? And if I want to run this LAMMPS build on an HPC node with 48 CPUs and 4 GPUs, which command should I use:

```
mpirun -np 48 lmp -i input.lammps
```

or

```
mpirun -np 4 lmp -i input.lammps
```

Thank you for your help!
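Neither rank count is wrong a priori: one rank per GPU is the usual starting point, but a few ranks sharing each GPU can help because the CPU-side work is parallelized too. A small sketch of how one might enumerate rank counts worth benchmarking (the helper name and the heuristic cap are my own, not part of deepmd-kit or LAMMPS):

```python
def candidate_rank_counts(n_gpus: int, n_cpus: int, max_ranks_per_gpu: int = 4):
    """Hypothetical helper: MPI rank counts to benchmark when sharing
    n_gpus GPUs on a node with n_cpus cores. Starts at one rank per GPU
    and tries a few ranks per GPU, never oversubscribing the CPU cores."""
    counts = []
    for ranks_per_gpu in range(1, max_ranks_per_gpu + 1):
        n = n_gpus * ranks_per_gpu
        if n <= n_cpus:          # stay within the node's core count
            counts.append(n)
    return counts

# For a node like the one described in this thread (4 GPUs, 48 cores):
print(candidate_rank_counts(4, 48))   # [4, 8, 12, 16]
```

Each candidate would then be timed with `mpirun -np <n> lmp -i input.lammps` to find the sweet spot empirically.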