Dear developers and community members,

I want to use a DPA3 model to predict the energy and forces of a large system (~5000 atoms). When using the Python3-based interface (here), an OOM error was thrown even with batch_size = 1 (on a V100 GPU with 16 GB of memory). It also seems that the Python3 interface cannot use multiple GPU cards for inference.
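For reference, a minimal sketch of what such a single-shot evaluation looks like through the Python API (deepmd.infer.DeepPot), assuming deepmd-kit v3 with the PyTorch backend; the model file name, coordinates, and type indices below are placeholders rather than values from the original post:

```python
# Minimal sketch (not the original script): single-shot evaluation with the
# deepmd.infer Python API. File name, coordinates, and types are placeholders.
import numpy as np
from deepmd.infer import DeepPot

dp = DeepPot("dpa3_model.pth")  # DPA3 model file (placeholder name, PyTorch backend assumed)

natoms = 5000
coords = np.random.uniform(0.0, 30.0, size=(1, natoms * 3))  # one frame, flattened xyz in Angstrom
cells = (30.0 * np.eye(3)).reshape(1, 9)                     # flattened 3x3 periodic cell
atom_types = [0] * natoms                                    # per-atom type indices following the model's type_map

# The whole frame is evaluated in one call, so peak GPU memory grows with the
# full system size; this is where the OOM occurs on a 16 GB card.
energy, force, virial = dp.eval(coords, cells, atom_types)
print(energy, force.shape)
```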

I also converted the configuration into the LAMMPS data format. Using LAMMPS, I can successfully predict the energy and forces for this system (using multiple GPU cards).
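A minimal sketch of one way to do such a conversion with dpdata; the input structure file ("POSCAR") and output file name are placeholders, and the element ordering must match the model's type_map:

```python
# Convert a configuration to a LAMMPS data file with dpdata (placeholder file names).
import dpdata

system = dpdata.System("POSCAR", fmt="vasp/poscar")  # load the structure
system.to("lammps/lmp", "conf.lmp", frame_idx=0)     # write the first frame as LAMMPS data
```

The resulting data file can then be read by a LAMMPS input that uses the deepmd pair style (pair_style deepmd with the model file, plus pair_coeff * *), and the run can be distributed over several GPUs through MPI, which is presumably how the multi-GPU evaluation above was carried out.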

Unfortunately, the Python interface does not support splitting large systems into smaller subsystems. Based on our observations, a system of ~5000 atoms takes roughly 40 GB of VRAM to run DPA3.
