-
oneDNN doesn't support FP64, which is the default precision in DeePMD-kit. Have you set the precision to FP32?
-
Understood, thank you. One observation, if I may: "precision" is not accepted as an input under "model", but it is accepted under "descriptor" and "fitting_net". When placed as an argument under "model", the error "Not permitted in strict mode" is raised.
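For anyone landing here later, here is a minimal sketch of where the precision keys are accepted, written as a Python snippet that emits the relevant fragment of input.json. The se_e2_a descriptor type, the neighbor selection, cutoff, and network sizes are placeholder assumptions, not values taken from this thread:

```python
import json

# Sketch of a DeePMD-kit input.json fragment: "precision" is accepted
# inside "descriptor" and "fitting_net", but not at the "model" level.
config = {
    "model": {
        "descriptor": {
            "type": "se_e2_a",        # placeholder descriptor type
            "sel": [46, 92],          # placeholder neighbor selection
            "rcut": 6.0,              # placeholder cutoff radius
            "precision": "float32",   # FP32 so oneDNN kernels can be used
        },
        "fitting_net": {
            "neuron": [240, 240, 240],  # placeholder network sizes
            "precision": "float32",     # FP32 here as well
        },
    },
}

with open("input_fragment.json", "w") as f:
    json.dump(config, f, indent=4)
```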
-
Hello,
I've noticed a significant bottleneck when running DeePMD with multiple workers on a single node, specifically in the MatMul operator. Using an optimized build of TensorFlow (oneDNN in place of Eigen), I expected to see MKL being called during the execution of TensorFlow operators such as MatMul, but this is not the case.
I understand that MatMul is used in place of GEMM because of the model compression you use to achieve large speedups in the inference phase.
In your experience, do you know of ways to speed up MatMul's compute time?
Thank you very much for your time.
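Not an authoritative answer, but a sketch of general TensorFlow CPU tuning knobs that are often worth checking when MatMul dominates on a shared node; the thread counts are placeholder assumptions and none of these settings come from the DeePMD-kit developers. Whether this path actually reaches MKL/oneDNN for the compressed-model MatMul is exactly the open question above, so treat it as something to experiment with:

```python
import os

# These environment variables must be set before TensorFlow is imported.
os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "1")  # ask TF to use oneDNN kernels where available
os.environ.setdefault("OMP_NUM_THREADS", "8")         # placeholder: threads per worker, tune to your node
os.environ.setdefault("KMP_BLOCKTIME", "0")           # common oneDNN/OpenMP tuning suggestion
os.environ.setdefault("KMP_AFFINITY", "granularity=fine,compact,1,0")

import tensorflow as tf

# Keep TF's own thread pools consistent with the OpenMP settings above,
# so multiple workers on one node do not oversubscribe the cores.
tf.config.threading.set_intra_op_parallelism_threads(8)  # placeholder value
tf.config.threading.set_inter_op_parallelism_threads(1)  # placeholder value
```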