An error when running conda-installed lmp #282
Replies: 9 comments
-
Which version did you install?
-
I am using the latest version, installed by doing: The model I use is the Al-Mg model downloaded from the deepmd-kit website.
-
Oh, I see. You cannot use a model trained by deepmd-kit version 1.x with deepmd-kit version 1.y if x ≠ y.
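In other words, models are only portable within the same major.minor series. A minimal sketch of that compatibility rule (plain Python, for illustration only; `compatible` is a hypothetical helper, not part of deepmd-kit):

```python
def compatible(model_ver: str, runtime_ver: str) -> bool:
    """A model is usable only if its major.minor matches the runtime's."""
    return model_ver.split(".")[:2] == runtime_ver.split(".")[:2]

print(compatible("1.1.0", "1.1.4"))  # True: same 1.1 series
print(compatible("1.0.0", "1.1.0"))  # False: a 1.0 model won't load in 1.1
```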
-
@amcadmus @jameswind Who maintains the website? We need to indicate the deepmd-kit version of each model there.
-
To install the 1.1 version, use: `conda install deepmd-kit=1.1.*=*cpu lammps-dp=1.1.*=*cpu -c deepmodeling`
-
Thanks for your response. However, it gives me another error, "out of range", with the 1.1.* version.

build with tf lib: /home/zit.bam.de/lwang1/miniconda3/envs/deepmd/lib/libtensorflow_cc.so;/home/zit.bam.de/lwang1/miniconda3/envs/deepmd/lib/libtensorflow_framework.so
-
The 0.12.* version seems not to work?

```
conda install deepmd-kit==0.12.8=*cpu lammps-dp==0.12.8=*cpu -c deepmodeling

UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package deepmd-kit conflicts for:
```
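A note on the spec syntax used above: in a conda spec of the form `name==version=build`, the third field is a glob pattern matched against each package's build string, so `*cpu` selects only CPU builds. A rough sketch of that matching (using Python's `fnmatch` as an illustration, not conda's actual solver):

```python
from fnmatch import fnmatch

def build_matches(spec_build: str, pkg_build: str) -> bool:
    # Conda build-string specs like "*cpu" are glob patterns.
    return fnmatch(pkg_build, spec_build)

print(build_matches("*cpu", "py36_0_cpu"))  # True: a CPU build is selected
print(build_matches("*cpu", "py36_0_gpu"))  # False: GPU builds are filtered out
```

If no build in the channel matches the pattern (e.g. because the 0.12 packages were removed), the solver reports an UnsatisfiableError like the one above.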
-
We deleted 0.12 because there was not enough space. We have just re-uploaded it.
-
```
2020-10-26 23:57:10.091502: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-26 23:57:10.116914: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2800075000 Hz
2020-10-26 23:57:10.120487: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55ca67ebf0c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-26 23:57:10.120554: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Not found: Op type not registered 'DescrptNorot' in binary running on ws8364. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
```
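The "Op type not registered" message means the running binary has no registration for the custom `DescrptNorot` op that the saved model graph references, which is what happens when model and runtime come from mismatched deepmd-kit versions. A conceptual illustration of that lookup (plain Python with hypothetical names, not TensorFlow's actual op registry):

```python
OP_REGISTRY = {}

def load_op_library(ops):
    # Loading a plugin/shared library registers the ops it defines;
    # TensorFlow custom ops are registered analogously at load time.
    OP_REGISTRY.update(ops)

def run_op(name):
    # Executing a graph node fails if its op type was never registered.
    if name not in OP_REGISTRY:
        raise LookupError(f"Op type not registered '{name}'")
    return OP_REGISTRY[name]()

load_op_library({"DescrptNorot": lambda: "ok"})
print(run_op("DescrptNorot"))  # ok
```

In the real case the fix is not to register the op by hand but to use a deepmd-kit/lammps-dp build from the same version series as the model, as discussed above.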