
Commit bbc055d

[LTS] update toolchain version to 202502 (#6203)
* Toolchain-202502 Updates
* followed 202502 updates
* elpa-gpu modify
1 parent 1be7425 commit bbc055d

30 files changed: +380 / -240 lines changed

toolchain/README.md

Lines changed: 72 additions & 47 deletions
@@ -1,6 +1,6 @@
 # The ABACUS Toolchain

-Version 2025.1
+Version 2025.2

 ## Main Developer
@@ -33,7 +33,6 @@ and give setup files that you can use to compile ABACUS.
 - [ ] Support a JSON or YAML configuration file for toolchain, which can be easily modified by users.
 - [ ] A better README and Detail markdown file.
 - [ ] Automatic installation of [DEEPMD](https://github.com/deepmodeling/deepmd-kit).
-- [ ] Better compliation method for ABACUS-DEEPMD and ABACUS-DEEPKS.
 - [ ] Modulefile generation scripts.

@@ -44,17 +43,17 @@ which will use scripts in *scripts* directory
 to compile install dependencies of ABACUS.
 It can be directly used, but not recommended.

-There are also well-modified script to run *install_abacus_toolchain.sh* for `gnu-openblas` and `intel-mkl` toolchains dependencies.
+There are also well-prepared scripts that run *install_abacus_toolchain.sh* for the `gnu` (gcc-openblas), `intel` (intel-mkl-mpi-compiler), `gcc-aocl` and `aocc-aocl` toolchain dependencies.

 ```shell
 # for gnu-openblas
 > ./toolchain_gnu.sh
 # for intel-mkl
 > ./toolchain_intel.sh
-# for amd aocc-aocl
-> ./toolchain_amd.sh
-# for intel-mkl-mpich
-> ./toolchain_intel-mpich.sh
+# for AMD gcc-aocl
+> ./toolchain_gcc-aocl.sh
+# for AMD aocc-aocl
+> ./toolchain_aocc-aocl.sh
 ```

 It is recommended to run one of them first to get a fast installation of ABACUS under certain environments.
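For example, a typical Intel run looks roughly like this (a sketch; the exact module names are site-specific, see the notices below):

```shell
# load the Intel oneAPI environment first (module names depend on your cluster)
module load mkl mpi icc compiler
# then install the dependencies with the matching toolchain script
./toolchain_intel.sh
```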
@@ -66,13 +65,16 @@ If you are using Intel environments via Intel-OneAPI: please note:
 4. Users can manually specify `--with-ifx=no` in `toolchain*.sh` to use `ifort` while keep other compiler to new version.
 5. More information is in the later part of this README.

-**Notice: You GCC version should be no lower than 5 !!!, larger than 7.3.0 is recommended**
+If you are using AMD AOCL and AOCC, please note:

-**Notice: You SHOULD `source` or `module load` related environments before use toolchain method for installation, espacially for `gcc` or `intel-oneAPI` !!!! for example, `module load mkl mpi icc compiler`**
+
+**Notice: Your GCC version should be no lower than 5! The toolchain will check this, and a GCC newer than 7.3.0 is recommended.**
+
+**Notice: You SHOULD `source` or `module load` the related environments before using the toolchain for installation, especially for the `intel`, `gcc-aocl` or `aocc-aocl` toolchain! For example, `module load mkl mpi icc compiler` to load the oneAPI environment.**

 **Notice: You SHOULD keep your environments systematic, for example, you CANNOT load `intel-OneAPI` environments while use gcc toolchain !!!**

-**Notice: If your server system already have libraries like `cmake`, `openmpi`, please change related setting in `toolchain*.sh` like `--with-cmake=system`**
+**Notice: If your server system already has libraries like `cmake` or `openmpi`, please change the related setting in `toolchain*.sh`, e.g. `--with-cmake=system`; note that the environments of these system packages will not be added into the install/setup file.**

 All packages will be downloaded from [cp2k-static/download](https://www.cp2k.org/static/downloads). by `wget` , and will be detailedly compiled and installed in `install` directory by toolchain scripts, despite of:
@@ -82,7 +84,7 @@ All packages will be downloaded from [cp2k-static/download](https://www.cp2k.org
 - `LibRI` which will be downloaded from [LibRI](https://github.com/abacusmodeling/LibRI)
 - `LibCOMM` which will be downloaded from [LibComm](https://github.com/abacusmodeling/LibComm)
 - `RapidJSON` which will be downloaded from [RapidJSON](https://github.com/Tencent/rapidjson)
-Notice: These packages will be downloaded by `wget` from `github.com`, which is hard to be done in Chinese Internet. You may need to use offline installation method.
+Notice: These packages will be downloaded by `wget` from `codeload.github.com`, which bypasses the access difficulty from the Chinese Internet to some extent. If any downloading problem occurs, you may need to use the offline installation method.

 Instead of github.com, we offer other package station, you can use it by:
 ```shell
@@ -98,7 +100,7 @@ The above station will be updated handly but one should notice that the version
 If one want to install ABACUS by toolchain OFFLINE,
 one can manually download all the packages from [cp2k-static/download](https://www.cp2k.org/static/downloads) or official website
 and put them in *build* directory by formatted name
-like *fftw-3.3.10.tar.gz*, or *openmpi-5.0.6.tar.bz2*,
+like *fftw-3.3.10.tar.gz*, or *openmpi-5.0.7.tar.bz2*,
 then run this toolchain.
 All package will be detected and installed automatically.
 Also, one can install parts of packages OFFLINE and parts of packages ONLINE
@@ -113,17 +115,17 @@ just by using this toolchain

 The needed dependencies version default:

-- `cmake` 3.31.2
+- `cmake` 3.31.7
 - `gcc` 13.2.0 (which will always NOT be installed, But use system)
-- `OpenMPI` 5.0.6 (Version 5 OpenMPI is good but will have compability problem, user can manually downarade to Version 4 in toolchain scripts)
+- `OpenMPI` 5.0.7 (Version 5 OpenMPI is good but may have compatibility problems; users can manually downgrade to Version 4 in the toolchain scripts by specifying `--with-openmpi4`)
 - `MPICH` 4.3.0
-- `OpenBLAS` 0.3.28 (Intel toolchain need `get_vars.sh` tool from it)
-- `ScaLAPACK` 2.2.1 (a developing version)
+- `OpenBLAS` 0.3.29 (Intel toolchain need `get_vars.sh` tool from it)
+- `ScaLAPACK` 2.2.2
 - `FFTW` 3.3.10
 - `LibXC` 7.0.0
-- `ELPA` 2025.01.001
-- `CEREAL` 1.3.2
-- `RapidJSON` 1.1.0
+- `ELPA` 2025.01.001 (may not be compatible with the GPU version)
+- `CEREAL` master (for oneapi compatibility)
+- `RapidJSON` master (for oneapi compatibility)
 And:
 - Intel-oneAPI need user or server manager to manually install from Intel.
 - - [Intel-oneAPI](https://www.intel.cn/content/www/cn/zh/developer/tools/oneapi/toolkits.html)
@@ -132,23 +134,21 @@ And:
 - - [AOCL](https://www.amd.com/zh-cn/developer/aocl.html)

 Dependencies below are optional, which is NOT installed by default:
-
 - `LibTorch` 2.1.2
 - `Libnpy` 1.0.1
-- `LibRI` 0.2.0
-- `LibComm` 0.1.1
+- `LibRI` 0.2.1.0
+- `LibComm` master (for openmpi compatibility)

 Users can install them by using `--with-*=install` in toolchain*.sh, which is `no` in default. Also, user can specify the absolute path of the package by `--with-*=path/to/package` in toolchain*.sh to allow toolchain to use the package.
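For example, optional packages can be switched on or pointed at an existing copy roughly like this (a sketch; the exact `--with-*` flag names are defined in *install_abacus_toolchain.sh*, so check them there before use):

```shell
# sketch: ask the toolchain to build optional packages as well
./toolchain_gnu.sh --with-libtorch=install --with-libnpy=install
# or reuse a package that is already present on the machine
./toolchain_gnu.sh --with-libri=/path/to/LibRI
```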
 > Notice: LibTorch always suffer from GLIBC_VERSION problem, if you encounter this, please downgrade LibTorch version to 1.12.1 in scripts/stage4/install_torch.sh
 >
 > Notice: LibRI, LibComm, Rapidjson and Libnpy is on actively development, you should check-out the package version when using this toolchain.

 Users can easily compile and install dependencies of ABACUS
-by running these scripts after loading `gcc` or `intel-mkl-mpi`
-environment.
+by running these scripts after loading the related environment.

 The toolchain installation process can be interrupted at anytime.
-just re-run *toolchain_\*.sh*, toolchain itself may fix it. If you encouter some problem, you can always remove some package in the interrupted points and re-run the toolchain.
+just re-run *toolchain_\*.sh*, and the toolchain itself may fix it. If you encounter a problem such as a corrupted file, you can always remove the affected package at the interruption point and re-run the toolchain.

 Some useful options:
 - `--dry-run`: just run the main install scripts for environment setting, without any package downloading or installation.
157157
If compliation is successful, a message will be shown like this:
158158

159159
```shell
160-
> Done!
161-
> To use the installed tools and libraries and ABACUS version
162-
> compiled with it you will first need to execute at the prompt:
163-
> source ./install/setup
164-
> To build ABACUS by gnu-toolchain, just use:
165-
> ./build_abacus_gnu.sh
166-
> To build ABACUS by intel-toolchain, just use:
167-
> ./build_abacus_intel.sh
168-
> To build ABACUS by amd-toolchain in gcc-aocl, just use:
169-
> ./build_abacus_amd.sh
170-
> or you can modify the builder scripts to suit your needs.
160+
========================== usage =========================
161+
Done!
162+
To use the installed tools and libraries and ABACUS version
163+
compiled with it you will first need to execute at the prompt:
164+
source ${SETUPFILE}
165+
To build ABACUS by gnu-toolchain, just use:
166+
./build_abacus_gnu.sh
167+
To build ABACUS by intel-toolchain, just use:
168+
./build_abacus_intel.sh
169+
To build ABACUS by amd-toolchain in gcc-aocl, just use:
170+
./build_abacus_gnu-aocl.sh
171+
To build ABACUS by amd-toolchain in aocc-aocl, just use:
172+
./build_abacus_aocc-aocl.sh
173+
or you can modify the builder scripts to suit your needs.
171174
```
172175

173176
You can run *build_abacus_gnu.sh* or *build_abacus_intel.sh* to build ABACUS
174-
by gnu-toolchain or intel-toolchain respectively, the builder scripts will
175-
automatically locate the environment and compile ABACUS.
177+
by gnu-toolchain or intel-toolchain respectively, same for the `gcc-aocl` and `aocc-aocl` toolchain.
178+
Then, the builder scripts will automatically locate the environment and compile ABACUS.
176179
You can manually change the builder scripts to suit your needs.
177180
The builder scripts will generate `abacus_env.sh` for source
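Put together, the typical flow after the toolchain finishes is short; a sketch for the gnu case, run from the *toolchain* directory (the exact location where `abacus_env.sh` is written may differ):

```shell
source ./install/setup     # environment written by the toolchain
./build_abacus_gnu.sh      # configure and compile ABACUS with that environment
source abacus_env.sh       # generated by the builder script for later runs
```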

@@ -240,6 +243,28 @@ then just build the abacus executable program by compiling it with `./build_abac

 The ELPA method need more parameter setting, but it doesn't seem to be affected by the CUDA toolkits version, and it is no need to manually install and package.

+Note: ELPA-2025.01.001 may have problems in nvidia-GPU compilation on some machines with V100 GPUs and AMD CPUs; a typical error message:
+```bash
+1872 | static __forceinline void CONCAT_8ARGS(hh_trafo_complex_kernel_,ROW_LENGTH,_,SIMD_SET,_,BLOCK,hv_,WORD_LENGTH) (DATA_TYPE_PTR q, DATA_TYPE_PTR hh, int nb, int ldq
+     |                           ^~~~~~~~~~~~~~~~~~~~~~~~
+../src/elpa2/kernels/complex_128bit_256bit_512bit_BLOCK_template.c:51:47: note: in definition of macro 'CONCAT2_8ARGS'
+   51 | #define CONCAT2_8ARGS(a, b, c, d, e, f, g, h) a ## b ## c ## d ## e ## f ## g ## h
+      |                                               ^
+../src/elpa2/kernels/complex_128bit_256bit_512bit_BLOCK_template.c:1872:27: note: in expansion of macro 'CONCAT_8ARGS'
+ 1872 | static __forceinline void CONCAT_8ARGS(hh_trafo_complex_kernel_,ROW_LENGTH,_,SIMD_SET,_,BLOCK,hv_,WORD_LENGTH) (DATA_TYPE_PTR q, DATA_TYPE_PTR hh, int nb, int ldq
+      |                           ^~~~~~~~~~~~
+  PPFC     src/GPU/libelpa_openmp_private_la-mod_vendor_agnostic_general_layer.lo
+  PPFC     test/shared/GPU/libelpatest_openmp_la-test_gpu_vendor_agnostic_layer.lo
+../src/GPU/CUDA/./cudaFunctions_template.h(942): error: identifier "creal" is undefined
+      double alpha_real = creal(alpha);
+                          ^
+
+../src/GPU/CUDA/./cudaFunctions_template.h(960): error: identifier "creal" is undefined
+      float alpha_real = creal(alpha);
+```
+
+In that case you may need to change the ELPA version to 2024.05.001; edit `toolchain/scripts/stage3/install_elpa.sh` to do it.
+
 2. For the cusolvermp method, toolchain_*.sh does not need to be changed, just follow it directly install dependencies using `./toolchain_*.sh`, and then add
 ```shell
 -DUSE_CUDA=ON \
@@ -268,11 +293,8 @@ After compiling, you can specify `device GPU` in INPUT file to use GPU version o

 #### OneAPI 2025.0 problem

-Generally, OneAPI 2025.0 can be useful to compile basic function of ABACUS, but one will encounter compatible problem related to something. Here is the treatment
-- related to rapidjson:
-- - Not to use rapidjson in your toolchain
-- - or use the master branch of [RapidJSON](https://github.com/Tencent/rapidjson)
-- related to LibRI: not to use LibRI or downgrade your OneAPI.
+Generally, OneAPI 2025.0 can compile the basic functions of ABACUS, but some compatibility problems remain.
+- related to LibRI: refer to [#6190](https://github.com/deepmodeling/abacus-develop/issues/6190); for now it is recommended not to use LibRI, or to downgrade your OneAPI.

 #### ELPA problem via Intel-oneAPI toolchain in AMD server
@@ -301,19 +323,22 @@ And will not occur in Intel-MPI before 2021.10.0 (Intel-oneAPI before 2023.2.0)

 More problem and possible solution can be accessed via [#2928](https://github.com/deepmodeling/abacus-develop/issues/2928)

+#### gcc-MKL problem
+
+You cannot use gcc as the compiler while using MKL as the math library to compile ABACUS; there will be lots of errors in the last linking step. See [#3198](https://github.com/deepmodeling/abacus-develop/issues/3198)
+
 ### AMD AOCC-AOCL problem

-You cannot use AOCC to complie abacus now, see [#5982](https://github.com/deepmodeling/abacus-develop/issues/5982) .
+Using AOCC-AOCL to compile the dependencies is permitted and usually boosts ABACUS efficiency, but you need to get rid of `flang` while compiling ELPA. The toolchain applies this `flang` workaround by default in the `aocc-aocl` toolchain; you can manually enable `flang` by setting `--with-flang=yes` in `toolchain_aocc-aocl.sh` to have a try. While the toolchain helps you bypass the possible errors when compiling ELPA with AOCC-AOCL, the computing efficiency will be relatively lower compared to the `gnu` or `gcc-aocl` toolchain.

-However, use AOCC-AOCL to compile dependencies is permitted and usually get boosting in ABACUS effciency. But you need to get rid of `flang` while compling ELPA. Toolchain itself help you make this `flang` shade in default, and you can manully use `flang` by setting `--with-flang=yes` in `toolchain_amd.sh` to have a try.
+The `gcc-aocl` toolchain has none of the problems above when using the AOCC-built AOCL. However, the gcc-built AOCL has some package linking problems related to OpenMPI; use it with caution.

-Notice: ABACUS via GCC-AOCL in AOCC-AOCL toolchain have no application with DeePKS, DeePMD and LibRI.

 ### OpenMPI problem

 #### in EXX and LibRI

-- GCC toolchain with OpenMPI cannot compile LibComm v0.1.1 due to the different MPI variable type from MPICH and IntelMPI, see discussion here [#5033](https://github.com/deepmodeling/abacus-develop/issues/5033), you can try use a newest branch of LibComm by
+- [Fixed in Toolchain 2025-02] GCC toolchain with OpenMPI cannot compile LibComm v0.1.1 due to a different MPI variable type from MPICH and IntelMPI, see the discussion in [#5033](https://github.com/deepmodeling/abacus-develop/issues/5033); you can try the newest branch of LibComm by
 ```
 git clone https://gitee.com/abacus_dft/LibComm -b MPI_Type_Contiguous_Pool
 ```
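A hypothetical way to wire the cloned branch into a builder script (the `LIBCOMM` path and `-DLIBCOMM_DIR` flag are already present, commented out, in the build_abacus_*.sh scripts):

```shell
# clone the fixed branch somewhere permanent
git clone https://gitee.com/abacus_dft/LibComm -b MPI_Type_Contiguous_Pool
# then, in your build_abacus_*.sh, point LIBCOMM at the checkout and
# uncomment the corresponding cmake flag:
#   LIBCOMM=/path/to/LibComm
#   -DLIBCOMM_DIR=$LIBCOMM \
```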

toolchain/build_abacus_gnu-aocl.sh renamed to toolchain/build_abacus_aocc-aocl.sh

Lines changed: 14 additions & 10 deletions
@@ -16,21 +16,22 @@ INSTALL_DIR=$TOOL/install
 source $INSTALL_DIR/setup
 cd $ABACUS_DIR
 ABACUS_DIR=$(pwd)
-#AOCLhome=/opt/aocl # user can specify this parameter
+#AOCLhome=/opt/aocl-linux-aocc-5.0.0/5.0.0/aocl/ # user should specify this parameter

 BUILD_DIR=build_abacus_aocl
 rm -rf $BUILD_DIR

 PREFIX=$ABACUS_DIR
 ELPA=$INSTALL_DIR/elpa-2025.01.001/cpu
-CEREAL=$INSTALL_DIR/cereal-1.3.2/include/cereal
+# ELPA=$INSTALL_DIR/elpa-2025.01.001/nvidia # for gpu-lcao
+CEREAL=$INSTALL_DIR/cereal-master/include/cereal
 LIBXC=$INSTALL_DIR/libxc-7.0.0
-RAPIDJSON=$INSTALL_DIR/rapidjson-1.1.0/
-# LAPACK=$AOCLhome/lib
-# SCALAPACK=$AOCLhome/lib
-# FFTW3=$AOCLhome
+RAPIDJSON=$INSTALL_DIR/rapidjson-master/
+LAPACK=$AOCLhome/lib
+SCALAPACK=$AOCLhome/lib
+FFTW3=$AOCLhome
 # LIBRI=$INSTALL_DIR/LibRI-0.2.1.0
-# LIBCOMM=$INSTALL_DIR/LibComm-0.1.1
+# LIBCOMM=$INSTALL_DIR/LibComm-master
 # LIBTORCH=$INSTALL_DIR/libtorch-2.1.2/share/cmake/Torch
 # LIBNPY=$INSTALL_DIR/libnpy-1.0.1/include
 # DEEPMD=$HOME/apps/anaconda3/envs/deepmd # v3.0 might have problem
@@ -40,6 +41,9 @@ RAPIDJSON=$INSTALL_DIR/rapidjson-1.1.0/
 cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
     -DCMAKE_CXX_COMPILER=clang++ \
     -DMPI_CXX_COMPILER=mpicxx \
+    -DLAPACK_DIR=$LAPACK \
+    -DSCALAPACK_DIR=$SCALAPACK \
+    -DFFTW3_DIR=$FFTW3 \
    -DELPA_DIR=$ELPA \
     -DCEREAL_INCLUDE_DIR=$CEREAL \
     -DLibxc_DIR=$LIBXC \
@@ -49,16 +53,16 @@ cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
     -DUSE_ELPA=ON \
     -DENABLE_RAPIDJSON=ON \
     -DRapidJSON_DIR=$RAPIDJSON \
-    # -DLAPACK_DIR=$LAPACK \
-    # -DSCALAPACK_DIR=$SCALAPACK \
-    # -DFFTW3_DIR=$FFTW3 \
     # -DENABLE_DEEPKS=1 \
     # -DTorch_DIR=$LIBTORCH \
     # -Dlibnpy_INCLUDE_DIR=$LIBNPY \
     # -DENABLE_LIBRI=ON \
     # -DLIBRI_DIR=$LIBRI \
     # -DLIBCOMM_DIR=$LIBCOMM \
     # -DDeePMD_DIR=$DEEPMD \
+    # -DUSE_CUDA=ON \
+    # -DENABLE_CUSOLVERMP=ON \
+    # -D CAL_CUSOLVERMP_PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/2x.xx/math_libs/1x.x/targets/x86_64-linux/lib

 # if one want's to include deepmd, your system gcc version should be >= 11.3.0 for glibc requirements
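Note that `LAPACK`, `SCALAPACK` and `FFTW3` are now derived from `$AOCLhome`, which is still commented out near the top of the script; a minimal sketch of what must be active before running it (the AOCL path is site-specific):

```shell
# uncomment and adapt the AOCL root to your installation
AOCLhome=/opt/aocl-linux-aocc-5.0.0/5.0.0/aocl
LAPACK=$AOCLhome/lib
SCALAPACK=$AOCLhome/lib
FFTW3=$AOCLhome
```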

toolchain/build_abacus_intel-mpich.sh renamed to toolchain/build_abacus_gcc-aocl.sh

Lines changed: 29 additions & 19 deletions
@@ -6,36 +6,44 @@
 #SBATCH -e install.err
 # JamesMisaka in 2025.03.09

-# Build ABACUS by intel-toolchain with mpich
+# Build ABACUS by amd-openmpi toolchain

-# module load mkl compiler
-# source path/to/setvars.sh
+# module load openmpi aocc aocl

 ABACUS_DIR=..
 TOOL=$(pwd)
 INSTALL_DIR=$TOOL/install
 source $INSTALL_DIR/setup
 cd $ABACUS_DIR
 ABACUS_DIR=$(pwd)
+#AOCLhome=/opt/aocl-linux-aocc-5.0.0/5.0.0/aocl/ # user should specify this parameter

-BUILD_DIR=build_abacus_intel-mpich
+BUILD_DIR=build_abacus_aocl
 rm -rf $BUILD_DIR

 PREFIX=$ABACUS_DIR
 ELPA=$INSTALL_DIR/elpa-2025.01.001/cpu
-CEREAL=$INSTALL_DIR/cereal-1.3.2/include/cereal
-LIBXC=$INSTALL_DIR/libx-7.0.0
-RAPIDJSON=$INSTALL_DIR/rapidjson-1.1.0/
+# ELPA=$INSTALL_DIR/elpa-2025.01.001/nvidia # for gpu-lcao
+CEREAL=$INSTALL_DIR/cereal-master/include/cereal
+LIBXC=$INSTALL_DIR/libxc-7.0.0
+RAPIDJSON=$INSTALL_DIR/rapidjson-master/
+LAPACK=$AOCLhome/lib
+SCALAPACK=$AOCLhome/lib
+FFTW3=$AOCLhome
+# LIBRI=$INSTALL_DIR/LibRI-0.2.1.0
+# LIBCOMM=$INSTALL_DIR/LibComm-master
 # LIBTORCH=$INSTALL_DIR/libtorch-2.1.2/share/cmake/Torch
 # LIBNPY=$INSTALL_DIR/libnpy-1.0.1/include
-# LIBRI=$INSTALL_DIR/LibRI-0.2.1.0
-# LIBCOMM=$INSTALL_DIR/LibComm-0.1.1
 # DEEPMD=$HOME/apps/anaconda3/envs/deepmd # v3.0 might have problem

+# if clang++ have problem, switch back to g++
+
 cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
-    -DCMAKE_CXX_COMPILER=icpx \
+    -DCMAKE_CXX_COMPILER=g++ \
     -DMPI_CXX_COMPILER=mpicxx \
-    -DMKLROOT=$MKLROOT \
+    -DLAPACK_DIR=$LAPACK \
+    -DSCALAPACK_DIR=$SCALAPACK \
+    -DFFTW3_DIR=$FFTW3 \
     -DELPA_DIR=$ELPA \
     -DCEREAL_INCLUDE_DIR=$CEREAL \
     -DLibxc_DIR=$LIBXC \
@@ -45,14 +53,16 @@ cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
     -DUSE_ELPA=ON \
     -DENABLE_RAPIDJSON=ON \
     -DRapidJSON_DIR=$RAPIDJSON \
-    # -DENABLE_DEEPKS=1 \
-    # -DTorch_DIR=$LIBTORCH \
-    # -Dlibnpy_INCLUDE_DIR=$LIBNPY \
-    # -DENABLE_LIBRI=ON \
-    # -DLIBRI_DIR=$LIBRI \
-    # -DLIBCOMM_DIR=$LIBCOMM \
-    # -DDeePMD_DIR=$DEEPMD \
-
+    # -DENABLE_DEEPKS=1 \
+    # -DTorch_DIR=$LIBTORCH \
+    # -Dlibnpy_INCLUDE_DIR=$LIBNPY \
+    # -DENABLE_LIBRI=ON \
+    # -DLIBRI_DIR=$LIBRI \
+    # -DLIBCOMM_DIR=$LIBCOMM \
+    # -DDeePMD_DIR=$DEEPMD \
+    # -DUSE_CUDA=ON \
+    # -DENABLE_CUSOLVERMP=ON \
+    # -D CAL_CUSOLVERMP_PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/2x.xx/math_libs/1x.x/targets/x86_64-linux/lib

 # if one want's to include deepmd, your system gcc version should be >= 11.3.0 for glibc requirements
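As with the other builder scripts, this one expects to be launched from the *toolchain* directory so that *install/setup* is found; a sketch of how it might be run, starting from the repository root (Slurm submission is optional, the `#SBATCH` headers are already in place):

```shell
cd toolchain                       # so that $(pwd)/install/setup exists
bash build_abacus_gcc-aocl.sh      # run directly
# sbatch build_abacus_gcc-aocl.sh  # or submit through Slurm
```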
