`docs/advanced/acceleration/cuda.md` (1 addition, 1 deletion)
@@ -36,7 +36,7 @@ The ABACUS program will automatically determine whether the current ELPA support
## Run with GPU support by editing the INPUT script:
In the `INPUT` file, set the input parameter [device](../input_files/input-main.md#device) to `gpu`. If this parameter is not set, ABACUS will try to determine whether any GPUs are available. A minimal example is given after the list below.
- - Set `ks_solver`: For the PW basis, CG, BPCG and Davidson methods are supported on GPU; set the input parameter [ks_solver](../input_files/input-main.md#ks_solver) to `cg`, `bpcg` or `dav`. For the LCAO basis, `cusolver` and `elpa` is supported on GPU.
+ - Set `ks_solver`: For the PW basis, CG, BPCG and Davidson methods are supported on GPU; set the input parameter [ks_solver](../input_files/input-main.md#ks_solver) to `cg`, `bpcg` or `dav`. For the LCAO basis, `cusolver`, `cusolvermp` and `elpa` are supported on GPU.
- **multi-card**: ABACUS supports multi-GPU acceleration. If you have multiple GPU cards, you can run ABACUS with several MPI processes, and each process will use one GPU card. For example, the command `mpirun -n 2 abacus` will by default use two GPUs for the computation. If you only have one card, this command will use just that one GPU.
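For reference, here is a minimal sketch of such a setup (an LCAO run on two cards; the solver choice is illustrative and should match your basis):

```
# INPUT (excerpt): enable GPU acceleration
device      gpu        # if omitted, ABACUS tries to detect available GPUs
ks_solver   cusolver   # LCAO basis; for the PW basis use cg, bpcg or dav
```

launched with one MPI process per GPU card:

```
mpirun -n 2 abacus
```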
`docs/advanced/input_files/input-main.md` (28 additions, 2 deletions)
@@ -161,6 +161,7 @@
- [nbands\_istate](#nbands_istate)
- [bands\_to\_print](#bands_to_print)
- [if\_separate\_k](#if_separate_k)
+ - [out\_elf](#out_elf)
- [Density of states](#density-of-states)
- [dos\_edelta\_ev](#dos_edelta_ev)
- [dos\_sigma](#dos_sigma)
@@ -932,6 +933,8 @@ calculations.
- **genelpa**: This method should be used if you choose localized orbitals.
- **scalapack_gvx**: Scalapack can also be used for localized orbitals.
- **cusolver**: This method requires building with CUDA and at least one available GPU.
+ - **cusolvermp**: This method supports multi-GPU acceleration and requires building with CUDA. Note that when using cusolvermp, the number of MPI processes should equal the number of GPUs (see the run sketch after this list).
+ - **elpa**: The ELPA solver supports both CPU and GPU. By setting `device` to `gpu`, you can run the ELPA solver with GPU acceleration (provided you have installed a GPU-enabled build of ELPA, which must be compiled and installed manually, and ABACUS has been compiled with -DUSE_ELPA=ON and -DUSE_CUDA=ON). The ELPA solver also supports multi-GPU acceleration.
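As a hedged example of a `cusolvermp` run, assuming a node with 4 GPUs (the card count is illustrative):

```
# INPUT (excerpt)
basis_type  lcao
device      gpu
ks_solver   cusolvermp   # multi-GPU LCAO solver; requires an ABACUS build with -DUSE_CUDA=ON
```

```
# the number of MPI processes should equal the number of GPUs (4 assumed here)
mpirun -n 4 abacus
```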
If you set ks_solver=`genelpa` for basis_type=`pw`, the program will be stopped with an error message:
@@ -940,7 +943,13 @@ calculations.

…
Then the user has to correct the input file and restart the calculation.
- - **Default**: cg (plane-wave basis), or genelpa (localized atomic orbital basis, if compiling option `USE_ELPA` has been set), lapack (localized atomic orbital basis, if compiling option `ENABLE_MPI` has not been set), scalapack_gvx (localized atomic orbital basis, if compiling option `USE_ELPA` has not been set and if compiling option `ENABLE_MPI` has been set)
+ - **Default**:
+   - **PW basis**: cg.
+   - **LCAO basis**:
+     - genelpa (if compiling option `USE_ELPA` has been set)
+     - lapack (if compiling option `ENABLE_MPI` has not been set)
+     - scalapack_gvx (if compiling option `USE_ELPA` has not been set and compiling option `ENABLE_MPI` has been set)
+     - cusolver (if compiling option `USE_CUDA` has been set)
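To make the dependence on these compile options concrete, a rough configure sketch (toolchain paths and library locations omitted; an assumption-laden example rather than a complete build recipe) might be:

```
# MPI build with ELPA and CUDA support: genelpa, cusolver, cusolvermp
# and GPU-accelerated ELPA all become available as ks_solver choices
cmake -B build -DUSE_ELPA=ON -DUSE_CUDA=ON
cmake --build build -j 8
```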
### nbands
@@ -1521,7 +1530,7 @@ These variables are used to control the output of properties.
- **Type**: Integer \[Integer\](optional)
- **Description**:
  The first integer controls whether to output the charge density on real space grids:
- 1. Output the charge density (in Bohr^-3) on real space grids into the density files in the folder `OUT.${suffix}`. The files are named as:
+ - 1: Output the charge density (in Bohr^-3) on real space grids into the density files in the folder `OUT.${suffix}`. The files are named as:
  - nspin = 1: SPIN1_CHG.cube;
  - nspin = 2: SPIN1_CHG.cube, and SPIN2_CHG.cube;
  - nspin = 4: SPIN1_CHG.cube, SPIN2_CHG.cube, SPIN3_CHG.cube, and SPIN4_CHG.cube.
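For instance, a minimal `INPUT` excerpt for the parameter documented here (presumably `out_chg`; shown for a spin-polarized run, so both SPIN1_CHG.cube and SPIN2_CHG.cube are written under `OUT.${suffix}`) might be:

```
# INPUT (excerpt)
nspin    2
out_chg  1    # write the real-space charge density as cube files
```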
@@ -1801,6 +1810,23 @@ The band (KS orbital) energy for each (k-point, spin, band) will be printed in t
- **Description**: Specifies whether to write the partial charge densities for all k-points to individual files or merge them. **Warning**: Enabling symmetry may produce incorrect results due to incorrect k-point weights. Therefore, when calculating partial charge densities, it is strongly recommended to set `symmetry = -1`.
- **Default**: false

+ ### out_elf
+ - **Type**: Integer \[Integer\](optional)
+ - **Availability**: Only for Kohn-Sham DFT and Orbital Free DFT.
+ - **Description**: Whether to output the electron localization function (ELF) in the folder `OUT.${suffix}`. The files are named as
…
+ The second integer controls the precision of the kinetic energy density output; if it is not given, `3` is used by default. For restarting from this file and for other calculations that require high precision, `10` is recommended.
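A minimal `INPUT` excerpt using both integers (the precision value of 10 follows the restart-grade recommendation above) might look like:

```
# INPUT (excerpt)
out_elf  1 10   # write the ELF under OUT.${suffix} with 10-digit precision
```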
+ ---

+ In molecular dynamics calculations, the output frequency is controlled by [out_interval](#out_interval).