Commit 385dcd6 (parent 91787ab)

fix typo: lcueve.out->lcurve.out (#1077)

File tree

2 files changed: +14 −14 lines

deepmd/utils/argcheck.py (1 addition, 1 deletion)

@@ -581,7 +581,7 @@ def training_args(): # ! modified by Ziyao: data configuration isolated.
         arg_validation_data,
         Argument("numb_steps", int, optional=False, doc=doc_numb_steps, alias=["stop_batch"]),
         Argument("seed", [int,None], optional=True, doc=doc_seed),
-        Argument("disp_file", str, optional=True, default='lcueve.out', doc=doc_disp_file),
+        Argument("disp_file", str, optional=True, default='lcurve.out', doc=doc_disp_file),
         Argument("disp_freq", int, optional=True, default=1000, doc=doc_disp_freq),
         Argument("numb_test", [list,int,str], optional=True, default=1, doc=doc_numb_test),
         Argument("save_freq", int, optional=True, default=1000, doc=doc_save_freq),
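The bug fixed here lives in a default value: because `disp_file` is optional, any config that omitted it silently inherited the misspelled filename. Below is a minimal sketch of how such an `Argument`-style default gets applied during normalization; it is an illustration under assumed semantics, not the actual dargs implementation used by `argcheck.py`, and `SimpleArgument` is a hypothetical name.

```python
from dataclasses import dataclass


@dataclass
class SimpleArgument:
    """Toy stand-in for a dargs-style Argument: name, type, default."""
    name: str
    dtype: type
    optional: bool = False
    default: object = None

    def normalize(self, data: dict) -> dict:
        """Return a copy of ``data`` with the default filled in when
        an optional key is missing; type-check keys that are present."""
        out = dict(data)
        if self.name not in out:
            if not self.optional:
                raise KeyError(f"missing required argument: {self.name}")
            out[self.name] = self.default
        elif not isinstance(out[self.name], self.dtype):
            raise TypeError(f"{self.name} should be {self.dtype.__name__}")
        return out


disp_file = SimpleArgument("disp_file", str, optional=True, default="lcurve.out")
print(disp_file.normalize({})["disp_file"])                       # lcurve.out
print(disp_file.normalize({"disp_file": "my.out"})["disp_file"])  # my.out
```

With the old default of `'lcueve.out'`, the first call above is exactly the path every run without an explicit `disp_file` took, which is why the typo propagated to output files and to the generated docs.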

doc/train-input-auto.rst (13 additions, 13 deletions)

@@ -898,7 +898,7 @@ model:
 .. _`model/fitting_net[polar]/scale`:

 scale:
-    | type: ``list`` | ``float``, optional, default: ``1.0``
+    | type: ``float`` | ``list``, optional, default: ``1.0``
     | argument path: ``model/fitting_net[polar]/scale``

     The output of the fitting net (polarizability matrix) will be scaled by ``scale``

@@ -1102,71 +1102,71 @@ loss:
 .. _`loss[ener]/start_pref_e`:

 start_pref_e:
-    | type: ``int`` | ``float``, optional, default: ``0.02``
+    | type: ``float`` | ``int``, optional, default: ``0.02``
     | argument path: ``loss[ener]/start_pref_e``

     The prefactor of energy loss at the start of the training. Should be larger than or equal to 0. If set to none-zero value, the energy label should be provided by file energy.npy in each data system. If both start_pref_energy and limit_pref_energy are set to 0, then the energy will be ignored.

 .. _`loss[ener]/limit_pref_e`:

 limit_pref_e:
-    | type: ``int`` | ``float``, optional, default: ``1.0``
+    | type: ``float`` | ``int``, optional, default: ``1.0``
     | argument path: ``loss[ener]/limit_pref_e``

     The prefactor of energy loss at the limit of the training, Should be larger than or equal to 0. i.e. the training step goes to infinity.

 .. _`loss[ener]/start_pref_f`:

 start_pref_f:
-    | type: ``int`` | ``float``, optional, default: ``1000``
+    | type: ``float`` | ``int``, optional, default: ``1000``
     | argument path: ``loss[ener]/start_pref_f``

     The prefactor of force loss at the start of the training. Should be larger than or equal to 0. If set to none-zero value, the force label should be provided by file force.npy in each data system. If both start_pref_force and limit_pref_force are set to 0, then the force will be ignored.

 .. _`loss[ener]/limit_pref_f`:

 limit_pref_f:
-    | type: ``int`` | ``float``, optional, default: ``1.0``
+    | type: ``float`` | ``int``, optional, default: ``1.0``
     | argument path: ``loss[ener]/limit_pref_f``

     The prefactor of force loss at the limit of the training, Should be larger than or equal to 0. i.e. the training step goes to infinity.

 .. _`loss[ener]/start_pref_v`:

 start_pref_v:
-    | type: ``int`` | ``float``, optional, default: ``0.0``
+    | type: ``float`` | ``int``, optional, default: ``0.0``
     | argument path: ``loss[ener]/start_pref_v``

     The prefactor of virial loss at the start of the training. Should be larger than or equal to 0. If set to none-zero value, the virial label should be provided by file virial.npy in each data system. If both start_pref_virial and limit_pref_virial are set to 0, then the virial will be ignored.

 .. _`loss[ener]/limit_pref_v`:

 limit_pref_v:
-    | type: ``int`` | ``float``, optional, default: ``0.0``
+    | type: ``float`` | ``int``, optional, default: ``0.0``
     | argument path: ``loss[ener]/limit_pref_v``

     The prefactor of virial loss at the limit of the training, Should be larger than or equal to 0. i.e. the training step goes to infinity.

 .. _`loss[ener]/start_pref_ae`:

 start_pref_ae:
-    | type: ``int`` | ``float``, optional, default: ``0.0``
+    | type: ``float`` | ``int``, optional, default: ``0.0``
     | argument path: ``loss[ener]/start_pref_ae``

     The prefactor of atom_ener loss at the start of the training. Should be larger than or equal to 0. If set to none-zero value, the atom_ener label should be provided by file atom_ener.npy in each data system. If both start_pref_atom_ener and limit_pref_atom_ener are set to 0, then the atom_ener will be ignored.

 .. _`loss[ener]/limit_pref_ae`:

 limit_pref_ae:
-    | type: ``int`` | ``float``, optional, default: ``0.0``
+    | type: ``float`` | ``int``, optional, default: ``0.0``
     | argument path: ``loss[ener]/limit_pref_ae``

     The prefactor of atom_ener loss at the limit of the training, Should be larger than or equal to 0. i.e. the training step goes to infinity.

 .. _`loss[ener]/relative_f`:

 relative_f:
-    | type: ``NoneType`` | ``float``, optional
+    | type: ``float`` | ``NoneType``, optional
     | argument path: ``loss[ener]/relative_f``

     If provided, relative force error will be used in the loss. The difference of force will be normalized by the magnitude of the force in the label with a shift given by `relative_f`, i.e. DF_i / ( || F || + relative_f ) with DF denoting the difference between prediction and label and || F || denoting the L2 norm of the label.

@@ -1179,15 +1179,15 @@ loss:
 .. _`loss[tensor]/pref`:

 pref:
-    | type: ``int`` | ``float``
+    | type: ``float`` | ``int``
     | argument path: ``loss[tensor]/pref``

     The prefactor of the weight of global loss. It should be larger than or equal to 0. If controls the weight of loss corresponding to global label, i.e. 'polarizability.npy` or `dipole.npy`, whose shape should be #frames x [9 or 3]. If it's larger than 0.0, this npy should be included.

 .. _`loss[tensor]/pref_atomic`:

 pref_atomic:
-    | type: ``int`` | ``float``
+    | type: ``float`` | ``int``
     | argument path: ``loss[tensor]/pref_atomic``

     The prefactor of the weight of atomic loss. It should be larger than or equal to 0. If controls the weight of loss corresponding to atomic label, i.e. `atomic_polarizability.npy` or `atomic_dipole.npy`, whose shape should be #frames x ([9 or 3] x #selected atoms). If it's larger than 0.0, this npy should be included. Both `pref` and `pref_atomic` should be provided, and either can be set to 0.0.

@@ -1408,7 +1408,7 @@ training:
 .. _`training/disp_file`:

 disp_file:
-    | type: ``str``, optional, default: ``lcueve.out``
+    | type: ``str``, optional, default: ``lcurve.out``
     | argument path: ``training/disp_file``

     The file for printing learning curve.
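The file named by `disp_file` holds the training's learning curve as plain text: a `#`-prefixed header line followed by whitespace-separated numeric columns, so it loads directly with `numpy.loadtxt`. A small sketch on synthetic data (the column names below are illustrative, not taken from this commit):

```python
import io

import numpy as np

# Synthetic stand-in for an lcurve.out file: a commented header line
# plus whitespace-separated numeric columns (illustrative values).
fake_lcurve = io.StringIO(
    "# step rmse_e_trn rmse_f_trn\n"
    "0 1.0e-1 2.0e-1\n"
    "1000 5.0e-2 1.2e-1\n"
)

# loadtxt skips '#' comment lines by default and returns a 2-D array.
data = np.loadtxt(fake_lcurve)
steps, rmse_e = data[:, 0], data[:, 1]
print(data.shape)  # (2, 3)
```

Any plotting or monitoring script that hard-coded the old default filename `lcueve.out` keeps working after this fix only if it reads the name from the config rather than assuming it.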
