Or you can use it to run MD simulations. The script, an example input xyz file, and a Colab notebook demonstration are available in the [examples directory](./examples). This should work with any input; simply modify the `input_file` and `cell_size` parameters. We recommend using constant-volume simulations.
### Floating Point Precision
As shown in the usage snippets above, we support three floating point precision types: `"float32-high"`, `"float32-highest"`, and `"float64"`.
The default value of `"float32-high"` is recommended for maximal acceleration on Nvidia A100 / H100 GPUs. However, we have observed some loss of accuracy for calculations involving second- and third-order properties of the PES. In these cases, we recommend `"float32-highest"`.
In stark contrast to other universal forcefields, we have not found any benefit to using `"float64"`.
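As background (not Orb-specific), the gap between these settings comes down to float32 round-off, which higher-order derivatives of the PES amplify. A small NumPy illustration of the resolution difference:

```python
import numpy as np

# Machine epsilon: the smallest relative step each dtype can resolve.
eps32 = np.finfo(np.float32).eps  # ~1.19e-07
eps64 = np.finfo(np.float64).eps  # ~2.22e-16

# A perturbation below float32's epsilon is silently lost...
assert np.float32(1.0) + np.float32(1e-8) == np.float32(1.0)
# ...but float64 still resolves it.
assert np.float64(1.0) + np.float64(1e-8) != np.float64(1.0)
```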
### Finetuning
You can finetune the model using your custom dataset.
The dataset should be an [ASE sqlite database](https://wiki.fysik.dtu.dk/ase/ase/db/db.html#module-ase.db.core).
After finetuning, you can load your model from the saved checkpoint:

```python
from orb_models.forcefield import pretrained

model = pretrained.orb_v2(
    weights_path=<path_to_ckpt>,
    precision="float32-high",  # or precision="float32-highest"
)
```
> ⚠ **Caveats**
>
> Our finetuning script is designed for simplicity. We strongly advise users to customise it further for their use-case to get the best performance. Please be aware that:
> - The script assumes that your ASE database rows contain **energy, forces, and stress** data. To train on molecular data without stress, you will need to edit the code.
> - **Early stopping** is not implemented. However, you can use the command line argument `save_every_x_epochs` (default: 5) and apply "retrospective" early stopping by selecting a suitable checkpoint.
> - The **learning rate schedule is hardcoded** to be `torch.optim.lr_scheduler.OneCycleLR` with `pct_start=0.05`. The `max_lr`/`min_lr` will be 10x greater/smaller than the `lr` specified via the command line. To get the best performance, you may wish to try other schedulers.
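For orientation, the hardcoded schedule corresponds roughly to the following PyTorch configuration. This is a sketch: the `div_factor`/`final_div_factor` values below are inferred from the 10x behaviour described above, and the `lr` and step count are placeholders:

```python
import torch

lr = 3e-4  # value you would pass on the command line (placeholder)
model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=lr)

# OneCycleLR with pct_start=0.05, as in the finetuning script.
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt,
    max_lr=10 * lr,          # peak lr is 10x the command-line lr
    total_steps=1000,
    pct_start=0.05,          # first 5% of steps warm up to max_lr
    div_factor=10,           # cycle starts at max_lr / 10 == lr
    final_div_factor=10,     # cycle ends at lr / 10
)
```

Swapping in another scheduler means replacing this object and its `step()` calls in the training loop.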
### Citing
A preprint describing the model in more detail can be found here: https://arxiv.org/abs/2410.22570