* Switch to uv
* Update code
* Update tests
* Readmes, examples, fixes
* TorchSim and D3 examples, bug fixes
* Use uv in CI
* Clean up
* Dockerfile
* Made torch-sim optional
* Removed default loss_weights from conservative/direct_regressor, added them in pretrained.py instead
* Added changelog to README.md
The `orb_models` repository uses [uv](https://docs.astral.sh/uv/) for dependency management. To install the package and its dependencies, run the following commands:

```bash
# If you don't have uv, we recommend installing it into an isolated environment
# with pipx: https://docs.astral.sh/uv/getting-started/installation/#pypi
pipx install uv
uv sync --group dev  # Install orb-models and development packages
```

### Running linters

The `orb_models` repository uses `ruff` for formatting and linting, and `mypy` for type checking. To run the linters, use the following commands:

```bash
ruff format .  # Format code
ruff check .   # Check for linting errors
mypy .         # Run type checking
```
### Running tests

The `orb_models` repository uses `pytest` for testing. To run the tests, navigate to the root directory of the package and run the following command:

```bash
pytest -n auto ./tests/  # "-n auto" runs tests in parallel via the pytest-xdist plugin
```
### Publishing

The `orb_models` package is published using [trusted publishers](https://docs.pypi.org/trusted-publishers/). Whenever a new release is created on GitHub, the package is automatically published to PyPI using GitHub Actions.
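With trusted publishing, no API token is stored in the repository: PyPI trusts an OIDC identity issued to the GitHub Actions workflow. As an illustration only (this repository's actual workflow may differ, and the workflow, job, and step names below are hypothetical), such a release workflow looks roughly like:

```yaml
# Hypothetical sketch of a trusted-publishing release workflow.
name: publish
on:
  release:
    types: [published]

jobs:
  publish-to-pypi:
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # required for PyPI trusted publishing (OIDC)
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      - run: uv build  # build sdist and wheel into dist/
      - uses: pypa/gh-action-pypi-publish@release/v1
```

The `id-token: write` permission is what lets PyPI verify the workflow's identity; without it, trusted publishing fails even if the publisher is configured correctly on PyPI.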
```python
model = pretrained.orb_v3_direct_omol(
    # ... configuration arguments elided in this excerpt ...
)
# The model is now ready for training with your custom configuration!
```

## How It Works

### Reference Energies

```python
import torch
from orb_models.forcefield import pretrained

# Load model architecture (set train=False for inference)
model = pretrained.orb_v3_conservative_omol(train=False)
```

---

## MODELS.md

We provide several pretrained models that can be used to calculate energies, forces, and stresses.

### OrbMol Models

These models are a continuation of the [`orb-v3`](#v3-models) series trained on the [Open Molecules 2025 (OMol25)](https://arxiv.org/pdf/2505.08762) dataset—over 100M high-accuracy DFT calculations (ωB97M-V/def2-TZVPD) on diverse molecular systems including metal complexes, biomolecules, and electrolytes. Note: the training data does not contain periodic systems, and these models have not been carefully tested on them.

There are two options:

* `orb-v3-conservative-omol`
* `orb-v3-direct-omol`
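These two model names correspond to loader functions in `orb_models.forcefield.pretrained`, whose names appear elsewhere in this README; the mapping below is a plain illustration of that correspondence (no model weights are downloaded, and exact signatures are not verified here):

```python
# Mapping from OrbMol model names to their loader functions in
# orb_models.forcefield.pretrained, as referenced in this README.
OMOL_LOADERS = {
    "orb-v3-conservative-omol": "orb_v3_conservative_omol",
    "orb-v3-direct-omol": "orb_v3_direct_omol",
}


def loader_name(model_name: str) -> str:
    """Return the pretrained-module function name for an OrbMol model."""
    return OMOL_LOADERS[model_name]


print(loader_name("orb-v3-direct-omol"))  # orb_v3_direct_omol
```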

See below for more explanation of this naming convention. Both models have `inf` neighbors.

### [V3 Models](https://arxiv.org/abs/2504.06231)

V3 models use the following naming convention: `orb-v3-X-Y-Z`, where:

- `X`: Model type - `direct` or `conservative`. Conservative models compute forces and stress via backpropagation, which is a physically motivated choice that appears necessary for certain types of simulation, such as NVE molecular dynamics. Conservative models are significantly slower and use more memory than their direct counterparts.
- `Y`: Maximum neighbors per atom - `20` or `inf`. A finite cutoff of `20` induces discontinuities in the PES, which can lead to significant inaccuracies for certain types of highly sensitive calculations (e.g. calculations involving Hessians). However, finite cutoffs reduce the amount of edge processing in the network, reducing latency and memory use.
- `Z`: Training dataset - `omat` or `mpa`. Both of these datasets consist of small bulk crystal structures. We find that models trained on such data can generalise reasonably well to non-periodic systems (organic molecules) or partially periodic systems (slabs), but caution is advised in these scenarios.
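As a quick illustration of the convention, this hypothetical helper (not part of `orb_models`) splits a v3 model name into its three components:

```python
def parse_orb_v3_name(name: str) -> dict:
    """Split an 'orb-v3-X-Y-Z' model name into its components.

    Hypothetical helper for illustration; not part of orb_models.
    """
    prefix = "orb-v3-"
    if not name.startswith(prefix):
        raise ValueError(f"not a v3 model name: {name}")
    model_type, max_neighbors, dataset = name[len(prefix):].split("-")
    # Note: "120" also appears in released model names (see the advice
    # below), alongside the "20" and "inf" described above.
    return {"type": model_type, "max_neighbors": max_neighbors, "dataset": dataset}


print(parse_orb_v3_name("orb-v3-conservative-inf-omat"))
# {'type': 'conservative', 'max_neighbors': 'inf', 'dataset': 'omat'}
```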
#### Features:
#### Advice / Caveats

- Consider using `orb-v3-conservative-120-omat` for initial testing, specifying `precision='float32-highest'` when loading the model. This is the most computationally expensive but accurate configuration. If this level of accuracy meets your needs, then other models and precisions can be investigated to improve speed and system-size scalability.
- We do not advise using the `-mpa` models unless they are required for compatibility with benchmarks (for example, Matbench Discovery). They are generally less performant.
- Orb-v3 models are **compiled** by default and use PyTorch's dynamic batching, which means that they do not need to recompile as graph sizes change. However, the first call to the model will be slower, as the graph is compiled by torch.

### [V2 Models](https://arxiv.org/abs/2410.22570)

### [V1 Models](https://arxiv.org/abs/2410.22570)

Our initial release. These models achieved state-of-the-art performance on the Matbench Discovery dataset at the time of release, but have since been superseded and removed.