For example, `orb-v3-conservative-inf-omat` is a model that:
- Computes forces/stress as gradients of energy
- Has effectively infinite neighbors (120 in practice)
- Was trained on the OMat24 dataset
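
The naming scheme above can be sketched as a small parser. This is a hypothetical helper for illustration only (it is not part of the `orb-models` API); the meaning of each field follows the list above:

```python
def parse_orb_name(name: str) -> dict:
    """Decode an Orb model name like 'orb-v3-conservative-inf-omat'.

    Hypothetical helper, not part of orb-models; field meanings
    are taken from the README's description of the naming scheme.
    """
    family, version, grad_mode, neighbors, dataset = name.split("-")
    return {
        "family": family,          # "orb"
        "version": version,        # "v3"
        # "conservative": forces/stress computed as gradients of the energy
        "conservative": grad_mode == "conservative",
        # "inf": effectively infinite neighbors (120 in practice)
        "max_neighbors": None if neighbors == "inf" else int(neighbors),
        "dataset": dataset,        # e.g. "omat" (OMat24) or "mpa"
    }

info = parse_orb_name("orb-v3-conservative-inf-omat")
```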
Orb-v3 models are **compiled** by default and use PyTorch's dynamic batching, which means they do not need to recompile as graph sizes change. However, the first call to the model will be slower, as the graph is compiled by torch.
**We suggest using models trained on OMAT24**, as these models are more performant and the data they are trained on uses newer pseudopotentials in VASP (PBE54 vs PBE52). `-mpa` models should be used if compatibility with benchmarks (for example, Matbench Discovery) is required.
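
This recommendation can be captured in a small helper. The helper and the exact `-mpa` checkpoint name are assumptions for illustration; only the `orb-v3-conservative-inf-omat` name appears in the text above:

```python
def choose_model(needs_benchmark_compat: bool) -> str:
    """Pick an Orb-v3 checkpoint name (hypothetical helper).

    Per the README: prefer OMat24-trained models for performance and
    newer VASP pseudopotentials (PBE54); use an -mpa model only when
    benchmark compatibility (e.g. Matbench Discovery) is required.
    The -mpa name below is an assumed naming pattern, not confirmed.
    """
    suffix = "mpa" if needs_benchmark_compat else "omat"
    return f"orb-v3-conservative-inf-{suffix}"
```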
#### V2 Models
```python
from orb_models.forcefield import pretrained
from orb_models.forcefield.base import batch_graphs

device = "cpu"  # or device="cuda"
orbff = pretrained.orb_v3_conservative_inf_omat(
    device=device,
    precision="float32-high",  # or "float32-highest" / "float64"
)
```