![badge](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/mlipbot/b6e4bf384215e60775699a83c3c00aef/raw/pytest-coverage-comment.json)
## 👀 Overview

*mlip* is a Python library for training and deploying [...] material science
applications, (2) **extensibility and flexibility** for users more
experienced with MLIP and JAX, and (3) a focus on **high inference speeds** that enable
running long MD simulations on large systems, which we believe is necessary in order to
bring MLIP to large-scale industrial application.
See our [inference speed benchmark](#-inference-time-benchmarks) below.
With our library, we observe a 10x speedup on 138 atoms and up to a 4x speedup
on 1205 atoms over equivalent implementations relying on Torch and ASE.

See the [Installation](#-installation) section for details on how to install
MLIP-JAX and the example Google Colab notebooks linked below for a quick way
to get started.

[...]

Note that *jax-md* must be installed directly from the GitHub repository, like this:

```
pip install git+https://github.com/jax-md/jax-md.git
```

Furthermore, note that among our library dependencies, we have pinned *jaxlib*,
*matscipy*, and *orbax-checkpoint* each to one specific version in order to
prioritize reliability. However, we plan to allow for a more flexible definition of
our dependencies in upcoming releases.
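
For illustration only, installing with exact pins of this kind has the following
shape. The version numbers below are hypothetical placeholders, not the versions
actually pinned by *mlip* (the authoritative pins are defined in the package metadata):

```
# Placeholder versions, for illustration only -- not the actual mlip pins:
pip install "jaxlib==0.4.33" "matscipy==1.1.1" "orbax-checkpoint==0.6.4"
```
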
## ⚡ Examples

In addition to the in-depth tutorials provided as part of our documentation, [...]

[...] please refer to the model cards of the relevant HuggingFace repos.

## 🚀 Inference time benchmarks

In order to showcase the runtime efficiency, we conducted benchmarks across all three
models on two different systems: Chignolin
([1UAO](https://www.rcsb.org/structure/1UAO), 138 atoms) and Alpha-bungarotoxin
([1ABT](https://www.rcsb.org/structure/1ABT), 1205 atoms), both run for 1 ns of
MD simulation on an H100 NVIDIA GPU.
All model implementations are our own, including the Torch + ASE benchmarks, and
should not be considered representative of the performance of the code developed by the
original authors of the methods.
Further details can be found in our white paper (see [below](#-citing-our-work)).

**MACE (2,139,152 parameters):**
| Systems | JAX + JAX-MD | JAX + ASE | Torch + ASE |
| --------- | -------------:| -------------:| -------------:|
| 1UAO | 6.3 ms/step | 11.6 ms/step | 44.2 ms/step |
| 1ABT | 66.8 ms/step | 99.5 ms/step | 157.2 ms/step |

**ViSNet (1,137,922 parameters):**
| Systems | JAX + JAX-MD | JAX + ASE | Torch + ASE |
| --------- | -------------:| -------------:| -------------:|
| 1UAO | 2.9 ms/step | 6.2 ms/step | 33.8 ms/step |
| 1ABT | 25.4 ms/step | 46.4 ms/step | 101.6 ms/step |

**NequIP (1,327,792 parameters):**
| Systems | JAX + JAX-MD | JAX + ASE | Torch + ASE |
| --------- | -------------:| -------------:| -------------:|
| 1UAO | 3.8 ms/step | 8.5 ms/step | 38.7 ms/step |
| 1ABT | 67.0 ms/step | 105.7 ms/step | 117.0 ms/step |
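
As a rough sketch of how per-step timings like these can be measured in JAX (this is
a generic illustration of the methodology, not the benchmark code behind the tables
above; the `step` function is a hypothetical stand-in for a model's force evaluation
and integrator update), one jits the step, excludes compilation time via a warm-up
call, and blocks on JAX's asynchronous dispatch before reading the clock:

```python
import time

import jax
import jax.numpy as jnp


@jax.jit
def step(positions):
    # Hypothetical stand-in for one MD step; a real benchmark would
    # evaluate the MLIP forces and integrate the equations of motion here.
    return positions + 1e-4 * jnp.sin(positions)


positions = jnp.zeros((138, 3))  # 138 atoms, as in the 1UAO system above

# Warm-up call: triggers JIT compilation so it is excluded from the timing.
step(positions).block_until_ready()

n_steps = 1_000
start = time.perf_counter()
for _ in range(n_steps):
    positions = step(positions)
# JAX dispatches work asynchronously, so block before stopping the clock.
positions.block_until_ready()
print(f"{(time.perf_counter() - start) / n_steps * 1e3:.2f} ms/step")
```

For scale: at a (hypothetical) 1 fs timestep, 1 ns of MD is one million steps, so
6.3 ms/step corresponds to roughly 1.75 hours of wall time.
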
## 🙏 Acknowledgments

We would like to acknowledge the beta testers of this library: Isabel Wilkinson,
Nick Venanzi, Hassan Sirelkhatim, Leon Wehrhan, Sebastien Boyer, Massimo Bortone,
Scott Cameron, Louis Robinson, Tom Barrett, and Alex Laterre.
## 📚 Citing our work