
Commit 58340f3

github-actions[bot] and chrbrunk authored and committed
release: create release-0.0.1a2 branch
1 parent a04d3e3 commit 58340f3


5 files changed, +44 -44 lines changed


README.md

Lines changed: 2 additions & 2 deletions
@@ -33,7 +33,7 @@ bring MLIP to large-scale industrial application.
 See the [Installation](#-installation) section for details on how to install
 MLIP-JAX and the example Google Colab notebooks linked below for a quick way
 to get started. For detailed instructions, visit our extensive
-[code documentation](https://instadeep.github.io/mlip/).
+[code documentation](https://instadeepai.github.io/mlip/).

 This repository currently supports implementations of:
 - [MACE](https://arxiv.org/abs/2206.07697)
@@ -80,7 +80,7 @@ pip install git+https://github.com/jax-md/jax-md.git
 ## ⚡ Examples

 In addition to the in-depth tutorials provided as part of our documentation
-[here](https://instadeep.github.io/mlip/user_guide/index.html#deep-dive-tutorials),
+[here](https://instadeepai.github.io/mlip/user_guide/index.html#deep-dive-tutorials),
 we also provide example Jupyter notebooks that can be used as
 simple templates to build your own MLIP pipelines:

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "mlip"
-version = "0.0.1a1"
+version = "0.0.1a2"
 description = ""
 license = "LICENSE"
 authors = [

tutorials/model_addition_tutorial.ipynb

Lines changed: 22 additions & 22 deletions
@@ -22,7 +22,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"As a first step, we will run the installation of the *mlip* library directly from pip. We also install the appropriate Jax CUDA backend to run on GPU (comment it out to run on CPU). In this notebook, we will not run any simulation and therefore do not install Jax-MD, for details on how to do so, please refer to our *simulation* tutorial. Note that if you have ran another tutorial in the same environment, this installation is not required. Please refer to [our installation page](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/installation/index.html) for more information."
+"As a first step, we will run the installation of the *mlip* library directly from pip. We also install the appropriate Jax CUDA backend to run on GPU (comment it out to run on CPU). In this notebook, we will not run any simulation and therefore do not install Jax-MD, for details on how to do so, please refer to our *simulation* tutorial. Note that if you have ran another tutorial in the same environment, this installation is not required. Please refer to [our installation page](https://instadeepai.github.io/mlip/installation/index.html) for more information."
 ]
 },
 {
@@ -74,20 +74,20 @@
 "- [`MLIPNetwork`][MLIPNetwork] is a base class for GNNs that **computes node-wise energy** summands from edge vectors, node species, and graph edges passed as `senders` and `receivers` index arrays.\n",
 "- [`ForceFieldPredictor`][ForceFieldPredictor] is a generic wrapper around any [`MLIPNetwork`][MLIPNetwork].\n",
 "\n",
-" It gathers **total energy, forces (and, if required, stress)** in the [`Prediction`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/prediction.html) dataclass, by summing the node energies obtained from [`MLIPNetwork`][MLIPNetwork] on a [`jraph.GraphsTuple`](https://jraph.readthedocs.io/en/latest/api.html) object, and differentiating with respect to positions (and unit cell).\n",
+" It gathers **total energy, forces (and, if required, stress)** in the [`Prediction`](https://instadeepai.github.io/mlip/api_reference/models/prediction.html) dataclass, by summing the node energies obtained from [`MLIPNetwork`][MLIPNetwork] on a [`jraph.GraphsTuple`](https://jraph.readthedocs.io/en/latest/api.html) object, and differentiating with respect to positions (and unit cell).\n",
 "\n",
 "\n",
-"For convenience, our training loop and simulation engines finally work with [`ForceField`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/force_field.html) objects that **wrap a force field predictor and its learnable parameters within a frozen dataclass object**.\n",
+"For convenience, our training loop and simulation engines finally work with [`ForceField`](https://instadeepai.github.io/mlip/api_reference/models/force_field.html) objects that **wrap a force field predictor and its learnable parameters within a frozen dataclass object**.\n",
 "\n",
 "For illustration, in this notebook we will\n",
 "\n",
 "2. Define a very simple model that returns constant energies,\n",
 "3. Define a more involved GNN model without equivariance constraints.\n",
 "\n",
-"[MLIPNetwork]: https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/mlip_network.html\n",
-"[ForceFieldPredictor]: https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/predictor.html\n",
-"[ForceField]: https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/force_field.html\n",
-"[Prediction]: https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/prediction.html"
+"[MLIPNetwork]: https://instadeepai.github.io/mlip/api_reference/models/mlip_network.html\n",
+"[ForceFieldPredictor]: https://instadeepai.github.io/mlip/api_reference/models/predictor.html\n",
+"[ForceField]: https://instadeepai.github.io/mlip/api_reference/models/force_field.html\n",
+"[Prediction]: https://instadeepai.github.io/mlip/api_reference/models/prediction.html"
 ]
 },
 {
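The markdown in the hunk above describes forces (and stress) as coming from differentiating the summed node energies with respect to positions. A minimal, self-contained JAX sketch of that idea, using a toy stand-in instead of a real MLIPNetwork:

```python
import jax
import jax.numpy as jnp

def node_energies(positions):
    # Toy stand-in for an MLIPNetwork: one harmonic energy summand per atom.
    return 0.5 * jnp.sum(positions ** 2, axis=-1)

def total_energy(positions):
    # Sum the node-wise summands into a scalar total energy.
    return jnp.sum(node_energies(positions))

positions = jnp.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
forces = -jax.grad(total_energy)(positions)  # forces = -dE/dpositions, shape (n_atoms, 3)
```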
@@ -102,16 +102,16 @@
 "\n",
 "### a. *Config and DatasetInfo*\n",
 "\n",
-"To facilitate model loading and saving, our [`MLIPNetwork`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/mlip_network.html) class **gathers (almost) all of their hyperparameters within a `pydantic.BaseModel` subclass**. Their class attribute `.Config` points to this configuration class. Only exceptions consist of hyperparameters that are data dependent, and might\n",
+"To facilitate model loading and saving, our [`MLIPNetwork`](https://instadeepai.github.io/mlip/api_reference/models/mlip_network.html) class **gathers (almost) all of their hyperparameters within a `pydantic.BaseModel` subclass**. Their class attribute `.Config` points to this configuration class. Only exceptions consist of hyperparameters that are data dependent, and might\n",
 "conflict with the data processing pipeline.\n",
 "\n",
-"This is why [`MLIPNetwork`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/mlip_network.html) **also accept a [`DatasetInfo`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/data/dataset_info.html) object** upon initialization, that notably stores:\n",
+"This is why [`MLIPNetwork`](https://instadeepai.github.io/mlip/api_reference/models/mlip_network.html) **also accept a [`DatasetInfo`](https://instadeepai.github.io/mlip/api_reference/data/dataset_info.html) object** upon initialization, that notably stores:\n",
 "- `cutoff_distance_angstrom : float`\n",
 "- `atomic_energies_map : dict[int, float]`\n",
 "- `avg_num_neighbours : float`\n",
 "- and some other data computed when processing the dataset.\n",
 "\n",
-"This way, we are sure that our models can only be used in the context they were trained for, and will not be evaluated e.g. on atomic numbers they have never seen. We create a dummy [`DatasetInfo`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/data/dataset_info.html) for the purpose of this example:"
+"This way, we are sure that our models can only be used in the context they were trained for, and will not be evaluated e.g. on atomic numbers they have never seen. We create a dummy [`DatasetInfo`](https://instadeepai.github.io/mlip/api_reference/data/dataset_info.html) for the purpose of this example:"
 ]
 },
 {
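A hypothetical sketch of the dummy DatasetInfo the cell above announces, using only the fields named in the markdown; the import path, constructor signature, and placeholder values are assumptions, not the notebook's actual cell:

```python
from mlip.data import DatasetInfo  # import path assumed

dataset_info = DatasetInfo(
    cutoff_distance_angstrom=5.0,          # graph cutoff used to build edges
    atomic_energies_map={1: 0.0, 6: 0.0},  # placeholder per-species energies (H, C)
    avg_num_neighbours=8.0,                # used e.g. for message normalisation
)
```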
@@ -197,7 +197,7 @@
 "source": [
 "### c. *Constant force field*\n",
 "\n",
-"Now that we have defined this simple `ConstantMLIP` subclass, we can already define a state-holding [`ForceField`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/force_field.html) object. The quickest (but slightly opaque) way is to use the helper classmethod `ForceField.from_mlip_network()`:"
+"Now that we have defined this simple `ConstantMLIP` subclass, we can already define a state-holding [`ForceField`](https://instadeepai.github.io/mlip/api_reference/models/force_field.html) object. The quickest (but slightly opaque) way is to use the helper classmethod `ForceField.from_mlip_network()`:"
 ]
 },
 {
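A one-line sketch of the helper named above; the signature is assumed from the prose, and this excerpt does not show whether extra arguments such as a random seed or an example graph are also required:

```python
force_field = ForceField.from_mlip_network(constant_mlip)  # helper classmethod named in the text
```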
@@ -237,9 +237,9 @@
 "source": [
 "For the sake of transparency, let us detail what is actually being done here.\n",
 "\n",
-"First, a [`ForceFieldPredictor`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/predictor.html) instance is created on top of the `constant_mlip` model.\n",
+"First, a [`ForceFieldPredictor`](https://instadeepai.github.io/mlip/api_reference/models/predictor.html) instance is created on top of the `constant_mlip` model.\n",
 "\n",
-"Then, random parameters are initialized by calling the predictor's `.init()` method on a random seed and a dummy graph. These two objects (the predictor and its parameter dict) are wrapped for convenience inside the [`ForceField`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/force_field.html) dataclass. The following is thus equivalent:"
+"Then, random parameters are initialized by calling the predictor's `.init()` method on a random seed and a dummy graph. These two objects (the predictor and its parameter dict) are wrapped for convenience inside the [`ForceField`](https://instadeepai.github.io/mlip/api_reference/models/force_field.html) dataclass. The following is thus equivalent:"
 ]
 },
 {
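Read literally, the explicit construction described above would look roughly like the sketch below; the import path and the constructor/`.init()` signatures are assumed from the prose rather than checked against the library, and `dummy_graph` stands for a jraph.GraphsTuple example graph:

```python
import jax
from mlip.models import ForceField, ForceFieldPredictor  # import path assumed

predictor = ForceFieldPredictor(constant_mlip)               # wrap the MLIPNetwork
params = predictor.init(jax.random.PRNGKey(0), dummy_graph)  # random parameters from seed + dummy graph
force_field = ForceField(predictor, params)                  # frozen dataclass: predictor + parameter dict
```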
@@ -265,9 +265,9 @@
 "id": "Po9NjwTo3tvX"
 },
 "source": [
-"We'll see below how to manually initialize parameters, and call the [`ForceField`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/force_field.html) default constructor : this only requires an input graph.\n",
+"We'll see below how to manually initialize parameters, and call the [`ForceField`](https://instadeepai.github.io/mlip/api_reference/models/force_field.html) default constructor : this only requires an input graph.\n",
 "\n",
-"**N.B.** The [`ForceField`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/force_field.html) dataclass is frozen: this is to prevent any stateful operations to be performed on the parameters, which would be incompatible with JAX compilation and tracing mechanisms. You can think of [`ForceField`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/force_field.html) as holding the _state_ of a learnable [`ForceFieldPredictor`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/predictor.html), although _it remains immutable_."
+"**N.B.** The [`ForceField`](https://instadeepai.github.io/mlip/api_reference/models/force_field.html) dataclass is frozen: this is to prevent any stateful operations to be performed on the parameters, which would be incompatible with JAX compilation and tracing mechanisms. You can think of [`ForceField`](https://instadeepai.github.io/mlip/api_reference/models/force_field.html) as holding the _state_ of a learnable [`ForceFieldPredictor`](https://instadeepai.github.io/mlip/api_reference/models/predictor.html), although _it remains immutable_."
 ]
 },
 {
@@ -360,9 +360,9 @@
 "source": [
 "### e. *Wrapping the model state in ForceField*\n",
 "\n",
-"In order to hide the `flax` logic for downstream applications, our `TrainingLoop` class takes in and returns a [`ForceField`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/force_field.html) object that simply wraps the predictor with its initial and final parameters respectively.\n",
+"In order to hide the `flax` logic for downstream applications, our `TrainingLoop` class takes in and returns a [`ForceField`](https://instadeepai.github.io/mlip/api_reference/models/force_field.html) object that simply wraps the predictor with its initial and final parameters respectively.\n",
 "\n",
-"This frozen dataclass can then be easily passed to the [`SimulationEngine`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/simulation/simulation_engine.html), or just saved for later (by JSON-serializing the MLIPNetwork's `.config` and `.dataset_info`, and dumping the flattened parameter dict as `.npz`)."
+"This frozen dataclass can then be easily passed to the [`SimulationEngine`](https://instadeepai.github.io/mlip/api_reference/simulation/simulation_engine.html), or just saved for later (by JSON-serializing the MLIPNetwork's `.config` and `.dataset_info`, and dumping the flattened parameter dict as `.npz`)."
 ]
 },
 {
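The saving recipe mentioned above (JSON for the config and DatasetInfo, .npz for flattened parameters) could look roughly like this; `config`, `dataset_info`, and `params` are assumed names for the network's pydantic config, its DatasetInfo, and the flax parameter dict, and `model_dump_json()` assumes pydantic v2 (`.json()` on v1):

```python
import numpy as np
from flax.traverse_util import flatten_dict

with open("config.json", "w") as fh:
    fh.write(config.model_dump_json())          # pydantic BaseModel -> JSON
with open("dataset_info.json", "w") as fh:
    fh.write(dataset_info.model_dump_json())

flat = {"/".join(key): np.asarray(leaf) for key, leaf in flatten_dict(params).items()}
np.savez("params.npz", **flat)                  # flattened parameter dict as .npz
```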
@@ -393,7 +393,7 @@
 "id": "m0lNIXkOovaI"
 },
 "source": [
-"Note that [`ForceField`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/force_field.html) instances are also callable, and morally equivalent to `functools.partial(predictor.apply, params)`.\n",
+"Note that [`ForceField`](https://instadeepai.github.io/mlip/api_reference/models/force_field.html) instances are also callable, and morally equivalent to `functools.partial(predictor.apply, params)`.\n",
 "\n",
 "This means they can be directly evaluated on a graph by forgetting about the (frozen) learnable parameters, as done during simulation."
 ]
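The equivalence stated above is just partial application. A self-contained toy version of the pattern (plain Python stand-ins, not the library's objects):

```python
import functools

def apply(params, graph):
    # Stand-in for predictor.apply: combine frozen parameters with an input graph.
    return {"energy": params["scale"] * float(graph["n_atoms"])}

params = {"scale": -1.5}
evaluate = functools.partial(apply, params)  # the "callable force field" pattern
print(evaluate({"n_atoms": 4}))              # {'energy': -6.0}
```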
@@ -420,7 +420,7 @@
 "id": "_AnG7n_vszX5"
 },
 "source": [
-"In theory, the [`ForceField`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/force_field.html) class is duck-typed for the [`SimulationEngine`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/simulation/simulation_engine.html), and you could provide any other object with the following methods and properties (e.g. to wrap models defined in another JAX framework):\n",
+"In theory, the [`ForceField`](https://instadeepai.github.io/mlip/api_reference/models/force_field.html) class is duck-typed for the [`SimulationEngine`](https://instadeepai.github.io/mlip/api_reference/simulation/simulation_engine.html), and you could provide any other object with the following methods and properties (e.g. to wrap models defined in another JAX framework):\n",
 "- `.__call__(graph: GraphsTuple) -> Prediction`\n",
 "- `.cutoff_distance: float`\n",
 "- `.allowed_atomic_numbers: set[int]`\n",
@@ -472,9 +472,9 @@
 "id": "0MdUJqneXoay"
 },
 "source": [
-"Having defined our config, we can now create our MLIP model class. Our custom model must inherits the [`MLIPNetwork`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/mlip_network.html) class, which is itself a `flax.linen.Module` object. As such, we can easily define our network using flax `@nn.compact` decorator, see [the flax docs](https://flax-linen.readthedocs.io/en/latest/quick_start.html) for more information.\n",
+"Having defined our config, we can now create our MLIP model class. Our custom model must inherits the [`MLIPNetwork`](https://instadeepai.github.io/mlip/api_reference/models/mlip_network.html) class, which is itself a `flax.linen.Module` object. As such, we can easily define our network using flax `@nn.compact` decorator, see [the flax docs](https://flax-linen.readthedocs.io/en/latest/quick_start.html) for more information.\n",
 "\n",
-"Our model must also have a dataset_info attribute of type [`DatasetInfo`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/data/dataset_info.html). This object encapsulates the relevant informations about the dataset at hand that can be used to create the model. For instance, this attribute contains the average number of neighbors per atom in the dataset, which is used in models like [MACE](https://arxiv.org/pdf/2206.07697) to normalize the messages passed to each nodes.\n",
+"Our model must also have a dataset_info attribute of type [`DatasetInfo`](https://instadeepai.github.io/mlip/api_reference/data/dataset_info.html). This object encapsulates the relevant informations about the dataset at hand that can be used to create the model. For instance, this attribute contains the average number of neighbors per atom in the dataset, which is used in models like [MACE](https://arxiv.org/pdf/2206.07697) to normalize the messages passed to each nodes.\n",
 "\n",
 "We provide a very simple example of MPNN below, which computes messages through an `MLP` encoding of sender and receiver features with edge distances."
 ]
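For orientation only, a guess at the shape of the MPNN cell announced above; this is not the notebook's code. It subclasses plain flax.linen.Module to stay self-contained (the tutorial subclasses MLIPNetwork), and the call signature (edge vectors, node species, senders, receivers) is inferred from the earlier markdown and may differ from the real base class:

```python
import flax.linen as nn
import jax.numpy as jnp

class SimpleMPNN(nn.Module):
    hidden_dim: int = 64

    @nn.compact
    def __call__(self, edge_vectors, node_species, senders, receivers):
        num_nodes = node_species.shape[0]
        # Embed species, then build one round of messages from sender/receiver
        # embeddings concatenated with edge distances.
        h = nn.Embed(num_embeddings=119, features=self.hidden_dim)(node_species)
        dist = jnp.linalg.norm(edge_vectors, axis=-1, keepdims=True)
        msg_in = jnp.concatenate([h[senders], h[receivers], dist], axis=-1)
        messages = nn.Dense(self.hidden_dim)(nn.relu(nn.Dense(self.hidden_dim)(msg_in)))
        aggregated = jnp.zeros((num_nodes, self.hidden_dim)).at[receivers].add(messages)
        # One energy summand per node.
        return nn.Dense(1)(aggregated).squeeze(-1)
```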
@@ -583,7 +583,7 @@
 "id": "MgYcKGGXZZmO"
 },
 "source": [
-"Having defined both our model and its associated config classes, we can now instantiate our model and turn it into a [`ForceField`](https://mlip-jax-dot-int-research-tpu.uc.r.appspot.com/api_reference/models/force_field.html) object that can be used for training and simulations."
+"Having defined both our model and its associated config classes, we can now instantiate our model and turn it into a [`ForceField`](https://instadeepai.github.io/mlip/api_reference/models/force_field.html) object that can be used for training and simulations."
 ]
 },
 {
