
Commit 07cfe1d

docs: revision (#43)
1 parent a582242 commit 07cfe1d

5 files changed (+15 −19 lines)


docs/source/benchmarks/general/scaling.rst

Lines changed: 3 additions & 3 deletions
@@ -7,7 +7,7 @@ Purpose
 -------

 This benchmark evaluates how the computational cost of machine-learned interatomic potentials (**MLIP**) scales with system size.
-By running single, long **MD** episodes on a series of molecular systems of increasing size, we systematically assess the
+By running single **MD** episodes on a series of molecular systems of increasing size, we systematically assess the
 relationship between molecular complexity and inference performance. The results provide insight into the efficiency and
 scalability of the **MLIP** implementation, helping to identify potential bottlenecks and guide optimization for large-scale
 simulations.
@@ -16,7 +16,7 @@ Description
 -----------

 For each system in the dataset, the benchmark performs an **MD** simulation using the **MLIP** model in the **NVT** ensemble at **300 K**
-for **1,000,000 steps** (1 ns), leveraging `jax-md <https://github.com/google/jax-md>`_, as integrated via the
+for **1000 steps** (1 ps), leveraging `jax-md <https://github.com/google/jax-md>`_, as integrated via the
 `mlip <https://github.com/instadeepai/mlip>`_ library. During each simulation, a timer tracks the duration of each episode, and the average episode time (excluding the first episode)
 is recorded. After all simulations are complete, the benchmark reports the **average inference time per episode as a function of
 system size**, providing a direct measure of how the **MLIP** implementation's computational cost grows with increasing molecular
@@ -46,4 +46,4 @@ They have the following ids:
 Interpretation
 --------------

-The inference scaling score is a measure of the scaling of the **MLIP** model.
+This benchmark does not produce a score but can be used to estimate how a model's simulation speed scales with system size.
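For orientation, a minimal sketch of the timing logic described in this file: average the per-episode wall-clock time, excluding the first episode (which typically absorbs JIT compilation), and report it against system size. The `run_episode` callable and the `systems` mapping are illustrative placeholders, not part of the jax-md or mlip APIs.

    import time

    def average_episode_time(run_episode, n_episodes=5):
        """Time repeated MD episodes, discarding the first one (compilation warm-up)."""
        durations = []
        for _ in range(n_episodes):
            start = time.perf_counter()
            run_episode()  # one fixed-length MD episode
            durations.append(time.perf_counter() - start)
        return sum(durations[1:]) / (len(durations) - 1)

    def scaling_report(systems):
        """Print average episode time as a function of system size.

        `systems` maps a system id to (atom_count, episode_callable); both are
        hypothetical stand-ins for whatever the benchmark harness provides.
        """
        for system_id, (n_atoms, episode_fn) in sorted(systems.items(), key=lambda kv: kv[1][0]):
            avg = average_episode_time(episode_fn)
            print(f"{system_id}: {n_atoms} atoms -> {avg:.2f} s per episode")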

docs/source/benchmarks/general/stability.rst

Lines changed: 9 additions & 10 deletions
@@ -13,7 +13,7 @@ Description
 -----------

 For each system in the dataset, the benchmark performs an **MD** simulation using the **MLIP** model in the
-**NVT** ensemble at **300 K** for **1,000,000 steps** (1 ns), leveraging
+**NVT** ensemble at **300 K** for **100,000 steps** (100 ps), leveraging
 `jax-md <https://github.com/google/jax-md>`_, as integrated via the `mlip <https://github.com/instadeepai/mlip>`_
 library. The test monitors the system for signs of instability by detecting abrupt temperature spikes
 (**“explosions”**) and hydrogen atom drift. These indicators help determine whether the **MLIP** maintains
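For orientation, a rough sketch of the two instability checks mentioned above, written against plain NumPy arrays. The thresholds and array shapes are illustrative assumptions, not the benchmark's actual criteria.

    import numpy as np

    def detect_explosion(temperatures, target=300.0, spike_factor=5.0):
        """Flag abrupt temperature spikes ("explosions") in a per-step temperature trace (K)."""
        return bool(np.any(np.asarray(temperatures) > spike_factor * target))

    def detect_hydrogen_drift(positions, hydrogen_idx, max_drift=2.0):
        """Flag hydrogen atoms drifting more than `max_drift` (Angstrom) from their
        initial positions; `positions` has shape (n_frames, n_atoms, 3)."""
        pos = np.asarray(positions)
        displacement = np.linalg.norm(pos[:, hydrogen_idx] - pos[0, hydrogen_idx], axis=-1)
        return bool(np.any(displacement > max_drift))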
@@ -39,15 +39,14 @@ Dataset
 The structures that are tested for stability are a series of protein structures, RNA fragments, peptides and inhibitors taken from the PDB.
 They have the following ids:

-* 1JRS
-* 1UAO
-* 1P79
-* 5KGZ
-* 1AB7
-* 1BIP
-* 1A5E
-* 1A7M
-* 2BQV
+* 1JRS (Leupeptin)
+* 1UAO (Chignolin)
+* 1P79 (RNA Fragment)
+* 5KGZ (Protein structure with 634 atoms)
+* 1AB7 (Protein structure with 1,432 atoms)
+* 1BIP (Protein structure with 1,818 atoms)
+* 1A5E (Protein structure with 2,301 atoms)
+* 1A7M (Protein structure with 2,803 atoms)

 Interpretation
 --------------

docs/source/benchmarks/small_molecules/conformer_selection.rst

Lines changed: 1 addition & 3 deletions
@@ -21,9 +21,7 @@ For each system, the benchmark leverages the `mlip <https://github.com/instadeep
 comparing the predicted energies and forces against quantum mechanical (**QM**) reference data. Performance is quantified using
 the following metrics:

-- **MAE (Mean Absolute Error)** and **RMSE (Root Mean Square Error)** for:
-  - Total energies (in kcal/mol)
-  - Atomic forces (in kcal/mol/Å)
+- **MAE (Mean Absolute Error)** and **RMSE (Root Mean Square Error)** for total energies (in kcal/mol)
 - **Spearman rank correlation coefficient** for conformer energy ordering

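As a reference for these metrics, a small self-contained sketch; the array names are illustrative and this is not the benchmark's own code.

    import numpy as np
    from scipy.stats import spearmanr

    def mae(pred, ref):
        """Mean absolute error, e.g. for total energies in kcal/mol."""
        return float(np.mean(np.abs(np.asarray(pred) - np.asarray(ref))))

    def rmse(pred, ref):
        """Root mean square error."""
        return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(ref)) ** 2)))

    def energy_rank_correlation(predicted_energies, qm_energies):
        """Spearman rank correlation of the conformer energy ordering for one system."""
        rho, _ = spearmanr(predicted_energies, qm_energies)
        return float(rho)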

docs/source/benchmarks/small_molecules/dihedral_scan.rst

Lines changed: 0 additions & 1 deletion
@@ -31,7 +31,6 @@ Performance is quantified using the following metrics:
      :align: center
      :figclass: align-center

-     Structure 1
 - .. figure:: img/dihedral_scan.png
      :width: 100%
      :align: center

docs/source/benchmarks/small_molecules/tautomers.rst

Lines changed: 2 additions & 2 deletions
@@ -27,8 +27,8 @@ For each molecule, the benchmark compares **MLIP**-predicted energies against
 quantum mechanical (**QM**) reference data. Performance
 is quantified using the following metrics:

-- :abbr:`MAE (Mean Absolute Error)`
-- :abbr:`RMSE (Root Mean Square Error)`
+- **MAE (Mean Absolute Error)**
+- **RMSE (Root Mean Square Error)**


 Dataset
