
Commit a9c8c81

Docker image (#86)

Authored by ben rhodes (benrhodes26), with colbyford.

* Update README.md
* Create Dockerfile
* Update README.md
* Update Dockerfile
* Update README.md
* Update .gitignore
* Update Dockerfile to v3 weights
* Update Dockerfile
* Fix docker image and readme

Co-authored-by: Ben Rhodes <benjamin.rhodes26@gmail.com>
Co-authored-by: Colby Ford <colbytylerford@gmail.com>
Co-authored-by: Colby T. Ford <colby@tuple.xyz>
Co-authored-by: ben rhodes <benrhodes@bens-MacBook-Pro.local>
1 parent 3530472 commit a9c8c81

File tree

3 files changed: +44 −2 lines

.gitignore

Lines changed: 4 additions & 1 deletion

@@ -162,4 +162,7 @@ cython_debug/
 #.idea/
 
 # Private
-datasets/
+datasets/
+
+# VS Code
+.devcontainer/

Dockerfile

Lines changed: 23 additions & 0 deletions

@@ -0,0 +1,23 @@
+FROM pytorch/pytorch:2.6.0-cuda12.4-cudnn9-runtime
+
+ENV DEBIAN_FRONTEND=noninteractive
+
+## Install system requirements
+RUN apt-get update && \
+    apt-get install -y \
+        ca-certificates \
+        wget \
+        git \
+        sudo \
+        gcc \
+        g++
+
+# Help Numba find libcudart.so
+ENV LD_LIBRARY_PATH=/opt/conda/lib/python3.11/site-packages/nvidia/cuda_runtime/lib:$LD_LIBRARY_PATH
+RUN ln -s \
+    /opt/conda/lib/python3.11/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12 \
+    /opt/conda/lib/python3.11/site-packages/nvidia/cuda_runtime/lib/libcudart.so
+
+## Install Python requirements
+RUN pip install orb-models && \
+    pip install --extra-index-url=https://pypi.nvidia.com "cuml-cu12==25.2.*"
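The `LD_LIBRARY_PATH`/symlink step above hard-codes the Python 3.11 site-packages path of the base image. As a sketch (a hypothetical helper, not part of this commit), the same pip-installed CUDA runtime can be located without pinning the Python minor version:

```python
import glob
import os


def find_cudart(prefix="/opt/conda"):
    """Locate pip-installed libcudart.so* under a conda prefix.

    Sketch of the lookup the Dockerfile hard-codes for Python 3.11;
    the glob tolerates other Python minor versions.
    """
    pattern = os.path.join(
        prefix, "lib", "python3.*", "site-packages",
        "nvidia", "cuda_runtime", "lib", "libcudart.so*",
    )
    return sorted(glob.glob(pattern))
```

Inside the container, the first match should be the `libcudart.so.12` file that the `ln -s` step points the unversioned `libcudart.so` name at.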

README.md

Lines changed: 17 additions & 1 deletion

@@ -23,6 +23,8 @@ pip install --extra-index-url=https://pypi.nvidia.com "cuml-cu11==25.2.*" # Fo
 pip install --extra-index-url=https://pypi.nvidia.com "cuml-cu12==25.2.*" # For cuda versions >=12.0, <13.0
 ```
 
+Alternatively, you can use Docker to run orb-models; [see instructions below](#docker).
+
 ### Updates
 
 **April 2025**: We have released the [Orb-v3 set of potentials](https://arxiv.org/abs/2504.06231). These models improve substantially over Orb-v2, in particular:

@@ -244,6 +246,20 @@ model = getattr(pretrained, <base_model>)(
 > - The script only tracks a limited set of metrics (energy/force/stress MAEs) which may be insufficient for some downstream use-cases. For instance, if you wish to finetune a model for Molecular Dynamics simulations, we have found (anecdotally) that models that are just on the cusp of overfitting to force MAEs can be substantially worse for simulations. Ideally, more robust "rollout" metrics would be included in the finetuning training loop. In lieu of this, we recommend more aggressive early-stopping i.e. using models several epochs prior to any sign of overfitting.
 
 
+## Docker
+
+You can run orb-models using Docker, which provides a consistent environment with all dependencies pre-installed:
+
+1. Build the Docker image locally:
+
+```bash
+docker build -t orb_models .
+```
+2. Run the Docker container:
+
+```bash
+docker run --gpus all --rm --name orb_models -it orb_models /bin/bash
+```
 
 
 ### Citing

@@ -282,4 +298,4 @@ ORB models are licensed under the Apache License, Version 2.0. Please see the [L
 
 ### Community
 
-Please join the discussion on Discord by following [this](https://discord.gg/SyD6vWSSTB) link.
+Please join the discussion on Discord by following [this](https://discord.gg/SyD6vWSSTB) link.
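Once inside the container, a quick way to confirm the image built correctly is to check that the two pip-installed packages are importable. This is a hypothetical smoke test, not part of the commit; the package names come from the Dockerfile's `pip install` step:

```python
import importlib.util


def check_packages(names=("orb_models", "cuml")):
    """Return an importability map for the image's key Python packages.

    Hypothetical smoke test; package names taken from the Dockerfile.
    """
    return {name: importlib.util.find_spec(name) is not None for name in names}


if __name__ == "__main__":
    # Inside the orb_models container, both entries should be True.
    print(check_packages())
```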
