
Commit 5120ef7

Merge branch 'ar/build-notes' into 'master'
docs(builds): Adds notes about candle and tch builds. See merge request machine-learning/modkit!287
2 parents 725cb9f + 5ac2a1a commit 5120ef7

File tree

2 files changed: +37 -0 lines changed


BUILD_NOTES_candle.txt

Lines changed: 8 additions & 0 deletions
This build has NVIDIA CUDA GPU features enabled since it was compiled with the `--features candle` option. This means that the `modkit open-chromatin predict` subcommand will use the Candle backend and is suitable for use with NVIDIA GPUs. There are a few points to keep in mind:
1. Whilst this backend should be compatible with a wide range of NVIDIA GPUs, it may not work with all GPU-equipped compute setups.
4+
2. This backend is still 1-3x slower than the libtorch (tch) backend.
In short, use this build if you have an NVIDIA GPU and want a quick start without downloading additional software. You can find additional information about how Candle works with Burn here: https://github.com/tracel-ai/burn?tab=readme-ov-file#backend
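
As a quick sanity check before a full run, you can confirm the driver can see a CUDA-capable GPU and then invoke the subcommand as usual (a minimal sketch; `nvidia-smi` ships with the NVIDIA driver, and `<args>` stands in for your usual predict arguments):

# confirm the driver and a CUDA-capable GPU are visible
nvidia-smi

# no additional libraries are required for this build
modkit open-chromatin predict <args>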

BUILD_NOTES_tch.txt

Lines changed: 29 additions & 0 deletions
This build of Modkit has been compiled against libtorch version 2.6.0+cu118 (hash: 2236df1770800ffea5697b11b0bb0d910b2e59e1) using the `--features tch` option. This is the fastest way to use `modkit open-chromatin` but requires a little more setup.
To use this build, you should download and extract libtorch like this:
wget -O libtorch.zip https://download.pytorch.org/libtorch/cu118/libtorch-cxx11-abi-shared-with-deps-2.6.0%2Bcu118.zip
unzip libtorch.zip
Then set the following environment variables:
export LIBTORCH={path/to/}libtorch
export DYLD_LIBRARY_PATH=${LIBTORCH}/lib
export LD_LIBRARY_PATH=${DYLD_LIBRARY_PATH}
You can check that everything is working by running `modkit open-chromatin predict <args> --dryrun`.
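
If the dry run fails with a shared-library error, one quick way to narrow it down is to check that the dynamic linker can resolve the libtorch libraries (a sketch assuming a Linux system and that this build links against libtorch dynamically; `ldd` is a standard Linux tool):

# list the torch libraries the modkit binary expects, and whether they resolve
ldd $(command -v modkit) | grep -i torch

# confirm the extracted libtorch actually contains the shared objects
ls ${LIBTORCH}/lib | grep -E 'libtorch|libc10'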
In our testing this version of libtorch is compatible with a wide range of NVIDIA GPUs, but it may not be compatible with all GPU and driver combinations.
If this distribution isn't compatible with your setup, the recommendation is to compile Modkit from source (a consolidated sketch follows the steps below):
1. Download the version of libtorch for your CUDA version: `https://download.pytorch.org/libtorch/${CUDA_VERSION}/libtorch-cxx11-abi-shared-with-deps-2.6.0%2B${CUDA_VERSION}.zip`, see the example above.
2. Install Rust: `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`
3. Clone Modkit: `git clone --depth 1 --branch <modkit version> https://github.com/nanoporetech/modkit.git`
4. Build Modkit: `cd modkit && cargo build --release --features tch`
Then follow the instructions above.
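
For reference, the steps above can be strung together roughly as follows (a sketch only, not a tested recipe: `CUDA_VERSION` and the paths are placeholders you must set for your system, `<modkit version>` is the release tag you want, and it assumes the tch build needs `LIBTORCH` visible at compile time as well as at run time):

# 1. libtorch matching your CUDA version (cu118 shown in the example above)
CUDA_VERSION=cu118
wget -O libtorch.zip https://download.pytorch.org/libtorch/${CUDA_VERSION}/libtorch-cxx11-abi-shared-with-deps-2.6.0%2B${CUDA_VERSION}.zip
unzip libtorch.zip

# 2. Rust toolchain
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# 3. Modkit source at the desired tag
git clone --depth 1 --branch <modkit version> https://github.com/nanoporetech/modkit.git

# 4. point the build at the extracted libtorch and compile with the tch feature
export LIBTORCH=$(pwd)/libtorch
export LD_LIBRARY_PATH=${LIBTORCH}/lib
cd modkit && cargo build --release --features tch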
