@@ -21,12 +21,21 @@ devices, or MacOS M1/2/3 chips), we recommend installing:
 - `CUTLASS <https://github.com/NVIDIA/cutlass>`__ for cutlass-based
   layers

-Currently, the engine **needs to be built from source**. We provide
-instructions for the following options:
+Binary Releases (coming soon)
+-----------------------------

-- Conda + Linux (with CUDA and cutlass)
-- Docker (with CUDA and cutlass)
-- Conda + MacOS (with MLX)
+We are currently preparing experimental binary releases. Their
+installation will be documented in this section. For now, please follow
+the guide below to build from source.
+
+Build From Source
+-----------------
+
+We provide instructions for the following options:
+
+- `Conda + Linux <#conda-on-linux-with-cuda>`__ (with CUDA and cutlass)
+- `Docker <#docker-with-cuda>`__ (with CUDA and cutlass)
+- `Conda + MacOS <#conda-on-macos-with-mlx>`__ (with MLX)

 We recommend managing your BITorch Engine installation in a conda
 environment (otherwise you should adapt/remove certain variables,
@@ -36,7 +45,7 @@ environments. You may wish to adapt the CUDA version to 12.1 where
 applicable.

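To illustrate what adapting the CUDA version involves (this example is ours, not part of the original guide): the conda toolkit label, the torchvision wheel index, and the customized torch wheel all have to point at the same CUDA release.

.. code:: bash

   # illustration only: CUDA 12.1 variants of the CUDA 11.8 commands used below;
   # pick the matching customized torch wheel from the package index linked in step 3
   conda install -y -c "nvidia/label/cuda-12.1.0" cuda-toolkit
   pip install "torchvision==0.16.0" --index-url https://download.pytorch.org/whl/cu121
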
 Conda on Linux (with CUDA)
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~

 To use these instructions, you need to have
 `conda <https://conda.io/projects/conda/en/latest/user-guide/getting-started.html>`__
@@ -55,16 +64,16 @@ and a suitable C++ compiler installed.

       conda install -y -c "nvidia/label/cuda-11.8.0" cuda-toolkit

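As a quick optional check (our sketch, not an official step), the toolkit's compiler should now be available inside the active environment:

.. code:: bash

   # nvcc is placed under the conda prefix by the cuda-toolkit package
   "${CONDA_PREFIX}/bin/nvcc" --version
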
-3. `Download customized Torch
-   2.1.0 <https://drive.google.com/drive/folders/1T22b8JhN-E3xbn3h332rI1VjqXONZeB7?usp=sharing>`__
-   (it allows gradients on INT tensors, built for Python 3.9 and CUDA
-   11.8) and install it with pip:
+3. Download our customized torch for CUDA 11.8 and Python 3.9 (it allows
+   gradients on INT tensors) and install it with pip; you can find other
+   versions `here <https://packages.greenbit.ai/whl/>`__ (a quick sanity
+   check follows the code block below):

    .. code:: bash

-      pip install torch-2.1.0-cp39-cp39-linux_x86_64.whl
-      # optional: install corresponding torchvision (check https://github.com/pytorch/vision?tab=readme-ov-file#installation in the future)
+      pip install "https://packages.greenbit.ai/whl/cu118/torch/torch-2.1.0-cp39-cp39-linux_x86_64.whl"
+      # bitorch currently requires torchvision, so install a build that matches our CUDA version (otherwise pip would reinstall torch)
       pip install "torchvision==0.16.0" --index-url https://download.pytorch.org/whl/cu118
+      # (in the future, check https://github.com/pytorch/vision?tab=readme-ov-file#installation for the matching torchvision version)

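A quick way to confirm the customized wheel is the one in use (this check is our suggestion; it assumes the customization simply lifts PyTorch's usual restriction that only floating-point tensors can require gradients):

.. code:: bash

   # stock torch also reports 2.1.0, so verify the CUDA build and INT gradients as well
   python -c "import torch; print(torch.__version__, torch.version.cuda)"
   # the stock wheel raises: 'Only Tensors of floating point and complex dtype can require gradients'
   python -c "import torch; t = torch.zeros(4, dtype=torch.int32, requires_grad=True); print(t.requires_grad)"
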
 4. To use cutlass layers, you should also install CUTLASS 2.8.0 (from
    source), adjust ``CUTLASS_HOME`` (this is where we clone and install
@@ -117,17 +126,16 @@ environment and clone all repositories within one “root” directory.

       conda install -y -c "nvidia/label/cuda-11.8.0" cuda-toolkit

-3. `Download customized Torch
-   2.1.0 <https://drive.google.com/drive/folders/1T22b8JhN-E3xbn3h332rI1VjqXONZeB7?usp=sharing>`__,
-   select the package fit for the cuda version you installed in the
-   previous step (it allows gradients on INT tensors, built for Python
-   3.9 and CUDA 11.8) and install it with pip:
+3. Download our customized torch for CUDA 11.8 and Python 3.9 (it allows
+   gradients on INT tensors) and install it with pip; you can find other
+   versions `here <https://packages.greenbit.ai/whl/>`__:

    .. code:: bash

-      pip install torch-2.1.0-cp39-cp39-linux_x86_64.whl
-      # optional: install corresponding torchvision (check https://github.com/pytorch/vision?tab=readme-ov-file#installation in the future)
+      pip install "https://packages.greenbit.ai/whl/cu118/torch/torch-2.1.0-cp39-cp39-linux_x86_64.whl"
+      # bitorch currently requires torchvision, so install a build that matches our CUDA version (otherwise pip would reinstall torch)
       pip install "torchvision==0.16.0" --index-url https://download.pytorch.org/whl/cu118
+      # (in the future, check https://github.com/pytorch/vision?tab=readme-ov-file#installation for the matching torchvision version)

 4. To use cutlass layers, you should also install CUTLASS 2.8.0 (if you
    have older or newer GPUs you may need to add your `CUDA compute
@@ -166,7 +174,7 @@ hide the build output remove ``-v``):
       CPATH="${CUTLASS_HOME}/install/include" CUDA_HOME="${CONDA_PREFIX}" pip install -e . -v

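Once the editable install completes, a short smoke test can confirm the build. This is our sketch; we assume the package's import name is ``bitorch_engine``, adjust if it differs:

.. code:: bash

   # assumed module name; the printed path should point into your cloned repository
   python -c "import bitorch_engine; print(bitorch_engine.__file__)"
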
 Docker (with CUDA)
-------------------
+~~~~~~~~~~~~~~~~~~

 You can also use our prepared Dockerfile to build a docker image (which
 includes building the engine under ``/bitorch-engine``):
@@ -181,7 +189,7 @@ Check the `docker readme <https://github.com/GreenBitAI/bitorch-engine/blob/HEAD
 details.

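For orientation, a typical build-and-run cycle looks roughly like the following; the image tag is our placeholder, and the supported build arguments and run flags are described in the docker readme linked above:

.. code:: bash

   # placeholder tag; run from the repository root where the Dockerfile lives
   docker build -t bitorch-engine .
   docker run --rm -it --gpus all bitorch-engine
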
 Conda on MacOS (with MLX)
--------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~

 1. We recommend creating a virtual environment and activating it. In
    the following example we use a conda environment for python 3.9, but
@@ -199,9 +207,10 @@ Conda on MacOS (with MLX)

    .. code:: bash

-      pip install path/to/torch-2.2.1-cp39-none-macosx_11_0_arm64.whl
-      # optional: install corresponding torchvision (check https://github.com/pytorch/vision?tab=readme-ov-file#installation in the future)
+      pip install "https://packages.greenbit.ai/whl/macosx/torch/torch-2.2.1-cp39-none-macosx_11_0_arm64.whl"
+      # bitorch currently requires torchvision, so install a compatible version (otherwise pip would reinstall torch)
       pip install "torchvision==0.17.1"
+      # (in the future, check https://github.com/pytorch/vision?tab=readme-ov-file#installation for the matching torchvision version)

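As a brief optional check (our suggestion), you can confirm that the Apple-silicon wheel is installed and that the MPS backend is visible:

.. code:: bash

   python -c "import torch; print(torch.__version__, torch.backends.mps.is_available())"
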
 3. To use OpenMP acceleration on MacOS, install OpenMP with
    Homebrew and configure the environment: