Commit e480eed

add links to readme and update .rst files
1 parent e808515 commit e480eed

File tree

3 files changed: +49 −27 lines changed


README.md

Lines changed: 4 additions & 4 deletions
@@ -59,9 +59,9 @@ For now, please follow the guide below to build from source.
 
 We provide instructions for the following options:
 
-- Conda + Linux (with CUDA and cutlass)
-- Docker (with CUDA and cutlass)
-- Conda + MacOS (with MLX)
+- [Conda + Linux](#conda-on-linux-with-cuda) (with CUDA and cutlass)
+- [Docker](#docker-with-cuda) (with CUDA and cutlass)
+- [Conda + MacOS](#conda-on-macos-with-mlx) (with MLX)
 
 We recommend managing your BITorch Engine installation in a conda environment (otherwise you should adapt/remove certain variables, e.g. `CUDA_HOME`).
 You may want to keep everything (environment, code, etc.) in one directory or use the default directory for conda environments.
@@ -156,7 +156,7 @@ cd bitorch-engine
 CPATH="${CUTLASS_HOME}/install/include" CUDA_HOME="${CONDA_PREFIX}" pip install -e . -v
 ```
 
-#### Docker (with CUDA)
+#### Docker (with CUDA)
 
 You can also use our prepared Dockerfile to build a docker image (which includes building the engine under `/bitorch-engine`):

docs/source/build_options.rst

Lines changed: 13 additions & 0 deletions
@@ -35,3 +35,16 @@ setting ``BIE_FORCE_CUDA="true"``:
 
    BIE_FORCE_CUDA="true" pip install -e . -v
 
+Skip Library File Building
+--------------------------
+
+If you just want to avoid rebuilding any files, you can set
+``BIE_SKIP_BUILD``:
+
+.. code:: bash
+
+   BIE_SKIP_BUILD="true" python3 -m build --no-isolation --wheel
+
+This would create a wheel and package ``.so`` files without trying to
+rebuild them.
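Since ``BIE_SKIP_BUILD`` packages previously built extensions instead of rebuilding them, a quick sanity check is to list the shared libraries that actually ended up inside the resulting wheel. A minimal stdlib sketch (the helper name and wheel path are illustrative, not part of the project):

```python
import zipfile

def shared_libs_in_wheel(wheel_path):
    """List bundled .so files in a wheel (a wheel is a plain zip archive)."""
    with zipfile.ZipFile(wheel_path) as wf:
        return [name for name in wf.namelist() if name.endswith(".so")]
```

An empty list would indicate that no prebuilt extensions were packaged and the build should be rerun without ``BIE_SKIP_BUILD``.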

docs/source/installation.rst

Lines changed: 32 additions & 23 deletions
@@ -21,12 +21,21 @@ devices, or MacOS M1/2/3 chips), we recommend installing:
 - `CUTLASS <https://github.com/NVIDIA/cutlass>`__ for cutlass-based
   layers
 
-Currently, the engine **needs to be built from source**. We provide
-instructions for the following options:
+Binary Releases (coming soon)
+-----------------------------
 
-- Conda + Linux (with CUDA and cutlass)
-- Docker (with CUDA and cutlass)
-- Conda + MacOS (with MLX)
+We are currently preparing experimental binary releases. Their
+installation will be documented in this section. For now, please follow
+the guide below to build from source.
+
+Build From Source
+-----------------
+
+We provide instructions for the following options:
+
+- `Conda + Linux <#conda-on-linux-with-cuda>`__ (with CUDA and cutlass)
+- `Docker <#docker-with-cuda>`__ (with CUDA and cutlass)
+- `Conda + MacOS <#conda-on-macos-with-mlx>`__ (with MLX)
 
 We recommend managing your BITorch Engine installation in a conda
 environment (otherwise you should adapt/remove certain variables,
@@ -36,7 +45,7 @@ environments. You may wish to adapt the CUDA version to 12.1 where
 applicable.
 
 Conda on Linux (with CUDA)
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 To use these instructions, you need to have
 `conda <https://conda.io/projects/conda/en/latest/user-guide/getting-started.html>`__
@@ -55,16 +64,16 @@ and a suitable C++ compiler installed.
 
    conda install -y -c "nvidia/label/cuda-11.8.0" cuda-toolkit
 
-3. `Download customized Torch
-   2.1.0 <https://drive.google.com/drive/folders/1T22b8JhN-E3xbn3h332rI1VjqXONZeB7?usp=sharing>`__
-   (it allows gradients on INT tensors, built for Python 3.9 and CUDA
-   11.8) and install it with pip:
+3. Download our customized torch for CUDA 11.8 and Python 3.9, it allows
+   gradients on INT tensors and install it with pip (you can find other
+   versions `here <https://packages.greenbit.ai/whl/>`__):
 
    .. code:: bash
 
-      pip install torch-2.1.0-cp39-cp39-linux_x86_64.whl
-      # optional: install corresponding torchvision (check https://github.com/pytorch/vision?tab=readme-ov-file#installation in the future)
+      pip install "https://packages.greenbit.ai/whl/cu118/torch/torch-2.1.0-cp39-cp39-linux_x86_64.whl"
+      # as bitorch currently requires torchvision, we need to install a version for our correct CUDA (otherwise it will reinstall torch)
       pip install "torchvision==0.16.0" --index-url https://download.pytorch.org/whl/cu118
+      # (check https://github.com/pytorch/vision?tab=readme-ov-file#installation in the future)
 
 4. To use cutlass layers, you should also install CUTLASS 2.8.0 (from
    source), adjust ``CUTLASS_HOME`` (this is where we clone and install
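The key property of the customized torch wheel is that it permits gradients on integer tensors, which stock PyTorch rejects. A small hedged probe (the function name is ours, not part of bitorch-engine) to check which build is active:

```python
def int_grad_supported():
    """Return True/False for INT-gradient support, or None if torch is absent."""
    try:
        import torch
    except ImportError:
        return None
    try:
        # Stock PyTorch raises RuntimeError here ("Only Tensors of floating
        # point and complex dtype can require gradients"); the customized
        # build instead returns a tensor tracking gradients.
        torch.zeros(2, dtype=torch.int32, requires_grad=True)
        return True
    except RuntimeError:
        return False
```

Running this after step 3 tells you whether pip kept the customized wheel or silently replaced it with an upstream build.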
@@ -117,17 +126,16 @@ environment and clone all repositories within one “root” directory.
 
    conda install -y -c "nvidia/label/cuda-11.8.0" cuda-toolkit
 
-3. `Download customized Torch
-   2.1.0 <https://drive.google.com/drive/folders/1T22b8JhN-E3xbn3h332rI1VjqXONZeB7?usp=sharing>`__,
-   select the package fit for the cuda version you installed in the
-   previous step (it allows gradients on INT tensors, built for Python
-   3.9 and CUDA 11.8) and install it with pip:
+3. Download our customized torch for CUDA 11.8 and Python 3.9, it allows
+   gradients on INT tensors and install it with pip (you can find other
+   versions `here <https://packages.greenbit.ai/whl/>`__):
 
    .. code:: bash
 
-      pip install torch-2.1.0-cp39-cp39-linux_x86_64.whl
-      # optional: install corresponding torchvision (check https://github.com/pytorch/vision?tab=readme-ov-file#installation in the future)
+      pip install "https://packages.greenbit.ai/whl/cu118/torch/torch-2.1.0-cp39-cp39-linux_x86_64.whl"
+      # as bitorch currently requires torchvision, we need to install a version for our correct CUDA (otherwise it will reinstall torch)
       pip install "torchvision==0.16.0" --index-url https://download.pytorch.org/whl/cu118
+      # (check https://github.com/pytorch/vision?tab=readme-ov-file#installation in the future)
 
 4. To use cutlass layers, you should also install CUTLASS 2.8.0 (if you
    have older or newer GPUs you may need to add your `CUDA compute
@@ -166,7 +174,7 @@ hide the build output remove ``-v``):
    CPATH="${CUTLASS_HOME}/install/include" CUDA_HOME="${CONDA_PREFIX}" pip install -e . -v
 
 Docker (with CUDA)
-------------------
+~~~~~~~~~~~~~~~~~~
 
 You can also use our prepared Dockerfile to build a docker image (which
 includes building the engine under ``/bitorch-engine``):
@@ -181,7 +189,7 @@ Check the `docker readme <https://github.com/GreenBitAI/bitorch-engine/blob/HEAD
 details.
 
 Conda on MacOS (with MLX)
--------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~
 
 1. We recommend to create a virtual environment for and activate it. In
    the following example we use a conda environment for python 3.9, but
@@ -199,9 +207,10 @@ Conda on MacOS (with MLX)
 
    .. code:: bash
 
-      pip install path/to/torch-2.2.1-cp39-none-macosx_11_0_arm64.whl
-      # optional: install corresponding torchvision (check https://github.com/pytorch/vision?tab=readme-ov-file#installation in the future)
+      pip install "https://packages.greenbit.ai/whl/macosx/torch/torch-2.2.1-cp39-none-macosx_11_0_arm64.whl"
+      # as bitorch currently requires torchvision, we need to install a version for our correct CUDA (otherwise it will reinstall torch)
       pip install "torchvision==0.17.1"
+      # (check https://github.com/pytorch/vision?tab=readme-ov-file#installation in the future)
 
 3. For MacOS users and to use OpenMP acceleration, install OpenMP with
    Homebrew and configure the environment:
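Across all three diffs, the torchvision pin exists so that pip does not replace the customized torch wheel with an upstream one. A minimal stdlib sketch (the helper name is illustrative) to inspect what is currently installed before choosing a pin:

```python
from importlib import metadata

def installed_version(package):
    """Return the installed version string of a package, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None
```

Comparing `installed_version("torch")` before and after installing torchvision makes it obvious whether the customized wheel survived the second install.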
