Commit 4c14a68

Update installation doc page
1 parent 35b0374 commit 4c14a68

1 file changed: docs/installation.md (+101 -31 lines)
## segger Installation Guide

segger provides multiple installation options to suit your requirements. You can install it using:

- **Virtual environments** (recommended for most users)
- **Containerized environments** (Docker or Singularity)
- **Editable mode from GitHub** (for developers or users who want to modify the source code)

!!! tip "Recommendation"
    To avoid dependency conflicts, we recommend installing segger in a virtual environment or a container environment.

segger requires **CUDA 11** or **CUDA 12** for GPU acceleration.

### :snake: Installation in a Virtual Environment

#### Using `venv`

```bash
# Step 1: Create and activate the virtual environment.
python3.10 -m venv segger-venv
source segger-venv/bin/activate

# Step 2: Install segger with CUDA support.
pip install --upgrade pip
pip install ".[cuda12]"

# Step 3: Verify the installation.
python --version
pip show segger

# Step 4 (optional): If your system has no system-wide CUDA toolkit,
# link CuPy to PyTorch's bundled CUDA runtime library.
export LD_LIBRARY_PATH=$(pwd)/segger-venv/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib:$LD_LIBRARY_PATH
```
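The `export` in Step 4 silently prepends a path that may not exist. A hedged sketch of a guarded version follows; it uses a throwaway `mktemp` directory so it runs anywhere, and you would substitute your environment's actual `cuda_nvrtc/lib` path:

```shell
# Demo directory standing in for <env>/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib.
nvrtc_dir="$(mktemp -d)/nvidia/cuda_nvrtc/lib"
mkdir -p "$nvrtc_dir"

# Only prepend when the directory actually exists, and avoid a stray ':'
# when LD_LIBRARY_PATH was previously unset.
if [ -d "$nvrtc_dir" ]; then
    export LD_LIBRARY_PATH="$nvrtc_dir${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
fi

echo "$LD_LIBRARY_PATH"
```

This keeps repeated shell sessions from accumulating dead entries in `LD_LIBRARY_PATH`.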

#### Using `conda`

```bash
# Step 1: Create and activate the conda environment.
conda create -n segger-env python=3.10
conda activate segger-env

# Step 2: Install segger with CUDA support.
pip install --upgrade pip
pip install ".[cuda12]"

# Step 3: Verify the installation.
python --version
pip show segger

# Step 4 (optional): If your system has no system-wide CUDA toolkit,
# link CuPy to PyTorch's bundled CUDA runtime library.
export LD_LIBRARY_PATH=$(conda info --base)/envs/segger-env/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib:$LD_LIBRARY_PATH
```

#### How to Choose Between `[cuda11]` and `[cuda12]`

1. **Check your NVIDIA driver version**: Run `nvidia-smi`. Use `[cuda12]` if your driver version is ≥ 525.60.13; otherwise use `[cuda11]` if it is ≥ 450.80.02.
2. **Check for a CUDA toolkit**: Run `nvcc --version`. If it reports a CUDA version (11.x or 12.x), choose the corresponding `[cuda11]` or `[cuda12]` extra.
3. **Default to PyTorch's CUDA runtime**: If no CUDA toolkit is installed, segger can use PyTorch's bundled CUDA runtime. Link CuPy to it as shown in Step 4 of the venv/conda instructions.
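The decision rule above can be sketched as a small shell helper. The thresholds are the ones documented here; the `driver` value is a hard-coded example for illustration, since on a real system you would read it from `nvidia-smi`:

```shell
# version_ge A B: succeeds when version string A >= B (sort -V orders version strings).
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

driver="535.104.05"  # example value; on a real system query nvidia-smi instead

if version_ge "$driver" "525.60.13"; then
    extra="cuda12"
elif version_ge "$driver" "450.80.02"; then
    extra="cuda11"
else
    extra=""  # driver too old for either GPU extra
fi
echo "suggested extra: ${extra:-none}"
```

Note that a driver new enough for `[cuda12]` also satisfies the `[cuda11]` minimum, which is why the check tests the newer threshold first.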
### :whale: Installation in a Container Environment

#### Using `docker`

```bash
# Step 1: Pull the official Docker image.
docker pull danielunyi42/segger_dev:cuda121

# Step 2: Run the Docker container with GPU support.
docker run --gpus all -it danielunyi42/segger_dev:cuda121
```

The official Docker image comes with all dependencies pre-installed, including the CUDA toolkit, PyTorch, and CuPy.
The current images support **CUDA 11.8** and **CUDA 12.1**, which can be specified in the image tag.

#### Using `singularity`

```bash
# Step 1: Pull the official Docker image.
singularity pull docker://danielunyi42/segger_dev:cuda121

# Step 2: Run the Singularity container with GPU support.
# (singularity exec requires a command; here we start an interactive shell.)
singularity exec --nv segger_dev_cuda121.sif bash
```

The Singularity image is derived from the official Docker image and includes all pre-installed dependencies.

#### :file_folder: Directory Mapping for Input and Output Data

Directory mapping allows:

- **Access to input data** (spatial transcriptomics datasets) from your local machine inside the container.
- **Saving output data** (segmentation results and logs) generated by segger to your local machine.

To set up directory mapping:

- **For Docker**:
  ```bash
  docker run --gpus all -it -v /path/to/local/data:/workspace/data danielunyi42/segger_dev:cuda121
  ```

- **For Singularity**:
  ```bash
  singularity exec --nv -B /path/to/local/data:/workspace/data segger_dev_cuda121.sif bash
  ```

- Place your input datasets in `/path/to/local/data` on your host machine.
- Inside the container, access these datasets at `/workspace/data`.
- Save results to `/workspace/data`; they will appear in `/path/to/local/data` on the host machine.

### :smirk_cat: Editable GitHub Installation

For developers or users who want to modify the source code:

```bash
git clone https://github.com/EliHei2/segger_dev.git
cd segger_dev
pip install -e ".[cuda12]"
```

!!! warning "Common Installation Issues"

    - **Python Version**: Ensure you are using Python >= 3.10. Check your Python version by running:
      ```bash
      python --version
      ```
      If your version is lower than 3.10, please upgrade Python.

    - **CUDA Compatibility (GPU)**: For GPU installations, verify that your system has the correct NVIDIA drivers installed. Run:
      ```bash
      nvidia-smi
      ```
      Ensure that the displayed CUDA version is compatible with your selected `[cuda11]` or `[cuda12]` extra.

        - Minimum driver version for **CUDA 11.x**: `450.80.02`
        - Minimum driver version for **CUDA 12.x**: `525.60.13`

    - **Permissions**: If you encounter permission errors during installation, use the `--user` flag to install the package without administrative privileges:
      ```bash
      pip install --user ".[cuda12]"
      ```
      Alternatively, consider using a virtual environment (venv or conda) to isolate the installation.

    - **Environment Configuration**: Ensure that all required dependencies are installed in your environment.
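The Python-version check above can also be scripted instead of eyeballed. A minimal sketch follows; the `py_ok` helper name is ours, not part of segger, and it assumes `python3` is on `PATH`:

```shell
# py_ok MAJOR.MINOR: succeeds when the given version meets segger's 3.10 minimum.
# sort -V compares version strings numerically, so 3.9 correctly sorts below 3.10.
py_ok() {
    [ "$(printf '%s\n' "3.10" "$1" | sort -V | head -n1)" = "3.10" ]
}

# Read the running interpreter's version and report.
version="$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')"
if py_ok "$version"; then
    echo "Python $version is new enough"
else
    echo "Python $version is too old; please upgrade to >= 3.10"
fi
```

A plain string comparison would get this wrong (`"3.9" > "3.10"` lexicographically), which is why the helper delegates to `sort -V`.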
