TensorRT LLM 1.1 supports both CUDA 12.9 and 13.0, but some dependency changes are required. The `requirements.txt` file contains the dependencies needed for CUDA 13.0. If you are using CUDA 12.9, uncomment the lines ending with `# <For CUDA 12.9>` and comment out the lines that follow them.
```
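For illustration only (the package names and versions below are made up; the real entries live in `requirements.txt`), the CUDA-version-specific lines follow this pattern:

```
# tensorrt-cu12==x.y.z  # <For CUDA 12.9>
tensorrt-cu13==x.y.z
```

Uncommenting the first line and commenting out the second switches the pinned dependency from its CUDA 13.0 build to its CUDA 12.9 build.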
### Option 1: Full Build with C++ Compilation
The following command compiles the C++ code and packages the compiled libraries along with the Python files into a wheel. When developing C++ code, you need this full build command to apply your code changes.
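As a sketch of what that invocation can look like (the `scripts/build_wheel.py` helper and the `--clean` flag are taken from the TensorRT LLM repository's usual layout; verify them against your checkout), run from the root of the source tree:

```shell
# Compile the C++ sources and package the resulting libraries together with
# the Python files into a wheel. --clean removes previous build artifacts so
# C++ code changes are fully picked up.
python3 ./scripts/build_wheel.py --clean
```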
`docs/source/installation/linux.md`
Install CUDA Toolkit following the [CUDA Installation Guide for Linux](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/) and
make sure the `CUDA_HOME` environment variable is properly set.
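A quick sanity check for this prerequisite (assuming a standard toolkit install under `/usr/local/cuda`; the fallback path is an assumption, not part of the official instructions):

```shell
# Default CUDA_HOME to the conventional install location if it is unset,
# then report whether the toolkit's nvcc compiler is actually there.
export CUDA_HOME=${CUDA_HOME:-/usr/local/cuda}
if [ -x "$CUDA_HOME/bin/nvcc" ]; then
  "$CUDA_HOME/bin/nvcc" --version
else
  echo "nvcc not found under $CUDA_HOME - check your CUDA Toolkit install"
fi
```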
```{tip}
:name: installation-linux-tip-cuda-version
TensorRT LLM 1.1 supports both CUDA 12.9 and 13.0. The wheel package release only supports CUDA 12.9, while CUDA 13.0 is only supported through the NGC container release.
```
```bash
# Optional step: only required for NVIDIA Blackwell GPUs and the SBSA platform.
# By default, the PyTorch CUDA 12.8 package is installed. Install the PyTorch
# CUDA 13.0 package to align with the CUDA version used for building the
# TensorRT LLM wheels. On earlier GPUs or the Linux x86_64 platform, this
# extra installation is not required.
# The command below is an illustration; check
# https://pytorch.org/get-started/locally/ for the currently supported versions.
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130
```
```{tip}
Instead of manually installing the prerequisites as described
above, it is also possible to use the pre-built [TensorRT LLM Develop container