Commit c6c4471

Don't pre-install example dependencies in docker, install at run-time
Signed-off-by: Keval Morabia <[email protected]>
1 parent b895dc5 commit c6c4471

File tree: 5 files changed (+18 / -22 lines)


.gitlab/tests.yml

Lines changed: 1 addition & 0 deletions

@@ -56,6 +56,7 @@ example:
   script:
     # Uninstall apex since T5 Int8 (PixArt) + Apex is not supported as per https://github.com/huggingface/transformers/issues/21391
     - if [ "$TEST" = "diffusers" ]; then pip uninstall -y apex; fi
+    - if [ -f examples/$TEST/requirements.txt ]; then pip install -r examples/$TEST/requirements.txt; fi
     - if [ "$TEST_TYPE" = "pytest" ]; then pytest -s tests/examples/$TEST; else bash tests/examples/test_$TEST.sh; fi

 example-ada:
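With example dependencies no longer baked into the image, the CI job installs each example's requirements.txt right before the test runs. The same pattern works when running an example by hand; a minimal sketch, using diffusers as a hypothetical stand-in for any directory under examples/:

    # Install an example's dependencies at run time, then run its tests
    TEST=diffusers  # hypothetical; substitute any directory under examples/
    if [ -f "examples/$TEST/requirements.txt" ]; then
        pip install -r "examples/$TEST/requirements.txt"
    fi
    pytest -s "tests/examples/$TEST"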

README.md

Lines changed: 1 addition & 1 deletion

@@ -68,7 +68,7 @@ To install from source in editable mode with all development dependencies or to

 ```bash
 # Clone the Model Optimizer repository
-git clone https://github.com/NVIDIA/TensorRT-Model-Optimizer.git
+git clone git@github.com:NVIDIA/TensorRT-Model-Optimizer.git
 cd TensorRT-Model-Optimizer

 pip install -e .[dev]
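The SSH clone URL assumes an SSH key registered with GitHub; for anonymous read-only access, the HTTPS form removed above still works:

    # HTTPS clone, no SSH key required
    git clone https://github.com/NVIDIA/TensorRT-Model-Optimizer.git
    cd TensorRT-Model-Optimizer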

docker/Dockerfile

Lines changed: 5 additions & 9 deletions

@@ -18,17 +18,13 @@ RUN ln -s /app/tensorrt_llm /workspace/tensorrt_llm
 ENV LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:/usr/local/tensorrt/targets/x86_64-linux-gnu/lib:${LD_LIBRARY_PATH}" \
     PATH="/usr/local/tensorrt/targets/x86_64-linux-gnu/bin:${PATH}"

-# Install modelopt with all optional dependencies and pre-compile CUDA extensions otherwise they take several minutes on every docker run
-RUN pip install -U "nvidia-modelopt[all,dev-test]"
-RUN python -c "import modelopt.torch.quantization.extensions as ext; ext.precompile()"
-
-# Find and install requirements.txt files for all examples excluding windows
+# Install modelopt from source with all optional dependencies and pre-compile CUDA extensions otherwise they take several minutes on every docker run
+# Pre-install llm_ptq requirements.txt
 COPY . TensorRT-Model-Optimizer
+RUN pip install -e "./TensorRT-Model-Optimizer[all,dev-test]"
 RUN rm -rf TensorRT-Model-Optimizer/.git
-RUN find TensorRT-Model-Optimizer/examples -name "requirements.txt" | grep -v "windows" | while read req_file; do \
-    echo "Installing from $req_file"; \
-    pip install -r "$req_file" || exit 1; \
-    done
+RUN python -c "import modelopt.torch.quantization.extensions as ext; ext.precompile()"
+RUN pip install -r TensorRT-Model-Optimizer/examples/llm_ptq/requirements.txt

 # Allow users to run without root
 RUN chmod -R 777 /workspace
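Because only the llm_ptq requirements are pre-installed now, other examples' dependencies are fetched inside the running container instead. A minimal sketch, assuming the repository copy lands at /workspace/TensorRT-Model-Optimizer (the COPY destination relative to the image's working directory) and again using diffusers as a hypothetical example:

    # Inside the container: install another example's dependencies on demand
    # (path assumed from the Dockerfile's "COPY . TensorRT-Model-Optimizer" step)
    pip install -r /workspace/TensorRT-Model-Optimizer/examples/diffusers/requirements.txt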

docs/source/getting_started/_installation_for_Linux.rst

Lines changed: 5 additions & 6 deletions

@@ -34,16 +34,16 @@ Environment setup

 To use Model Optimizer with full dependencies (e.g. TensorRT/TensorRT-LLM deployment), we recommend using our provided docker image
 which is based on the `TensorRT-LLM <https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tensorrt-llm/containers/release/tags>`_
-docker image with additional example-specific dependencies installed.
+docker image with additional dependencies installed.

 After installing the `NVIDIA Container Toolkit <https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html>`_,
-please run the following commands to build the Model Optimizer docker container which has all the necessary
-dependencies pre-installed for running the examples.
+please run the following commands to build the Model Optimizer docker container, which has all the base
+dependencies pre-installed. You may need to install additional dependencies from the example's `requirements.txt` file.

 .. code-block:: shell

     # Clone the ModelOpt repository
-    git clone https://github.com/NVIDIA/TensorRT-Model-Optimizer.git
+    git clone git@github.com:NVIDIA/TensorRT-Model-Optimizer.git
     cd TensorRT-Model-Optimizer

     # Build the docker (will be tagged `docker.io/library/modelopt_examples:latest`)

@@ -60,8 +60,7 @@ Environment setup

 For PyTorch, you can also use `NVIDIA NGC PyTorch container <https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags>`_
 and for NVIDIA NeMo framework, you can use the `NeMo container <https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo/tags>`_.
-Both of these containers come with Model Optimizer pre-installed. NeMo container also comes with the HuggingFace and TensorRT-LLM
-dependencies. Make sure to update the Model Optimizer to the latest version if not already.
+Both of these containers come with Model Optimizer pre-installed. Make sure to update Model Optimizer to the latest version if it is not already.

 For ONNX PTQ, you can use the optimized docker image from [onnx_ptq Dockerfile](https://github.com/NVIDIA/TensorRT-Model-Optimizer/tree/main/examples/onnx_ptq/docker).
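The docs also ask users to update the pre-installed Model Optimizer in the NGC containers; a minimal sketch of doing so, assuming a pip-managed install and that the package exposes a __version__ attribute:

    # Upgrade the pre-installed modelopt and confirm the version
    pip install -U nvidia-modelopt
    python -c "import modelopt; print(modelopt.__version__)"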

examples/onnx_ptq/docker/Dockerfile

Lines changed: 6 additions & 6 deletions

@@ -19,16 +19,16 @@ ENV LD_LIBRARY_PATH="${CUDNN_LIB_DIR}:${TRT_PATH}/lib:/usr/include:${LD_LIBRARY_
 ENV PATH="${TRT_PATH}/bin:${PATH}"

 # Copy application code and install requirements
-COPY modelopt modelopt/modelopt
-COPY examples/onnx_ptq modelopt/examples/onnx_ptq
-COPY setup.py modelopt/setup.py
-COPY pyproject.toml modelopt/pyproject.toml
+COPY modelopt TensorRT-Model-Optimizer/modelopt
+COPY examples/onnx_ptq TensorRT-Model-Optimizer/examples/onnx_ptq
+COPY setup.py TensorRT-Model-Optimizer/setup.py
+COPY pyproject.toml TensorRT-Model-Optimizer/pyproject.toml

 # Install onnx_ptq requirements
-RUN pip install -r modelopt/examples/onnx_ptq/requirements.txt
+RUN pip install -r TensorRT-Model-Optimizer/examples/onnx_ptq/requirements.txt

 # Install modelopt
-RUN pip install -e "./modelopt[hf,onnx]"
+RUN pip install -e "./TensorRT-Model-Optimizer[hf,onnx]"

 # Allow users to run without root
 RUN chmod -R 777 /workspace
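The COPY sources (modelopt, examples/onnx_ptq, setup.py, pyproject.toml) imply the build context is the repository root; a minimal sketch of building this image, with the tag name chosen purely for illustration:

    # Run from the repository root so the COPY paths resolve
    docker build -f examples/onnx_ptq/docker/Dockerfile -t onnx_ptq:latest .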
