build error #8462

Description

@geraldstanje

Hi,

I run this command:

```shell
git clone https://github.com/triton-inference-server/server -b r25.09
cd server && ./build.py --backend tensorrtllm:release/1.0 --backend python:r25.09 --enable-gpu --build-type Release --target-platform linux --endpoint grpc --endpoint http
```

and get this error:

```
-- Found CUDA: /usr/local/cuda (found version "13.0")
-- Found Python3: /usr/bin/python3.12 (found version "3.12.3") found components: Interpreter Development Development.Module Development.Embed
CMake Error at CMakeLists.txt:151 (find_library):
Could not find tensorrt_llm using the following names: libtensorrt_llm.so

-- Configuring incomplete, errors occurred!
error: build failed
```
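For context, the `find_library` call at CMakeLists.txt:151 is essentially scanning a set of directories for `libtensorrt_llm.so` and failing because the file is nowhere to be found. A minimal sketch of that lookup, useful for checking the build host by hand (the `find_lib` helper and the directory list are illustrative, not part of build.py or CMake):

```shell
# Sketch of what find_library does: scan candidate directories for the
# shared object and print the first hit. find_lib is a hypothetical
# helper for manual diagnosis, not an official tool.
find_lib() {
  lib="$1"; shift
  for d in "$@"; do
    if [ -e "$d/$lib" ]; then
      echo "$d/$lib"
      return 0
    fi
  done
  return 1
}

# Example check on the build host:
# find_lib libtensorrt_llm.so /usr/lib /usr/local/lib /usr/lib/x86_64-linux-gnu
```

If that search comes up empty inside the build environment, the TensorRT-LLM libraries were never built or installed there, which is consistent with the error above.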

How do I make this work? If I need to build TensorRT-LLM first, what exactly should I run? The TensorRT-LLM backend here targets Triton 25.06, and there is a Docker build: https://github.com/triton-inference-server/tensorrtllm_backend/blob/main/dockerfile/Dockerfile.triton.trt_llm_backend
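For reference, the Docker route linked above would look roughly like the following. This is a sketch based only on the repository and Dockerfile path in the link; the image tag is mine, and any build args or branch pinning should be taken from the tensorrtllm_backend README rather than from here:

```shell
# Sketch only: build the Triton + TRT-LLM image from the repo's own
# Dockerfile (path taken from the link above). The -t tag is arbitrary.
git clone https://github.com/triton-inference-server/tensorrtllm_backend
cd tensorrtllm_backend
docker build -f dockerfile/Dockerfile.triton.trt_llm_backend -t triton-trtllm .
```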

cc @rmccorm4 @statiraju @mc-nv @yinggeh @krishung5 @the-david-oy

Labels: build, question