Description
Hi Dusty & Team,
I'm having trouble running the `jetson-inference` examples inside the recommended Docker container on my Jetson Nano. Both the pre-compiled binaries and building from source seem to fail due to issues with the CUDA NPP library within the container.
Environment:
- Hardware: Jetson Nano Developer Kit (4GB)
- JetPack/L4T Version: JetPack 4.4.1 / L4T R32.4.4 (confirmed via `jtop` and `/etc/nv_tegra_release`)
- Docker Image: `dustynv/jetson-inference:r32.4.4` (pulled via `docker/run.sh`)
- Host OS: Ubuntu 18.04 (standard JetPack)
- Camera: 720p USB camera (detected as `/dev/video0` inside the container)
- Connection Method: NoMachine remote desktop to the Jetson
Problem Description & Sequence:
My primary goal is to run the inference examples (like `video-viewer` or `detectnet`) using my USB camera within the Docker container.
- After starting the container with `docker/run.sh`, I navigated to `build/aarch64/bin`.
- I confirmed my camera was present at `/dev/video0` inside the container.
- Attempting to run a pre-compiled example directly failed with a missing-library error:

```
root@JetNano:/jetson-inference/build/aarch64/bin# video-viewer /dev/video0
video-viewer: error while loading shared libraries: libnppicc.so.10: cannot open shared object file: No such file or directory
```

- Because the pre-compiled binary failed, I attempted to rebuild the project inside the container following the standard procedure (`cd /jetson-inference`, `rm -rf build`, `mkdir build && cd build`, `cmake ../`).
- The `cmake ../` command failed during configuration with the following error:

```
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_nppicc_LIBRARY (ADVANCED)
    linked by target "jetson-utils" in directory /jetson-inference/utils

-- Configuring incomplete, errors occurred!
See also "/jetson-inference/build/CMakeFiles/CMakeOutput.log".
See also "/jetson-inference/build/CMakeFiles/CMakeError.log".
```
It appears the core issue is the missing or inaccessible `libnppicc.so.10` library within the container environment.
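As a quick way to see every unresolved dependency at once (rather than just the first one the loader reports), a small `ldd` wrapper like the sketch below can be run inside the container. The `video-viewer` path in the usage comment is the one from this report; the `/bin/ls` default is only there so the script is runnable anywhere:

```shell
#!/bin/sh
# Report unresolved shared-library dependencies of a binary.
# Usage (inside the container):
#   ./check_deps.sh /jetson-inference/build/aarch64/bin/video-viewer
BIN=${1:-/bin/ls}   # default target so the script runs on any Linux host

# ldd prints "not found" next to each dependency the loader cannot resolve
missing=$(ldd "$BIN" 2>/dev/null | grep "not found")
if [ -n "$missing" ]; then
    echo "Unresolved dependencies for $BIN:"
    echo "$missing"
else
    echo "All shared-library dependencies of $BIN resolved (or $BIN is not a dynamic binary)."
fi
```

On the failing binary this should list `libnppicc.so.10` (and any other NPP libraries that are also absent).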
Steps to Reproduce:
1. Clone the repository: `git clone --recursive https://github.com/dusty-nv/jetson-inference`
2. Navigate into the directory: `cd jetson-inference`
3. Run the Docker script: `docker/run.sh` (enter password, wait for the container to start).
4. Inside the container, navigate to `build/aarch64/bin`.
5. Attempt to run the pre-compiled binary: `video-viewer /dev/video0`.
6. Observe the `libnppicc.so.10: cannot open shared object file` error.
7. (Optional) Navigate back to `/jetson-inference/`.
8. Clean the build: `rm -rf build`.
9. Create and enter the build directory: `mkdir build && cd build`.
10. Run CMake: `cmake ../`.
11. Observe the `CUDA_nppicc_LIBRARY NOTFOUND` error.
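Besides `ls`-ing specific directories, the dynamic loader's own view can be queried. This sketch uses Python's `ctypes.util.find_library`, which consults the same search paths the loader uses; when the error above occurs, it should report nothing for `nppicc` (the library names are taken from this report, `c` is just a sanity check):

```python
from ctypes.util import find_library

# Ask the dynamic loader which of these libraries it can locate.
# "nppicc" corresponds to libnppicc.so*; "c" (libc) should always
# resolve on a glibc-based Linux system.
for name in ("nppicc", "c"):
    path = find_library(name)
    status = path if path else "NOT FOUND by the loader"
    print(f"{name}: {status}")
```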
Troubleshooting Steps Already Taken:
- Confirmed the host library `/usr/lib/aarch64-linux-gnu/libnvinfer.so.7` exists and is valid (212 MB). (The initial error before the rebuild attempt involved `libnvinfer.so.7`.)
- Removed the specific Docker image (`docker rmi dustynv/jetson-inference:r32.4.4`) and re-pulled via `docker/run.sh`. The pre-compiled `video-viewer` still failed with the `libnppicc.so.10` error.
- Performed a full Docker cleanup (`docker rmi -f ...`, `docker system prune -a`) followed by re-pulling the image via `docker/run.sh`. Attempting to build still resulted in the CMake `CUDA_nppicc_LIBRARY NOTFOUND` error.
- Verified that `libnppicc.so*` is missing inside the `dustynv/jetson-inference:r32.4.4` container using `ls -l /usr/local/cuda/lib64/libnppicc*` and `ls -l /usr/lib/aarch64-linux-gnu/libnppicc*`. Both commands reported "No such file or directory".
- Launched the base L4T container (`sudo docker run --rm -it --runtime nvidia nvcr.io/nvidia/l4t-base:r32.4.4 bash`) and confirmed that `libnppicc.so*` is also missing in the same locations within the base image.
- Reinstalled `nvidia-docker2` on the host (`sudo apt update && sudo apt install --reinstall nvidia-docker2 && sudo systemctl restart docker`) and re-tested the base L4T container; the library was still missing.
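For context on the last three checks: on JetPack 4.x the CUDA libraries are not baked into the `l4t-base` image; the NVIDIA container runtime mounts them from the host at `docker run` time, driven by CSV manifests under `/etc/nvidia-container-runtime/host-files-for-container.d/`. A diagnostic along these lines (the directory and library paths are my assumptions about the JetPack layout, not something verified on this device) can show whether the host-side manifest entries and CUDA files exist at all:

```shell
#!/bin/sh
# Diagnostic sketch for the JetPack 4.x container-runtime mount mechanism.
# Assumption: the runtime's mount manifests live in the CSV directory below,
# and host CUDA libraries live under /usr/local/cuda/lib64.
CSV_DIR=/etc/nvidia-container-runtime/host-files-for-container.d

if [ -d "$CSV_DIR" ]; then
    echo "Manifest files in $CSV_DIR:"
    ls "$CSV_DIR"
    grep -l "libnppicc" "$CSV_DIR"/*.csv 2>/dev/null \
        || echo "no manifest lists libnppicc"
else
    echo "$CSV_DIR not present (not a JetPack host, or runtime not installed)"
fi

# The libraries must also exist on the host in order to be mounted:
ls /usr/local/cuda/lib64/libnppicc* 2>/dev/null \
    || echo "libnppicc not found on host under /usr/local/cuda/lib64"
```

If the host-side files turn out to be missing, that would point at the host CUDA installation rather than the container images themselves.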
Question:
Since the `libnppicc` library seems to be missing even in the official base `l4t-base:r32.4.4` container for my JetPack version (JP 4.4.1), could there be an issue with this specific base image tag on NGC? Or is there another configuration step needed on JetPack 4.4.1 / L4T R32.4.4 to make the NPP libraries available inside the NVIDIA container runtime that I might have missed?
I'd appreciate any insights or workarounds to get the CUDA NPP libraries accessible within the Docker container so I can build and run the `jetson-inference` examples.
Thanks!