libnppicc.so.10 missing & CMake Error: CUDA_nppicc_LIBRARY NOTFOUND inside Docker container (JP 4.4.1 / L4T R32.4.4) on Jetson Nano 4GB #1934

@maitiSoutrik

Description

Hi Dusty & Team,

I'm having trouble running the jetson-inference examples inside the recommended Docker container on my Jetson Nano. Both the pre-compiled binaries and a from-source build fail, apparently because the CUDA NPP library (libnppicc) is missing inside the container.

Environment:

  • Hardware: Jetson Nano Developer Kit (4GB)
  • JetPack/L4T Version: JetPack 4.4.1 / L4T R32.4.4 (Confirmed via jtop and /etc/nv_tegra_release)
  • Docker Image: dustynv/jetson-inference:r32.4.4 (Pulled via docker/run.sh)
  • Host OS: Ubuntu 18.04 (Standard JetPack)
  • Camera: 720p USB Camera (Detected as /dev/video0 inside container)
  • Connection Method: Using NoMachine to access the Jetson desktop.

Problem Description & Sequence:

My primary goal is to run the inference examples (like video-viewer or detectnet) using my USB camera within the Docker container.

  1. After starting the container using docker/run.sh, I navigated to build/aarch64/bin.
  2. I confirmed my camera was present at /dev/video0 inside the container.
  3. Attempting to run a pre-compiled example directly failed with a missing library error:
    root@JetNano:/jetson-inference/build/aarch64/bin# video-viewer /dev/video0
    video-viewer: error while loading shared libraries: libnppicc.so.10: cannot open shared object file: No such file or directory
  4. Because the pre-compiled binary failed, I attempted to rebuild the project inside the container following the standard procedure (cd /jetson-inference, rm -rf build, mkdir build && cd build, cmake ../).
  5. The cmake ../ command failed during configuration with the following error:
    CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
    Please set them or make sure they are set and tested correctly in the CMake files:
    CUDA_nppicc_LIBRARY (ADVANCED)
        linked by target "jetson-utils" in directory /jetson-inference/utils
    
    -- Configuring incomplete, errors occurred!
    See also "/jetson-inference/build/CMakeFiles/CMakeOutput.log".
    See also "/jetson-inference/build/CMakeFiles/CMakeError.log".

It appears the core issue is the missing or inaccessible libnppicc.so.10 library within the container environment.
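To make the diagnosis repeatable, here is a small sketch of the check I ran, wrapped in a helper so the same search can be pointed at any library. The `find_lib` name and the directory-list argument are my own convention, not part of jetson-inference; the two paths in the example call are the standard JetPack 4.4 locations.

```shell
#!/usr/bin/env bash
# find_lib <name> [dir...] -- report every place a shared library is visible.
# Succeeds (exit 0) if the library shows up in the linker cache or any of
# the given directories; fails otherwise.
find_lib() {
    local name="$1"; shift
    local found=1 dir
    # 1. Dynamic linker cache, when ldconfig is available
    if ldconfig -p 2>/dev/null | grep -F "$name"; then
        found=0
    fi
    # 2. Explicit scan of the directories passed by the caller
    for dir in "$@"; do
        if compgen -G "$dir/${name}"'*' > /dev/null; then
            ls -l "$dir/${name}"*
            found=0
        fi
    done
    return "$found"
}

# The standard CUDA / system library locations on L4T R32.4.4:
find_lib libnppicc /usr/local/cuda/lib64 /usr/lib/aarch64-linux-gnu \
    || echo "libnppicc not visible here"
```

Inside the dustynv/jetson-inference:r32.4.4 container this prints "libnppicc not visible here" for me, which matches both the runtime loader error and the CMake `NOTFOUND` result.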

Steps to Reproduce:

  1. Clone the repository: git clone --recursive https://github.com/dusty-nv/jetson-inference
  2. Navigate into the directory: cd jetson-inference
  3. Run the Docker script: docker/run.sh (Enter password, wait for container start)
  4. Inside the container, navigate to build/aarch64/bin.
  5. Attempt to run pre-compiled binary: video-viewer /dev/video0.
  6. Observe libnppicc.so.10: cannot open shared object file error.
  7. (Optional) Navigate back to /jetson-inference/.
  8. Clean build: rm -rf build.
  9. Create/enter build dir: mkdir build && cd build.
  10. Run CMake: cmake ../.
  11. Observe CUDA_nppicc_LIBRARY NOTFOUND error.

Troubleshooting Steps Already Taken:

  1. Confirmed that the host library /usr/lib/aarch64-linux-gnu/libnvinfer.so.7 exists and is intact (212 MB). (An earlier error, seen before the rebuild attempt, involved libnvinfer.so.7.)
  2. Tried removing the specific Docker image (docker rmi dustynv/jetson-inference:r32.4.4) and re-pulling via docker/run.sh. Pre-compiled video-viewer still failed with libnppicc.so.10 error.
  3. Tried a full Docker cleanup (docker rmi -f ..., docker system prune -a) followed by re-pulling the image via docker/run.sh. Attempting to build still resulted in the CMake CUDA_nppicc_LIBRARY NOTFOUND error.
  4. Verified that libnppicc.so* is missing inside the dustynv/jetson-inference:r32.4.4 container using ls -l /usr/local/cuda/lib64/libnppicc* and ls -l /usr/lib/aarch64-linux-gnu/libnppicc*. Both commands reported "No such file or directory".
  5. Launched the base L4T container (sudo docker run --rm -it --runtime nvidia nvcr.io/nvidia/l4t-base:r32.4.4 bash) and confirmed that libnppicc.so* is also missing in the same locations within the base image.
  6. Reinstalled nvidia-docker2 on the host (sudo apt update && sudo apt install --reinstall nvidia-docker2 && sudo systemctl restart docker) and re-tested the base L4T container; the library was still missing.
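One more host-side check that may be relevant: as I understand it, on JetPack 4.x the NVIDIA container runtime mounts CUDA libraries into l4t-base containers according to CSV manifests on the host, so if NPP is not listed there (or not installed on the host at all), it will also be absent inside the container. The sketch below checks both; the CSV directory path and the cuda-npp-10-2 package name are my assumptions for JetPack 4.4 / CUDA 10.2.

```shell
#!/usr/bin/env bash
# Host-side sketch (assumptions: JetPack 4.x CSV mount mechanism,
# CUDA 10.2 apt package naming). Run on the Jetson host, not in a container.
CSV_DIR=/etc/nvidia-container-runtime/host-files-for-container.d

# 1. Is NPP listed in any of the container-runtime mount manifests?
if [ -d "$CSV_DIR" ]; then
    grep -h npp "$CSV_DIR"/*.csv 2>/dev/null \
        || echo "NPP is not listed in any container-runtime CSV"
else
    echo "no container-runtime CSV directory at $CSV_DIR"
fi

# 2. Is NPP installed on the host in the first place?
if ldconfig -p 2>/dev/null | grep -q libnppicc; then
    echo "libnppicc is installed on the host"
else
    echo "libnppicc is missing on the host" \
         "(candidate fix, untested: sudo apt install cuda-npp-10-2)"
fi
```

If the host has the library but the CSV manifests do not mention it, reinstalling the CSV package (on JetPack 4.x I believe it is nvidia-container-csv-cuda, though I have not verified this) might be worth trying before blaming the base image.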

Question:

Since libnppicc appears to be missing even in the official l4t-base:r32.4.4 base container for my JetPack version (JP 4.4.1), could there be an issue with this specific base image tag on NGC? Or is there another configuration step for JetPack 4.4.1 / L4T R32.4.4, which I might have missed, that makes the NPP libraries available inside the NVIDIA container runtime?

I'd appreciate any insights or workarounds to get the CUDA NPP libraries accessible within the Docker container so I can build and run the jetson-inference examples.

Thanks!
