Unable to see GPU process PIDs inside the container #25754
Replies: 4 comments
-
Can you provide a reproducer that doesn't involve running an NVIDIA tool? It would help us narrow things down without knowing anything about their tooling.
-
This does not sound like a bug in Podman; I suggest you ask the NVIDIA Container Toolkit project about it. I have moved it to a discussion. (In theory it could be caused by something Podman is doing, but I don't see what that would be: you are part of the same PID namespace, so the program should see all of its processes that run there.)
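Not part of the original comment, but here is a hedged sketch of how one might verify the shared-namespace claim; "test" is the container name used in the reproduction steps below:

CTR_PID=$(sudo podman inspect --format '{{.State.Pid}}' test)   # host PID of the container's init process
sudo readlink /proc/"$CTR_PID"/ns/pid                           # PID namespace inode, host view
sudo podman exec test readlink /proc/self/ns/pid                # PID namespace inode, container view
# Identical inode values confirm that exec'd processes share the container's PID namespace.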
-
Okay, in NVIDIA's issue tracker I found they suggest using …
-
Is there any workaround for this? I am basically running …
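Not from this thread, but a commonly cited workaround for nvidia-smi hiding PIDs in containers is to share the host PID namespace, so the host-namespace PIDs the driver reports can be resolved inside the container. A hedged sketch, adapting the reproduction command below:

# Assumption: sharing the host PID namespace lets nvidia-smi match its PIDs.
sudo podman run --name test --pid=host --privileged \
    --device nvidia.com/gpu=all --replace -d \
    nvidia/cuda:12.1.0-cudnn8-devel-ubuntu22.04

Note that --pid=host removes PID-namespace isolation from the container, so treat it as a diagnostic stopgap rather than a general fix.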
-
Issue Description
When I run any GPU process inside my rootful Podman container, I can see that the GPU is being utilised, but the process PIDs are not visible in nvidia-smi.
Steps to reproduce the issue
sudo podman run --name test --privileged --cap-add net_admin --systemd always --device nvidia.com/gpu=all --device /dev/net/tun:/dev/net/tun --replace -d nvidia/cuda:12.1.0-cudnn8-devel-ubuntu22.04
sudo podman exec -it test bash
nvidia-smi
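A hedged way to make the mismatch concrete, using standard nvidia-smi query options (the exact output is not part of this report): inside the container, start any GPU workload, then compare the driver's process view with the container's own process table.

nvidia-smi --query-compute-apps=pid,process_name --format=csv   # PIDs as reported by the driver
ps -eo pid,comm                                                 # PIDs visible in the container's namespace
# If the driver reports host-namespace PIDs, they will not match anything in the ps output.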
Describe the results you received
nvidia-smi shows the GPU being utilised, but no process PIDs are listed.
Describe the results you expected
nvidia-smi should list the PIDs of the GPU processes running inside the container.
podman info output
Podman in a container
No
Privileged Or Rootless
Privileged
Upstream Latest Release
Yes
Additional environment details
Additional information