numa_node
#4563
Replies: 1 comment
-
Getting exactly the same behavior running in a container and as a native binary (DOCKER_INSTALL true/false).
-
Hello,
I need your help figuring out a GPU issue with this setup. LocalAI loads and runs, but it is extremely slow and does not use the GPU. I suspect this is related to PCI passthrough, since LocalAI runs inside a container on a Proxmox host. PCI passthrough is enabled, and LocalAI finds the GPU, but it apparently cannot use it.
I suspect the numa_node error is the cause, because the OS reports the NUMA node in
/sys/bus/pci/devices/0000\:03\:00.0/numa_node
but LocalAI apparently looks for it in a different location, /sys/class/drm/card0/device/numa_node
(see the debug log below). How can I configure this correctly?
Has anyone encountered a similar situation?
Any ideas on how to resolve it?
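(Not part of the original post: a minimal shell sketch to compare what the kernel exposes at the two sysfs locations mentioned above. The PCI address 0000:03:00.0 and card0 are taken from the report; adjust them for your hardware. Note that a value of -1 means the kernel has no NUMA affinity recorded for that device, which is common in virtualized/passthrough setups.)

```shell
# Compare the NUMA node value at both sysfs locations.
for p in /sys/bus/pci/devices/0000:03:00.0/numa_node \
         /sys/class/drm/card0/device/numa_node; do
  if [ -r "$p" ]; then
    printf '%s -> %s\n' "$p" "$(cat "$p")"
  else
    printf '%s -> missing\n' "$p"
  fi
done
```

If the second path is missing inside the container, that would be consistent with the container not seeing the DRM device node even though the PCI device is passed through.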
The client container reports access to the GPU. nvidia-smi finds the GPU, but reports 0% GPU usage and no processes:
Running local-ai version v2.24.2
Native binary installed from the shell script (no Docker)
Debug outputs the following:
When loading a model, it indicates that CUDA is not loaded:
Thanks for your help!