Why do I get an error when I start multiple GPU-resource pods concurrently using vcuda?
In vcuda's loader.c, I added ferror to print the errno-related error message, and this is what I see:
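For reference, the errno printing I added looks roughly like the sketch below. This is not the actual loader.c code; the path and function name are illustrative assumptions, it only shows the idea of surfacing the underlying file-system error next to the loader error.

```c
/* Minimal sketch (not the real vcuda loader.c): report why loading
 * libcuda.so failed, including the errno-based reason. The path and
 * the function name are assumptions for illustration only. */
#include <dlfcn.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

static void *load_cuda_library(const char *path) {
  void *handle = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
  if (handle == NULL) {
    /* dlerror() gives the dynamic loader's message; errno/strerror shows
     * the underlying cause (e.g. ENOENT if the file is not there yet). */
    fprintf(stderr, "dlopen(%s) failed: %s (errno=%d: %s)\n",
            path, dlerror(), errno, strerror(errno));
  }
  return handle;
}

int main(void) {
  load_cuda_library("/usr/local/nvidia/lib64/libcuda.so.1");
  return 0;
}
```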

But when I start the pods sequentially, the problem does not occur. So I suspect it is caused by a race between kubelet starting the container and gpu-manager placing the libcuda.so file.
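If that race is indeed the cause, one way I imagine checking or working around it would be to wait briefly for the library to appear before giving up. This is only a sketch of the idea under that assumption; the path, timeout, and function name are made up for illustration and are not part of gpu-manager or vcuda.

```c
/* Sketch of a possible workaround for the suspected race: retry for a
 * short time in case gpu-manager has not placed libcuda.so yet.
 * Path, retry count, and function name are illustrative assumptions. */
#include <dlfcn.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void *load_cuda_with_retry(const char *path, int max_attempts) {
  struct stat st;
  for (int attempt = 0; attempt < max_attempts; ++attempt) {
    /* Only try dlopen once the file exists. */
    if (stat(path, &st) == 0) {
      void *handle = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
      if (handle != NULL) {
        return handle;
      }
    }
    /* Not there (or not loadable) yet: wait a moment and retry. */
    sleep(1);
  }
  fprintf(stderr, "libcuda.so still unavailable after %d attempts: %s\n",
          max_attempts, path);
  return NULL;
}

int main(void) {
  load_cuda_with_retry("/usr/local/nvidia/lib64/libcuda.so.1", 10);
  return 0;
}
```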