Feature description:
Support docker-swarm (with GPU support) out-of-the-box.
Problem and motivation:
As described here, it is currently not possible to run ml-hub with GPU support across multiple machines (where each machine may have one or more GPU cards). Since building a Kubernetes cluster with GPU support and management is not easy (and I'm not familiar with Kubernetes), a more lightweight solution such as docker-swarm might support this more seamlessly (via nvidia-docker). See the sketch below for what that could look like.
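As a rough illustration only (not a proposed implementation), the following minimal sketch uses the Docker SDK for Python to schedule a workspace container as a swarm service that reserves one GPU. It assumes each swarm node already advertises its GPUs as "NVIDIA-GPU" node-generic-resources in its daemon configuration and uses the NVIDIA runtime; the image name and service name here are placeholders, not something ml-hub ships today.

```python
# Sketch: request one GPU for a swarm service via docker-py generic resources.
# Assumes nodes advertise "NVIDIA-GPU" as a node-level generic resource.
import docker
from docker.types import Resources

client = docker.from_env()

service = client.services.create(
    image="mltooling/ml-workspace:latest",  # placeholder workspace image
    name="gpu-workspace-demo",              # hypothetical service name
    # Ask the swarm scheduler to place the task on a node that still has
    # one unreserved "NVIDIA-GPU" generic resource available.
    resources=Resources(generic_resources={"NVIDIA-GPU": 1}),
)
print(service.id)
```

If something along these lines works for scheduling, ml-hub could presumably create such services per user instead of plain containers, which is the "out-of-the-box" behavior this request is asking about.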
Is this something you're interested in working on?
Yes