This repository was archived by the owner on Nov 16, 2023. It is now read-only.
`edge_k8s_gpu_sharing/kubernetes_gpu_sharing_edge.md` — 1 addition, 2 deletions

```diff
@@ -48,7 +48,7 @@ in a deployment .yaml these entries would request an allocation of one gpu device
   nvidia.com/gpu: 1
   ...
 
-To damonstrate this, let's deploy one of our previous models from [machine-learning-notebooks/deploying-on-k8s](../machine-learning-notebooks/deploying-on-k8s),
+To damonstrate this, let's deploy one of our previous models from [machine-learning-notebooks/deploying-on-k8s](../machine-learning-notebooks/deploying-on-k8s/Readme.md),
 you will need to run this notebook to create the container image: [machine-learning-notebooks/deploying-on-k8s/production-deploy-to-k8s-gpu.ipynb](../machine-learning-notebooks/deploying-on-k8s/production-deploy-to-k8s-gpu.ipynb).
 
 `deploy_infer.yaml` will look like this:
@@ -366,5 +366,4 @@ To clean the environment from what we created, we need to delete the deployments
```
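The hunk above only changes the paragraph introducing `deploy_infer.yaml`; the manifest itself is not reproduced in this diff. As an illustrative sketch of the `nvidia.com/gpu: 1` entries it refers to (the deployment name, labels, and image are assumptions, not taken from the diff), a deployment requesting one GPU device looks like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: infer-deployment            # hypothetical name, not shown in the diff
spec:
  replicas: 1
  selector:
    matchLabels:
      app: infer
  template:
    metadata:
      labels:
        app: infer
    spec:
      containers:
      - name: infer
        image: myregistry.azurecr.io/infer:latest   # placeholder image
        resources:
          limits:
            nvidia.com/gpu: 1       # request one GPU device, as in the diff excerpt
```

Note that `nvidia.com/gpu` is an extended resource exposed by the NVIDIA device plugin, so it is requested under `resources.limits`.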
`edge_k8s_gpu_sharing/kubernetes_gpu_sharing_one_node.md` — 19 additions, 5 deletions

```diff
@@ -4,10 +4,19 @@ This demo shows how to deploy multiple gpu-requiring workloads on a cluster with
 
 ## Pre-requisites
 
-Please follow the instructions in [Deploying model to Kubernetes](../deploying-on-k8s/README.md)
+To create a one-node gpu-capable Kubernetes cluster, you need a gpu-capable VM. During creation of the
+VMs, you need to specify a GPU-capable VM Size(either at Portal, or in your deployment template).
+
+Please follow the instructions in [Deploying model to Kubernetes](../machine-learning-notebooks/deploying-on-k8s/Readme.md)
 to make sure you have a GPU-capable node on your vm.
 
-Please see [NVIDIA webpage](https://docs.nvidia.com/datacenter/kubernetes/kubernetes-upstream/index.html#kubernetes-run-a-workload) if you have any problems. You should be able to run nvidia-smi:
+If you need to install docker, follow the instructions at [Nvidia cloud native containers](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker).
+
+And if you need to install the drivers, see [Azure VM driver setup](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/n-series-driver-setup) or related. You might have to upgrade your system and/or drivers to work.
+
+Please see [NVIDIA webpage](https://docs.nvidia.com/datacenter/kubernetes/kubernetes-upstream/index.html#kubernetes-run-a-workload) if you have any problems.
+
+Before moving forward, you should be able to run nvidia-smi:
 
     $ sudo docker run --rm --runtime=nvidia nvidia/cuda nvidia-smi
```
```diff
-Once you installed `microk8s` as in our demo [Deploying model to Kubernetes](../machine-learning-notebooks/deploying-on-k8s/README.md),
+Once you installed `microk8s` as in our demo [Deploying model to Kubernetes](../machine-learning-notebooks/deploying-on-k8s/Readme.md),
 you should also be able to see `nvidia-smi` from within a pod:
 
     $ kubectl exec -it gpu-pod nvidia-smi
@@ -67,7 +76,7 @@ in a deployment .yaml these entries would request an allocation of one gpu device
   nvidia.com/gpu: 1
   ...
 
-To damonstrate this, let's deploy one of our previous models from [machine-learning-notebooks/deploying-on-k8s](../machine-learning-notebooks/deploying-on-k8s),
+To damonstrate this, let's deploy one of our previous models from [machine-learning-notebooks/deploying-on-k8s](../machine-learning-notebooks/deploying-on-k8s/Readme.md),
 you will need to run this notebook to create the container image: [machine-learning-notebooks/deploying-on-k8s/production-deploy-to-k8s-gpu.ipynb](../machine-learning-notebooks/deploying-on-k8s/production-deploy-to-k8s-gpu.ipynb).
```
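The `kubectl exec -it gpu-pod nvidia-smi` check assumes a pod named `gpu-pod` is already running with a GPU allocated to it; its manifest does not appear in this diff. A minimal sketch of such a pod (the image and command are assumptions, chosen so the container stays alive and ships `nvidia-smi`) would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda
    image: nvidia/cuda:11.0-base    # assumed: any CUDA image that includes nvidia-smi
    command: ["sleep", "infinity"]  # keep the pod running so we can exec into it
    resources:
      limits:
        nvidia.com/gpu: 1           # one GPU from the node's allocatable devices
```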