@@ -125,7 +125,7 @@ Once you have configured the options above on all the GPU nodes in your
 cluster, you can enable GPU support by deploying the following Daemonset:
 
 ```shell
-$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.15.0-rc.1/nvidia-device-plugin.yml
+$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.15.0-rc.2/nvidia-device-plugin.yml
 ```
 
 **Note:** This is a simple static daemonset meant to demonstrate the basic
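
Once the daemonset above is deployed, pods can consume GPUs through the `nvidia.com/gpu` extended resource that the plugin advertises. A minimal sketch of such a pod manifest (the pod name and container image are illustrative, not from this diff):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test            # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04   # illustrative image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # request one GPU via the plugin's advertised resource
```

Scheduling will fail with `Insufficient nvidia.com/gpu` if the plugin is not running on any node, which makes this a quick smoke test for the deployment.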
@@ -560,11 +560,11 @@ $ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
 $ helm repo update
 ```
 
-Then verify that the latest release (`v0.15.0-rc.1`) of the plugin is available:
+Then verify that the latest release (`v0.15.0-rc.2`) of the plugin is available:
 ```
 $ helm search repo nvdp --devel
 NAME                       CHART VERSION  APP VERSION  DESCRIPTION
-nvdp/nvidia-device-plugin  0.15.0-rc.1    0.15.0-rc.1  A Helm chart for ...
+nvdp/nvidia-device-plugin  0.15.0-rc.2    0.15.0-rc.2  A Helm chart for ...
 ```
 
 Once this repo is updated, you can begin installing packages from it to deploy
@@ -575,7 +575,7 @@ The most basic installation command without any options is then:
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
   --namespace nvidia-device-plugin \
   --create-namespace \
-  --version 0.15.0-rc.1
+  --version 0.15.0-rc.2
 ```
 
 **Note:** You only need to pass the `--devel` flag to `helm search repo`
@@ -584,7 +584,7 @@ version (e.g. `<version>-rc.1`). Full releases will be listed without this.
 
 ### Configuring the device plugin's `helm` chart
 
-The `helm` chart for the latest release of the plugin (`v0.15.0-rc.1`) includes
+The `helm` chart for the latest release of the plugin (`v0.15.0-rc.2`) includes
 a number of customizable values.
 
 Prior to `v0.12.0` the most commonly used values were those that had direct
@@ -594,7 +594,7 @@ case of the original values is then to override an option from the `ConfigMap`
 if desired. Both methods are discussed in more detail below.
 
 The full set of values that can be set is found
-[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0-rc.1/deployments/helm/nvidia-device-plugin/values.yaml).
+[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0-rc.2/deployments/helm/nvidia-device-plugin/values.yaml).
 
 #### Passing configuration to the plugin via a `ConfigMap`.
 
@@ -633,7 +633,7 @@
 And deploy the device plugin via helm (pointing it at this config file and giving it a name):
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.15.0-rc.1 \
+    --version=0.15.0-rc.2 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set-file config.map.config=/tmp/dp-example-config0.yaml
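
The `/tmp/dp-example-config0.yaml` file referenced in the hunk above uses the plugin's `v1` config-file format. A minimal sketch, with illustrative flag values (the exact set of flags supported is documented in the repository's `values.yaml` and config docs):

```yaml
version: v1
flags:
  migStrategy: "none"        # also supported: "single", "mixed"
  failOnInitError: true
  plugin:
    passDeviceSpecs: false
    deviceListStrategy: envvar
    deviceIDStrategy: uuid
```

Passing the file via `--set-file config.map.config=...` embeds its contents in a `ConfigMap` that the daemonset mounts and reads at startup.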
@@ -655,7 +655,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
 ```
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.15.0-rc.1 \
+    --version=0.15.0-rc.2 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set config.name=nvidia-plugin-configs
@@ -683,7 +683,7 @@
 And redeploy the device plugin via helm (pointing it at both configs with a specified default).
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.15.0-rc.1 \
+    --version=0.15.0-rc.2 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set config.default=config0 \
@@ -702,7 +702,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
 ```
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.15.0-rc.1 \
+    --version=0.15.0-rc.2 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set config.default=config0 \
@@ -785,7 +785,7 @@ chart values that are commonly overridden are:
 ```
 
 Please take a look in the
-[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0-rc.1/deployments/helm/nvidia-device-plugin/values.yaml)
+[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0-rc.2/deployments/helm/nvidia-device-plugin/values.yaml)
 file to see the full set of overridable parameters for the device plugin.
 
 Examples of setting these options include:
@@ -794,7 +794,7 @@ Enabling compatibility with the `CPUManager` and running with a request for
 100ms of CPU time and a limit of 512MB of memory.
 ```shell
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.15.0-rc.1 \
+    --version=0.15.0-rc.2 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set compatWithCPUManager=true \
@@ -805,7 +805,7 @@ $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
 Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`
 ```shell
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.15.0-rc.1 \
+    --version=0.15.0-rc.2 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set compatWithCPUManager=true \
@@ -824,7 +824,7 @@ Discovery to perform this labeling.
 To enable it, simply set `gfd.enabled=true` during helm install.
 ```
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.15.0-rc.1 \
+    --namespace nvidia-device-plugin \
+    --version=0.15.0-rc.2 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set gfd.enabled=true
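
With `gfd.enabled=true`, GPU Feature Discovery attaches GPU-describing labels to each node, which workloads can then use in node selectors. The label keys below are real GFD labels; the values shown are illustrative for a hypothetical node:

```
nvidia.com/gpu.count=2
nvidia.com/gpu.product=Tesla-V100-SXM2-16GB
nvidia.com/gpu.memory=16384
```

Labels on a node can be inspected with `kubectl get node <node-name> --show-labels`.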
@@ -930,31 +930,31 @@ Using the default values for the flags:
 $ helm upgrade -i nvdp \
     --namespace nvidia-device-plugin \
     --create-namespace \
-    https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.15.0-rc.1.tgz
+    https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.15.0-rc.2.tgz
 ```
 -->
 ## Building and Running Locally
 
 The next sections are focused on building the device plugin locally and running it.
 It is intended purely for development and testing, and not required by most users.
-It assumes you are pinning to the latest release tag (i.e. `v0.15.0-rc.1`), but can
+It assumes you are pinning to the latest release tag (i.e. `v0.15.0-rc.2`), but can
 easily be modified to work with any available tag or branch.
 
 ### With Docker
 
 #### Build
 Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin):
 ```shell
-$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.1
-$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.1 nvcr.io/nvidia/k8s-device-plugin:devel
+$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.2
+$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.2 nvcr.io/nvidia/k8s-device-plugin:devel
 ```
 
 Option 2, build without cloning the repository:
 ```shell
 $ docker build \
     -t nvcr.io/nvidia/k8s-device-plugin:devel \
     -f deployments/container/Dockerfile.ubuntu \
-    https://github.com/NVIDIA/k8s-device-plugin.git#v0.15.0-rc.1
+    https://github.com/NVIDIA/k8s-device-plugin.git#v0.15.0-rc.2
 ```
 
 Option 3, if you want to modify the code: