# Running GPU Workload in a Kyma Cluster

> [!Note]
> This sample is based on [NVIDIA GPU Operator Installation Guide for Gardener](https://github.com/gardener/gardener-ai-conformance/blob/main/v1.33/NVIDIA-GPU-Operator.md).

## Prerequisites

- Helm 3.x installed. For more information, see the [Kubernetes](https://github.com/SAP-samples/kyma-runtime-samples/tree/main/prerequisites#kubernetes) section.
- kubectl installed and configured to access your Kyma cluster. For more information, see the [Kubernetes](https://github.com/SAP-samples/kyma-runtime-samples/tree/main/prerequisites#kubernetes) section.
- An SAP BTP, Kyma runtime instance.

## Procedure

### Setting Up a GPU Worker Pool

Follow these steps to set up a worker pool with GPU nodes in your Kyma cluster. For more information, see [Additional Worker Node Pools](https://help.sap.com/docs/btp/sap-business-technology-platform/provisioning-and-update-parameters-in-kyma-environment?version=Cloud#additional-worker-node-pools). A sketch of the corresponding provisioning parameters follows the list.

1. Go to the SAP BTP cockpit and update your Kyma instance by adding a new worker pool named `gpu`.
2. Add nodes with GPU support, for example, the `g6.xlarge` machine type.
3. Set auto-scaling min nodes to `0` and max nodes to a desired number, for example, `2`. This way, when no GPU workloads are running, the cluster scales down to zero GPU nodes, saving costs.

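The `additionalWorkerNodePools` field names below follow the linked documentation, but treat the snippet as an assumption and verify it against your plan version; additional fields, such as HA zone settings, may be required.

```bash
# Sketch only: additional worker pool parameters for the Kyma instance.
# Save them to a file for review, then paste the JSON into the Update Instance
# dialog in the SAP BTP cockpit.
cat <<'EOF' > gpu-worker-pool-parameters.json
{
  "additionalWorkerNodePools": [
    {
      "name": "gpu",
      "machineType": "g6.xlarge",
      "autoScalerMin": 0,
      "autoScalerMax": 2
    }
  ]
}
EOF
```
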
### Installation

1. Add the NVIDIA Helm repository.

   ```bash
   # Add the NVIDIA Helm repository
   helm repo add nvidia https://helm.ngc.nvidia.com/nvidia

   # Update repository information
   helm repo update

   # Verify repository is added
   helm search repo nvidia/gpu-operator
   ```

2. Install the GPU operator with the Garden Linux configuration.

   The key to a successful installation on Garden Linux is the specialized values file that handles Garden Linux-specific requirements.

   ```bash
   # Install GPU Operator with Garden Linux optimized values
   helm upgrade --install --create-namespace -n gpu-operator gpu-operator nvidia/gpu-operator --values \
     https://raw.githubusercontent.com/SAP-samples/kyma-runtime-samples/refs/heads/main/gpu/gpu-operator-values.yaml

   # Wait for installation to complete
   helm status gpu-operator -n gpu-operator
   ```

> [!Note]
> The [gpu-operator-values.yaml](gpu-operator-values.yaml) file is configured for driver version 570, which is compatible with current Garden Linux kernel versions in Kyma clusters. If you need a different driver version, download the file, adjust the `driver.version` field locally, and install from the modified copy, as sketched after this note.

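The following is a minimal sketch of that workflow; the local file name is arbitrary, and the version you set in `driver.version` must match a driver build available for your Garden Linux kernel.

```bash
# Download the values file so you can change driver.version locally
curl -fsSL -o gpu-operator-values.yaml \
  https://raw.githubusercontent.com/SAP-samples/kyma-runtime-samples/refs/heads/main/gpu/gpu-operator-values.yaml

# Edit the driver.version field in the local copy, then install from it
helm upgrade --install --create-namespace -n gpu-operator gpu-operator nvidia/gpu-operator \
  --values ./gpu-operator-values.yaml
```
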
3. The GPU operator deploys several components as DaemonSets and Deployments. Monitor the installation.

   ```bash
   # Watch all pods in gpu-operator namespace
   kubectl get pods -n gpu-operator -w

   # Check deployment status
   kubectl get all -n gpu-operator
   ```

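   If you prefer targeted checks, the sketch below lists the operand DaemonSets and waits for the operator Deployment itself; the Deployment name assumes the Helm release name `gpu-operator` used above.

   ```bash
   # List the DaemonSets created by the operator (device plugin, driver, toolkit, validators, ...)
   kubectl get daemonsets -n gpu-operator

   # Wait for the operator Deployment to report Available
   kubectl wait deployment/gpu-operator -n gpu-operator --for=condition=Available --timeout=300s
   ```
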
### Installation Verification

1. Deploy a simple GPU test workload.

   ```bash
   # Create test GPU workload
   cat <<EOF | kubectl apply -f -
   apiVersion: v1
   kind: Pod
   metadata:
     name: gpu-test
   spec:
     containers:
     - name: gpu-test
       image: nvcr.io/nvidia/cuda:13.0.1-runtime-ubuntu24.04
       command: ["nvidia-smi"]
       resources:
         limits:
           nvidia.com/gpu: 1
     restartPolicy: Never
   EOF
   ```

   If your cluster does not have GPU resources available, the Pod remains in the `Pending` state for a while until a GPU node is provisioned.

   Once the node is up, the NVIDIA GPU Operator deploys the device plugin DaemonSet, which then advertises `nvidia.com/gpu` resources on that node.

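   To confirm the advertisement, you can list each node's allocatable GPUs. This is a sketch, assuming the standard `nvidia.com/gpu` resource name; the backslash escapes the dots inside the resource key.

   ```bash
   # GPU nodes should show a non-empty value in the GPU column
   kubectl get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
   ```
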
2. Check the autoscaler config to see if GPU nodes are being considered.

   ```bash
   kubectl get configmap -n kube-system cluster-autoscaler-status -o yaml
   ```

   This is an example section from the ConfigMap showing a GPU worker pool with one node started:

   ```yaml
   - name: shoot--kyma--c-1f226cf-gpu-z1
     health:
       status: Healthy
       nodeCounts:
         registered:
           total: 1
           ready: 1
           notStarted: 0
         longUnregistered: 0
         unregistered: 0
       cloudProviderTarget: 1
       minSize: 0
       maxSize: 3
       lastProbeTime: "2025-12-11T14:40:54.65491129Z"
       lastTransitionTime: "2025-12-11T02:13:08.790764467Z"
     scaleUp:
       status: NoActivity
       lastProbeTime: "2025-12-11T14:40:54.65491129Z"
       lastTransitionTime: "2025-12-11T12:12:13.016415154Z"
     scaleDown:
       status: NoCandidates
       lastProbeTime: "2025-12-11T14:40:54.65491129Z"
       lastTransitionTime: "2025-12-11T13:10:13.472558018Z"
   ```

3. Observe this ConfigMap to see whether the GPU worker pool is recognized and nodes are provisioned as needed. When the GPU node is ready, the `nvidia.com/gpu` resource should be available for scheduling, and the test Pod should complete successfully.

   You can run these commands to monitor the test Pod, check logs, and clean up afterward:

   ```bash
   # Wait for pod to complete and check output
   kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/gpu-test --timeout=300s
   kubectl logs gpu-test

   # Clean up test pod
   kubectl delete pod gpu-test
   ```

### A More Spectacular GPU Demo: AI Image Generation

For a more impressive demonstration that showcases real GPU acceleration, follow these steps:

1. Deploy an AI image generation workload using Fooocus and the Stable Diffusion XL model.

   ```bash
   kubectl apply -f https://raw.githubusercontent.com/SAP-samples/kyma-runtime-samples/main/gpu/fooocus.yaml
   ```

   The web UI is exposed using an APIRule, and you can access it in your browser using your cluster domain and the `fooocus` subdomain, for example, `https://fooocus.xxxxxxxx.kyma.ondemand.com/`.

   

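   The exact namespace and resource names depend on [fooocus.yaml](fooocus.yaml), so the sketch below simply lists all APIRules and then describes the one created for Fooocus to read its configured host; replace the placeholders with the values from the listing.

   ```bash
   # Find the APIRule created for Fooocus
   kubectl get apirules --all-namespaces

   # Read its configured host (replace <name> and <namespace> with the listed values)
   kubectl describe apirule <name> -n <namespace>
   ```
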
2. To delete the demo app, run:

   ```bash
   kubectl delete -f https://raw.githubusercontent.com/SAP-samples/kyma-runtime-samples/main/gpu/fooocus.yaml
   ```

### Cleanup

If you delete all the Pods that require a GPU, your worker pool should scale down to zero nodes again, saving costs. You can check whether the cluster autoscaler recognizes that no GPU nodes are needed by checking the `cluster-autoscaler-status` ConfigMap.

```bash
kubectl get configmap -n kube-system cluster-autoscaler-status -o yaml
```

You should see candidates for scaling down in the GPU worker pool section. Bear in mind that scaling down takes 60 minutes (this is the Kyma cluster default setting).
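
To watch the scale-down directly, you can list the GPU nodes by their worker pool label. The sketch below assumes the Gardener `worker.gardener.cloud/pool` node label and the pool name `gpu` used earlier; adjust both if your setup differs.

```bash
# Watch the GPU worker pool shrink back to zero nodes
kubectl get nodes -l worker.gardener.cloud/pool=gpu -w
```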