doc/source/cluster/kubernetes/k8s-ecosystem/kai-scheduler.md
This guide demonstrates how to use KAI Scheduler for setting up hierarchical queues.
## KAI Scheduler
[KAI Scheduler](https://github.com/NVIDIA/KAI-Scheduler) is a high-performance, scalable Kubernetes scheduler built for AI/ML workloads. Designed to orchestrate GPU clusters at massive scale, KAI optimizes GPU allocation and supports the full AI lifecycle - from interactive development to large distributed training and inference. Some of the key features are:
- **Bin packing and spread scheduling**: Optimize node usage either by minimizing fragmentation with bin packing or by increasing resiliency and load balancing with spread scheduling.
- **GPU sharing**: Allow KAI to consolidate multiple Ray workloads from across teams on the same GPU, letting your organization fit more work onto your existing hardware and reducing idle GPU time.
- **Workload autoscaling**: Scale Ray replicas or workers within min/max limits while respecting gang constraints.
- **Cluster autoscaling**: Compatible with dynamic cloud infrastructures, including auto-scalers like Karpenter.
- **Workload priorities**: Prioritize Ray workloads effectively within queues.
For more details and key features, see [the documentation](https://github.com/NVIDIA/KAI-Scheduler).
### Core components
1. **PodGroups**: PodGroups are atomic units for scheduling and represent one or more interdependent pods that the scheduler executes as a single unit, also known as gang scheduling. They're vital for distributed workloads. KAI Scheduler includes a **PodGrouper** that handles gang scheduling automatically.
**How PodGrouper works:**
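PodGrouper watches the pods that KubeRay creates, follows their owner references up to the owning RayCluster, and creates a matching PodGroup that KAI then places as one unit. As a rough, hypothetical sketch of what such a generated PodGroup can look like (the API group and field names below are assumptions, not taken from this guide; inspect a real one with `kubectl get podgroups -o yaml`):

```yaml
# Hypothetical PodGroup as PodGrouper might generate it for a RayCluster.
# apiVersion, kind, and field names are assumptions; verify in your cluster.
apiVersion: scheduling.run.ai/v2alpha2
kind: PodGroup
metadata:
  name: raycluster-example-pg    # name derived from the owning RayCluster
spec:
  minMember: 3                   # head + 2 workers must be placed together
  queue: team-a                  # queue taken from the workload's labels
```

Because PodGrouper creates the PodGroup automatically, you normally only label the workload with a queue and set the scheduler name; the gang semantics follow from `minMember`.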
You can arrange queues hierarchically for organizations with multiple teams.
* Kubernetes cluster with GPU nodes
* NVIDIA GPU Operator
* kubectl configured to access your cluster
* Install KAI Scheduler with GPU-sharing enabled. Choose the desired release version from [KAI Scheduler releases](https://github.com/NVIDIA/KAI-Scheduler/releases) and replace the `<KAI_SCHEDULER_VERSION>` in the following command. It's recommended to choose v0.10.0 or a higher version.
```bash
# Install KAI Scheduler with GPU sharing enabled.
# The command follows the KAI Scheduler README; verify the chart location and
# value names against the release you choose.
helm upgrade -i kai-scheduler oci://ghcr.io/nvidia/kai-scheduler/kai-scheduler \
  -n kai-scheduler --create-namespace \
  --version <KAI_SCHEDULER_VERSION> \
  --set "global.gpuSharing=true"
```
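A minimal sketch of a two-level queue hierarchy in the style of the KAI Scheduler quickstart; the queue names are placeholders, and the exact `spec` fields are assumptions to verify against your KAI Scheduler version:

```yaml
# Parent queue for a department, with one team queue underneath it.
# Field names and the apiVersion are assumptions based on KAI's docs.
apiVersion: scheduling.run.ai/v2
kind: Queue
metadata:
  name: department-a             # parent queue
spec:
  resources:
    gpu:
      quota: -1                  # -1: no quota enforced
      limit: -1
      overQuotaWeight: 1
---
apiVersion: scheduling.run.ai/v2
kind: Queue
metadata:
  name: team-a                   # leaf queue that workloads reference
spec:
  parentQueue: department-a
  resources:
    gpu:
      quota: -1
      limit: -1
      overQuotaWeight: 1
```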
Note: To make this demo easier to follow, these queue definitions are combined with the RayCluster example in the next step. You can use the single combined YAML file and apply both queues and workloads at once.
## Step 3: Gang scheduling with KAI Scheduler
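For gang scheduling, the Ray pods need two things: the KAI scheduler name and a queue label. The label key and scheduler name below follow the KAI Scheduler quickstart; the queue, cluster, and image names are illustrative placeholders rather than the guide's full example:

```yaml
# Minimal RayCluster sketch: schedulerName plus a queue label are what let
# KAI's PodGrouper gang-schedule the head and worker pods as one unit.
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: raycluster-gang-demo          # placeholder name
spec:
  headGroupSpec:
    rayStartParams: {}
    template:
      metadata:
        labels:
          kai.scheduler/queue: team-a # leaf queue from the previous step
      spec:
        schedulerName: kai-scheduler  # hand pod placement over to KAI
        containers:
        - name: ray-head
          image: rayproject/ray:2.46.0  # illustrative image tag
  workerGroupSpecs:
  - groupName: workers
    replicas: 2
    rayStartParams: {}
    template:
      metadata:
        labels:
          kai.scheduler/queue: team-a
      spec:
        schedulerName: kai-scheduler
        containers:
        - name: ray-worker
          image: rayproject/ray:2.46.0
```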
The KAI Scheduler deployment comes with several predefined priority classes:
- build (100) - use for build/interactive workloads (non-preemptible)
- inference (125) - use for inference workloads (non-preemptible)
You can submit the same workload as before with a specific priority. Modify the preceding example into a build class workload:
```yaml
labels:
  # The label key below is an assumption based on KAI's priority-class
  # conventions; check the KAI Scheduler priority docs for the exact key.
  priorityClassName: build
```
See the [documentation](https://github.com/NVIDIA/KAI-Scheduler/tree/main/docs/priority) for more details on priority classes.
## Step 4: Submitting Ray workers with GPU sharing
This example creates two workers that share a single GPU, 0.5 each with time-slicing, within a RayCluster. See the [YAML file](https://github.com/ray-project/kuberay/tree/master/ray-operator/config/samples/ray-cluster.kai-gpu-sharing.yaml):
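The linked sample has the full manifest; the fragment below is only a sketch of the GPU-sharing part. The `gpu-fraction` annotation follows KAI's GPU-sharing docs, while the group, queue, and image names are illustrative:

```yaml
# Worker group sketch: two replicas each claim half a GPU through KAI's
# gpu-fraction annotation instead of a whole-GPU resource request.
workerGroupSpecs:
- groupName: shared-gpu-workers
  replicas: 2
  rayStartParams: {}
  template:
    metadata:
      annotations:
        gpu-fraction: "0.5"      # each pod receives a 0.5 share of one GPU
      labels:
        kai.scheduler/queue: team-a
    spec:
      schedulerName: kai-scheduler
      containers:
      - name: ray-worker
        image: rayproject/ray:2.46.0   # illustrative image tag
```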
Note: GPU sharing with time slicing in this example occurs only at the Kubernetes layer, allowing multiple pods to share a single GPU device. The scheduler doesn't enforce memory isolation, so applications must manage their own usage to prevent interference. For other GPU sharing approaches, such as MPS, see [the KAI documentation](https://github.com/NVIDIA/KAI-Scheduler/tree/main/docs/gpu-sharing).