Commit dbb3909

[Docs][Kuberay] Update version to 1.5.0 (#58452)

Signed-off-by: Future-Outlier <[email protected]>
1 parent aad044a

24 files changed (+72, -72 lines)

doc/source/cluster/kubernetes/examples/mobilenet-rayservice.md
1 addition & 1 deletion

````diff
@@ -19,7 +19,7 @@ Note that the YAML file in this example uses `serveConfigV2`. You need KubeRay v
 
 ```sh
 # Create a RayService
-kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.4.2/ray-operator/config/samples/ray-service.mobilenet.yaml
+kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.5.0/ray-operator/config/samples/ray-service.mobilenet.yaml
 ```
 
 * The [mobilenet.py](https://github.com/ray-project/serve_config_examples/blob/master/mobilenet/mobilenet.py) file needs `tensorflow` as a dependency. Hence, the YAML file uses `rayproject/ray-ml` image instead of `rayproject/ray` image.
````
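To confirm the bumped manifest deploys cleanly, a minimal smoke test; it assumes the KubeRay CRDs are installed and that the sample keeps the metadata name `rayservice-mobilenet` (check the YAML if unsure):

```sh
# Apply the v1.5.0 sample and watch the RayService converge.
kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.5.0/ray-operator/config/samples/ray-service.mobilenet.yaml
kubectl get rayservice rayservice-mobilenet   # wait until the service reports Running/Ready
kubectl get pods                              # head and worker pods should reach Running
```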

doc/source/cluster/kubernetes/examples/rayjob-batch-inference-example.md
2 additions & 2 deletions

````diff
@@ -37,12 +37,12 @@ The KubeRay operator Pod must be on the CPU node if you have set up the taint fo
 
 ## Step 2: Submit the RayJob
 
-Create the RayJob custom resource with [ray-job.batch-inference.yaml](https://github.com/ray-project/kuberay/blob/v1.4.2/ray-operator/config/samples/ray-job.batch-inference.yaml).
+Create the RayJob custom resource with [ray-job.batch-inference.yaml](https://github.com/ray-project/kuberay/blob/v1.5.0/ray-operator/config/samples/ray-job.batch-inference.yaml).
 
 Download the file with `curl`:
 
 ```bash
-curl -LO https://raw.githubusercontent.com/ray-project/kuberay/v1.4.2/ray-operator/config/samples/ray-job.batch-inference.yaml
+curl -LO https://raw.githubusercontent.com/ray-project/kuberay/v1.5.0/ray-operator/config/samples/ray-job.batch-inference.yaml
 ```
 
 Note that the `RayJob` spec contains a spec for the `RayCluster`. This tutorial uses a single-node cluster with 4 GPUs. For production use cases, use a multi-node cluster where the head node doesn't have GPUs, so that Ray can automatically schedule GPU workloads on worker nodes which won't interfere with critical Ray processes on the head node.
````
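After the `curl` download, a plausible submit-and-watch sequence (a sketch only; the RayJob name must match `metadata.name` in the downloaded YAML):

```sh
kubectl apply -f ray-job.batch-inference.yaml
# Poll the custom resource until the job completes; the exact name
# comes from the manifest's metadata.name field.
kubectl get rayjob
kubectl describe rayjob <name-from-manifest>   # placeholder name
```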

doc/source/cluster/kubernetes/getting-started/kuberay-operator-installation.md
3 additions & 3 deletions

````diff
@@ -17,15 +17,15 @@ kind create cluster --image=kindest/node:v1.26.0
 ```sh
 helm repo add kuberay https://ray-project.github.io/kuberay-helm/
 helm repo update
-# Install both CRDs and KubeRay operator v1.4.2.
-helm install kuberay-operator kuberay/kuberay-operator --version 1.4.2
+# Install both CRDs and KubeRay operator v1.5.0.
+helm install kuberay-operator kuberay/kuberay-operator --version 1.5.0
 ```
 
 ### Method 2: Kustomize
 
 ```sh
 # Install CRD and KubeRay operator.
-kubectl create -k "github.com/ray-project/kuberay/ray-operator/config/default?ref=v1.4.2"
+kubectl create -k "github.com/ray-project/kuberay/ray-operator/config/default?ref=v1.5.0"
 ```
 
 ## Step 3: Validate Installation
````
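Either install method should leave one operator Deployment running; a quick check, assuming the default release name `kuberay-operator` and the current namespace:

```sh
kubectl get deployment kuberay-operator   # expect READY 1/1
kubectl get pods | grep kuberay-operator  # expect a Running operator pod
```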

doc/source/cluster/kubernetes/getting-started/raycluster-quick-start.md
1 addition & 1 deletion

````diff
@@ -28,7 +28,7 @@ Once the KubeRay operator is running, you're ready to deploy a RayCluster. Creat
 
 ```sh
 # Deploy a sample RayCluster CR from the KubeRay Helm chart repo:
-helm install raycluster kuberay/ray-cluster --version 1.4.2
+helm install raycluster kuberay/ray-cluster --version 1.5.0
 ```
 
````
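With the chart's default values, the release creates a RayCluster named `raycluster-kuberay` (release name plus chart name); a sketch for verifying the pods, assuming that default:

```sh
kubectl get rayclusters
# KubeRay labels every Ray pod with ray.io/cluster=<cluster name>.
kubectl get pods --selector=ray.io/cluster=raycluster-kuberay
```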

doc/source/cluster/kubernetes/getting-started/rayjob-quick-start.md
4 additions & 4 deletions

````diff
@@ -96,7 +96,7 @@ Follow the [KubeRay Operator Installation](kuberay-operator-deploy) to install t
 ## Step 3: Install a RayJob
 
 ```sh
-kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.4.2/ray-operator/config/samples/ray-job.sample.yaml
+kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.5.0/ray-operator/config/samples/ray-job.sample.yaml
 ```
 
 ## Step 4: Verify the Kubernetes cluster status
@@ -163,13 +163,13 @@ The Python script `sample_code.py` used by `entrypoint` is a simple Ray script t
 ## Step 6: Delete the RayJob
 
 ```sh
-kubectl delete -f https://raw.githubusercontent.com/ray-project/kuberay/v1.4.2/ray-operator/config/samples/ray-job.sample.yaml
+kubectl delete -f https://raw.githubusercontent.com/ray-project/kuberay/v1.5.0/ray-operator/config/samples/ray-job.sample.yaml
 ```
 
 ## Step 7: Create a RayJob with `shutdownAfterJobFinishes` set to true
 
 ```sh
-kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.4.2/ray-operator/config/samples/ray-job.shutdown.yaml
+kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.5.0/ray-operator/config/samples/ray-job.shutdown.yaml
 ```
 
 The `ray-job.shutdown.yaml` defines a RayJob custom resource with `shutdownAfterJobFinishes: true` and `ttlSecondsAfterFinished: 10`.
@@ -197,7 +197,7 @@ kubectl get raycluster
 
 ```sh
 # Step 10.1: Delete the RayJob
-kubectl delete -f https://raw.githubusercontent.com/ray-project/kuberay/v1.4.2/ray-operator/config/samples/ray-job.shutdown.yaml
+kubectl delete -f https://raw.githubusercontent.com/ray-project/kuberay/v1.5.0/ray-operator/config/samples/ray-job.shutdown.yaml
 
 # Step 10.2: Delete the KubeRay operator
 helm uninstall kuberay-operator
````
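Before the deletion steps it's worth confirming the job actually finished; a hedged check, assuming the sample keeps its default name `rayjob-sample`:

```sh
# jobStatus should read SUCCEEDED and jobDeploymentStatus Complete
# before you delete the RayJob.
kubectl get rayjob rayjob-sample \
  -o jsonpath='{.status.jobStatus} {.status.jobDeploymentStatus}{"\n"}'
```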

doc/source/cluster/kubernetes/getting-started/rayservice-quick-start.md
3 additions & 3 deletions

````diff
@@ -3,7 +3,7 @@
 
 ## Prerequisites
 
-This guide mainly focuses on the behavior of KubeRay v1.4.2 and Ray 2.46.0.
+This guide mainly focuses on the behavior of KubeRay v1.5.0 and Ray 2.46.0.
 
 ## What's a RayService?
 
@@ -35,7 +35,7 @@ Note that the YAML file in this example uses `serveConfigV2` to specify a multi-
 ## Step 3: Install a RayService
 
 ```sh
-kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.4.2/ray-operator/config/samples/ray-service.sample.yaml
+kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/v1.5.0/ray-operator/config/samples/ray-service.sample.yaml
 ```
 
 ## Step 4: Verify the Kubernetes cluster status
@@ -129,7 +129,7 @@ curl -X POST -H 'Content-Type: application/json' rayservice-sample-serve-svc:800
 
 ```sh
 # Delete the RayService.
-kubectl delete -f https://raw.githubusercontent.com/ray-project/kuberay/v1.4.2/ray-operator/config/samples/ray-service.sample.yaml
+kubectl delete -f https://raw.githubusercontent.com/ray-project/kuberay/v1.5.0/ray-operator/config/samples/ray-service.sample.yaml
 
 # Uninstall the KubeRay operator.
 helm uninstall kuberay-operator
````
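The in-cluster `curl` in the hunk above targets `rayservice-sample-serve-svc`; from outside the cluster, a port-forward works as a sketch (the `/fruit/` route and payload come from the sample's Serve config; adjust if the sample changes):

```sh
kubectl port-forward svc/rayservice-sample-serve-svc 8000:8000 &
curl -X POST -H 'Content-Type: application/json' \
  localhost:8000/fruit/ -d '["MANGO", 2]'
# Expected output for the sample fruit app: 6
```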

doc/source/cluster/kubernetes/k8s-ecosystem/ingress.md
8 additions & 8 deletions

````diff
@@ -33,10 +33,10 @@ Four examples show how to use ingress to access your Ray cluster:
 # Step 1: Install KubeRay operator and CRD
 helm repo add kuberay https://ray-project.github.io/kuberay-helm/
 helm repo update
-helm install kuberay-operator kuberay/kuberay-operator --version 1.4.2
+helm install kuberay-operator kuberay/kuberay-operator --version 1.5.0
 
 # Step 2: Install a RayCluster
-helm install raycluster kuberay/ray-cluster --version 1.4.2
+helm install raycluster kuberay/ray-cluster --version 1.5.0
 
 # Step 3: Edit the `ray-operator/config/samples/ray-cluster-alb-ingress.yaml`
 #
@@ -123,10 +123,10 @@ Now run the following commands:
 # Step 1: Install KubeRay operator and CRD
 helm repo add kuberay https://ray-project.github.io/kuberay-helm/
 helm repo update
-helm install kuberay-operator kuberay/kuberay-operator --version 1.4.2
+helm install kuberay-operator kuberay/kuberay-operator --version 1.5.0
 
 # Step 2: Install a RayCluster
-helm install raycluster kuberay/ray-cluster --version 1.4.2
+helm install raycluster kuberay/ray-cluster --version 1.5.0
 
 # Step 3: Edit ray-cluster-gclb-ingress.yaml to replace the service name with the name of the head service from the RayCluster. (Output of `kubectl get svc`)
 
@@ -186,12 +186,12 @@ kubectl wait --namespace ingress-nginx \
 # Step 3: Install KubeRay operator and CRD
 helm repo add kuberay https://ray-project.github.io/kuberay-helm/
 helm repo update
-helm install kuberay-operator kuberay/kuberay-operator --version 1.4.2
+helm install kuberay-operator kuberay/kuberay-operator --version 1.5.0
 
 # Step 4: Install RayCluster and create an ingress separately.
 # More information about change of setting was documented in https://github.com/ray-project/kuberay/pull/699
 # and `ray-operator/config/samples/ray-cluster.separate-ingress.yaml`
-curl -LO https://raw.githubusercontent.com/ray-project/kuberay/v1.4.2/ray-operator/config/samples/ray-cluster.separate-ingress.yaml
+curl -LO https://raw.githubusercontent.com/ray-project/kuberay/v1.5.0/ray-operator/config/samples/ray-cluster.separate-ingress.yaml
 kubectl apply -f ray-cluster.separate-ingress.yaml
 
 # Step 5: Check the ingress created in Step 4.
@@ -230,10 +230,10 @@ kubectl describe ingress raycluster-ingress-head-ingress
 # Step 1: Install KubeRay operator and CRD
 helm repo add kuberay https://ray-project.github.io/kuberay-helm/
 helm repo update
-helm install kuberay-operator kuberay/kuberay-operator --version 1.4.2
+helm install kuberay-operator kuberay/kuberay-operator --version 1.5.0
 
 # Step 2: Install a RayCluster
-helm install raycluster kuberay/ray-cluster --version 1.4.2
+helm install raycluster kuberay/ray-cluster --version 1.5.0
 
 # Step 3: Edit the `ray-operator/config/samples/ray-cluster-agc-gatewayapi.yaml`
 #
````
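Whichever of the four variants you use, the end state is an Ingress (or Gateway) fronting the head service; a generic check, with the describe target taken from the NGINX example above:

```sh
kubectl get ingress
# For the separate-ingress sample:
kubectl describe ingress raycluster-ingress-head-ingress
# An ADDRESS field should eventually populate; until then the
# ingress controller is still reconciling.
```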

doc/source/cluster/kubernetes/k8s-ecosystem/istio.md
1 addition & 1 deletion

````diff
@@ -66,7 +66,7 @@ In this mode, you _must_ disable the KubeRay init container injection by setting
 
 ```bash
 # Set ENABLE_INIT_CONTAINER_INJECTION=false on the KubeRay operator.
-helm upgrade kuberay-operator kuberay/kuberay-operator --version 1.4.2 \
+helm upgrade kuberay-operator kuberay/kuberay-operator --version 1.5.0 \
   --set env\[0\].name=ENABLE_INIT_CONTAINER_INJECTION \
   --set-string env\[0\].value=false
 
````
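To confirm the upgrade actually injected the environment variable, a jsonpath probe against the operator Deployment (assumes the default deployment name `kuberay-operator`):

```sh
kubectl get deployment kuberay-operator -o \
  jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="ENABLE_INIT_CONTAINER_INJECTION")].value}{"\n"}'
# Expected output: false
```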

doc/source/cluster/kubernetes/k8s-ecosystem/prometheus-grafana.md
2 additions & 2 deletions

````diff
@@ -56,7 +56,7 @@ kubectl get all -n prometheus-system
 * Set `metrics.serviceMonitor.enabled=true` when installing the KubeRay operator with Helm to create a ServiceMonitor that scrapes metrics exposed by the KubeRay operator's service.
 ```sh
 # Enable the ServiceMonitor and set the label `release: prometheus` to the ServiceMonitor so that Prometheus can discover it
-helm install kuberay-operator kuberay/kuberay-operator --version 1.4.2 \
+helm install kuberay-operator kuberay/kuberay-operator --version 1.5.0 \
   --set metrics.serviceMonitor.enabled=true \
   --set metrics.serviceMonitor.selector.release=prometheus
 ```
@@ -104,7 +104,7 @@ curl localhost:8080
 * `# HELP`: Describe the meaning of this metric.
 * `# TYPE`: See [this document](https://prometheus.io/docs/concepts/metric_types/) for more details.
 
-* Three required environment variables are defined in [ray-cluster.embed-grafana.yaml](https://github.com/ray-project/kuberay/blob/v1.4.2/ray-operator/config/samples/ray-cluster.embed-grafana.yaml). See [Configuring and Managing Ray Dashboard](https://docs.ray.io/en/latest/cluster/configure-manage-dashboard.html) for more details about these environment variables.
+* Three required environment variables are defined in [ray-cluster.embed-grafana.yaml](https://github.com/ray-project/kuberay/blob/v1.5.0/ray-operator/config/samples/ray-cluster.embed-grafana.yaml). See [Configuring and Managing Ray Dashboard](https://docs.ray.io/en/latest/cluster/configure-manage-dashboard.html) for more details about these environment variables.
 ```yaml
 env:
 - name: RAY_GRAFANA_IFRAME_HOST
````
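If the flags took effect, Helm should have created a ServiceMonitor carrying the `release: prometheus` label; a check that assumes the Prometheus operator CRDs are already installed:

```sh
kubectl get servicemonitors -A -l release=prometheus
# The KubeRay operator's ServiceMonitor should appear in the list;
# Prometheus discovers it through that label.
```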

doc/source/cluster/kubernetes/k8s-ecosystem/scheduler-plugins.md
1 addition & 1 deletion

````diff
@@ -29,7 +29,7 @@ You need to have the access to configure Kubernetes control plane to replace the
 KubeRay v1.4.0 and later versions support scheduler plugins.
 
 ```sh
-helm install kuberay-operator kuberay/kuberay-operator --version 1.4.2 --set batchScheduler.name=scheduler-plugins
+helm install kuberay-operator kuberay/kuberay-operator --version 1.5.0 --set batchScheduler.name=scheduler-plugins
 ```
 
 ## Step 4: Deploy a RayCluster with gang scheduling
````
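To double-check which batch scheduler the release is configured with, `helm get values` shows the user-supplied overrides:

```sh
helm get values kuberay-operator
# Expect the override to appear, e.g.:
#   batchScheduler:
#     name: scheduler-plugins
```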
