Commit a8cde92

[SPARK-52273] Use Apache Spark 4.0.0 docker image instead of 4.0.0-preview2
### What changes were proposed in this pull request?

This PR aims to use the Apache Spark `4.0.0` docker image instead of `4.0.0-preview2`.

### Why are the changes needed?

To use the latest Apache Spark release.

### Does this PR introduce _any_ user-facing change?

No behavior change.

### How was this patch tested?

Pass the CIs.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#222 from dongjoon-hyun/SPARK-52273.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
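A bulk version bump like this touches many files, so it helps to verify no stale pre-release tag survives. The following is a minimal sketch of such a check; the `stale_references` helper is hypothetical and not part of this PR or the operator codebase.

```python
import re

# Hypothetical helper: find lines that still mention the old pre-release tag
# after a bump from 4.0.0-preview2 to 4.0.0.
STALE_TAG = re.compile(r"4\.0\.0-preview2")

def stale_references(text: str) -> list[str]:
    """Return the lines of `text` that still reference 4.0.0-preview2."""
    return [line for line in text.splitlines() if STALE_TAG.search(line)]
```

Run over each changed file, an empty result means the bump is complete; the CI run mentioned above serves the same purpose end to end.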
1 parent: bcdfd76 · commit: a8cde92

25 files changed: +51 −51 lines

README.md

Lines changed: 3 additions & 3 deletions
@@ -75,7 +75,7 @@ $ ./examples/submit-pi-to-prod.sh
 {
   "action" : "CreateSubmissionResponse",
   "message" : "Driver successfully submitted as driver-20240821181327-0000",
-  "serverSparkVersion" : "4.0.0-preview2",
+  "serverSparkVersion" : "4.0.0",
   "submissionId" : "driver-20240821181327-0000",
   "success" : true
 }
@@ -84,7 +84,7 @@ $ curl http://localhost:6066/v1/submissions/status/driver-20240821181327-0000/
 {
   "action" : "SubmissionStatusResponse",
   "driverState" : "FINISHED",
-  "serverSparkVersion" : "4.0.0-preview2",
+  "serverSparkVersion" : "4.0.0",
   "submissionId" : "driver-20240821181327-0000",
   "success" : true,
   "workerHostPort" : "10.1.5.188:42099",
@@ -122,7 +122,7 @@ Events:
   Normal PodBindSuccessful 14s yunikorn Pod default/pi-on-yunikorn-0-driver is successfully bound to node docker-desktop
   Normal TaskCompleted 6s yunikorn Task default/pi-on-yunikorn-0-driver is completed
-  Normal Pulled 13s kubelet Container image "apache/spark:4.0.0-preview2" already present on machine
+  Normal Pulled 13s kubelet Container image "apache/spark:4.0.0" already present on machine
   Normal Created 13s kubelet Created container spark-kubernetes-driver
   Normal Started 13s kubelet Started container spark-kubernetes-driver
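The README hunks show JSON responses from the Spark master's REST submission endpoint on port 6066. A minimal sketch of checking the reported server version after this upgrade, using the field names and illustrative values from the README example:

```python
import json

# Response shape taken from the README example; values are illustrative.
raw = """
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "FINISHED",
  "serverSparkVersion" : "4.0.0",
  "submissionId" : "driver-20240821181327-0000",
  "success" : true
}
"""

response = json.loads(raw)
# After this upgrade the master should report the GA version, not the preview tag.
assert response["serverSparkVersion"] == "4.0.0"
```

In practice the `raw` payload would come from `curl http://localhost:6066/v1/submissions/status/<submissionId>/` as shown in the README.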

build-tools/helm/spark-kubernetes-operator/templates/workload-rbac.yaml

Lines changed: 1 addition & 1 deletion
@@ -173,7 +173,7 @@ metadata:
   {{- template "spark-operator.workloadAnnotations" $ }}
 spec:
   runtimeVersions:
-    sparkVersion: 4.0.0-preview2
+    sparkVersion: 4.0.0
     scalaVersion: "2.13"
 {{- end }}
 ---

docs/spark_custom_resources.md

Lines changed: 2 additions & 2 deletions
@@ -48,12 +48,12 @@ spec:
     spark.dynamicAllocation.shuffleTracking.enabled: "true"
     spark.dynamicAllocation.maxExecutors: "3"
     spark.kubernetes.authenticate.driver.serviceAccountName: "spark"
-    spark.kubernetes.container.image: "apache/spark:4.0.0-preview2"
+    spark.kubernetes.container.image: "apache/spark:4.0.0"
   applicationTolerations:
     resourceRetainPolicy: OnFailure
   runtimeVersions:
     scalaVersion: "2.13"
-    sparkVersion: "4.0.0-preview2"
+    sparkVersion: "4.0.0"
 ```

 After application is submitted, Operator will add status information to your application based on
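In the spec above, `runtimeVersions.sparkVersion` and the tag in `spark.kubernetes.container.image` must agree. A loose consistency check is sketched below; `image_matches_version` is a hypothetical helper, not part of the operator, and it deliberately only compares the tag prefix:

```python
def image_matches_version(image: str, spark_version: str) -> bool:
    """Loosely check that an apache/spark image tag starts with the
    declared sparkVersion (e.g. 4.0.0 matches 4.0.0 and 4.0.0-java21)."""
    _, _, tag = image.partition(":")
    return tag == spark_version or tag.startswith(spark_version + "-")

assert image_matches_version("apache/spark:4.0.0", "4.0.0")
assert image_matches_version("apache/spark:4.0.0-java21", "4.0.0")
assert not image_matches_version("apache/spark:4.0.0", "4.0.0-preview2")
```

Note the check is prefix-based, so it treats any `4.0.0-*` variant (including pre-release suffixes) as matching a declared `4.0.0`; a stricter policy would enumerate the allowed suffixes.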

examples/cluster-java21.yaml

Lines changed: 2 additions & 2 deletions
@@ -18,14 +18,14 @@ metadata:
   name: cluster-java21
 spec:
   runtimeVersions:
-    sparkVersion: "4.0.0-preview2"
+    sparkVersion: "4.0.0"
   clusterTolerations:
     instanceConfig:
       initWorkers: 3
       minWorkers: 3
       maxWorkers: 3
   sparkConf:
-    spark.kubernetes.container.image: "apache/spark:4.0.0-preview2-java21"
+    spark.kubernetes.container.image: "apache/spark:4.0.0-java21"
     spark.master.ui.title: "Prod Spark Cluster (Java 21)"
     spark.master.rest.enabled: "true"
     spark.master.rest.host: "0.0.0.0"
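The examples in this commit use several variants of the `apache/spark` image tag: `4.0.0`, `4.0.0-java21`, and `4.0.0-java21-scala`. The naming can be sketched as the base version joined with optional suffixes; the `spark_image` helper below is hypothetical, shown only to make the convention explicit:

```python
def spark_image(version: str, *suffixes: str) -> str:
    """Assemble an apache/spark image tag like those in the examples."""
    tag = "-".join([version, *suffixes])
    return f"apache/spark:{tag}"

print(spark_image("4.0.0"))                     # apache/spark:4.0.0
print(spark_image("4.0.0", "java21"))           # apache/spark:4.0.0-java21
print(spark_image("4.0.0", "java21", "scala"))  # apache/spark:4.0.0-java21-scala
```

This is also why the bump is not a single string substitution: each suffixed variant (`-java21`, `-java21-scala`) has to be updated alongside the plain `4.0.0` tag.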

examples/cluster-on-yunikorn.yaml

Lines changed: 2 additions & 2 deletions
@@ -18,14 +18,14 @@ metadata:
   name: cluster-on-yunikorn
 spec:
   runtimeVersions:
-    sparkVersion: "4.0.0-preview2"
+    sparkVersion: "4.0.0"
   clusterTolerations:
     instanceConfig:
       initWorkers: 1
       minWorkers: 1
       maxWorkers: 1
   sparkConf:
-    spark.kubernetes.container.image: "apache/spark:4.0.0-preview2"
+    spark.kubernetes.container.image: "apache/spark:4.0.0"
     spark.kubernetes.scheduler.name: "yunikorn"
     spark.master.ui.title: "Spark Cluster on YuniKorn Scheduler"
     spark.master.rest.enabled: "true"

examples/cluster-with-hpa-template.yaml

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@ metadata:
   name: cluster-with-hpa-template
 spec:
   runtimeVersions:
-    sparkVersion: "4.0.0-preview2"
+    sparkVersion: "4.0.0"
   clusterTolerations:
     instanceConfig:
       initWorkers: 1
@@ -58,7 +58,7 @@ spec:
           value: 1
         periodSeconds: 1200
   sparkConf:
-    spark.kubernetes.container.image: "apache/spark:4.0.0-preview2-java21"
+    spark.kubernetes.container.image: "apache/spark:4.0.0-java21"
     spark.master.ui.title: "Cluster with HorizontalPodAutoscaler Template"
     spark.master.rest.enabled: "true"
     spark.master.rest.host: "0.0.0.0"

examples/cluster-with-hpa.yaml

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@ metadata:
   name: cluster-with-hpa
 spec:
   runtimeVersions:
-    sparkVersion: "4.0.0-preview2"
+    sparkVersion: "4.0.0"
   clusterTolerations:
     instanceConfig:
       initWorkers: 3
@@ -38,7 +38,7 @@ spec:
       cpu: "3"
       memory: "3Gi"
   sparkConf:
-    spark.kubernetes.container.image: "apache/spark:4.0.0-preview2-java21"
+    spark.kubernetes.container.image: "apache/spark:4.0.0-java21"
     spark.master.ui.title: "Cluster with HorizontalPodAutoscaler"
     spark.master.rest.enabled: "true"
     spark.master.rest.host: "0.0.0.0"

examples/cluster-with-template.yaml

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@ metadata:
   name: cluster-with-template
 spec:
   runtimeVersions:
-    sparkVersion: "4.0.0-preview2"
+    sparkVersion: "4.0.0"
   clusterTolerations:
     instanceConfig:
       initWorkers: 1
@@ -93,7 +93,7 @@ spec:
       annotations:
         customAnnotation: "annotation"
   sparkConf:
-    spark.kubernetes.container.image: "apache/spark:4.0.0-preview2"
+    spark.kubernetes.container.image: "apache/spark:4.0.0"
     spark.master.ui.title: "Spark Cluster with Template"
     spark.master.rest.enabled: "true"
     spark.master.rest.host: "0.0.0.0"

examples/pi-java21.yaml

Lines changed: 2 additions & 2 deletions
@@ -24,9 +24,9 @@ spec:
     spark.dynamicAllocation.shuffleTracking.enabled: "true"
     spark.dynamicAllocation.maxExecutors: "3"
     spark.kubernetes.authenticate.driver.serviceAccountName: "spark"
-    spark.kubernetes.container.image: "apache/spark:4.0.0-preview2-java21-scala"
+    spark.kubernetes.container.image: "apache/spark:4.0.0-java21-scala"
   applicationTolerations:
     resourceRetainPolicy: OnFailure
   runtimeVersions:
     scalaVersion: "2.13"
-    sparkVersion: "4.0.0-preview2"
+    sparkVersion: "4.0.0"

examples/pi-on-yunikorn.yaml

Lines changed: 2 additions & 2 deletions
@@ -25,7 +25,7 @@ spec:
     spark.dynamicAllocation.shuffleTracking.enabled: "true"
     spark.dynamicAllocation.maxExecutors: "3"
     spark.kubernetes.authenticate.driver.serviceAccountName: "spark"
-    spark.kubernetes.container.image: "apache/spark:4.0.0-preview2"
+    spark.kubernetes.container.image: "apache/spark:4.0.0"
     spark.kubernetes.scheduler.name: "yunikorn"
     spark.kubernetes.driver.label.queue: "root.default"
     spark.kubernetes.executor.label.queue: "root.default"
@@ -35,4 +35,4 @@ spec:
     resourceRetainPolicy: OnFailure
   runtimeVersions:
     scalaVersion: "2.13"
-    sparkVersion: "4.0.0-preview2"
+    sparkVersion: "4.0.0"
