Commit 997c882

Sahil Prasad authored and ash211 committed
Bumping versions to v2.2.0-kubernetes-0.3.0
(cherry picked from commit 0c160f5)
1 parent 8ad97b2 commit 997c882

File tree

3 files changed: +22 −22 lines changed


conf/kubernetes-resource-staging-server.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -32,7 +32,7 @@ spec:
           name: spark-resource-staging-server-config
       containers:
       - name: spark-resource-staging-server
-        image: kubespark/spark-resource-staging-server:v2.1.0-kubernetes-0.2.0
+        image: kubespark/spark-resource-staging-server:v2.2.0-kubernetes-0.3.0
         resources:
           requests:
             cpu: 100m
```
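The new image tag encodes two versions at once: the upstream Spark release (`2.2.0`) and the Kubernetes-fork release (`0.3.0`). A minimal sketch of splitting such a tag, using a hypothetical helper that is not part of this repository:

```python
import re

def parse_release_tag(tag):
    """Split a kubespark release tag like 'v2.2.0-kubernetes-0.3.0'
    into (spark_version, kubernetes_fork_version)."""
    m = re.fullmatch(r"v(\d+\.\d+\.\d+)-kubernetes-(\d+\.\d+\.\d+)", tag)
    if m is None:
        raise ValueError(f"unrecognized release tag: {tag!r}")
    return m.group(1), m.group(2)

# Image reference taken from the updated YAML above.
image = "kubespark/spark-resource-staging-server:v2.2.0-kubernetes-0.3.0"
tag = image.rsplit(":", 1)[1]
print(parse_release_tag(tag))  # ('2.2.0', '0.3.0')
```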

conf/kubernetes-shuffle-service.yaml

Lines changed: 4 additions & 4 deletions
```diff
@@ -20,14 +20,14 @@ kind: DaemonSet
 metadata:
   labels:
     app: spark-shuffle-service
-    spark-version: 2.1.0
+    spark-version: 2.2.0
   name: shuffle
 spec:
   template:
     metadata:
       labels:
         app: spark-shuffle-service
-        spark-version: 2.1.0
+        spark-version: 2.2.0
     spec:
       volumes:
       - name: temp-volume
@@ -38,7 +38,7 @@ spec:
         # This is an official image that is built
         # from the dockerfiles/shuffle directory
         # in the spark distribution.
-        image: kubespark/spark-shuffle:v2.1.0-kubernetes-0.2.0
+        image: kubespark/spark-shuffle:v2.2.0-kubernetes-0.3.0
         imagePullPolicy: IfNotPresent
         volumeMounts:
         - mountPath: '/tmp'
@@ -51,4 +51,4 @@ spec:
           requests:
             cpu: "1"
           limits:
-            cpu: "1"
+            cpu: "1"
```
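The `spark-version` label bumped above matters because jobs locate the shuffle service through the `spark.kubernetes.shuffle.labels` selector, which must agree with the DaemonSet's pod labels. An illustrative sketch of that key/value matching rule (not Spark's actual implementation):

```python
def parse_selector(selector):
    """Parse 'k1=v1,k2=v2' into a dict, mirroring the
    spark.kubernetes.shuffle.labels selector syntax."""
    pairs = (item.split("=", 1) for item in selector.split(","))
    return {k.strip(): v.strip() for k, v in pairs}

def matches(pod_labels, selector):
    """A pod matches when every selector key/value appears in its labels."""
    wanted = parse_selector(selector)
    return all(pod_labels.get(k) == v for k, v in wanted.items())

# Labels from the updated DaemonSet above.
pod_labels = {"app": "spark-shuffle-service", "spark-version": "2.2.0"}

print(matches(pod_labels, "app=spark-shuffle-service,spark-version=2.2.0"))  # True
print(matches(pod_labels, "app=spark-shuffle-service,spark-version=2.1.0"))  # False
```

This is why the docs change below updates the selector string in step with the DaemonSet labels: a stale `spark-version=2.1.0` selector would match no pods.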

docs/running-on-kubernetes.md

Lines changed: 17 additions & 17 deletions
```diff
@@ -38,15 +38,15 @@ If you wish to use pre-built docker images, you may use the images published in
 <tr><th>Component</th><th>Image</th></tr>
 <tr>
   <td>Spark Driver Image</td>
-  <td><code>kubespark/spark-driver:v2.1.0-kubernetes-0.2.0</code></td>
+  <td><code>kubespark/spark-driver:v2.2.0-kubernetes-0.3.0</code></td>
 </tr>
 <tr>
   <td>Spark Executor Image</td>
-  <td><code>kubespark/spark-executor:v2.1.0-kubernetes-0.2.0</code></td>
+  <td><code>kubespark/spark-executor:v2.2.0-kubernetes-0.3.0</code></td>
 </tr>
 <tr>
   <td>Spark Initialization Image</td>
-  <td><code>kubespark/spark-init:v2.1.0-kubernetes-0.2.0</code></td>
+  <td><code>kubespark/spark-init:v2.2.0-kubernetes-0.3.0</code></td>
 </tr>
 </table>
 
@@ -82,9 +82,9 @@ are set up as described above:
   --kubernetes-namespace default \
   --conf spark.executor.instances=5 \
   --conf spark.app.name=spark-pi \
-  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
-  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
-  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
   local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
 
 The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
@@ -143,9 +143,9 @@ and then you can compute the value of Pi as follows:
   --kubernetes-namespace default \
   --conf spark.executor.instances=5 \
   --conf spark.app.name=spark-pi \
-  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
-  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
-  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.resourceStagingServer.uri=http://<address-of-any-cluster-node>:31000 \
   examples/jars/spark_examples_2.11-2.2.0.jar
 
@@ -186,9 +186,9 @@ If our local proxy were listening on port 8001, we would have our submission loo
   --kubernetes-namespace default \
   --conf spark.executor.instances=5 \
   --conf spark.app.name=spark-pi \
-  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
-  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
-  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
   local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
 
 Communication between Spark and Kubernetes clusters is performed using the fabric8 kubernetes-client library.
@@ -236,7 +236,7 @@ service because there may be multiple shuffle service instances running in a cluster
 a way to target a particular shuffle service.
 
 For example, if the shuffle service we want to use is in the default namespace, and
-has pods with labels `app=spark-shuffle-service` and `spark-version=2.1.0`, we can
+has pods with labels `app=spark-shuffle-service` and `spark-version=2.2.0`, we can
 use those tags to target that particular shuffle service at job launch time. In order to run a job with dynamic allocation enabled,
 the command may then look like the following:
 
@@ -251,7 +251,7 @@ the command may then look like the following:
   --conf spark.dynamicAllocation.enabled=true \
   --conf spark.shuffle.service.enabled=true \
   --conf spark.kubernetes.shuffle.namespace=default \
-  --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.1.0" \
+  --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.2.0" \
   local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar 10 400000 2
 
 ## Advanced
@@ -328,9 +328,9 @@ communicate with the resource staging server over TLS. The trustStore can be set
   --kubernetes-namespace default \
   --conf spark.executor.instances=5 \
   --conf spark.app.name=spark-pi \
-  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
-  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
-  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.resourceStagingServer.uri=https://<address-of-any-cluster-node>:31000 \
   --conf spark.ssl.kubernetes.resourceStagingServer.enabled=true \
   --conf spark.ssl.kubernetes.resourceStagingServer.clientCertPem=/home/myuser/cert.pem \
```
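All 22 changed lines across the three files are the same mechanical substitution of old release strings for new ones. A release bump like this can be sketched as follows (a hypothetical script, not part of the commit):

```python
# Old/new release strings as they appear in the conf YAML and the docs.
REPLACEMENTS = [
    ("v2.1.0-kubernetes-0.2.0", "v2.2.0-kubernetes-0.3.0"),  # image tags
    ("spark-version: 2.1.0", "spark-version: 2.2.0"),        # YAML labels
    ("spark-version=2.1.0", "spark-version=2.2.0"),          # label selectors in docs
]

def bump(text):
    """Apply every old -> new substitution to a file's text."""
    for old, new in REPLACEMENTS:
        text = text.replace(old, new)
    return text

line = "image: kubespark/spark-shuffle:v2.1.0-kubernetes-0.2.0"
print(bump(line))  # image: kubespark/spark-shuffle:v2.2.0-kubernetes-0.3.0
```

Running such a script over `conf/` and `docs/` and reviewing the diff would reproduce this commit's +22/−22 line change.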

0 commit comments