@@ -38,15 +38,15 @@ If you wish to use pre-built docker images, you may use the images published in
<tr><th>Component</th><th>Image</th></tr>
<tr>
<td>Spark Driver Image</td>
- <td><code>kubespark/spark-driver:v2.1.0-kubernetes-0.2.0</code></td>
+ <td><code>kubespark/spark-driver:v2.2.0-kubernetes-0.3.0</code></td>
</tr>
<tr>
<td>Spark Executor Image</td>
- <td><code>kubespark/spark-executor:v2.1.0-kubernetes-0.2.0</code></td>
+ <td><code>kubespark/spark-executor:v2.2.0-kubernetes-0.3.0</code></td>
</tr>
<tr>
<td>Spark Initialization Image</td>
- <td><code>kubespark/spark-init:v2.1.0-kubernetes-0.2.0</code></td>
+ <td><code>kubespark/spark-init:v2.2.0-kubernetes-0.3.0</code></td>
</tr>
</table>
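With the version bump, the updated images can be pulled straight from Docker Hub to verify they are available (assuming a local Docker installation):

```bash
# Pull the updated pre-built images referenced in the table above
docker pull kubespark/spark-driver:v2.2.0-kubernetes-0.3.0
docker pull kubespark/spark-executor:v2.2.0-kubernetes-0.3.0
docker pull kubespark/spark-init:v2.2.0-kubernetes-0.3.0
```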
@@ -82,9 +82,9 @@ are set up as described above:
--kubernetes-namespace default \
--conf spark.executor.instances=5 \
--conf spark.app.name=spark-pi \
- --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+ --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar

The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
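For example, the master can be supplied on the command line as follows (the API server address below is a hypothetical placeholder for your cluster's):

```bash
# The k8s:// prefix directs spark-submit at a Kubernetes API server
bin/spark-submit \
  --deploy-mode cluster \
  --master k8s://https://192.168.99.100:8443 \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
```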
@@ -143,9 +143,9 @@ and then you can compute the value of Pi as follows:
--kubernetes-namespace default \
--conf spark.executor.instances=5 \
--conf spark.app.name=spark-pi \
- --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+ --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.resourceStagingServer.uri=http://<address-of-any-cluster-node>:31000 \
examples/jars/spark_examples_2.11-2.2.0.jar
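Since the resource staging server is reachable on every node through its NodePort (31000 here), any node's address can stand in for `<address-of-any-cluster-node>`. One way to look node addresses up:

```bash
# Lists each node's internal (and, if assigned, external) IP address;
# any of them can be used as <address-of-any-cluster-node>
kubectl get nodes -o wide
```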
@@ -186,9 +186,9 @@ If our local proxy were listening on port 8001, we would have our submission loo
--kubernetes-namespace default \
--conf spark.executor.instances=5 \
--conf spark.app.name=spark-pi \
- --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+ --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar

Communication between Spark and Kubernetes clusters is performed using the fabric8 kubernetes-client library.
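The local proxy referenced above can be started with kubectl, which listens on port 8001 by default:

```bash
# Opens a locally authenticated tunnel to the Kubernetes API server on localhost:8001
kubectl proxy

# The submission can then target the proxied endpoint, e.g.:
#   --master k8s://http://127.0.0.1:8001
```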
@@ -236,7 +236,7 @@ service because there may be multiple shuffle service instances running in a clu
a way to target a particular shuffle service.

For example, if the shuffle service we want to use is in the default namespace, and
- has pods with labels `app=spark-shuffle-service` and `spark-version=2.1.0`, we can
+ has pods with labels `app=spark-shuffle-service` and `spark-version=2.2.0`, we can
use those tags to target that particular shuffle service at job launch time. In order to run a job with dynamic allocation enabled,
the command may then look like the following:
@@ -251,7 +251,7 @@ the command may then look like the following:
--conf spark.dynamicAllocation.enabled=true \
--conf spark.shuffle.service.enabled=true \
--conf spark.kubernetes.shuffle.namespace=default \
- --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.1.0" \
+ --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.2.0" \
local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar 10 400000 2

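The label selector only works if shuffle service pods carrying those labels are already running. Assuming the distribution ships a DaemonSet manifest for the shuffle service (the path below is illustrative), it could be deployed and verified with:

```bash
# Deploy the external shuffle service DaemonSet (manifest path is illustrative)
kubectl create -f conf/kubernetes-shuffle-service.yaml

# Verify that pods matching the labels used at job launch are running
kubectl get pods -n default -l app=spark-shuffle-service,spark-version=2.2.0
```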
## Advanced
@@ -328,9 +328,9 @@ communicate with the resource staging server over TLS. The trustStore can be set
--kubernetes-namespace default \
--conf spark.executor.instances=5 \
--conf spark.app.name=spark-pi \
- --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+ --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.resourceStagingServer.uri=https://<address-of-any-cluster-node>:31000 \
--conf spark.ssl.kubernetes.resourceStagingServer.enabled=true \
--conf spark.ssl.kubernetes.resourceStagingServer.clientCertPem=/home/myuser/cert.pem \
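If a Java trustStore is preferred over the client certificate PEM, one can be built from the staging server's certificate with keytool (file names below are illustrative) and then supplied through the trustStore setting mentioned above:

```bash
# Import the staging server's certificate into a JKS trustStore (illustrative paths)
keytool -importcert -alias resource-staging-server \
  -file /home/myuser/cert.pem \
  -keystore /home/myuser/trustStore.jks \
  -storepass changeit -noprompt
```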