@@ -36,15 +36,15 @@ If you wish to use pre-built docker images, you may use the images published in the kubespark Docker Hub account, listed below:
<tr><th>Component</th><th>Image</th></tr>
<tr>
<td>Spark Driver Image</td>
- <td><code>kubespark/spark-driver:v2.1.0-kubernetes-0.2.0</code></td>
+ <td><code>kubespark/spark-driver:v2.2.0-kubernetes-0.3.0</code></td>
</tr>
<tr>
<td>Spark Executor Image</td>
- <td><code>kubespark/spark-executor:v2.1.0-kubernetes-0.2.0</code></td>
+ <td><code>kubespark/spark-executor:v2.2.0-kubernetes-0.3.0</code></td>
</tr>
<tr>
<td>Spark Initialization Image</td>
- <td><code>kubespark/spark-init:v2.1.0-kubernetes-0.2.0</code></td>
+ <td><code>kubespark/spark-init:v2.2.0-kubernetes-0.3.0</code></td>
</tr>
</table>
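For example, the updated images can be pulled ahead of time so that pods do not wait on a registry fetch at startup (a minimal sketch; it assumes Docker is installed locally and the kubespark repository on Docker Hub is reachable):

```bash
# Pre-pull the published images listed in the table above.
docker pull kubespark/spark-driver:v2.2.0-kubernetes-0.3.0
docker pull kubespark/spark-executor:v2.2.0-kubernetes-0.3.0
docker pull kubespark/spark-init:v2.2.0-kubernetes-0.3.0
```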
@@ -80,9 +80,9 @@ are set up as described above:
--kubernetes-namespace default \
--conf spark.executor.instances=5 \
--conf spark.app.name=spark-pi \
- --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+ --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar

The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting `spark.master` in the application's configuration, must be a URL with the format `k8s://<api_server_url>`.
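A quick way to look up the API server URL that goes after the `k8s://` prefix (a sketch; the exact output format varies by cluster):

```bash
# Print cluster endpoint info for the current kubectl context; the
# API server URL shown is what spark-submit expects after k8s://, e.g.
#   --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>
kubectl cluster-info
```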
@@ -129,9 +129,9 @@ and then you can compute the value of Pi as follows:
--kubernetes-namespace default \
--conf spark.executor.instances=5 \
--conf spark.app.name=spark-pi \
- --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+ --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.resourceStagingServer.uri=http://<address-of-any-cluster-node>:31000 \
examples/jars/spark_examples_2.11-2.2.0.jar

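The `http://<address-of-any-cluster-node>:31000` URI above points at the resource staging server's NodePort. One way to confirm that port once the server is deployed (the service name here is an assumption; adjust it to whatever name your staging-server manifest uses):

```bash
# Print the NodePort exposed by the resource staging server service
# (service name assumed; `kubectl get svc` lists the actual names).
kubectl get svc spark-resource-staging-service \
  -o jsonpath='{.spec.ports[0].nodePort}'
```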
@@ -170,9 @@ +170,9 @@ If our local proxy were listening on port 8001, we would have our submission look like the following:
--kubernetes-namespace default \
--conf spark.executor.instances=5 \
--conf spark.app.name=spark-pi \
- --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+ --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar

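For reference, such a local proxy can be started with `kubectl proxy` (a sketch; 8001 is simply the port assumed in the example above):

```bash
# Forward the Kubernetes API server to localhost:8001, so spark-submit
# can be pointed at --master k8s://http://127.0.0.1:8001
kubectl proxy --port=8001
```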
Communication between Spark and Kubernetes clusters is performed using the fabric8 kubernetes-client library.
@@ -220,7 +220,7 @@ service because there may be multiple shuffle service instances running in a cluster without
a way to target a particular shuffle service.

For example, if the shuffle service we want to use is in the default namespace, and
- has pods with labels `app=spark-shuffle-service` and `spark-version=2.1.0`, we can
+ has pods with labels `app=spark-shuffle-service` and `spark-version=2.2.0`, we can
use those tags to target that particular shuffle service at job launch time. In order to run a job with dynamic allocation enabled,
the command may then look like the following:
@@ -235,7 +235,7 @@ the command may then look like the following:
--conf spark.dynamicAllocation.enabled=true \
--conf spark.shuffle.service.enabled=true \
--conf spark.kubernetes.shuffle.namespace=default \
- --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.1.0" \
+ --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.2.0" \
local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar 10 400000 2

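Before submitting, it can be worth verifying that shuffle-service pods carrying the targeted labels actually exist (a sketch reusing the labels and namespace from the command above):

```bash
# List shuffle service pods matching the labels passed via spark.kubernetes.shuffle.labels.
kubectl get pods -n default -l app=spark-shuffle-service,spark-version=2.2.0
```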
## Advanced
@@ -312,9 +312,9 @@ communicate with the resource staging server over TLS. The trustStore can be set
--kubernetes-namespace default \
--conf spark.executor.instances=5 \
--conf spark.app.name=spark-pi \
- --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
- --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+ --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+ --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.resourceStagingServer.uri=https://<address-of-any-cluster-node>:31000 \
--conf spark.ssl.kubernetes.resourceStagingServer.enabled=true \
--conf spark.ssl.kubernetes.resourceStagingServer.clientCertPem=/home/myuser/cert.pem \
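If a trustStore still needs to be built from the server's certificate, `keytool` can import a PEM certificate directly (a sketch; the file paths and store password are assumptions):

```bash
# Import the staging server's certificate into a new JKS trustStore,
# which can then be referenced from the corresponding trustStore settings.
keytool -importcert -noprompt -alias resource-staging-server \
  -file /home/myuser/cert.pem \
  -keystore /home/myuser/trustStore.jks \
  -storepass changeit
```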