@@ -85,7 +85,7 @@ are set up as described above:
--conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
-    local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
+    local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

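For orientation, a minimal sketch of the full submission this hunk edits, assuming a SparkPi run in cluster mode against a hypothetical API server address; the images and the jar path mirror the hunk above, everything else is an assumption:

```bash
# Sketch only: the API server address, --deploy-mode and --class values are assumptions,
# not taken from the diff; the image and jar settings come from the hunk above.
bin/spark-submit \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar
```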
The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
`spark.master` in the application's configuration, must be a URL with the format `k8s://<api_server_url>`. Prefixing the
@@ -147,7 +147,7 @@ and then you can compute the value of Pi as follows:
--conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.resourceStagingServer.uri=http://<address-of-any-cluster-node>:31000 \
-    examples/jars/spark_examples_2.11-2.2.0.jar
+    examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

The Docker image for the resource staging server may also be built from source, in a similar manner to the driver
and executor images. The Dockerfile is provided in `dockerfiles/resource-staging-server/Dockerfile`.
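A minimal sketch of such a build, assuming the repository root as the build context and a hypothetical registry and tag; only the Dockerfile path comes from the text above:

```bash
# Build and push the resource staging server image; <your-registry> and the tag are placeholders.
docker build \
  -t <your-registry>/spark-resource-staging-server:v2.2.0-kubernetes-0.3.0 \
  -f dockerfiles/resource-staging-server/Dockerfile .
docker push <your-registry>/spark-resource-staging-server:v2.2.0-kubernetes-0.3.0
```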
@@ -189,7 +189,7 @@ If our local proxy were listening on port 8001, we would have our submission loo
--conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
-    local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
+    local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

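The submission above assumes a local API proxy already listening on port 8001; a hedged sketch of starting one with stock `kubectl`:

```bash
# Proxy the Kubernetes API server to localhost:8001, then point spark-submit at it,
# e.g. --master k8s://http://127.0.0.1:8001 in the command above.
kubectl proxy --port=8001 &
```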
Communication between Spark and Kubernetes clusters is performed using the fabric8 kubernetes-client library.
The above mechanism using `kubectl proxy` can be used when we have authentication providers that the fabric8
@@ -252,7 +252,7 @@ the command may then look like the following:
--conf spark.shuffle.service.enabled=true \
--conf spark.kubernetes.shuffle.namespace=default \
--conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.2.0" \
-    local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar 10 400000 2
+    local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar 10 400000 2

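Since `spark.kubernetes.shuffle.namespace` and `spark.kubernetes.shuffle.labels` are meant to select an already-deployed shuffle service, a quick hedged sanity check (assuming the shuffle service pods carry exactly those labels):

```bash
# List shuffle-service pods matching the namespace and labels used above;
# an empty result suggests the labels or namespace do not match the deployed shuffle service.
kubectl get pods -n default -l app=spark-shuffle-service,spark-version=2.2.0
```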
## Advanced
@@ -334,7 +334,7 @@ communicate with the resource staging server over TLS. The trustStore can be set
--conf spark.kubernetes.resourceStagingServer.uri=https://<address-of-any-cluster-node>:31000 \
--conf spark.ssl.kubernetes.resourceStagingServer.enabled=true \
--conf spark.ssl.kubernetes.resourceStagingServer.clientCertPem=/home/myuser/cert.pem \
-    examples/jars/spark_examples_2.11-2.2.0.jar
+    examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

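Before submitting, it can help to sanity-check the client certificate referenced by `spark.ssl.kubernetes.resourceStagingServer.clientCertPem`; a sketch using stock `openssl`, with the PEM path taken from the example above:

```bash
# Print the subject and validity window of the client certificate used for TLS
# with the resource staging server.
openssl x509 -in /home/myuser/cert.pem -noout -subject -dates
```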
### Spark Properties