@@ -85,7 +85,7 @@ are set up as described above:
--conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
- local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
+ local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
`spark.master` in the application's configuration, must be a URL with the format `k8s://<api_server_url>`. Prefixing the
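As a quick sketch of the two equivalent ways to set the master (the host and port below are placeholders, not values taken from this document):

```bash
# Placeholder API server address; substitute your own cluster's host and port.
bin/spark-submit --master k8s://https://<api_server_host>:<api_server_port> ...

# Or, equivalently, set it in the application's configuration (e.g. spark-defaults.conf):
# spark.master  k8s://https://<api_server_host>:<api_server_port>
```
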
@@ -147,7 +147,7 @@ and then you can compute the value of Pi as follows:
--conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.resourceStagingServer.uri=http://<address-of-any-cluster-node>:31000 \
- examples/jars/spark_examples_2.11-2.2.0.jar
+ examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

The Docker image for the resource staging server may also be built from source, in a similar manner to the driver
and executor images. The Dockerfile is provided in `dockerfiles/resource-staging-server/Dockerfile`.
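If you want to build that image yourself, a minimal sketch follows; it assumes the command is run from the top level of the Spark distribution, and the image name and tag are illustrative rather than prescribed by this document:

```bash
# Build the resource staging server image from the provided Dockerfile
# (run from the root of the Spark distribution; image name/tag are examples).
docker build -t kubespark/spark-resource-staging-server:v2.2.0-kubernetes-0.3.0 \
  -f dockerfiles/resource-staging-server/Dockerfile .

# Push it to a registry reachable from the cluster.
docker push kubespark/spark-resource-staging-server:v2.2.0-kubernetes-0.3.0
```
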
@@ -187,7 +187,7 @@ If our local proxy were listening on port 8001, we would have our submission loo
--conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
--conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
- local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
+ local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

Communication between Spark and Kubernetes clusters is performed using the fabric8 kubernetes-client library.
The above mechanism using `kubectl proxy` can be used when we have authentication providers that the fabric8
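As a minimal sketch of that proxy setup (assuming port 8001 is free locally and the remaining submission arguments are the ones shown above):

```bash
# Start a local proxy to the Kubernetes API server on port 8001.
kubectl proxy --port=8001 &

# Point spark-submit at the local proxy instead of the API server directly;
# the rest of the arguments are unchanged from the submission example above.
bin/spark-submit --master k8s://http://127.0.0.1:8001 ...
```
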
@@ -250,7 +250,7 @@ the command may then look like the following:
--conf spark.shuffle.service.enabled=true \
--conf spark.kubernetes.shuffle.namespace=default \
--conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.2.0" \
- local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar 10 400000 2
+ local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar 10 400000 2

## Advanced
@@ -332,7 +332,7 @@ communicate with the resource staging server over TLS. The trustStore can be set
--conf spark.kubernetes.resourceStagingServer.uri=https://<address-of-any-cluster-node>:31000 \
--conf spark.ssl.kubernetes.resourceStagingServer.enabled=true \
--conf spark.ssl.kubernetes.resourceStagingServer.clientCertPem=/home/myuser/cert.pem \
- examples/jars/spark_examples_2.11-2.2.0.jar
+ examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

### Spark Properties