This repository was archived by the owner on Jan 9, 2020. It is now read-only.

Commit 1e63a60 (1 parent: c6bc19d)
silenceshellash211 authored and committed

spark-examples jar filename misses k8s-0.3.0 (#476)

`spark_examples_2.11-2.2.0.jar` should be `spark-examples_2.11-2.2.0-k8s-0.3.0.jar`
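The corrected name follows the artifact naming convention used throughout this fork: Scala binary version, upstream Spark version, and the fork's k8s suffix all appear in the filename. A minimal sketch of that convention (version values are taken from this commit; the variable names are illustrative):

```shell
# Illustrative only: compose the examples-jar name from its version components.
# The k8s suffix is the part the old docs were missing.
SCALA_BINARY=2.11      # Scala binary version
SPARK_VERSION=2.2.0    # upstream Spark version
K8S_SUFFIX=k8s-0.3.0   # spark-on-k8s fork suffix
echo "spark-examples_${SCALA_BINARY}-${SPARK_VERSION}-${K8S_SUFFIX}.jar"
```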

File tree

1 file changed (+5, −5 lines)


docs/running-on-kubernetes.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -85,7 +85,7 @@ are set up as described above:
   --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
-  local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
+  local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

 The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
 `spark.master` in the application's configuration, must be a URL with the format `k8s://<api_server_url>`. Prefixing the
```
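The `k8s://` prefixing the context lines describe can be sketched as follows (the API server address is an assumed example; substitute your cluster's URL):

```shell
# Assumed example address; the k8s:// scheme is simply prepended to it.
API_SERVER="https://192.168.99.100:8443"
MASTER="k8s://${API_SERVER}"
echo "$MASTER"   # k8s://https://192.168.99.100:8443
```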
```diff
@@ -147,7 +147,7 @@ and then you can compute the value of Pi as follows:
   --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.resourceStagingServer.uri=http://<address-of-any-cluster-node>:31000 \
-  examples/jars/spark_examples_2.11-2.2.0.jar
+  examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

 The Docker image for the resource staging server may also be built from source, in a similar manner to the driver
 and executor images. The Dockerfile is provided in `dockerfiles/resource-staging-server/Dockerfile`.
```
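Building the resource staging server image from that Dockerfile might look like the sketch below. The image tag is an assumption mirroring the kubespark naming of the other images, and the command is only echoed here rather than invoked:

```shell
# Dry-run sketch: print the build command instead of running Docker.
# Both the tag and the invocation are assumptions, not the project's documented build.
DOCKERFILE=dockerfiles/resource-staging-server/Dockerfile
IMAGE=kubespark/spark-resource-staging-server:v2.2.0-kubernetes-0.3.0
echo docker build -t "$IMAGE" -f "$DOCKERFILE" .
```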
```diff
@@ -187,7 +187,7 @@ If our local proxy were listening on port 8001, we would have our submission loo
   --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
-  local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
+  local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

 Communication between Spark and Kubernetes clusters is performed using the fabric8 kubernetes-client library.
 The above mechanism using `kubectl proxy` can be used when we have authentication providers that the fabric8
```
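With a local `kubectl proxy` listening on port 8001 as in this hunk's context, the master URL the submission would pass can be sketched as:

```shell
# Assumes `kubectl proxy` is already listening on this local port.
PROXY_PORT=8001
MASTER="k8s://http://127.0.0.1:${PROXY_PORT}"
echo "$MASTER"   # k8s://http://127.0.0.1:8001
```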
```diff
@@ -250,7 +250,7 @@ the command may then look like the following:
   --conf spark.shuffle.service.enabled=true \
   --conf spark.kubernetes.shuffle.namespace=default \
   --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.2.0" \
-  local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar 10 400000 2
+  local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar 10 400000 2

 ## Advanced
```
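The shuffle service in this hunk is located by the comma-separated label selector in `spark.kubernetes.shuffle.labels`; a trivial sketch splitting it into its two key=value pairs:

```shell
# Split the selector string from the diff above into individual labels.
LABELS="app=spark-shuffle-service,spark-version=2.2.0"
echo "$LABELS" | tr ',' '\n'
# app=spark-shuffle-service
# spark-version=2.2.0
```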
```diff
@@ -332,7 +332,7 @@ communicate with the resource staging server over TLS. The trustStore can be set
   --conf spark.kubernetes.resourceStagingServer.uri=https://<address-of-any-cluster-node>:31000 \
   --conf spark.ssl.kubernetes.resourceStagingServer.enabled=true \
   --conf spark.ssl.kubernetes.resourceStagingServer.clientCertPem=/home/myuser/cert.pem \
-  examples/jars/spark_examples_2.11-2.2.0.jar
+  examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar

 ### Spark Properties
```
0 commit comments