Commit 80ad594

silenceshellash211 authored and committed

spark-examples jar filename misses k8s-0.3.0

`spark_examples_2.11-2.2.0.jar` should be `spark-examples_2.11-2.2.0-k8s-0.3.0.jar` (cherry picked from commit 1e63a60)

1 parent 0f31d8a

File tree

1 file changed: +5 −5 lines

docs/running-on-kubernetes.md

Lines changed: 5 additions & 5 deletions
@@ -85,7 +85,7 @@ are set up as described above:
   --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
-  local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
+  local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar
 
 The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
 `spark.master` in the application's configuration, must be a URL with the format `k8s://<api_server_url>`. Prefixing the
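For context around the corrected line above, a full cluster-mode submission might look like the following sketch. The API server host and port are placeholders, and the example class is an assumption; only the image names and the corrected jar path come from this diff:

```shell
# Hedged sketch of a spark-submit invocation against Kubernetes.
# <api_server_host>:<port> and the SparkPi class are placeholders,
# not values taken from this commit.
bin/spark-submit \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --master k8s://https://<api_server_host>:<port> \
  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar
```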
@@ -147,7 +147,7 @@ and then you can compute the value of Pi as follows:
   --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.resourceStagingServer.uri=http://<address-of-any-cluster-node>:31000 \
-  examples/jars/spark_examples_2.11-2.2.0.jar
+  examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar
 
 The Docker image for the resource staging server may also be built from source, in a similar manner to the driver
 and executor images. The Dockerfile is provided in `dockerfiles/resource-staging-server/Dockerfile`.
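The from-source build mentioned above can be sketched as follows. The image tag is an assumption patterned after the driver/executor/init image names in this diff; only the Dockerfile path comes from the text:

```shell
# Hedged sketch: build the resource staging server image from the
# distribution root. The tag below is assumed, following the naming
# pattern of the other kubespark images in this diff.
docker build \
  -t kubespark/spark-resource-staging-server:v2.2.0-kubernetes-0.3.0 \
  -f dockerfiles/resource-staging-server/Dockerfile .
```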
@@ -189,7 +189,7 @@ If our local proxy were listening on port 8001, we would have our submission loo
   --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
   --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
-  local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
+  local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar
 
 Communication between Spark and Kubernetes clusters is performed using the fabric8 kubernetes-client library.
 The above mechanism using `kubectl proxy` can be used when we have authentication providers that the fabric8
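The `kubectl proxy` mechanism referenced above can be sketched as follows. Port 8001 matches the hunk's example; the plain-HTTP loopback master URL is an assumption that follows the `k8s://<api_server_url>` format described earlier:

```shell
# Hedged sketch: expose the API server on a local port, then point
# spark-submit at the proxy instead of the API server directly.
kubectl proxy --port=8001
# The corresponding master argument would then be (assumed form):
#   --master k8s://http://127.0.0.1:8001
```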
@@ -252,7 +252,7 @@ the command may then look like the following:
   --conf spark.shuffle.service.enabled=true \
   --conf spark.kubernetes.shuffle.namespace=default \
   --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.2.0" \
-  local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar 10 400000 2
+  local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar 10 400000 2
 
 ## Advanced
 
@@ -334,7 +334,7 @@ communicate with the resource staging server over TLS. The trustStore can be set
   --conf spark.kubernetes.resourceStagingServer.uri=https://<address-of-any-cluster-node>:31000 \
   --conf spark.ssl.kubernetes.resourceStagingServer.enabled=true \
   --conf spark.ssl.kubernetes.resourceStagingServer.clientCertPem=/home/myuser/cert.pem \
-  examples/jars/spark_examples_2.11-2.2.0.jar
+  examples/jars/spark-examples_2.11-2.2.0-k8s-0.3.0.jar
 
 ### Spark Properties
 