This repository was archived by the owner on Jan 9, 2020. It is now read-only.

Commit 4ac0de1

Updating images in doc (#219)
1 parent ba151c0 commit 4ac0de1

File tree: 1 file changed (+8, −8 lines)


docs/running-on-kubernetes.md

Lines changed: 8 additions & 8 deletions
@@ -25,11 +25,11 @@ If you wish to use pre-built docker images, you may use the images published in
 <tr><th>Component</th><th>Image</th></tr>
 <tr>
 <td>Spark Driver Image</td>
-<td><code>kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-rc1</code></td>
+<td><code>kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-alpha.2</code></td>
 </tr>
 <tr>
 <td>Spark Executor Image</td>
-<td><code>kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-rc1</code></td>
+<td><code>kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-alpha.2</code></td>
 </tr>
 </table>
 
@@ -45,7 +45,7 @@ For example, if the registry host is `registry-host` and the registry is listeni
     docker build -t registry-host:5000/spark-executor:latest -f dockerfiles/executor/Dockerfile .
     docker push registry-host:5000/spark-driver:latest
     docker push registry-host:5000/spark-executor:latest
-
+
 ## Submitting Applications to Kubernetes
 
 Kubernetes applications can be executed via `spark-submit`. For example, to compute the value of pi, assuming the images
@@ -58,8 +58,8 @@ are set up as described above:
       --kubernetes-namespace default \
       --conf spark.executor.instances=5 \
       --conf spark.app.name=spark-pi \
-      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-rc1 \
-      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-rc1 \
+      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-alpha.2 \
+      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-alpha.2 \
       examples/jars/spark_examples_2.11-2.2.0.jar
 
 The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
@@ -79,7 +79,7 @@ In the above example, the specific Kubernetes cluster can be used with spark sub
 
 Note that applications can currently only be executed in cluster mode, where the driver and its executors are running on
 the cluster.
-
+
 ### Specifying input files
 
 Spark supports specifying JAR paths that are either on the submitting host's disk, or are located on the disk of the
@@ -109,8 +109,8 @@ If our local proxy were listening on port 8001, we would have our submission loo
       --kubernetes-namespace default \
       --conf spark.executor.instances=5 \
       --conf spark.app.name=spark-pi \
-      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-rc1 \
-      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-rc1 \
+      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-alpha.2 \
+      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-alpha.2 \
       examples/jars/spark_examples_2.11-2.2.0.jar
 
 Communication between Spark and Kubernetes clusters is performed using the fabric8 kubernetes-client library.
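Every change in this commit substitutes the same release tag into the two image references. A minimal sketch of deriving both updated image names from a single tag variable (the tag value is taken from the diff above; the `docker pull` commands are a hedged suggestion that assumes Docker and network access, which this commit does not itself require):

```shell
# Derive the driver and executor image references from one release tag,
# so a future tag bump only needs to change a single value.
TAG="v2.1.0-kubernetes-0.1.0-alpha.2"
DRIVER_IMAGE="kubespark/spark-driver:${TAG}"
EXECUTOR_IMAGE="kubespark/spark-executor:${TAG}"
echo "${DRIVER_IMAGE}"
echo "${EXECUTOR_IMAGE}"
# To verify the images are actually published (requires Docker):
#   docker pull "${DRIVER_IMAGE}"
#   docker pull "${EXECUTOR_IMAGE}"
```

The same two values are what the diff passes to `spark.kubernetes.driver.docker.image` and `spark.kubernetes.executor.docker.image` in the `spark-submit` examples.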

0 commit comments