@@ -25,11 +25,11 @@ If you wish to use pre-built docker images, you may use the images published in
<tr><th>Component</th><th>Image</th></tr>
<tr>
<td>Spark Driver Image</td>
- <td><code>kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-rc1</code></td>
+ <td><code>kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-alpha.2</code></td>
</tr>
<tr>
<td>Spark Executor Image</td>
- <td><code>kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-rc1</code></td>
+ <td><code>kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-alpha.2</code></td>
</tr>
</table>
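
These pre-built images can be fetched ahead of time with `docker pull` (a sketch, assuming the images in the table are published to a public registry such as Docker Hub):

    docker pull kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-alpha.2
    docker pull kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-alpha.2
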
@@ -45,7 +45,7 @@ For example, if the registry host is `registry-host` and the registry is listeni
    docker build -t registry-host:5000/spark-executor:latest -f dockerfiles/executor/Dockerfile .
    docker push registry-host:5000/spark-driver:latest
    docker push registry-host:5000/spark-executor:latest
-
+
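Images pushed to a custom registry in this way can then be supplied at submission time through the same image configuration keys used in the examples below (a sketch reusing the hypothetical `registry-host` tags from above):

    --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
    --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
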
## Submitting Applications to Kubernetes

Kubernetes applications can be executed via `spark-submit`. For example, to compute the value of pi, assuming the images
@@ -58,8 +58,8 @@ are set up as described above:
      --kubernetes-namespace default \
      --conf spark.executor.instances=5 \
      --conf spark.app.name=spark-pi \
-     --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-rc1 \
-     --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-rc1 \
+     --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-alpha.2 \
+     --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-alpha.2 \
      examples/jars/spark_examples_2.11-2.2.0.jar
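
Once the application is submitted, the driver runs as a pod in the chosen namespace, so its progress can be followed with ordinary kubectl commands (a sketch; the driver pod name below is a placeholder):

    kubectl get pods --namespace default
    kubectl logs -f <driver-pod-name> --namespace default
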
The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
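
A minimal sketch of such a master URL, assuming the `k8s://` scheme this fork uses for Kubernetes masters, with a placeholder API server address:

    --master k8s://https://<api-server-host>:<api-server-port>
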
@@ -79,7 +79,7 @@ In the above example, the specific Kubernetes cluster can be used with spark sub

Note that applications can currently only be executed in cluster mode, where the driver and its executors are running on
the cluster.
-
+
### Specifying input files

Spark supports specifying JAR paths that are either on the submitting host's disk, or are located on the disk of the
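
Both kinds of path can be illustrated with a sketch (an illustration only; `local://` is upstream Spark's URI scheme for files already present on the container's disk, and both paths here are placeholders):

    # Application JAR uploaded from the submitting host's disk
    bin/spark-submit ... examples/jars/spark_examples_2.11-2.2.0.jar

    # Application JAR already present in the container image
    bin/spark-submit ... local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
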
@@ -109,8 +109,8 @@ If our local proxy were listening on port 8001, we would have our submission loo
      --kubernetes-namespace default \
      --conf spark.executor.instances=5 \
      --conf spark.app.name=spark-pi \
-     --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-rc1 \
-     --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-rc1 \
+     --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-alpha.2 \
+     --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-alpha.2 \
      examples/jars/spark_examples_2.11-2.2.0.jar
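
To set up such a proxy, `kubectl proxy` can be run on the submitting host (a sketch; `kubectl proxy` serves the API server on 127.0.0.1:8001 by default):

    kubectl proxy --port=8001
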
Communication between Spark and Kubernetes clusters is performed using the fabric8 kubernetes-client library.