
Commit 4ccf59b

Address comments

1 parent 2720c88 commit 4ccf59b

File tree: 2 files changed, +17 -15 lines changed

- docs/running-on-kubernetes.md
- sbin/build-push-docker-images.sh


docs/running-on-kubernetes.md

Lines changed: 16 additions & 14 deletions
@@ -5,7 +5,7 @@ title: Running Spark on Kubernetes
 * This will become a table of contents (this text will be scraped).
 {:toc}
 
-Spark can run on clusters managed by [Kubernetes](https://kubernetes.io). This features makes use of the new experimental native
+Spark can run on clusters managed by [Kubernetes](https://kubernetes.io). This feature makes use of the new experimental native
 Kubernetes scheduler that has been added to Spark.
 
 # Prerequisites
@@ -31,7 +31,7 @@ by running `kubectl auth can-i <list|create|edit|delete> pods`.
 spark-submit can be directly used to submit a Spark application to a Kubernetes cluster. The mechanism by which spark-submit happens is as follows:
 
 * Spark creates a spark driver running within a [Kubernetes pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/).
-* The driver creates executors which are also Kubernetes pods and connects to them, and executes application code.
+* The driver creates executors which are also running within Kubernetes pods and connects to them, and executes application code.
 * When the application completes, the executor pods terminate and are cleaned up, but the driver pod persists
 logs and remains in "completed" state in the Kubernetes API till it's eventually garbage collected or manually cleaned up.
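
Once an application reaches this completed state, the driver pod can be inspected and cleaned up with standard `kubectl` commands. A minimal sketch; `<driver-pod-name>` is a placeholder, since actual pod names include a generated suffix:

    # List pods; the driver remains visible after the application finishes
    $ kubectl get pods
    # Fetch the logs persisted by the completed driver pod
    $ kubectl logs <driver-pod-name>
    # Manually clean up the driver pod once its logs are no longer needed
    $ kubectl delete pod <driver-pod-name>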

@@ -68,16 +68,18 @@ building using the supplied script, or manually.
 
 To launch Spark Pi in cluster mode,
 
-    bin/spark-submit \
-      --deploy-mode cluster \
-      --class org.apache.spark.examples.SparkPi \
-      --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
-      --conf spark.kubernetes.namespace=default \
-      --conf spark.executor.instances=5 \
-      --conf spark.app.name=spark-pi \
-      --conf spark.kubernetes.driver.docker.image=<driver-image> \
-      --conf spark.kubernetes.executor.docker.image=<executor-image> \
-      local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
+{% highlight bash %}
+$ bin/spark-submit \
+    --deploy-mode cluster \
+    --class org.apache.spark.examples.SparkPi \
+    --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
+    --conf spark.kubernetes.namespace=default \
+    --conf spark.executor.instances=5 \
+    --conf spark.app.name=spark-pi \
+    --conf spark.kubernetes.driver.docker.image=<driver-image> \
+    --conf spark.kubernetes.executor.docker.image=<executor-image> \
+    local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
+{% endhighlight %}
 
 The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
 `spark.master` in the application's configuration, must be a URL with the format `k8s://<api_server_url>`. Prefixing the
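
The `<k8s-apiserver-host>:<k8s-apiserver-port>` placeholders in the `--master` value refer to the cluster's API server. One way to look these up, using standard `kubectl` (exact output wording varies by version):

    # Prints the cluster's API server address for use in the k8s:// master URL
    $ kubectl cluster-info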
@@ -170,7 +172,7 @@ the spark application.
 ### Namespaces
 
 Kubernetes has the concept of [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
-Namespaces are a way to divide cluster resources between multiple users (via resource quota). Spark on Kubernetes can
+Namespaces are ways to divide cluster resources between multiple users (via resource quota). Spark on Kubernetes can
 use namespaces to launch spark applications. This is through the `--conf spark.kubernetes.namespace` argument to spark-submit.
 
 Kubernetes allows using [ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) to set limits on
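
As a sketch of how namespaces and resource quotas fit together with spark-submit (the `spark-jobs` namespace name and the quota limits are made up for illustration):

    # Create a dedicated namespace for Spark applications
    $ kubectl create namespace spark-jobs
    # Cap what Spark can consume in that namespace (illustrative limits)
    $ kubectl create quota spark-quota -n spark-jobs \
        --hard=pods=10,requests.cpu=20,requests.memory=64Gi
    # Target the namespace at submission time
    $ bin/spark-submit --conf spark.kubernetes.namespace=spark-jobs ...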
@@ -250,7 +252,7 @@ and provide feedback to the development team.
 
 # Configuration
 
-See the [configuration page](configuration.html) for information on Spark configurations. The following configuration is
+See the [configuration page](configuration.html) for information on Spark configurations. The following configurations are
 specific to Spark on Kubernetes.
 
 #### Spark Properties
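
Like any other Spark configuration, the Kubernetes-specific properties can also be set once in `conf/spark-defaults.conf` rather than repeated on every invocation; a sketch reusing the values from the earlier example:

    # Append defaults that spark-submit picks up automatically
    $ cat >> conf/spark-defaults.conf <<'EOF'
    spark.kubernetes.namespace default
    spark.kubernetes.driver.docker.image <driver-image>
    spark.kubernetes.executor.docker.image <executor-image>
    EOF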

sbin/build-push-docker-images.sh

Lines changed: 1 addition & 1 deletion
@@ -64,4 +64,4 @@ else
   push) push;;
   *) usage;;
 esac
-fi
+fi
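
Judging from the `case` arms visible in this hunk, the script dispatches on a `build` or `push` subcommand. A hypothetical invocation, assuming `-r` (repository) and `-t` (tag) options; treat the exact flags as an assumption:

    # Build the Spark driver/executor Docker images, then push them to a registry
    # (repo and tag values are placeholders; flags assumed, not confirmed by this diff)
    $ ./sbin/build-push-docker-images.sh -r <repo> -t <tag> build
    $ ./sbin/build-push-docker-images.sh -r <repo> -t <tag> push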
