This repository was archived by the owner on Jan 9, 2020. It is now read-only.

Commit 14bee00

committed
Adding links to running-on-kubernetes.md
1 parent 4ccf59b commit 14bee00

File tree

7 files changed: +30, -8 lines changed


docs/_layouts/global.html
Lines changed: 1 addition & 0 deletions

@@ -99,6 +99,7 @@
 <li><a href="spark-standalone.html">Spark Standalone</a></li>
 <li><a href="running-on-mesos.html">Mesos</a></li>
 <li><a href="running-on-yarn.html">YARN</a></li>
+<li><a href="running-on-kubernetes.html">Kubernetes</a></li>
 </ul>
 </li>

docs/building-spark.md
Lines changed: 5 additions & 1 deletion

@@ -49,7 +49,7 @@ To create a Spark distribution like those distributed by the
 to be runnable, use `./dev/make-distribution.sh` in the project root directory. It can be configured
 with Maven profile settings and so on like the direct Maven build. Example:
 
-    ./dev/make-distribution.sh --name custom-spark --pip --r --tgz -Psparkr -Phadoop-2.7 -Phive -Phive-thriftserver -Pmesos -Pyarn
+    ./dev/make-distribution.sh --name custom-spark --pip --r --tgz -Psparkr -Phadoop-2.7 -Phive -Phive-thriftserver -Pmesos -Pyarn -Pkubernetes
 
 This will build Spark distribution along with Python pip and R packages. For more information on usage, run `./dev/make-distribution.sh --help`

@@ -90,6 +90,10 @@ like ZooKeeper and Hadoop itself.
 ## Building with Mesos support
 
     ./build/mvn -Pmesos -DskipTests clean package
+
+## Building with Kubernetes support
+
+    ./build/mvn -Pkubernetes -DskipTests clean package
 
 ## Building with Kafka 0.8 support

docs/cluster-overview.md
Lines changed: 2 additions & 5 deletions

@@ -52,11 +52,8 @@ The system currently supports three cluster managers:
 * [Apache Mesos](running-on-mesos.html) -- a general cluster manager that can also run Hadoop MapReduce
   and service applications.
 * [Hadoop YARN](running-on-yarn.html) -- the resource manager in Hadoop 2.
-* [Kubernetes (experimental)](https://github.com/apache-spark-on-k8s/spark) -- In addition to the above,
-  there is experimental support for Kubernetes. Kubernetes is an open-source platform
-  for providing container-centric infrastructure. Kubernetes support is being actively
-  developed in an [apache-spark-on-k8s](https://github.com/apache-spark-on-k8s/) Github organization.
-  For documentation, refer to that project's README.
+* [Kubernetes](running-on-kubernetes.html) -- [Kubernetes](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/)
+  is an open-source platform that provides container-centric infrastructure.
 
 A third-party project (not supported by the Spark project) exists to add support for
 [Nomad](https://github.com/hashicorp/nomad-spark) as a cluster manager.

docs/configuration.md
Lines changed: 2 additions & 0 deletions

@@ -2376,6 +2376,8 @@ can be found on the pages for each mode:
 
 #### [Mesos](running-on-mesos.html#configuration)
 
+#### [Kubernetes](running-on-kubernetes.html#configuration)
+
 #### [Standalone Mode](spark-standalone.html#cluster-launch-scripts)
 
 # Environment Variables

docs/index.md
Lines changed: 2 additions & 1 deletion

@@ -81,6 +81,7 @@ options for deployment:
 * [Standalone Deploy Mode](spark-standalone.html): simplest way to deploy Spark on a private cluster
 * [Apache Mesos](running-on-mesos.html)
 * [Hadoop YARN](running-on-yarn.html)
+* [Kubernetes](running-on-kubernetes.html)
 
 # Where to Go from Here

@@ -112,7 +113,7 @@ options for deployment:
 * [Mesos](running-on-mesos.html): deploy a private cluster using
   [Apache Mesos](http://mesos.apache.org)
 * [YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
-* [Kubernetes (experimental)](https://github.com/apache-spark-on-k8s/spark): deploy Spark on top of Kubernetes
+* [Kubernetes (experimental)](running-on-kubernetes.html): deploy Spark on top of Kubernetes
 
 **Other Documents:**

docs/running-on-yarn.md
Lines changed: 2 additions & 1 deletion

@@ -18,7 +18,8 @@ Spark application's configuration (driver, executors, and the AM when running in
 
 There are two deploy modes that can be used to launch Spark applications on YARN. In `cluster` mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. In `client` mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
 
-Unlike [Spark standalone](spark-standalone.html) and [Mesos](running-on-mesos.html) modes, in which the master's address is specified in the `--master` parameter, in YARN mode the ResourceManager's address is picked up from the Hadoop configuration. Thus, the `--master` parameter is `yarn`.
+Unlike [Spark standalone](spark-standalone.html), [Mesos](running-on-mesos.html) and [Kubernetes](running-on-kubernetes.html) modes,
+in which the master's address is specified in the `--master` parameter, in YARN mode the ResourceManager's address is picked up from the Hadoop configuration. Thus, the `--master` parameter is `yarn`.
 
 To launch a Spark application in `cluster` mode:

docs/submitting-applications.md
Lines changed: 16 additions & 0 deletions

@@ -127,6 +127,16 @@ export HADOOP_CONF_DIR=XXX
   http://path/to/examples.jar \
   1000
 
+# Run on a Kubernetes cluster in cluster deploy mode
+./bin/spark-submit \
+  --class org.apache.spark.examples.SparkPi \
+  --master k8s://xx.yy.zz.ww:443 \
+  --deploy-mode cluster \
+  --executor-memory 20G \
+  --num-executors 50 \
+  http://path/to/examples.jar \
+  1000
+
 {% endhighlight %}
 
 # Master URLs

@@ -155,6 +165,12 @@ The master URL passed to Spark can be in one of the following formats:
 <code>client</code> or <code>cluster</code> mode depending on the value of <code>--deploy-mode</code>.
 The cluster location will be found based on the <code>HADOOP_CONF_DIR</code> or <code>YARN_CONF_DIR</code> variable.
 </td></tr>
+<tr><td> <code>k8s://HOST:PORT</code> </td><td> Connect to a <a href="running-on-kubernetes.html">Kubernetes</a> cluster in
+<code>cluster</code> mode; client mode is currently unsupported and will be added in a future release.
+<code>HOST</code> and <code>PORT</code> refer to the <a href="https://kubernetes.io/docs/reference/generated/kube-apiserver/">Kubernetes API server</a>.
+The connection uses TLS by default; to force an unsecured connection, use
+<code>k8s://http://HOST:PORT</code>.
+</td></tr>
 </table>
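The `k8s://HOST:PORT` resolution rules added to the master URL table can be sketched as a small helper. This is an illustrative model of the documented behavior only, not Spark's actual parsing code; the function name is hypothetical.

```python
# Sketch of the documented k8s:// master URL behavior: TLS (https) is the
# default, and an explicit http:// after the k8s:// prefix forces an
# unsecured connection. Illustrative only; not Spark's implementation.
def resolve_k8s_master(master: str) -> str:
    prefix = "k8s://"
    if not master.startswith(prefix):
        raise ValueError("not a Kubernetes master URL: " + master)
    rest = master[len(prefix):]
    # An explicit scheme wins; otherwise default to TLS.
    if rest.startswith("http://") or rest.startswith("https://"):
        return rest
    return "https://" + rest

print(resolve_k8s_master("k8s://xx.yy.zz.ww:443"))        # https://xx.yy.zz.ww:443
print(resolve_k8s_master("k8s://http://xx.yy.zz.ww:8080"))  # http://xx.yy.zz.ww:8080
```

As the second call shows, only an explicit `http://` scheme disables TLS; a bare `HOST:PORT` is always treated as a secure endpoint.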

0 commit comments
