This repository was archived by the owner on Jan 9, 2020. It is now read-only.

Commit 67abb93 — "Review comments" (1 parent: 679b5c7)

2 files changed: +2, −2 lines

docs/running-on-kubernetes.md (1 addition, 1 deletion)

@@ -111,7 +111,7 @@ This URI is the location of the example jar that is already in the Docker image.
 
 ## Dependency Management
 
-If your application's dependencies are all hosted in remote locations like HDFS or http servers, they may be referred to
+If your application's dependencies are all hosted in remote locations like HDFS or HTTP servers, they may be referred to
 by their appropriate remote URIs. Also, application dependencies can be pre-mounted into custom-built Docker images.
 Those dependencies can be added to the classpath by referencing them with `local://` URIs and/or setting the
 `SPARK_EXTRA_CLASSPATH` environment variable in your Dockerfiles.
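The hunk above describes pre-mounting dependencies into a custom image and exposing them via `SPARK_EXTRA_CLASSPATH`. A minimal sketch of that pattern follows; the base image name, jar name, and paths are hypothetical placeholders, not taken from the diff:

```dockerfile
# Hypothetical Dockerfile fragment: pre-mount a dependency jar into the
# image and add it to the classpath via SPARK_EXTRA_CLASSPATH.
# Base image and paths are placeholders for illustration only.
FROM my-spark-base:latest
COPY my-dep.jar /opt/spark/extra/my-dep.jar
ENV SPARK_EXTRA_CLASSPATH=/opt/spark/extra/my-dep.jar
```

At submit time the same jar could then be referenced with a `local://` URI (e.g. `--jars local:///opt/spark/extra/my-dep.jar`), telling Spark to use the copy already inside the image rather than downloading it from a remote location.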

docs/running-on-yarn.md (1 addition, 1 deletion)

@@ -18,7 +18,7 @@ Spark application's configuration (driver, executors, and the AM when running in
 
 There are two deploy modes that can be used to launch Spark applications on YARN. In `cluster` mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. In `client` mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
 
-Unlike [Spark standalone](spark-standalone.html), [Mesos](running-on-mesos.html) and [Kubernetes](running-on-kubernetes.html) modes,
+Unlike other cluster managers supported by Spark
 in which the master's address is specified in the `--master` parameter, in YARN mode the ResourceManager's address is picked up from the Hadoop configuration. Thus, the `--master` parameter is `yarn`.
 
 To launch a Spark application in `cluster` mode:
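The hunk ends just before the docs' launch example. A sketch of such a `spark-submit` invocation in `cluster` mode; the class name, jar, and resource options are placeholders:

```shell
# Sketch of launching on YARN in `cluster` mode. Note that --master is
# simply `yarn`: the ResourceManager's address is picked up from the
# Hadoop configuration, not passed on the command line.
# Class name, jar path, and options below are placeholders.
$ ./bin/spark-submit \
    --class org.example.MyApp \
    --master yarn \
    --deploy-mode cluster \
    --executor-memory 2g \
    my-app.jar
```

In `client` mode the same command would use `--deploy-mode client`, keeping the driver in the local client process while the application master only requests resources from YARN.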
