
Commit 96c04f1

Marcelo Vanzin authored and cloud-fan committed
[SPARK-21159][CORE] Don't try to connect to launcher in standalone cluster mode.
Monitoring for standalone cluster mode is not implemented (see SPARK-11033), but the same scheduler implementation is used, and if it tries to connect to the launcher it will fail. So fix the scheduler so it only tries that in client mode; cluster mode applications will be correctly launched and will work, but monitoring through the launcher handle will not be available.

Tested by running a cluster mode app with "SparkLauncher.startApplication".

Author: Marcelo Vanzin <[email protected]>

Closes apache#18397 from vanzin/SPARK-21159.

(cherry picked from commit bfd73a7)
Signed-off-by: Wenchen Fan <[email protected]>
1 parent a3088d2 commit 96c04f1
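
For illustration only, a minimal sketch of the test scenario described in the commit message: launching a standalone cluster mode app through the public SparkLauncher.startApplication API. The jar path, main class, and master URL below are hypothetical placeholders, not values from the commit.

import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}

object ClusterModeLaunchExample {
  def main(args: Array[String]): Unit = {
    // Hypothetical app jar, main class, and master URL for illustration.
    val handle: SparkAppHandle = new SparkLauncher()
      .setAppResource("/path/to/example-app.jar")
      .setMainClass("com.example.ExampleApp")
      .setMaster("spark://master:7077")
      .setDeployMode("cluster")
      .startApplication()

    // With this fix, the launch itself succeeds. However, because the
    // cluster-mode scheduler no longer connects back to the launcher
    // (monitoring is not implemented for standalone cluster mode, see
    // SPARK-11033), the handle does not receive state updates from the app.
    println(s"state after launch: ${handle.getState}")
  }
}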

File tree

1 file changed (+7 −1 lines changed)

core/src/main/scala/org/apache/spark/scheduler/cluster/StandaloneSchedulerBackend.scala

Lines changed: 7 additions & 1 deletion
@@ -58,7 +58,13 @@ private[spark] class StandaloneSchedulerBackend(
 
   override def start() {
     super.start()
-    launcherBackend.connect()
+
+    // SPARK-21159. The scheduler backend should only try to connect to the launcher when in client
+    // mode. In cluster mode, the code that submits the application to the Master needs to connect
+    // to the launcher instead.
+    if (sc.deployMode == "client") {
+      launcherBackend.connect()
+    }
 
     // The endpoint for executors to talk to us
     val driverUrl = RpcEndpointAddress(
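
For contrast with the guarded connect above: in client mode, where sc.deployMode == "client" holds, launcherBackend.connect() still runs, so monitoring through the launcher handle keeps working. A sketch of that client-mode monitoring path, using the public SparkAppHandle.Listener API; the jar, class, and master are again hypothetical.

import java.util.concurrent.CountDownLatch

import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}

object ClientModeMonitorExample {
  def main(args: Array[String]): Unit = {
    val finished = new CountDownLatch(1)
    new SparkLauncher()
      .setAppResource("/path/to/example-app.jar") // hypothetical
      .setMainClass("com.example.ExampleApp")     // hypothetical
      .setMaster("spark://master:7077")           // hypothetical
      .setDeployMode("client")
      .startApplication(new SparkAppHandle.Listener {
        // In client mode the scheduler backend connects to the launcher,
        // so these callbacks fire as the application changes state.
        override def stateChanged(h: SparkAppHandle): Unit = {
          println(s"state: ${h.getState}")
          if (h.getState.isFinal) finished.countDown()
        }
        override def infoChanged(h: SparkAppHandle): Unit = ()
      })
    finished.await()
  }
}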
