This repository was archived by the owner on Jan 9, 2020. It is now read-only.

Commit cf10fa8

sharkdtu authored and Marcelo Vanzin committed
[SPARK-21138][YARN] Cannot delete staging dir when the clusters of "spark.yarn.stagingDir" and "spark.hadoop.fs.defaultFS" are different
## What changes were proposed in this pull request?

When I set different clusters for "spark.hadoop.fs.defaultFS" and "spark.yarn.stagingDir" as follows:

```
spark.hadoop.fs.defaultFS  hdfs://tl-nn-tdw.tencent-distribute.com:54310
spark.yarn.stagingDir      hdfs://ss-teg-2-v2/tmp/spark
```

the staging directory cannot be deleted; the attempt fails with the following message:

```
java.lang.IllegalArgumentException: Wrong FS: hdfs://ss-teg-2-v2/tmp/spark/.sparkStaging/application_1496819138021_77618, expected: hdfs://tl-nn-tdw.tencent-distribute.com:54310
```

## How was this patch tested?

Existing tests.

Author: sharkdtu <[email protected]>

Closes apache#18352 from sharkdtu/master.

(cherry picked from commit 3d4d11a)

Signed-off-by: Marcelo Vanzin <[email protected]>
1 parent e329bea commit cf10fa8
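
To make the failure mode concrete, here is a minimal standalone sketch (not part of the patch) of the behavior the commit message describes: a `FileSystem` client bound to `fs.defaultFS` rejects paths that live on any other cluster, while resolving the filesystem from the path itself selects the right cluster. The cluster URIs are the ones quoted above; actually running this requires the Hadoop client libraries on the classpath and reachable HDFS clusters.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object StagingDirFsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Default filesystem, as configured in the commit message.
    conf.set("fs.defaultFS", "hdfs://tl-nn-tdw.tencent-distribute.com:54310")

    // The staging directory lives on a *different* HDFS cluster.
    val stagingDir = new Path(
      "hdfs://ss-teg-2-v2/tmp/spark/.sparkStaging/application_1496819138021_77618")

    // Pre-patch: a client bound to fs.defaultFS rejects paths from other
    // clusters with "Wrong FS: ..., expected: hdfs://tl-nn-tdw...".
    val defaultFs = FileSystem.get(conf)
    // defaultFs.delete(stagingDir, true)  // throws IllegalArgumentException

    // Post-patch: derive the client from the path's own scheme/authority,
    // so the delete is issued against the staging cluster.
    val stagingFs = stagingDir.getFileSystem(conf)
    stagingFs.delete(stagingDir, true)
  }
}
```

`Path.getFileSystem` picks the client from the path's scheme and authority (here `hdfs://ss-teg-2-v2`), which is exactly what the diff below switches to.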

File tree

1 file changed (+3 lines, -4 lines)


resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala

Lines changed: 3 additions & 4 deletions
```diff
@@ -209,8 +209,6 @@ private[spark] class ApplicationMaster(
 
       logInfo("ApplicationAttemptId: " + appAttemptId)
 
-      val fs = FileSystem.get(yarnConf)
-
       // This shutdown hook should run *after* the SparkContext is shut down.
       val priority = ShutdownHookManager.SPARK_CONTEXT_SHUTDOWN_PRIORITY - 1
       ShutdownHookManager.addShutdownHook(priority) { () =>
@@ -232,7 +230,7 @@ private[spark] class ApplicationMaster(
           // we only want to unregister if we don't want the RM to retry
           if (finalStatus == FinalApplicationStatus.SUCCEEDED || isLastAttempt) {
             unregister(finalStatus, finalMsg)
-            cleanupStagingDir(fs)
+            cleanupStagingDir()
           }
         }
       }
@@ -530,7 +528,7 @@ private[spark] class ApplicationMaster(
   /**
    * Clean up the staging directory.
    */
-  private def cleanupStagingDir(fs: FileSystem) {
+  private def cleanupStagingDir(): Unit = {
     var stagingDirPath: Path = null
     try {
       val preserveFiles = sparkConf.get(PRESERVE_STAGING_FILES)
@@ -541,6 +539,7 @@ private[spark] class ApplicationMaster(
           return
         }
         logInfo("Deleting staging directory " + stagingDirPath)
+        val fs = stagingDirPath.getFileSystem(yarnConf)
         fs.delete(stagingDirPath, true)
       }
     } catch {
```
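
As the hunks above show, run() previously created a single FileSystem handle from yarnConf up front (which resolves to fs.defaultFS) and passed it into cleanupStagingDir from the shutdown hook. Resolving the filesystem from stagingDirPath at deletion time instead means the delete is always issued against whichever cluster spark.yarn.stagingDir points at, even when it differs from fs.defaultFS, and the shutdown hook no longer needs to capture a filesystem handle at all.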
