Commit 7570eab

Marcelo Vanzin authored and squito committed
[SPARK-22788][STREAMING] Use correct hadoop config for fs append support.
Still look at the old one in case any Spark user is setting it explicitly, though.

Author: Marcelo Vanzin <[email protected]>

Closes #19983 from vanzin/SPARK-22788.
1 parent 9962390 commit 7570eab

File tree

1 file changed: +3 −1 lines changed

streaming/src/main/scala/org/apache/spark/streaming/util/HdfsUtils.scala

Lines changed: 3 additions & 1 deletion

@@ -29,7 +29,9 @@ private[streaming] object HdfsUtils {
     // If the file exists and we have append support, append instead of creating a new file
     val stream: FSDataOutputStream = {
       if (dfs.isFile(dfsPath)) {
-        if (conf.getBoolean("hdfs.append.support", false) || dfs.isInstanceOf[RawLocalFileSystem]) {
+        if (conf.getBoolean("dfs.support.append", true) ||
+            conf.getBoolean("hdfs.append.support", false) ||
+            dfs.isInstanceOf[RawLocalFileSystem]) {
           dfs.append(dfsPath)
         } else {
           throw new IllegalStateException("File exists and there is no append support!")
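The patch prefers the current Hadoop key `dfs.support.append` (which defaults to `true`) while still honoring the legacy key `hdfs.append.support` in case a user sets it explicitly. A minimal, self-contained sketch of that key-fallback pattern, using a plain `Map` as a hypothetical stand-in for Hadoop's `Configuration` (the names `getBoolean` and `appendSupported` here are illustrative, not Spark APIs):

```scala
// Hypothetical stand-in for Configuration.getBoolean: look up a key and
// return the default when it is absent or not parsable as a boolean.
def getBoolean(conf: Map[String, String], key: String, default: Boolean): Boolean =
  conf.get(key).flatMap(v => scala.util.Try(v.toBoolean).toOption).getOrElse(default)

// Mirrors the patched check: consult the current key first (default true),
// then fall back to the legacy key for users who still set it explicitly.
def appendSupported(conf: Map[String, String]): Boolean =
  getBoolean(conf, "dfs.support.append", default = true) ||
  getBoolean(conf, "hdfs.append.support", default = false)
```

With this ordering, an empty configuration reports append support (the new default), and a configuration that disables the new key but enables the legacy one still reports support, matching the commit's intent of not breaking users of the old setting.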

0 commit comments
