This repository was archived by the owner on Jan 9, 2020. It is now read-only.
[SPARK-21123][DOCS][STRUCTURED STREAMING] Options for file stream source are in a wrong table
## What changes were proposed in this pull request?
The description for several options of File Source for structured streaming appeared in the File Sink description instead.
This pull request has two commits: the first fixes the table as it appeared in Spark 2.1, and the second handles an additional option added in Spark 2.2.
## How was this patch tested?
Built the documentation with `SKIP_API=1 jekyll build` and visually inspected the structured streaming programming guide.
The original documentation was written by tdas and lw-lin.
Author: assafmendelson <[email protected]>
Closes apache#18342 from assafmendelson/spark-21123.
(cherry picked from commit 66a792c)
Signed-off-by: Shixiong Zhu <[email protected]>
File changed: `docs/structured-streaming-programming-guide.md` (15 additions, 13 deletions)
```diff
@@ -510,7 +510,20 @@ Here are the details of all the sources in Spark.
         <td><b>File source</b></td>
         <td>
         <code>path</code>: path to the input directory, and common to all file formats.
-        <br/><br/>
+        <br/>
+        <code>maxFilesPerTrigger</code>: maximum number of new files to be considered in every trigger (default: no max)
+        <br/>
+        <code>latestFirst</code>: whether to processs the latest new files first, useful when there is a large backlog of files (default: false)
+        <br/>
+        <code>fileNameOnly</code>: whether to check new files based on only the filename instead of on the full path (default: false). With this set to `true`, the following files would be considered as the same file, because their filenames, "dataset.txt", are the same:
+        <br/>
+        · "file:///dataset.txt"<br/>
+        · "s3://a/dataset.txt"<br/>
+        · "s3n://a/b/dataset.txt"<br/>
+        · "s3a://a/b/c/dataset.txt"<br/>
+        <br/>
+
+        <br/>
         For file-format-specific options, see the related methods in <code>DataStreamReader</code>
```
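The `fileNameOnly` behavior moved in this hunk can be illustrated outside Spark: when only the basename of each URI is compared, the four example paths collapse to a single file identity. A minimal sketch in plain Python (the `file_key` helper is illustrative, not Spark API):

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

def file_key(uri: str, file_name_only: bool) -> str:
    """Key used to decide whether a file was already seen.

    With file_name_only=True, only the basename is compared,
    mirroring the fileNameOnly option described in the table.
    """
    path = urlparse(uri).path
    return PurePosixPath(path).name if file_name_only else uri

uris = [
    "file:///dataset.txt",
    "s3://a/dataset.txt",
    "s3n://a/b/dataset.txt",
    "s3a://a/b/c/dataset.txt",
]

# All four URIs collapse to one key when only the filename is checked.
print({file_key(u, file_name_only=True) for u in uris})        # {'dataset.txt'}
print(len({file_key(u, file_name_only=False) for u in uris}))  # 4
```

This is why the docs warn that `fileNameOnly=true` treats files on different stores (file://, s3://, s3n://, s3a://) as duplicates whenever their basenames match.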
```diff
@@ -1234,18 +1247,7 @@ Here are the details of all the sinks in Spark.
         <td>Append</td>
         <td>
         <code>path</code>: path to the output directory, must be specified.
-        <br/>
-        <code>maxFilesPerTrigger</code>: maximum number of new files to be considered in every trigger (default: no max)
-        <br/>
-        <code>latestFirst</code>: whether to processs the latest new files first, useful when there is a large backlog of files (default: false)
-        <br/>
-        <code>fileNameOnly</code>: whether to check new files based on only the filename instead of on the full path (default: false). With this set to `true`, the following files would be considered as the same file, because their filenames, "dataset.txt", are the same:
-        <br/>
-        · "file:///dataset.txt"<br/>
-        · "s3://a/dataset.txt"<br/>
-        · "s3n://a/b/dataset.txt"<br/>
-        · "s3a://a/b/c/dataset.txt"<br/>
-        <br/>
+        <br/><br/>
         For file-format-specific options, see the related methods in DataFrameWriter
```