Commit 9795183 (1 parent: 38c23ba)

2024_10 - Fix monthly broken links - sreekzz

File tree

1 file changed (+3, -3 lines)


articles/hdinsight/spark/apache-spark-streaming-overview.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -90,7 +90,7 @@ ssc.start()
 ssc.awaitTermination()
 ```
 
-For details on the Spark Streaming API, see [Apache Spark Streaming Programming Guide](https://people.apache.org/~pwendell/spark-releases/latest/streaming-programming-guide.html).
+For details on the Spark Streaming API, see [Apache Spark Streaming Programming Guide](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html).
 
 The following sample application is self-contained, so you can run it inside a [Jupyter Notebook](apache-spark-jupyter-notebook-kernels.md). This example creates a mock data source in the class DummySource that outputs the value of a counter and the current time in milliseconds every five seconds. A new StreamingContext object has a batch interval of 30 seconds. Every time a batch is created, the streaming application examines the RDD produced, converts the RDD to a Spark DataFrame, and creates a temporary table over the DataFrame.

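The pattern described in the hunk's context paragraph (a mock receiver feeding a 30-second-batch StreamingContext, with each batch registered as a temporary table) can be sketched as follows. This is a minimal, hedged reconstruction from the paragraph's description, not the article's exact code; `DummySource` here is my re-sketch of the mock receiver it names, and `demo_numbers` is an illustrative table name.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.receiver.Receiver

// Mock data source, per the paragraph's description: emits
// (counter, current time in milliseconds) every five seconds.
class DummySource extends Receiver[(Int, Long)](StorageLevel.MEMORY_AND_DISK_2) {
  def onStart(): Unit = {
    new Thread("Dummy Source") {
      override def run(): Unit = {
        var counter = 0
        while (!isStopped()) {
          store((counter, System.currentTimeMillis))
          counter += 1
          Thread.sleep(5000)
        }
      }
    }.start()
  }
  def onStop(): Unit = {}
}

val spark = SparkSession.builder.appName("StreamToTable").getOrCreate()

// StreamingContext with a 30-second batch interval, as in the article.
val ssc = new StreamingContext(spark.sparkContext, Seconds(30))
val stream = ssc.receiverStream(new DummySource())

// For each 30-second batch: convert the RDD to a DataFrame and expose it
// as a temporary view that can be queried with Spark SQL.
stream.foreachRDD { rdd =>
  import spark.implicits._
  val df = rdd.toDF("value", "time")
  df.createOrReplaceTempView("demo_numbers")
}

ssc.start()
```

This runs only on a cluster (or local Spark install) with the Spark Streaming and Spark SQL dependencies available, e.g. pasted into an HDInsight Spark Jupyter notebook.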
```diff
@@ -229,7 +229,7 @@ After the first minute, there are 12 entries - six entries from each of the two
 | 11 | 1497316344339
 | 12 | 1497316349361
 
-The sliding window functions available in the Spark Streaming API include window, countByWindow, reduceByWindow, and countByValueAndWindow. For details on these functions, see [Transformations on DStreams](https://people.apache.org/~pwendell/spark-releases/latest/streaming-programming-guide.html#transformations-on-dstreams).
+The sliding window functions available in the Spark Streaming API include window, countByWindow, reduceByWindow, and countByValueAndWindow. For details on these functions, see [Transformations on DStreams](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html).
 
 ## Checkpointing
 
```
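The four window operations named in that context line can be sketched as below. This is a hedged illustration, assuming an existing StreamingContext `ssc` with the article's 30-second batch interval and a `DStream[String]` named `lines`; window and slide durations must be multiples of the batch interval.

```scala
import org.apache.spark.streaming.Seconds

// Assumed inputs (illustrative, not from the diff):
// val ssc   = new StreamingContext(sc, Seconds(30))
// val lines = ssc.socketTextStream("localhost", 9999)

// window: re-emit the last 60 seconds of data every 30 seconds.
val windowed = lines.window(Seconds(60), Seconds(30))

// countByWindow: count of all elements in each 60-second window.
// Requires a checkpoint directory, e.g. ssc.checkpoint("/tmp/ckpt").
val counts = lines.countByWindow(Seconds(60), Seconds(30))

// reduceByWindow: fold the window's elements with an associative function;
// here, keep the longest line seen in the window.
val longest = lines.reduceByWindow(
  (a, b) => if (a.length > b.length) a else b, Seconds(60), Seconds(30))

// countByValueAndWindow: occurrences of each distinct value in the window.
// Also requires checkpointing.
val valueCounts = lines.countByValueAndWindow(Seconds(60), Seconds(30))
```

The checkpoint requirement for the counting variants is one reason the article's next section covers checkpointing.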
```diff
@@ -246,5 +246,5 @@ The status of all applications can also be checked with a GET request against a
 ## Next steps
 
 * [Create an Apache Spark cluster in HDInsight](../hdinsight-hadoop-create-linux-clusters-portal.md)
-* [Apache Spark Streaming Programming Guide](https://people.apache.org/~pwendell/spark-releases/latest/streaming-programming-guide.html)
+* [Apache Spark Streaming Programming Guide](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html)
 * [Overview of Apache Spark Structured Streaming](apache-spark-structured-streaming-overview.md)
```
