
Commit 1ea01b9

Merge pull request #127330 from changeworld/patch-14
Fix typo: reducting -> reducing

2 parents: 5c5a67f + 13cb3b5

File tree

1 file changed: +1 −1 lines changed


articles/hdinsight/spark/optimize-memory-usage.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ If you're using Apache Hadoop YARN, then YARN controls the memory used by all co
 To address 'out of memory' messages, try:

-* Review DAG Management Shuffles. Reduce by map-side reducting, pre-partition (or bucketize) source data, maximize single shuffles, and reduce the amount of data sent.
+* Review DAG Management Shuffles. Reduce by map-side reducing, pre-partition (or bucketize) source data, maximize single shuffles, and reduce the amount of data sent.
 * Prefer `ReduceByKey` with its fixed memory limit to `GroupByKey`, which provides aggregations, windowing, and other functions but has an unbounded memory limit.
 * Prefer `TreeReduce`, which does more work on the executors or partitions, to `Reduce`, which does all work on the driver.
 * Use DataFrames rather than the lower-level RDD objects.
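The corrected phrase ("map-side reducing") and the `ReduceByKey` vs. `GroupByKey` tip describe the same idea: combining values per key as records stream by, instead of buffering every value for a key before aggregating. The sketch below is a minimal pure-Python illustration of that memory difference (it does not use Spark; the function names `group_by_key_style` and `reduce_by_key_style` are illustrative stand-ins for the Spark operations, not real APIs):

```python
from collections import defaultdict

def group_by_key_style(records):
    """groupByKey-style: buffer all values for each key, then aggregate.
    Peak memory grows with the total number of values."""
    groups = defaultdict(list)
    for key, value in records:
        groups[key].append(value)   # every value held in memory at once
    return {k: sum(vs) for k, vs in groups.items()}

def reduce_by_key_style(records):
    """reduceByKey-style: fold each value into a running per-key total as
    it arrives (map-side reducing). Peak memory is bounded by the number
    of distinct keys, regardless of how many records flow through."""
    totals = defaultdict(int)
    for key, value in records:
        totals[key] += value        # one accumulator per key
    return dict(totals)

records = [("a", 1), ("b", 2), ("a", 3), ("b", 4)]
print(reduce_by_key_style(records))   # {'a': 4, 'b': 6}
```

Both functions return the same totals; only the peak memory differs, which is why the article prefers the reduce-by-key pattern when the aggregation allows it.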
