Commit c648793 ("Fix punctuations"), 1 parent: 2a5aa8b

File tree: 1 file changed, +2 −2 lines


content/blog/apache-spark-unleashing-big-data-with-rdds-dataframes-and-beyond.md

Lines changed: 2 additions & 2 deletions
@@ -187,13 +187,13 @@ For example, to filter out even numbers from a dataframe, you would use:
 divBy2 = myRange.where("number % 2 = 0") # myRange is a dataframe
 ```
 
-This code performs a transformation but produces no immediate output. That’s because they are **lazy**, meaning they do not execute immediately; instead, Spark builds a Directed Acyclic Graph (DAG) of transformations that will be executed only when an **action** is triggered. Transformations are the heart of Spark’s business logic and can be of two types: narrow and wide.
+This code performs a transformation but produces no immediate output. That’s because they are **lazy**, meaning they do not execute immediately; instead, Spark builds a Directed Acyclic Graph (DAG) of transformations that will be executed only when an **action** is triggered. Transformations are the heart of Spark’s business logic and can be of two types: **narrow and wide**.
 
 ### Narrow Transformations
 
 In a **narrow transformation**, each partition of the parent RDD/DataFrame contributes to only one partition of the child RDD/DataFrame. Data does not move across partitions, so the operation is **local** to the same worker node. These are efficient because they avoid **shuffling** (data transfer between nodes).
 
-Examples: `map` ,`filter`
+Examples: `map`, `filter`
 
 <p align="center">
   <img width="400px" src="/images/blog/apache-spark-unleashing-big-data-with-rdds-dataframes-and-beyond/narrow-transformation.png" alt="Spark Narrow Transformation">
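The diff above describes Spark's lazy-evaluation model: transformations only record a plan, and an action triggers execution. As a conceptual sketch in plain Python (this is a toy stand-in, not Spark's actual implementation; the `LazyFrame` class and its methods are hypothetical, merely named after the real `where`/`collect` API), the idea can be mimicked by recording transformations in a list and applying them only when an action is called:

```python
class LazyFrame:
    """Toy stand-in for a Spark DataFrame: transformations are
    recorded lazily and applied only when an action runs."""

    def __init__(self, data, plan=None):
        self.data = list(data)
        self.plan = plan or []  # the "DAG": an ordered list of pending steps

    def where(self, predicate):
        # Transformation: returns a new LazyFrame; nothing executes yet.
        return LazyFrame(self.data, self.plan + [("filter", predicate)])

    def select(self, fn):
        # Another narrow transformation: element-wise map, also deferred.
        return LazyFrame(self.data, self.plan + [("map", fn)])

    def collect(self):
        # Action: only now is the recorded plan actually executed.
        rows = self.data
        for kind, fn in self.plan:
            if kind == "map":
                rows = [fn(r) for r in rows]
            else:  # "filter"
                rows = [r for r in rows if fn(r)]
        return rows


myRange = LazyFrame(range(10))
divBy2 = myRange.where(lambda n: n % 2 == 0)  # no work happens here
print(divBy2.collect())                       # [0, 2, 4, 6, 8]
```

Note that `where` builds on both the source data and the accumulated plan, so chaining transformations grows the plan rather than touching the data, mirroring how Spark defers all work until an action such as `collect()` or `count()`.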
