Commit d35c7d6

Change the term from Cores to Nodes
The documentation never defines what a core is, but it uses the term nodes throughout to represent the number of instances that will run. The table headers should therefore say nodes, not cores, since I couldn't find any definition of cores anywhere in the IR documentation.
1 parent 3871f21 commit d35c7d6

File tree

1 file changed (+1, −1 lines changed)


articles/data-factory/concepts-integration-runtime-performance.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ Data flows distribute the data processing over different nodes in a Spark cluste
 
 The default cluster size is four driver nodes and four worker nodes (small). As you process more data, larger clusters are recommended. Below are the possible sizing options:
 
-| Worker cores | Driver cores | Total cores | Notes |
+| Worker Nodes | Driver Nodes | Total Nodes | Notes |
 | ------------ | ------------ | ----------- | ----- |
 | 4 | 4 | 8 | Small |
 | 8 | 8 | 16 | Medium |
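For context, the sizing table in this article maps to the compute settings of a Managed (Azure) integration runtime. A minimal sketch of such a definition, assuming the ARM-style `dataFlowProperties` schema (`computeType`, `coreCount`, `timeToLive`) for a Managed IR sized to the "Small" option above, verify property names against the current Data Factory docs:

```json
{
  "name": "DataFlowIntegrationRuntime",
  "properties": {
    "type": "Managed",
    "typeProperties": {
      "computeProperties": {
        "location": "AutoResolve",
        "dataFlowProperties": {
          "computeType": "General",
          "coreCount": 8,
          "timeToLive": 10
        }
      }
    }
  }
}
```

If that schema is right, the underlying property is named `coreCount`, which may explain why the table originally said "cores" even though the prose speaks of nodes.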

0 commit comments
