
Commit 1d1f3bf

Update Design Pattern - Data Vault - Creating Dimensions from Hub tables.md

1 parent 8248621 commit 1d1f3bf
File tree

1 file changed

+10
-5
lines changed


1000_Design_Patterns/Design Pattern - Data Vault - Creating Dimensions from Hub tables.md

Lines changed: 10 additions & 5 deletions
@@ -1,4 +1,4 @@
-# Design Pattern - Data Vault - Creating Dimensions from Hub tables
+# Design Pattern - Data Vault - Simple Date Math (Joining two Time-Variant Tables)

## Purpose
This design pattern describes how to create a typical ‘Type 2 Dimension’ table (Dimensional Modelling) from a Data Vault or Hybrid EDW model.
@@ -9,9 +9,11 @@ Also known as
Dimensions / Dimensional Modelling
Gaps and islands
Timelines
+
## Applicability
This pattern is only applicable for loading processes from source systems or files to the Reporting Structure Area (of the Presentation Layer). The Helper Area may use similar concepts, but since this is a ‘free-for-all’ part of the ETL Framework it is not mandatory to follow this Design Pattern.
-Structure
+
+## Structure
Creating Dimensions from a Data Vault model essentially means joining the various Hub, Link and Satellite tables together to create a certain hierarchy. In the example displayed in the following diagram, the Dimension that can be generated is a ‘Product’ dimension with the Distribution Channel as a higher level in this dimension.

Business Insights > Design Pattern 019 - Creating Dimensions from Hub tables > BI7.png
@@ -150,19 +152,22 @@ WHERE
ELSE D.EXPIRY_DATE -- smallest of the two expiry dates
END)

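The CASE expression above picks the smallest of the two expiry dates (and, symmetrically, the greatest of the two effective dates) to find the window in which both rows are valid. A minimal Python sketch of that date math (an illustration only; the function name and the inclusive-start/exclusive-end convention are assumptions, not part of the pattern's SQL):

```python
from datetime import date

def overlap(a_eff, a_exp, b_eff, b_exp):
    """Return the validity window shared by two time-variant rows,
    or None when the windows do not intersect.
    Effective dates are treated as inclusive, expiry dates as exclusive."""
    eff = max(a_eff, b_eff)  # greatest of the two effective dates
    exp = min(a_exp, b_exp)  # smallest of the two expiry dates
    return (eff, exp) if eff < exp else None

# A Satellite row valid 2020-2022 joined to a Link-Satellite row open-ended from 2021:
overlap(date(2020, 1, 1), date(2022, 1, 1),
        date(2021, 1, 1), date(9999, 12, 31))
# → (datetime.date(2021, 1, 1), datetime.date(2022, 1, 1))
```

Back-to-back windows (one row expiring exactly when the other becomes effective) return None, which keeps the joined timelines free of zero-length records.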
-Implementation guidelines
+## Implementation guidelines
The easiest way to join multiple tables is a cascading, set-based approach. This is done by joining the Hub and Satellite and treating this as a single set, which is then joined against another similar set of data (for instance a Link and Link-Satellite). The result is a new set of consistent timelines for a certain grain of information. This set can in turn be treated as a single set and joined with the next set (for instance a Hub and Satellite), and so forth.
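One cascade step can be sketched in Python, assuming each row is a tuple of (business key, EFFECTIVE_DATE, EXPIRY_DATE, attributes); the function name and sample data are illustrative, not the pattern's actual ETL:

```python
from datetime import date

def join_timelines(left, right):
    """Join two sets of time-variant rows on their business key and
    intersect their validity windows: the result is a new set of
    consistent timelines that can be joined against the next set."""
    result = []
    for l_key, l_eff, l_exp, l_attr in left:
        for r_key, r_eff, r_exp, r_attr in right:
            if l_key != r_key:
                continue
            eff = max(l_eff, r_eff)  # greatest of the two effective dates
            exp = min(l_exp, r_exp)  # smallest of the two expiry dates
            if eff < exp:            # the validity windows overlap
                result.append((l_key, eff, exp, {**l_attr, **r_attr}))
    return result

# A Hub + Satellite set (two history records) joined to a Link-Satellite set:
hub_sat = [
    ("PROD1", date(1900, 1, 1), date(2021, 6, 1), {"colour": "red"}),
    ("PROD1", date(2021, 6, 1), date(9999, 12, 31), {"colour": "blue"}),
]
link_sat = [("PROD1", date(1900, 1, 1), date(9999, 12, 31), {"channel": "Retail"})]
rows = join_timelines(hub_sat, link_sat)  # two rows with aligned timelines
```

The nested loop stands in for the set-based SQL join; because the output set has the same shape as its inputs, the cascade can continue with the next Hub/Satellite or Link/Link-Satellite set.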
When creating a standard Dimension table it is recommended to assign new surrogate keys for every dimension record. The only reason for this is to prevent a combination of Integration Layer surrogate keys from being present in the associated Fact table; the range of keys can become very wide. This also fits the classic approach to loading Facts and Dimensions, where the Fact table ETL performs a key lookup against the Dimension table. Using Data Vault as the Integration Layer opens up other options as well, but this is a well-known (and well-understood) type of ETL.
The original Integration Layer keys remain attributes of the new Dimension table.
Creating a Type 1 Dimension is easier; only the most recent records need to be joined.
Joining has to be done with < and > selections, which not every ETL tool supports (easily). This may require SQL overrides.
Some ETL tools or databases make the WHERE clause more readable by providing a ‘greatest’ or ‘least’ function.
This approach requires the timelines in all tables to be complete, ensuring referential integrity in the central Data Vault model. This means that every Hub has to have a record in the Satellite table with a start date of ‘01-01-1900’ and one which ends at ‘31-12-9999’ (this can be the same record if there is no history yet). Without these dummy records to complete the timelines, the query to calculate the overlaps becomes very complex: SQL filters the records in the WHERE clause before joining to the other history set, so the selection on the date range has to be moved to the JOIN clause, which makes it impossible to get the EXPIRY_DATE correct in one pass. The workaround is then to select only the EFFECTIVE_DATE values, order them, and join this dataset back to itself to compare each row with the previous one (or the next, depending on the sort order) and derive the EXPIRY_DATE. In this context, adding dummy records to complete the timelines is the easier solution, and it also improves the integrity of the data in the Data Vault model.
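The self-join workaround described here is what SQL window functions now express as LEAD: order the distinct EFFECTIVE_DATE values and close each row with the next one. A small Python sketch of that derivation (HIGH_DATE and the function name are assumptions for illustration):

```python
from datetime import date

HIGH_DATE = date(9999, 12, 31)  # the '31-12-9999' end-of-time date

def derive_expiry(effective_dates):
    """Derive each row's EXPIRY_DATE as the next row's EFFECTIVE_DATE
    (a LEAD-style self-join on the ordered dates); the most recent
    row is closed with the high date."""
    ordered = sorted(set(effective_dates))
    return [
        (eff, ordered[i + 1] if i + 1 < len(ordered) else HIGH_DATE)
        for i, eff in enumerate(ordered)
    ]

derive_expiry([date(2021, 6, 1), date(1900, 1, 1)])
# first row expires when the second becomes effective; the last row is open-ended
```

With the dummy records in place this second pass is unnecessary, which is why completing the timelines is presented as the preferred option.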
-Consequences
+
+## Considerations and consequences
This approach requires the timelines in all tables to be complete, ensuring referential integrity in the central Data Vault model. This means that every Hub has to have a record in the Satellite table with a start date of ‘01-01-1900’ and one which ends at ‘31-12-9999’ (this can be the same record if there is no history yet). Without these dummy records to complete the timelines, the query to calculate the overlaps becomes very complex: SQL filters the records in the WHERE clause before joining to the other history set, so the selection on the date range has to be moved to the JOIN clause, which makes it impossible to get the EXPIRY_DATE correct in one pass. The workaround is then to select only the EFFECTIVE_DATE values, order them, and join this dataset back to itself to compare each row with the previous one (or the next, depending on the sort order) and derive the EXPIRY_DATE. In this context, adding dummy records to complete the timelines is the easier solution, and it also improves the integrity of the data in the Data Vault model.
Known uses
+
This type of ETL process is used to join historical tables together in the Integration Layer.
-Related patterns
+
+## Related patterns
Design Pattern 002 – Generic – Types of history
Design Pattern 006 – Generic – Using Start, Process and End dates
Design Pattern 008 – Data Vault – Loading Hub tables
