
Commit 8248621

Update Design Pattern - Data Vault - Creating Dimensions from Hub tables.md
1 parent acc7a9b commit 8248621

File tree: 1 file changed (+22, -21 lines)

1000_Design_Patterns/Design Pattern - Data Vault - Creating Dimensions from Hub tables.md

Lines changed: 22 additions & 21 deletions
@@ -1,22 +1,23 @@
# Design Pattern - Data Vault - Creating Dimensions from Hub tables

## Purpose
This design pattern describes how to create a typical ‘Type 2 Dimension’ table (Dimensional Modelling) from a Data Vault or Hybrid EDW model.

## Motivation
To move from a Data Vault (or other Hybrid) model to a Kimball-style Star Schema or similar requires various tables that store historical data to be joined to each other. This is a recurring step which, if done properly, makes it easy to change dimension structures without losing history. Merging the various historic sets of data is seen as one of the more complex steps in a Data Vault (or similar) environment. The pattern is called ‘creating Dimensions from Hub tables’ because Hubs are the main entities that are linked together to form a Dimension using their historical information and relationships.
## Also known as

- Dimensions / Dimensional Modelling
- Gaps and islands
- Timelines
## Applicability
This pattern is only applicable to loading processes from source systems or files into the Reporting Structure Area (of the Presentation Layer). The Helper Area may use similar concepts, but since this is a ‘free-for-all’ part of the ETL Framework it is not mandatory to follow this Design Pattern.
## Structure
Creating Dimensions from a Data Vault model essentially means joining the various Hub, Link and Satellite tables together to create a certain hierarchy. In the example displayed in the following diagram, the Dimension that can be generated is a ‘Product’ dimension with the Distribution Channel as a higher level in this dimension.

Figure 1: Example Data Vault model (Business Insights > Design Pattern 019 - Creating Dimensions from Hub tables > BI7.png)

Creating dimensions by joining tables that contain history means that overlapping timelines (effective and expiry dates) are ‘cut’ into multiple records with smaller intervals. This is explained using the following sample datasets; only the tables which contain ‘history’ are shown.

SAT Product:
Key
@@ -34,17 +35,17 @@ Cheese
Cheese
01-01-2009
05-06-2010

Before being joined to the other sets, this Satellite table is first joined to the Hub table. The Hub table maps the Data Warehouse key ‘73’ to the business key ‘CHS’.
73
Cheese – Yellow
05-06-2010
04-04-2011
73
Cheese – Gold
04-04-2011
31-12-9999

SAT Product – Channel (Link-Satellite):
Link Key
Product Key
Channel Key
@@ -74,7 +75,7 @@ When merging these two data sets into a dimension the overlaps in time are calcul
Figure 2: Timelines (Business Insights > Design Pattern 019 - Creating Dimensions from Hub tables > BI8.png)

In other words, merging the two historic data sets, where one has 4 records (time periods) and the other has 3 records (time periods), results in a new set that has 6 (‘smaller’) records. This gives the following result data set (changes are highlighted):
Dimension Key
Product Key
Product
@@ -155,17 +156,17 @@ When creating a standard Dimension table it is recommended to assign new surroga
- The original Integration Layer keys remain attributes of the new Dimension table.
- Creating a Type 1 Dimension is easier; only the most recent records need to be joined.
- Joining has to be done with < and > selections, which not every ETL tool supports (easily). This may require SQL overrides.
- Some ETL tools or databases make the WHERE clause a bit more readable by providing a ‘greatest’ or ‘smallest’ function.
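The timeline join described above, with < and > conditions on the date ranges and a ‘greatest’/‘smallest’ function to cut the overlapping intervals, can be sketched as follows. This is a hypothetical illustration using SQLite from Python, not the pattern's own code: table names, column names and sample values are assumptions, ISO dates (YYYY-MM-DD) are used so that plain string comparison orders correctly, and SQLite's multi-argument MAX/MIN scalar functions play the role of ‘greatest’/‘smallest’.

```python
import sqlite3

# Hypothetical Satellite and Link-Satellite history sets (names and values
# are assumptions for illustration only).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sat_product (product_key INT, product TEXT,
                          effective_date TEXT, expiry_date TEXT);
CREATE TABLE lsat_product_channel (product_key INT, channel_key INT,
                                   effective_date TEXT, expiry_date TEXT);
INSERT INTO sat_product VALUES
  (73, 'Cheese',          '2009-01-01', '2010-06-05'),
  (73, 'Cheese - Yellow', '2010-06-05', '2011-04-04'),
  (73, 'Cheese - Gold',   '2011-04-04', '9999-12-31');
INSERT INTO lsat_product_channel VALUES
  (73, 12, '2009-01-01', '2010-09-01'),
  (73, 14, '2010-09-01', '9999-12-31');
""")

# Overlap join: two periods overlap when each starts before the other ends.
# MAX/MIN over the two date pairs cut the timelines into smaller intervals.
rows = con.execute("""
SELECT p.product_key, p.product, c.channel_key,
       MAX(p.effective_date, c.effective_date) AS effective_date, -- 'greatest'
       MIN(p.expiry_date,   c.expiry_date)     AS expiry_date     -- 'smallest'
FROM sat_product p
JOIN lsat_product_channel c
  ON  c.product_key = p.product_key
  AND p.effective_date < c.expiry_date   -- the < / > overlap conditions
  AND c.effective_date < p.expiry_date
ORDER BY effective_date
""").fetchall()

for r in rows:
    print(r)
```

With these sample sets (3 and 2 records), the join produces 4 smaller records, matching the cutting behaviour the pattern describes for its own 4-and-3-record example.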
## Consequences
This approach requires the timelines in all tables to be complete, ensuring referential integrity in the central Data Vault model. This means that every Hub has to have a record in the Satellite table with a start date of ‘01-01-1900’ and one which ends at ‘31-12-9999’ (this can be the same record if there is no history yet). Without these dummy records to complete the timelines, the query to calculate the overlaps becomes very complex: SQL filters the records in the WHERE clause before joining to the other history set, so the selection on the date range has to be done in the JOIN clause, which makes it impossible to get the EXPIRY_DATE correct in one pass. The workaround is to select only the EFFECTIVE_DATE values, order them, and join this dataset back to itself to compare each row with the previous one (or the next, depending on the sort order) and so derive the EXPIRY_DATE. In this context, adding dummy records to complete the timelines is the easier solution, and it also improves the integrity of the data in the Data Vault model.
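The workaround mentioned above, deriving each EXPIRY_DATE from the ordered set of EFFECTIVE_DATE values, can be sketched with a window function, which is the modern equivalent of joining the ordered dataset back to itself. This is a hypothetical SQLite/Python illustration (names and sample data are assumptions); LEAD requires SQLite 3.25 or later.

```python
import sqlite3

# Hypothetical history set that stores only start dates (assumed names).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sat_product (product_key INT, product TEXT, effective_date TEXT);
INSERT INTO sat_product VALUES
  (73, 'Cheese',          '2009-01-01'),
  (73, 'Cheese - Yellow', '2010-06-05'),
  (73, 'Cheese - Gold',   '2011-04-04');
""")

# Each record expires when the next record for the same key becomes
# effective; the last record stays open-ended ('9999-12-31').
rows = con.execute("""
SELECT product_key, product, effective_date,
       LEAD(effective_date, 1, '9999-12-31')
         OVER (PARTITION BY product_key
               ORDER BY effective_date) AS expiry_date
FROM sat_product
ORDER BY effective_date
""").fetchall()
```

This reconstructs the closed intervals in one pass, but as the pattern notes, completing the timelines with dummy records in the Data Vault itself remains the simpler and more robust option.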
## Known uses
This type of ETL process is used to join historical tables together in the Integration Layer.

## Related patterns

- Design Pattern 002 – Generic – Types of history
- Design Pattern 006 – Generic – Using Start, Process and End dates
- Design Pattern 008 – Data Vault – Loading Hub tables
- Design Pattern 009 – Data Vault – Loading Satellite tables
- Design Pattern 010 – Data Vault – Loading Link tables

## Discussion items (not yet to be implemented or used until final)
None.
