# Design Pattern - Data Vault - Creating Dimensions from Hub tables

## Purpose
This design pattern describes how to create a typical ‘Type 2 Dimension’ table (Dimensional Modelling) from a Data Vault or Hybrid EDW model.
## Motivation
To move from a Data Vault (or other hybrid) model to a Kimball-style Star Schema or similar requires various tables that store historical data to be joined to each other. This is a recurring step which, if done properly, makes it easy to change dimension structures without losing history. Merging various historic sets of data is seen as one of the more complex steps in a Data Vault (or similar) environment. The pattern is called ‘creating Dimensions from Hub tables’ because Hubs are the main entities which are linked together to form a Dimension using their historical information and relationships.
## Also known as
- Dimensions / Dimensional Modelling
- Gaps and islands
- Timelines
## Applicability
This pattern is only applicable for loading processes from source systems or files to the Reporting Structure Area (of the Presentation Layer). The Helper Area may use similar concepts, but since this is a ‘free-for-all’ part of the ETL Framework it is not mandatory to follow this Design Pattern.
## Structure
Creating Dimensions from a Data Vault model essentially means joining the various Hub, Link and Satellite tables together to create a certain hierarchy. In the example displayed in the following diagram the Dimension that can be generated is a ‘Product’ dimension, with the Distribution Channel as a higher level in this dimension.

Figure 1: Example Data Vault model
Creating dimensions by joining tables with history means that the overlaps in timelines (effective and expiry dates) will be ‘cut’ into multiple records with smaller intervals. This is explained using the following sample datasets; only the tables which contain ‘history’ are shown.

SAT Product:

| Key | Product Name    | Effective Date | Expiry Date |
|-----|-----------------|----------------|-------------|
| 73  | - (dummy)       | 01-01-1900     | 01-01-2009  |
| 73  | Cheese          | 01-01-2009     | 05-06-2010  |
| 73  | Cheese – Yellow | 05-06-2010     | 04-04-2011  |
| 73  | Cheese – Gold   | 04-04-2011     | 31-12-9999  |

The first record is a dummy record created together with the Hub record; its expiry date was updated as part of the history / SCD updates. Before being joined to the other sets this Satellite table is joined to the Hub table first. The Hub table maps the Data Warehouse key ‘73’ to the business key ‘CHS’.

SAT Product – Channel (Link-Satellite):

| Link Key | Product Key | Channel Key | Effective Date | Expiry Date |
|----------|-------------|-------------|----------------|-------------|
| 1        | 73          | -1 (dummy)  | 01-01-1900     | 01-01-2010  |
| 2        | 73          | 1           | 01-01-2010     | 04-03-2011  |
| 3        | 73          | 2           | 04-03-2011     | 31-12-9999  |

This set indicates that the product has been moved to a different sales channel over time.

When merging these two data sets into a dimension the overlaps in time are calculated:

Figure 2: Timelines

In other words, merging the two historic data sets, where one has 4 records (time periods) and the other has 3, results in a new set of 6 (‘smaller’) records. This gives the following result data set:
| Dimension Key | Product Key | Product       | Channel Key | Effective Date | Expiry Date |
|---------------|-------------|---------------|-------------|----------------|-------------|
| 1             | 73          | -             | -1          | 01-01-1900     | 01-01-2009  |
| 2             | 73          | Cheese        | -1          | 01-01-2009     | 01-01-2010  |
| 3             | 73          | Cheese        | 1           | 01-01-2010     | 05-06-2010  |
| 4             | 73          | Cheese-Yellow | 1           | 05-06-2010     | 04-03-2011  |
| 5             | 73          | Cheese-Yellow | 2           | 04-03-2011     | 04-04-2011  |
| 6             | 73          | Cheese-Gold   | 2           | 04-04-2011     | 31-12-9999  |

This result can be achieved by joining the tables on their usual keys and calculating the overlapping time ranges:

```sql
SELECT
  B.PRODUCT_NAME,
  C.CHANNEL_KEY,
  (CASE
     WHEN B.EFFECTIVE_DATE > D.EFFECTIVE_DATE
     THEN B.EFFECTIVE_DATE
     ELSE D.EFFECTIVE_DATE
   END) AS EFFECTIVE_DATE, -- greatest of the two effective dates
  (CASE
     WHEN B.EXPIRY_DATE < D.EXPIRY_DATE
     THEN B.EXPIRY_DATE
     ELSE D.EXPIRY_DATE
   END) AS EXPIRY_DATE -- smallest of the two expiry dates
FROM HUB_PRODUCT A
JOIN SAT_PRODUCT B ON A.PRODUCT_SK = B.PRODUCT_SK
JOIN LINK_PRODUCT_CHANNEL C ON A.PRODUCT_SK = C.PRODUCT_SK
JOIN SAT_LINK_PRODUCT_CHANNEL D ON D.PRODUCT_CHANNEL_SK = C.PRODUCT_CHANNEL_SK
WHERE
  (CASE
     WHEN B.EFFECTIVE_DATE > D.EFFECTIVE_DATE
     THEN B.EFFECTIVE_DATE
     ELSE D.EFFECTIVE_DATE
   END) -- greatest of the two effective dates
  <
  (CASE
     WHEN B.EXPIRY_DATE < D.EXPIRY_DATE
     THEN B.EXPIRY_DATE
     ELSE D.EXPIRY_DATE
   END) -- smallest of the two expiry dates
```

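The worked example can be reproduced end-to-end. The sketch below loads the sample data into an in-memory SQLite database and runs the overlap query. A few assumptions are made for the sake of a runnable example: dates are rewritten as ISO strings (`YYYY-MM-DD`) so plain string comparison orders them correctly, CHANNEL_KEY is placed on the Link table (as `C.CHANNEL_KEY` in the query implies), and the column definitions are simplified to what the query needs.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE HUB_PRODUCT (PRODUCT_SK INTEGER, PRODUCT_CODE TEXT);
CREATE TABLE SAT_PRODUCT (PRODUCT_SK INTEGER, PRODUCT_NAME TEXT,
                          EFFECTIVE_DATE TEXT, EXPIRY_DATE TEXT);
CREATE TABLE LINK_PRODUCT_CHANNEL (PRODUCT_CHANNEL_SK INTEGER,
                                   PRODUCT_SK INTEGER, CHANNEL_KEY INTEGER);
CREATE TABLE SAT_LINK_PRODUCT_CHANNEL (PRODUCT_CHANNEL_SK INTEGER,
                                       EFFECTIVE_DATE TEXT, EXPIRY_DATE TEXT);

INSERT INTO HUB_PRODUCT VALUES (73, 'CHS');
INSERT INTO SAT_PRODUCT VALUES
  (73, '-',             '1900-01-01', '2009-01-01'),  -- dummy record
  (73, 'Cheese',        '2009-01-01', '2010-06-05'),
  (73, 'Cheese-Yellow', '2010-06-05', '2011-04-04'),
  (73, 'Cheese-Gold',   '2011-04-04', '9999-12-31');
INSERT INTO LINK_PRODUCT_CHANNEL VALUES (1, 73, -1), (2, 73, 1), (3, 73, 2);
INSERT INTO SAT_LINK_PRODUCT_CHANNEL VALUES
  (1, '1900-01-01', '2010-01-01'),  -- dummy record
  (2, '2010-01-01', '2011-03-04'),
  (3, '2011-03-04', '9999-12-31');
""")

rows = con.execute("""
SELECT B.PRODUCT_NAME, C.CHANNEL_KEY,
       CASE WHEN B.EFFECTIVE_DATE > D.EFFECTIVE_DATE
            THEN B.EFFECTIVE_DATE ELSE D.EFFECTIVE_DATE END AS EFFECTIVE_DATE,
       CASE WHEN B.EXPIRY_DATE < D.EXPIRY_DATE
            THEN B.EXPIRY_DATE ELSE D.EXPIRY_DATE END AS EXPIRY_DATE
FROM HUB_PRODUCT A
JOIN SAT_PRODUCT B ON A.PRODUCT_SK = B.PRODUCT_SK
JOIN LINK_PRODUCT_CHANNEL C ON A.PRODUCT_SK = C.PRODUCT_SK
JOIN SAT_LINK_PRODUCT_CHANNEL D ON D.PRODUCT_CHANNEL_SK = C.PRODUCT_CHANNEL_SK
WHERE (CASE WHEN B.EFFECTIVE_DATE > D.EFFECTIVE_DATE
            THEN B.EFFECTIVE_DATE ELSE D.EFFECTIVE_DATE END)
    < (CASE WHEN B.EXPIRY_DATE < D.EXPIRY_DATE
            THEN B.EXPIRY_DATE ELSE D.EXPIRY_DATE END)
ORDER BY EFFECTIVE_DATE
""").fetchall()

for row in rows:
    print(row)  # six rows, matching the result data set above
```

The 4 Satellite periods and 3 Link-Satellite periods cross-join to 12 candidate rows, of which the WHERE clause keeps only the 6 genuinely overlapping intervals.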
## Implementation guidelines
The easiest way to join multiple tables is a cascading set-based approach. This is done by joining the Hub and Satellite and treating the result as a single set, which is joined against another similar set of data (for instance a Link and Link-Satellite). The result is a new set of consistent timelines for a certain grain of information. This set can again be treated as a single set and joined with the next set (for instance another Hub and Satellite), and so forth.
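The cascading approach can be sketched in a few lines. The function below is a hypothetical illustration, not part of the pattern itself: each historised set is modelled as a list of dicts carrying `effective`/`expiry` ISO date strings, and the merge of two sets is itself a historised set that can be merged with the next one.

```python
def merge_timelines(left, right):
    """Join two historised sets on overlapping validity periods."""
    merged = []
    for l in left:
        for r in right:
            # Greatest of the effective dates, smallest of the expiry dates.
            effective = max(l["effective"], r["effective"])
            expiry = min(l["expiry"], r["expiry"])
            if effective < expiry:  # the two validity periods overlap
                merged.append({**l, **r,
                               "effective": effective, "expiry": expiry})
    return merged

# The result is a historised set again, so further sets cascade in one by one:
#   dimension = merge_timelines(merge_timelines(hub_sat, link_sat), next_set)
```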
When creating a standard Dimension table it is recommended to assign new surrogate keys for every dimension record. The only reason for this is to prevent a combination of Integration Layer surrogate keys from being present in the associated Fact table; the range of keys can become very wide. This also fits the classic approach to loading Facts and Dimensions, where the Fact table ETL performs a key lookup against the Dimension table. Using Data Vault as the Integration Layer opens up other options as well, but this is a well-known (and well-understood) type of ETL.
The original Integration Layer keys remain attributes of the new Dimension table.
Creating a Type 1 Dimension is easier; only the most recent records need to be joined.
Joining has to be done with < and > predicates, which not every ETL tool supports (easily). This may require SQL overrides.
Some ETL tools or databases make the WHERE clause more readable by providing a ‘greatest’ or ‘smallest’ function.
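As a minimal illustration of such functions: SQLite spells them as the two-argument forms of max() and min() (other databases offer GREATEST/LEAST), which collapse each CASE expression in the query above to a single function call. The example is driven from Python only so that it runs standalone; the dates are arbitrary sample values in ISO format.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Two-argument max()/min() behave like GREATEST/LEAST on text dates:
row = con.execute(
    "SELECT max('2010-01-01', '2009-01-01'), min('2010-06-05', '9999-12-31')"
).fetchone()
# row now holds the greatest of the two effective dates
# and the smallest of the two expiry dates
```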
## Consequences
This approach requires the timelines in all tables to be complete, ensuring referential integrity in the central Data Vault model. This means that every Hub has to have a record in its Satellite table with a start date of ‘01-01-1900’ and one which ends at ‘31-12-9999’ (this can be the same record if there is no history yet). Without these dummy records to complete the timelines, the query to calculate the overlaps becomes very complex. SQL filters the records in the WHERE clause before joining to the other history set; this forces the selection on the date range into the JOIN clause, which makes it impossible to get the EXPIRY_DATE correct in one pass. The workaround is to select only the EFFECTIVE_DATE values, order them, and join this dataset back to itself to compare each row with the previous one (or the next, depending on the sort order) and derive the EXPIRY_DATE. In this context adding dummy records to complete the timelines is the easier solution, and it also improves the integrity of the data in the Data Vault model.
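The workaround described above can be sketched set-wise. The helper below is a hypothetical illustration: it takes the collected EFFECTIVE_DATE values (ISO strings), orders them, and pairs each with the next date to derive the EXPIRY_DATE, leaving the last period open-ended — the equivalent of joining the ordered set back to itself, or of a window/LEAD function where the database supports one.

```python
def derive_expiry_dates(effective_dates, open_end="9999-12-31"):
    """Turn a bag of effective dates into (effective, expiry) periods."""
    ordered = sorted(set(effective_dates))
    # Pair every effective date with the next one; the final period
    # stays open until the conventional high date.
    return list(zip(ordered, ordered[1:] + [open_end]))
```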
## Known uses
This type of ETL process is to be used to join historical tables together in the Integration Layer.
## Related patterns
- Design Pattern 002 – Generic – Types of history
- Design Pattern 006 – Generic – Using Start, Process and End dates
- Design Pattern 008 – Data Vault – Loading Hub tables
- Design Pattern 009 – Data Vault – Loading Satellite tables
- Design Pattern 010 – Data Vault – Loading Link tables
## Discussion items (not yet to be implemented or used until final)
None.