@@ -58,7 +58,7 @@ In practice, a single generic pipeline reads the Dataflowspec and uses it to orc
|[append_flow](https://docs.databricks.com/en/delta-live-tables/flows.html#use-append-flow-to-write-to-a-streaming-table-from-multiple-source-streams) API support | Bronze layer|
| Liquid cluster support | Bronze, Bronze Quarantine, Silver tables|
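
Support for `append_flow` means several source streams can write to the same Bronze target. A minimal sketch of the underlying Lakeflow Python API (the table and source names here are hypothetical illustrations, not part of DLT-META):

```python
import dlt

# One target streaming table fed by multiple append-only flows.
dlt.create_streaming_table("orders_bronze")

@dlt.append_flow(target="orders_bronze")
def orders_us():
    # Hypothetical source table; any streaming DataFrame works here.
    return spark.readStream.table("orders_us_raw")

@dlt.append_flow(target="orders_bronze")
def orders_eu():
    return spark.readStream.table("orders_eu_raw")
```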
1. Deploy a Lakeflow Declarative Pipeline with dlt-meta configuration such as ```layer```, ```group```, and ```dataflowSpec table details``` to your Databricks workspace (a sketch of the launcher notebook such a pipeline runs follows this list).
2. Display message: ```dlt-meta pipeline={pipeline_id} created and launched with update_id={pipeline_update_id}, url=https://{databricks workspace url}/#joblist/pipelines/{pipeline_id}```
3. The pipeline URL will automatically open in your default browser.
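
Under the hood, the deployed pipeline runs DLT-META's generic engine from a small launcher notebook. A minimal sketch in the spirit of the DLT-META docs (the import path is an assumption of this example and may differ in your packaging):

```python
# Launcher notebook: the pipeline configuration supplies `layer`
# (e.g. "bronze" or "silver"); the generic engine then reads the
# dataflowSpec tables and wires up the flows for that layer.
layer = spark.conf.get("layer", None)

from src.dataflow_pipeline import DataflowPipeline  # assumed import path
DataflowPipeline.invoke_dlt_pipeline(spark, layer)
```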
docs/content/demo/Apply_Changes_From_Snapshot.md
@@ -9,11 +9,11 @@ draft: false
- This demo will perform the following steps
- Showcase the onboarding process for the apply changes from snapshot pattern
- Run onboarding for the bronze stores and products tables, which contain snapshot data in CSV files.
- Run the Bronze Lakeflow Declarative Pipeline to load the initial snapshot (LOAD_1.csv)
- Upload incremental snapshot LOAD_2.csv (version=2) for stores and products
- Run the Bronze Lakeflow Declarative Pipeline to load the incremental snapshot (LOAD_2.csv). Stores uses scd_type=2, so updated records are expired and new records are added with a version_number. Products uses scd_type=1, so records missing from the snapshot are deleted.
- Upload incremental snapshot LOAD_3.csv (version=3) for stores and products
- Run the Bronze Lakeflow Declarative Pipeline to load the incremental snapshot (LOAD_3.csv). Stores uses scd_type=2, so updated records are expired and new records are added with a version_number. Products uses scd_type=1, so records missing from the snapshot are deleted.
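
This snapshot pattern maps onto Lakeflow's `apply_changes_from_snapshot` API. A minimal sketch for the stores table, assuming a hypothetical `store_id` key and snapshot path layout (in the demo, DLT-META derives the real values from the onboarding metadata):

```python
import dlt

def next_snapshot_and_version(latest_snapshot_version):
    """Return (DataFrame, version) for the next snapshot, or None when done."""
    version = 1 if latest_snapshot_version is None else latest_snapshot_version + 1
    path = f"/demo/resources/stores/LOAD_{version}.csv"  # hypothetical layout
    try:
        return (spark.read.option("header", True).csv(path), version)
    except Exception:
        return None  # no newer snapshot available

dlt.create_streaming_table("stores")

dlt.apply_changes_from_snapshot(
    target="stores",
    source=next_snapshot_and_version,
    keys=["store_id"],       # hypothetical business key
    stored_as_scd_type=2,    # 2: expire changed rows and insert new versions
)
```

With `stored_as_scd_type=1`, the target instead mirrors the latest snapshot, which is why the demo's products table drops rows that are missing from a newer snapshot.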
This demo showcases DLT-META's ability to create Bronze and Silver Lakeflow Declarative Pipelines in initial and incremental modes automatically.
- Customer and Transactions feeds for initial load
- Adds new feeds Product and Stores to the existing Bronze and Silver Lakeflow Declarative Pipelines with metadata changes.
- Runs the Bronze and Silver Lakeflow Declarative Pipelines for an incremental load of CDC events
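
The CDC step corresponds to Lakeflow's `apply_changes` API. A minimal sketch for one feed, with hypothetical column and table names (DLT-META fills in the real values from the onboarding metadata):

```python
import dlt
from pyspark.sql.functions import col, expr

dlt.create_streaming_table("customers")

dlt.apply_changes(
    target="customers",
    source="customers_cdc_raw",        # hypothetical Bronze CDC feed
    keys=["customer_id"],              # hypothetical business key
    sequence_by=col("event_ts"),       # hypothetical ordering column
    apply_as_deletes=expr("operation = 'DELETE'"),
    except_column_list=["operation"],
    stored_as_scd_type=1,
)
```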
#### Steps to launch DAIS demo in your Databricks workspace: