<details>
<summary><b> List of References </b> (Click to expand)</summary>

- [Overview](#overview)
- [Demo](#demo)
- [Create a Workspace](#create-a-workspace)
- [Create a Lakehouse](#create-a-lakehouse)
- [Create a New Semantic Model](#create-a-new-semantic-model)
- [Auto-Generate Report with Copilot](#auto-generate-report-with-copilot)
- [Create a Deployment Pipeline](#create-a-deployment-pipeline)
- [Deploy to Production](#deploy-to-production)
- [How to refresh the data](#how-to-refresh-the-data)

</details>
## Overview
> `Specifics for Lakehouse:` For lakehouses, the deployment process typically `includes the structure and metadata but not the actual data tables`. This is why you might see the structure and semantic models deployed, but the tables themselves need to be manually refreshed or reloaded in the target environment.<br/> <br/>
> `Deployment Rules:` You can set deployment rules to manage different stages and change content settings during deployment. For example, you can specify default lakehouses for notebooks to avoid manual changes post-deployment.
|**Incremental Refresh**| Refreshes only the data that has changed since the last refresh, improving efficiency. Click [here to understand more about incremental refresh](../Workloads-Specific/PowerBi/IncrementalRefresh.md)| - **Evaluate Changes**: Checks for changes in the data source based on a DateTime column.<br>- **Retrieve Data**: Only changed data is retrieved and loaded.<br>- **Replace Data**: Updated data is processed and replaced. |
Steps to Set Up Incremental Refresh:
1. **Create or Open a Dataflow**: Start by creating a new Dataflow Gen2 or opening an existing one.
2. **Configure the Query**: Ensure your query includes a DateTime column that can be used to filter the data.
3. **Enable Incremental Refresh**: Right-click the query and select Incremental Refresh. Configure the settings, such as the DateTime column and the time range for data extraction. A sketch of the underlying pattern follows this list.
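
Dataflow Gen2 configures all of this through its UI, but the evaluate / retrieve / replace pattern described above can be illustrated with a short PySpark sketch. This is only a minimal illustration under assumed names: `Tables/source_events`, `Tables/events_incremental`, and the `created_at` watermark column are placeholders, and the final append stands in for the period-by-period replacement the real refresh performs.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Placeholder Delta locations in the Lakehouse (not from the original doc).
source_path = "Tables/source_events"
target_path = "Tables/events_incremental"

# 1. Evaluate changes: find the newest created_at already loaded (the watermark).
try:
    watermark = (
        spark.read.format("delta").load(target_path)
        .agg(F.max("created_at"))
        .first()[0]
    )
except Exception:
    watermark = None  # first run: nothing loaded yet

# 2. Retrieve data: pull only the rows newer than the watermark.
source_df = spark.read.format("delta").load(source_path)
changed_df = (
    source_df if watermark is None
    else source_df.filter(F.col("created_at") > F.lit(watermark))
)

# 3. Replace data: write only the changed rows to the target table.
changed_df.write.format("delta").mode("append").save(target_path)
```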
> This process shows how to set up Microsoft Fabric Activator to automate workflows by detecting file creation events in a storage system and triggering another pipeline to run. <br/>
>
> 1. **First Pipeline**: The process starts with a pipeline that ends with a `Copy Data` activity. This activity uploads data into the `Lakehouse`. <br/>
> 2. **Event Stream Setup**: An `Event Stream` is configured in Activator to monitor the Lakehouse for file creation or data upload events. <br/>
> 3. **Triggering the Second Pipeline**: Once the event is detected (e.g., a file is uploaded), the Event Stream triggers the second pipeline to continue the workflow. A sketch of the kind of file write that raises such an event is shown below.
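
The `Copy Data` activity in the first pipeline is what actually lands the file, but for illustration the same kind of file-created event can be produced from a notebook. This is a minimal sketch, assuming a default Lakehouse is attached (Fabric mounts it under `/lakehouse/default`) and using a hypothetical `Files/triggers/` folder:

```python
import os
from datetime import datetime

# Hypothetical trigger folder under the attached Lakehouse's Files area.
trigger_dir = "/lakehouse/default/Files/triggers"
os.makedirs(trigger_dir, exist_ok=True)

# Writing a small marker file raises the file-created event that the
# Event Stream / Activator rule is watching for.
trigger_path = os.path.join(
    trigger_dir, f"run_{datetime.utcnow():%Y%m%d_%H%M%S}.txt"
)
with open(trigger_path, "w") as f:
    f.write("first pipeline finished")

print("Trigger file written:", trigger_path)
```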
<details>
<summary><b>List of Content </b> (Click to expand)</summary>

- [Set Up the First Pipeline](#set-up-the-first-pipeline)
- [Configure Activator to Detect the Event](#configure-activator-to-detect-the-event)
- [Set Up the Second Pipeline](#set-up-the-second-pipeline)
- [Define the Rule in Activator](#define-the-rule-in-activator)
- [Test the Entire Workflow](#test-the-entire-workflow)
- [Troubleshooting If Needed](#troubleshooting-if-needed)

</details>
> [!NOTE]
> This code generates random data with fields such as id, name, age, email, and created_at, organizes it into a PySpark DataFrame, and saves it to a specified Lakehouse path using the Delta format. Click here to see the [example script](./GeneratesRandomData.ipynb).
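
The linked notebook is the actual example; the snippet below is only a sketch of the kind of script the note describes, with an assumed row count and an assumed `Tables/random_data` output location.

```python
import random
import uuid
from datetime import datetime, timedelta

from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.getOrCreate()

# Build a small batch of random records with the fields listed in the note.
names = ["Alice", "Bob", "Carol", "Dave", "Eve"]
rows = [
    Row(
        id=str(uuid.uuid4()),
        name=random.choice(names),
        age=random.randint(18, 80),
        email=f"user{i}@example.com",
        created_at=datetime.utcnow() - timedelta(minutes=random.randint(0, 120)),
    )
    for i in range(100)
]

df = spark.createDataFrame(rows)

# Save to a Lakehouse path in Delta format (placeholder table location).
df.write.format("delta").mode("overwrite").save("Tables/random_data")
```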
1. **Run the First Pipeline**: Execute the first pipeline and verify that the trigger file is created.
2. **Monitor Activator**: Check the `Event Details` and `Rule Activation Details` in Activator to ensure the event is detected and the rule is activated.
3. **Verify the Second Pipeline**: Confirm that the second pipeline is triggered and runs successfully (a quick data check is sketched below).
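
One way to confirm step 3 is to read the table the second pipeline writes to and check that new rows have arrived. This is a minimal check, assuming the placeholder `Tables/random_data` location from the sketch above:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Placeholder location of the table the second pipeline loads.
df = spark.read.format("delta").load("Tables/random_data")

# If the second pipeline ran, the row count and the newest created_at
# value should have moved forward since the previous run.
print("row count:", df.count())
df.agg(F.max("created_at").alias("latest_created_at")).show()
```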