Deployment-Pipelines/README.md

Last updated: 2025-04-15

------------------------------------------

> Lakehouse Schema and Deployment Pipelines

<details>
<summary><b> List of References </b> (Click to expand)</summary>

- [Overview](#overview)
- [Demo](#demo)
- [Create a Workspace](#create-a-workspace)
- [Create a Lakehouse](#create-a-lakehouse)
- [Create a New Semantic Model](#create-a-new-semantic-model)
- [Auto-Generate Report with Copilot](#auto-generate-report-with-copilot)
- [Create a Deployment Pipeline](#create-a-deployment-pipeline)
- [Deploy to Production](#deploy-to-production)
- [How to refresh the data](#how-to-refresh-the-data)

</details>

## Overview

> `Specifics for Lakehouse:` For lakehouses, the deployment process typically `includes the structure and metadata but not the actual data tables`. This is why you might see the structure and semantic models deployed, but the tables themselves need to be manually refreshed or reloaded in the target environment. <br/> <br/>
> `Deployment Rules:` You can set deployment rules to manage different stages and change content settings during deployment. For example, you can specify default lakehouses for notebooks to avoid manual changes post-deployment.
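
Because deployment carries the schema and semantic models but not the table data, the tables usually have to be reloaded in the target workspace after deploying. A minimal PySpark sketch of such a reload, assuming a hypothetical source path and table name (adjust both to your own lakehouse):

```python
# Sketch: rebuild a Delta table in the target lakehouse after deployment.
# The source path and table name below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the raw data available to the target environment
df = spark.read.format("parquet").load("Files/landing/sales/")

# Recreate the managed Delta table so the deployed semantic model has data
df.write.format("delta").mode("overwrite").saveAsTable("sales")
```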
|**Incremental Refresh**| Refreshes only the data that has changed since the last refresh, improving efficiency. Click [here to understand more about incremental refresh](../Workloads-Specific/PowerBi/IncrementalRefresh.md)| - **Evaluate Changes**: Checks for changes in the data source based on a DateTime column.<br>- **Retrieve Data**: Only changed data is retrieved and loaded.<br>- **Replace Data**: Updated data is processed and replaced. |

Steps to Set Up Incremental Refresh:

1. **Create or Open a Dataflow**: Start by creating a new Dataflow Gen2 or opening an existing one.
2. **Configure the Query**: Ensure your query includes a DateTime column that can be used to filter the data.
3. **Enable Incremental Refresh**: Right-click the query and select Incremental Refresh. Configure the settings, such as the DateTime column and the time range for data extraction.
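
Incremental refresh itself is configured in the Dataflow Gen2 UI, but the evaluate/retrieve/replace pattern it applies can be illustrated in PySpark. A conceptual sketch, assuming a hypothetical `orders` Delta table with a `created_at` DateTime column:

```python
# Conceptual sketch of the incremental pattern: only rows newer than the
# last-loaded watermark are retrieved and appended. Names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Evaluate changes: find the newest timestamp already in the table
last_loaded = spark.table("orders").agg(F.max("created_at")).first()[0]

# Retrieve data: pull only rows newer than the watermark from the source
changed = (
    spark.read.format("parquet").load("Files/source/orders/")
         .filter(F.col("created_at") > F.lit(last_loaded))
)

# Replace data: write just the changed rows into the Delta table
changed.write.format("delta").mode("append").saveAsTable("orders")
```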

Monitoring-Observability/FabricActivatorRulePipeline/README.md

Last updated: 2025-04-15

----------

> This process shows how to set up Microsoft Fabric Activator to automate workflows by detecting file creation events in a storage system and triggering another pipeline to run. <br/>
>
> 1. **First Pipeline**: The process starts with a pipeline that ends with a `Copy Data` activity. This activity uploads data into the `Lakehouse`. <br/>
> 2. **Event Stream Setup**: An `Event Stream` is configured in Activator to monitor the Lakehouse for file creation or data upload events. <br/>
> 3. **Triggering the Second Pipeline**: Once the event is detected (e.g., a file is uploaded), the Event Stream triggers the second pipeline to continue the workflow.

<details>
<summary><b>List of Content </b> (Click to expand)</summary>

- [Set Up the First Pipeline](#set-up-the-first-pipeline)
- [Configure Activator to Detect the Event](#configure-activator-to-detect-the-event)
- [Set Up the Second Pipeline](#set-up-the-second-pipeline)
- [Define the Rule in Activator](#define-the-rule-in-activator)
- [Test the Entire Workflow](#test-the-entire-workflow)
- [Troubleshooting If Needed](#troubleshooting-if-needed)

</details>

> [!NOTE]
> This code generates random data with fields such as id, name, age, email, and created_at, organizes it into a PySpark DataFrame, and saves it to a specified Lakehouse path using the Delta format. Click here to see the [example script](./GeneratesRandomData.ipynb).
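
For reference, a minimal sketch of what such a script might look like (the row count and output path are hypothetical; the linked notebook is the authoritative version):

```python
# Sketch: generate random rows with id, name, age, email and created_at,
# build a PySpark DataFrame, and save it to the Lakehouse in Delta format.
import random
import uuid
from datetime import datetime, timedelta

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

names = ["Alice", "Bob", "Carol", "Dave", "Eve"]
rows = [
    (
        str(uuid.uuid4()),                                   # id
        random.choice(names),                                # name
        random.randint(18, 80),                              # age
        f"user{i}@example.com",                              # email
        datetime.now() - timedelta(days=random.randint(0, 30)),  # created_at
    )
    for i in range(100)
]

df = spark.createDataFrame(rows, ["id", "name", "age", "email", "created_at"])
df.write.format("delta").mode("overwrite").save("Files/random_data")
```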

1. **Run the First Pipeline**: Execute the first pipeline and verify that the trigger file is created.
2. **Monitor Activator**: Check the `Event Details` and `Rule Activation Details` in Activator to ensure the event is detected and the rule is activated.
3. **Verify the Second Pipeline**: Confirm that the second pipeline is triggered and runs successfully.
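
If you need to re-create the trigger file manually while testing, a notebook cell can drop it into the Lakehouse Files area. A minimal sketch using `mssparkutils` (the path and file contents are hypothetical placeholders):

```python
# Sketch: write a small trigger file so the Event Stream's
# file-creation event fires. Path and content are placeholders.
from notebookutils import mssparkutils

mssparkutils.fs.put("Files/triggers/pipeline1_done.txt",
                    "first pipeline completed",
                    True)  # overwrite if the file already exists
```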

> The `Microsoft Fabric Capacity Metrics app` is designed to provide comprehensive monitoring capabilities for Microsoft Fabric capacities. It helps administrators track capacity consumption, identify performance bottlenecks, and make informed decisions about scaling and resource allocation. The app provides detailed insights into capacity utilization, throttling, and system events, enabling proactive management of resources to ensure optimal performance. <br/> <br/>
> This app is essential for maintaining the health and efficiency of your Microsoft Fabric capacities.

| **Feature** | **Description** |
|---|---|

- Navigate to [Microsoft Fabric](https://app.fabric.microsoft.com/). In the left panel, locate the `Apps` icon and click on `Get apps`.
- If the app doesn't show data or can't refresh, try deleting the old app and reinstalling the latest version.
- Update the semantic model credentials if needed.

## Admin monitoring
> `Admin monitoring workspace` in Microsoft Fabric is a powerful tool for administrators to track and analyze usage metrics across their organization. This workspace provides detailed insights into how different features and services are being utilized, helping admins make informed decisions to optimize performance and resource allocation.

3. **Optimize Resources**: Make data-driven decisions about scaling and resource allocation to ensure optimal performance.
4. **Ensure Compliance**: Use the Purview Hub to monitor data governance and compliance, ensuring that your organization adheres to relevant regulations and standards.

### Configure the Admin Monitoring Workspace

> [!IMPORTANT]
>
> - **Permissions**: `Only users with direct admin roles can set up the Admin Monitoring workspace`. If the admin role `is assigned through a group, data refreshes may fail`. <br/>
> - **Read-Only Workspace**: The `Admin Monitoring workspace is read-only`. Users, including admins, cannot edit or view properties of items such as semantic models and reports within the workspace. `Admins can share reports and semantic models within the workspace with other users by assigning them a workspace viewer role or providing direct access links.`
> - **Reinitializing the Workspace**: If needed, `you can reinitialize the workspace by executing an API call to delete the semantic model and then reinstalling the workspace`.
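
A sketch of what that reinitialization might look like, assuming the semantic model is removed through the Power BI REST API's dataset delete endpoint (the dataset ID and access token are placeholders; verify the exact call for your tenant in the official documentation):

```python
# Sketch: delete the workspace's semantic model via REST, after which the
# Admin Monitoring workspace can be reinstalled. IDs/tokens are placeholders.
import requests

ACCESS_TOKEN = "<AAD-access-token>"            # acquire via MSAL or similar
DATASET_ID = "<admin-monitoring-dataset-id>"   # the semantic model to delete

resp = requests.delete(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()  # HTTP 200 means the semantic model was deleted
```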

2. **Create Custom Reports**: You can use Copilot capabilities to automatically create your report and edit it. Request additional pages with your content or even ask questions about your data.
5. **Take Actions**: If you have the necessary permissions, you can perform actions on activities by selecting the More options (...) next to the activity name.