`learn-pr/wwl/orchestrate-processes-in-fabric/includes/1-introduction.md` (0 additions, 1 deletion)
@@ -12,7 +12,6 @@ Notebooks in Microsoft Fabric offer a versatile environment for data exploration
Pipeline scheduling includes three standard types: time-based, event-based, and custom scheduling. Scheduling in Microsoft Fabric also integrates seamlessly with other services, enabling more dynamic, event-driven pipelines.
`learn-pr/wwl/orchestrate-processes-in-fabric/includes/2-choose-between-pipeline-notebook.md` (11 additions, 10 deletions)
@@ -20,20 +20,21 @@ Dataflows are built using the Power Query experience, available across Microsoft

### Real-World Use Cases for Dataflows

-1. **Data Consolidation for Reporting**
+**Data Consolidation for Reporting**

Organizations often have data spread across multiple sources such as databases, cloud storage, and on-premises systems. Dataflows can be used to consolidate this data into a single, unified dataset, which can then be used for reporting and analytics. For example, a company might use Dataflows to combine sales data from different regions into a single dataset for a comprehensive sales report. This single dataset can be further curated and promoted into a semantic model for use by a larger audience.

-2. **Data Preparation for Machine Learning**
+**Data Preparation for Machine Learning**

Dataflows can be used to prepare and clean data for machine learning models. This preparation includes tasks such as data cleansing, transformation, and feature engineering. For instance, a data science team might use Dataflows to preprocess customer data, removing duplicates and normalizing values before feeding it into a machine learning model (a short code sketch illustrating this follows below).

-3. **Real-Time Data Processing**
+**Real-Time Data Processing**

Dataflows can handle real-time data ingestion and transformation, making them ideal for scenarios where timely data processing is crucial. For example, an e-commerce platform might use Dataflows to process real-time transaction data, updating inventory levels and generating real-time sales reports.

-4. **Data Migration**
+**Data Migration**

When migrating data from legacy systems to modern platforms, Dataflows can be used to extract, transform, and load (ETL) data into the new system. This process ensures that data is accurately and efficiently transferred, minimizing downtime and data loss. For instance, a company migrating from an on-premises database to Azure SQL Database might use Dataflows to handle the data migration process.

-5. **Self-Service Data Preparation**
+**Self-Service Data Preparation**

Dataflows provide a low-code interface that allows business users to prepare their own data without needing extensive technical knowledge. This approach empowers users to create their own dataflows for tasks such as data cleansing, transformation, and enrichment, reducing the dependency on IT teams. For example, a marketing team might use Dataflows to prepare campaign data for analysis.

These use cases demonstrate the flexibility and power of Dataflows in handling various data integration and transformation tasks and show a powerful self-service feature. Self-service might be more appealing to your organization's business users while still providing a roadmap to a larger ELT project that utilizes pipelines and notebooks.
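Dataflows themselves are authored visually in the Power Query editor rather than in code, but the consolidation and machine-learning preparation described above can be sketched in a few lines. The following minimal Python example is illustrative only: the regional tables, column names, and values are hypothetical, and in Fabric you might run equivalent logic in a notebook instead of a Dataflow. It shows the same combine, deduplicate, and normalize steps:

```python
import pandas as pd

# Hypothetical regional sales extracts; table and column names are illustrative only.
west = pd.DataFrame({
    "customer_id": [101, 102],
    "region": ["West", "West"],
    "annual_spend": [1200.0, 860.0],
})
east = pd.DataFrame({
    "customer_id": [102, 103],
    "region": ["East", "East"],
    "annual_spend": [860.0, 415.0],
})

# Consolidation for reporting: combine the regional sources into one unified dataset.
combined = pd.concat([west, east], ignore_index=True)

# Preparation for machine learning: remove duplicate customer records,
# then min-max normalize the numeric feature.
deduped = combined.drop_duplicates(subset="customer_id")
spend = deduped["annual_spend"]
prepared = deduped.assign(
    spend_normalized=(spend - spend.min()) / (spend.max() - spend.min())
)

print(prepared)
```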
`learn-pr/wwl/orchestrate-processes-in-fabric/includes/3-design-schedules-and-event-based-triggers.md` (5 additions, 5 deletions)
@@ -17,7 +17,7 @@ You can start a data pipeline in two ways:
Just as you would expect, on-demand (ad hoc) runs happen by browsing to the specific pipeline and selecting the **Run** button. You're prompted to save your changes, and the pipeline then receives a ***pipeline run ID***; you can view the status of the ***run*** by selecting the ***Output*** tab.

[](../media/pipeline-run-output-jobs-expanded.png#lightbox)
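The steps above describe an on-demand run from the Fabric UI. If you prefer to start a run programmatically, the Fabric REST API exposes an on-demand job endpoint for pipeline items. The sketch below is a rough Python illustration only: the workspace and item IDs and the token are placeholders, and the exact endpoint, query parameters, and response shape should be confirmed against the current Fabric REST API reference.

```python
import requests

# Placeholder values: replace with real IDs and a valid Microsoft Entra access token.
WORKSPACE_ID = "<workspace-id>"
PIPELINE_ID = "<pipeline-item-id>"
TOKEN = "<access-token>"

# Assumed Fabric job-scheduler endpoint for starting a pipeline run on demand.
url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{PIPELINE_ID}/jobs/instances?jobType=Pipeline"
)

response = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()

# The new job instance (the pipeline run) is typically referenced by the Location header;
# polling that URL returns the run status, similar to what the Output tab shows.
print("Run accepted with status:", response.status_code)
print("Job instance URL:", response.headers.get("Location"))
```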
### Scheduling pipeline runs
@@ -26,22 +26,22 @@ When you schedule a data pipeline run, you can choose the frequency at which you

1. **Select Schedule**:
   - Select this option in the top banner of the **Home** tab to view your scheduling options.

   [](../media/pipeline-scheduling-expanded.png#lightbox)

1. **Default Setting**:
   - By default, your data pipeline has no schedule defined.
   - Select the **On** radio button under the **Scheduled Run** header.

   [](../media/pipeline-schedule-settings-expanded.png#lightbox)

1. **Schedule Configuration**:
   - On the **Schedule configuration** page, you can specify:
     - **Schedule frequency**
     - **Start and end dates and times**
     - **Time zone**

-1. **Apply Your Schedule**:
+2. **Apply Your Schedule**:
   - Once you configure your settings, select **Apply** to set your schedule.

-1. **Editing Your Schedule**:
+3. **Editing Your Schedule**:
   - You can view or edit the schedule at any time by selecting the **Schedule** button again.
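To make these settings concrete, here's a small, purely illustrative Python sketch (not a Fabric API call) that models a time-based schedule with the same three inputs: frequency, start and end dates/times, and time zone. The dates, frequency, and time zone are made up for illustration; the sketch simply prints the run times such a schedule would produce.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Illustrative schedule mirroring the UI settings: frequency, start/end, and time zone.
schedule = {
    "frequency": timedelta(hours=12),
    "start": datetime(2024, 1, 1, 8, 0, tzinfo=ZoneInfo("America/Chicago")),
    "end": datetime(2024, 1, 3, 8, 0, tzinfo=ZoneInfo("America/Chicago")),
}

# Enumerate the run times this schedule would trigger.
run_time = schedule["start"]
while run_time <= schedule["end"]:
    print(run_time.isoformat())
    run_time += schedule["frequency"]
```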
## Storage Event Triggers in Fabric Data Factory pipelines