Commit d17dd2a

addressing blocking issues
1 parent de2a0da commit d17dd2a

3 files changed: +16 −16 lines changed


learn-pr/wwl/orchestrate-processes-in-fabric/includes/1-introduction.md

Lines changed: 0 additions & 1 deletion
@@ -12,7 +12,6 @@ Notebooks in Microsoft Fabric offer a versatile environment for data exploration
 
 Scheduling pipelines includes three standard types of scheduling, including time-based, event-based, and custom scheduling. Scheduling in Microsoft Fabric also allows for seamless integration with other services, which allow for more dynamic event-driven pipelines.
 
----
 The articles covered in this module include:
 
 - Understanding pipelines in Microsoft Fabric.

learn-pr/wwl/orchestrate-processes-in-fabric/includes/2-choose-between-pipeline-notebook.md

Lines changed: 11 additions & 10 deletions
@@ -20,20 +20,21 @@ Dataflows are built using the Power Query experience, available across Microsoft
 
 ### Real-World Uses Cases for Dataflows
 
-1. **Data Consolidation for Reporting**
-Organizations often have data spread across multiple sources such as databases, cloud storage, and on-premises systems. Dataflows can be used to consolidate this data into a single, unified dataset, which can then be used for reporting and analytics. For example, a company might use Dataflows to combine sales data from different regions into a single dataset for a comprehensive sales report. This single dataset can be further curated and promoted into a semantic model for use by a larger audience.
+**Data Consolidation for Reporting**
+Organizations often have data spread across multiple sources such as databases, cloud storage, and on-premises systems. Dataflows can be used to consolidate this
+data into a single, unified dataset, which can then be used for reporting and analytics. For example, a company might use Dataflows to combine sales data from different regions into a single dataset for a comprehensive sales report. This single dataset can be further curated and promoted into a semantic model for use by a larger audience.
 
-2. **Data Preparation for Machine Learning**
-Dataflows can be used to prepare and clean data for machine learning models. This method includes tasks such as data cleansing, transformation, and feature engineering. For instance, a data science team might use Dataflows to preprocess customer data, removing duplicates and normalizing values before feeding it into a machine learning model.
+**Data Preparation for Machine Learning**
+Dataflows can be used to prepare and clean data for machine learning models. This method includes tasks such as data cleansing, transformation, and feature engineering. For instance, a data science team might use Dataflows to preprocess customer data, removing duplicates and normalizing values before feeding it into a machine learning model.
 
-3. **Real-Time Data Processing**
-Dataflows can handle real-time data ingestion and transformation, making them ideal for scenarios where timely data processing is crucial. For example, an e-commerce platform might use Dataflows to process real-time transaction data, updating inventory levels and generating real-time sales reports.
+**Real-Time Data Processing**
+Dataflows can handle real-time data ingestion and transformation, making them ideal for scenarios where timely data processing is crucial. For example, an e-commerce platform might use Dataflows to process real-time transaction data, updating inventory levels and generating real-time sales reports.
 
-4. **Data Migration**
-When migrating data from legacy systems to modern platforms, Dataflows can be used to extract, transform, and load (ETL) data into the new system. This process ensures that data is accurately and efficiently transferred, minimizing downtime and data loss. For instance, a company migrating from an on-premises database to Azure SQL Database might use Dataflows to handle the data migration process.
+**Data Migration**
+When migrating data from legacy systems to modern platforms, Dataflows can be used to extract, transform, and load (ETL) data into the new system. This process ensures that data is accurately and efficiently transferred, minimizing downtime and data loss. For instance, a company migrating from an on-premises database to Azure SQL Database might use Dataflows to handle the data migration process.
 
-5. **Self-Service Data Preparation**
-Dataflows provide a low-code interface that allows business users to prepare their own data without needing extensive technical knowledge. This approach empowers users to create their own dataflows for tasks such as data cleansing, transformation, and enrichment, reducing the dependency on IT teams. For example, a marketing team might use Dataflows to prepare campaign data for analysis.
+**Self-Service Data Preparation**
+Dataflows provide a low-code interface that allows business users to prepare their own data without needing extensive technical knowledge. This approach empowers users to create their own dataflows for tasks such as data cleansing, transformation, and enrichment, reducing the dependency on IT teams. For example, a marketing team might use Dataflows to prepare campaign data for analysis.
 
 These use cases demonstrate the flexibility and power of Dataflows in handling various data integration and transformation task and show a powerful self-service feature. Self-service might be more appealing to your organization's business users while still providing a roadmap to a larger ELT project that utilizes pipelines and notebooks.
 

learn-pr/wwl/orchestrate-processes-in-fabric/includes/3-design-schedules-and-event-based-triggers.md

Lines changed: 5 additions & 5 deletions
@@ -17,7 +17,7 @@ You can start a data pipeline in two ways:
 
 Just as you would expect, on-demand, or ad-hoc runs happen by browsing to the specific pipeline and selecting the **Run** button. You're prompted to save your changes, but the pipeline receives a ***pipeline run ID*** and you can then view the status of the ***run*** by selecting the ***Output*** tab.
 
-[![Image of pipeline runs with on-demand action and their output.](../media/pipeline-run-output-jobs.png)](../media/pipeline-run-output-jobs-expanded.png#lightbox)
+[![Screenshot of pipeline runs with on-demand action and their output.](../media/pipeline-run-output-jobs.png)](../media/pipeline-run-output-jobs-expanded.png#lightbox)
 
 ### Scheduling pipeline runs
 
@@ -26,22 +26,22 @@ When you schedule a data pipeline run, you can choose the frequency at which you
 1. **Select Schedule**:
    - This option is found in the top banner of the **Home** tab to view your scheduling options.
 
-[![Image of pipeline runs scheduling button to build schedule.](../media/pipeline-scheduling.png)](../media/pipeline-scheduling-expanded.png#lightbox)
+[![Screenshot of pipeline runs scheduling button to build schedule.](../media/pipeline-scheduling.png)](../media/pipeline-scheduling-expanded.png#lightbox)
 
 1. **Default Setting**:
    - By default, your data pipeline has no schedule defined.
    - Select the **On** radio button under the **Scheduled Run** header
 
-[![Image of pane for setting pipeline schedule settings.](../media/pipeline-schedule-settings.png)](../media/pipeline-schedule-settings-expanded.png#lightbox)
+[![screenshot of pane for setting pipeline schedule settings.](../media/pipeline-schedule-settings.png)](../media/pipeline-schedule-settings-expanded.png#lightbox)
 
 1. **Schedule Configuration**:
    - On the **Schedule configuration page**, you can specify:
    - **Schedule frequency**
    - **Start and end dates and times**
    - **Time zone**
-1. **Apply Your Schedule**:
+2. **Apply Your Schedule**:
    - Once you configure your settings, select **Apply** to set your schedule.
-1. **Editing Your Schedule**:
+3. **Editing Your Schedule**:
    - You can view or edit the schedule at any time by selecting the **Schedule** button again.
 
 ## Storage Event Triggers in Fabric Data Factory pipelines
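The on-demand run and the schedule settings described in this file also have programmatic equivalents. Below is a minimal sketch, assuming the Fabric REST Job Scheduler endpoints (`jobs/instances` for ad-hoc runs, `jobs/Pipeline/schedules` for schedules), placeholder workspace, pipeline, and Microsoft Entra token values, and payload field names that should be verified against the current Fabric REST API reference; it is an illustration of the concept, not part of the documented UI flow above.

```python
# Sketch: trigger an on-demand run and create a schedule for a Fabric data pipeline
# via the Fabric REST API. Endpoint paths and payload field names are assumptions
# to confirm against the Fabric REST API reference before use.
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
WORKSPACE_ID = "<workspace-guid>"        # placeholder
PIPELINE_ID = "<pipeline-item-guid>"     # placeholder
TOKEN = "<entra-access-token>"           # placeholder Microsoft Entra access token

headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# 1) On-demand (ad-hoc) run: the API counterpart of selecting the Run button.
run = requests.post(
    f"{FABRIC_API}/workspaces/{WORKSPACE_ID}/items/{PIPELINE_ID}/jobs/instances",
    params={"jobType": "Pipeline"},
    headers=headers,
)
run.raise_for_status()
print("On-demand run accepted with HTTP status:", run.status_code)

# 2) Scheduled run: frequency, start/end window, and time zone, mirroring the
#    options on the Schedule configuration page (field names are assumptions).
schedule_payload = {
    "enabled": True,
    "configuration": {
        "type": "Daily",                          # assumed schedule type name
        "times": ["06:00"],                       # assumed field for daily run times
        "startDateTime": "2025-01-01T00:00:00",
        "endDateTime": "2025-12-31T23:59:00",
        "localTimeZoneId": "Central Standard Time",
    },
}
sched = requests.post(
    f"{FABRIC_API}/workspaces/{WORKSPACE_ID}/items/{PIPELINE_ID}/jobs/Pipeline/schedules",
    headers=headers,
    json=schedule_payload,
)
sched.raise_for_status()
print("Schedule created:", sched.json())
```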
