
Commit 9cdc0e2

Merge pull request #106302 from Blackmist/data-ingest-adf

Data ingest adf

2 parents 2319024 + 3574873

File tree

6 files changed: +104 -0 lines changed

articles/machine-learning/how-to-data-ingest-adf.md

Lines changed: 101 additions & 0 deletions

@@ -0,0 +1,101 @@
---
title: Data ingestion with Azure Data Factory
titleSuffix: Azure Machine Learning
description: Learn how to build a data ingestion pipeline with Azure Data Factory.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: conceptual
ms.author: iefedore
author: eedorenko
manager: davete
ms.reviewer: larryfr
ms.date: 03/01/2020

# Customer intent: As an experienced data engineer, I need to create a production data ingestion pipeline for the data used to train my models.
---

# Data ingestion with Azure Data Factory

In this article, you learn how to build a data ingestion pipeline with Azure Data Factory (ADF). This pipeline ingests data for use with Azure Machine Learning. Azure Data Factory allows you to easily extract, transform, and load (ETL) data. Once the data has been transformed and loaded into storage, it can be used to train your machine learning models.

Simple data transformation can be handled with native ADF activities and instruments such as [data flow](https://docs.microsoft.com/azure/data-factory/control-flow-execute-data-flow-activity). More complicated scenarios can be handled with custom code, such as Python or R.

There are several common techniques for using Azure Data Factory to transform data during ingestion. Each technique has pros and cons that determine whether it's a good fit for a specific use case:

| Technique | Pros | Cons |
| ----- | ----- | ----- |
| ADF + Azure Functions | Low latency, serverless compute</br>Stateful functions</br>Reusable functions | Only good for short-running processing |
| ADF + custom component | Large-scale parallel computing</br>Suited for heavy algorithms | Wrapping code into an executable</br>Complexity of handling dependencies and IO |
| ADF + Azure Databricks notebook | Apache Spark</br>Native Python environment | Can be expensive</br>Creating clusters initially takes time and adds latency |

## ADF with Azure Functions

![adf-function](media/how-to-data-ingest-adf/adf-function.png)

Azure Functions allows you to run small pieces of code (functions) without worrying about application infrastructure. In this option, the data is processed with custom Python code wrapped into an Azure Function.

The function is invoked with the [ADF Azure Function activity](https://docs.microsoft.com/azure/data-factory/control-flow-azure-function-activity). This approach is a good option for lightweight data transformations. A minimal sketch of such a function appears after the following list.

* Pros:
    * The data is processed on serverless compute with relatively low latency
    * An ADF pipeline can invoke a [Durable Azure Function](/azure/azure-functions/durable/durable-functions-overview) that may implement a sophisticated data transformation flow
    * The details of the data transformation are abstracted away by the Azure Function, which can be reused and invoked from other places
* Cons:
    * The Azure Function must be created before use with ADF
    * Azure Functions is good only for short-running data processing
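
The following sketch shows what such a function might look like. It assumes the ADF Azure Function activity posts SAS URLs for the input and output blobs in the request body; the payload field names and the pandas-based transformation are illustrative assumptions, not part of any ADF contract.

```python
import io

import azure.functions as func
import pandas as pd
from azure.storage.blob import BlobClient


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Hypothetical payload: {"input_blob_sas_url": ..., "output_blob_sas_url": ...}
    body = req.get_json()

    # Download the raw data produced by the upstream ADF activity.
    raw = BlobClient.from_blob_url(body["input_blob_sas_url"]).download_blob().readall()
    df = pd.read_csv(io.BytesIO(raw))

    # Example transformation: drop incomplete rows.
    df = df.dropna()

    # Write the transformed data where the next pipeline step expects it.
    out = df.to_csv(index=False)
    BlobClient.from_blob_url(body["output_blob_sas_url"]).upload_blob(out, overwrite=True)

    return func.HttpResponse(f"Processed {len(df)} rows")
```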

## ADF with Custom Component activity

![adf-customcomponent](media/how-to-data-ingest-adf/adf-customcomponent.png)

In this option, the data is processed with custom Python code wrapped into an executable. It's invoked with an [ADF Custom Component activity](https://docs.microsoft.com/azure/data-factory/transform-data-using-dotnet-custom-activity). This approach is a better fit for large data than the previous technique. A sketch of such an executable appears after the following list.

* Pros:
    * The data is processed on an [Azure Batch](https://docs.microsoft.com/azure/batch/batch-technical-overview) pool, which provides large-scale parallel and high-performance computing
    * Can be used to run heavy algorithms and process significant amounts of data
* Cons:
    * The Azure Batch pool must be created before use with ADF
    * Over-engineering related to wrapping Python code into an executable, and the complexity of handling dependencies and input/output parameters
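
As an illustration, the executable could be a plain command-line Python script like the following. The file name, arguments, and transformation are hypothetical; the activity's command would invoke it on the Batch pool, for example as `python transform.py --input raw.csv --output prepared.csv`.

```python
import argparse

import pandas as pd


def main() -> None:
    # Input and output locations arrive as command-line arguments,
    # supplied by the ADF activity's command.
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True, help="path to the raw data file")
    parser.add_argument("--output", required=True, help="path for the transformed file")
    args = parser.parse_args()

    # Example transformation: normalize column names and drop incomplete rows.
    df = pd.read_csv(args.input)
    df.columns = [c.strip().lower() for c in df.columns]
    df = df.dropna()
    df.to_csv(args.output, index=False)


if __name__ == "__main__":
    main()
```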

## ADF with Azure Databricks Python notebook

![adf-databricks](media/how-to-data-ingest-adf/adf-databricks.png)

[Azure Databricks](https://azure.microsoft.com/services/databricks/) is an Apache Spark-based analytics platform in the Microsoft cloud.

In this technique, the data transformation is performed by a [Python notebook](https://docs.microsoft.com/azure/data-factory/transform-data-using-databricks-notebook), running on an Azure Databricks cluster. This is probably the most common approach, and it leverages the full power of the Azure Databricks service, which is designed for distributed data processing at scale. A sketch of such a notebook appears after the following list.

* Pros:
    * The data is transformed on the most powerful data processing Azure service, which is backed by the Apache Spark environment
    * Native support of Python along with data science frameworks and libraries, including TensorFlow, PyTorch, and scikit-learn
    * There is no need to wrap the Python code into functions or executable modules. The code works as-is.
* Cons:
    * Azure Databricks infrastructure must be created before use with ADF
    * Can be expensive depending on the Azure Databricks configuration
    * Spinning up compute clusters from "cold" mode takes some time, which adds latency to the solution
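
A minimal sketch of such a notebook cell follows. It assumes the ADF Notebook activity passes the input and output paths as base parameters; the parameter names and the transformation are illustrative. The `spark` and `dbutils` objects are provided by the Databricks runtime.

```python
# Read the locations passed as the ADF Notebook activity's base parameters
# (hypothetical parameter names).
input_path = dbutils.widgets.get("input_path")
output_path = dbutils.widgets.get("output_path")

# Read the raw data written by the upstream ADF activities.
df = spark.read.csv(input_path, header=True, inferSchema=True)

# Example transformation: drop incomplete rows.
df = df.na.drop()

# Persist the prepared data for Azure Machine Learning to consume.
df.write.mode("overwrite").parquet(output_path)
```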

## Consuming data in Azure Machine Learning pipelines

![aml-dataset](media/how-to-data-ingest-adf/aml-dataset.png)

The transformed data from the ADF pipeline is saved to data storage (such as Azure Blob). Azure Machine Learning can access this data using [datastores](https://docs.microsoft.com/azure/machine-learning/how-to-access-data#create-and-register-datastores) and [datasets](https://docs.microsoft.com/azure/machine-learning/how-to-create-register-datasets).

Each time the ADF pipeline runs, the data is saved to a different location in storage. To pass the location to Azure Machine Learning, the ADF pipeline calls an Azure Machine Learning pipeline. When calling the ML pipeline, the data location and run ID are sent as parameters. The ML pipeline can then create a datastore/dataset using the data location.

> [!TIP]
> Datasets [support versioning](https://docs.microsoft.com/azure/machine-learning/how-to-version-track-datasets), so the ML pipeline can register a new version of the dataset that points to the most recent data from the ADF pipeline.
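
A sketch of how a step in the called ML pipeline might turn those parameters into a registered dataset is shown below. It uses the Azure Machine Learning Python SDK; the argument names, datastore name, and dataset name are assumptions for illustration.

```python
import argparse

from azureml.core import Dataset, Datastore, Run

# The data location and ADF run ID arrive as pipeline parameters
# (hypothetical argument names).
parser = argparse.ArgumentParser()
parser.add_argument("--data_path", required=True)
parser.add_argument("--adf_run_id", required=True)
args = parser.parse_args()

# Resolve the workspace and the datastore that ADF wrote to.
ws = Run.get_context().experiment.workspace
datastore = Datastore.get(ws, "adf_output_datastore")

# Create a dataset pointing at this run's output and register a new version.
dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, args.data_path)])
dataset.register(
    workspace=ws,
    name="prepared_data",
    description=f"Output of ADF run {args.adf_run_id}",
    create_new_version=True,
)
```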

Once the data is accessible through a datastore or dataset, you can use it to train an ML model. The training process might be part of the same ML pipeline that is called from ADF, or it might be a separate process, such as experimentation in a Jupyter notebook.

Since datasets support versioning, and each pipeline run creates a new version, it's easy to understand which version of the data was used to train a model.

## Next steps

* [Run a Databricks notebook in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/transform-data-using-databricks-notebook)
* [Access data in Azure storage services](https://docs.microsoft.com/azure/machine-learning/how-to-access-data#create-and-register-datastores)
* [Train models with datasets in Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/how-to-train-with-datasets)
* [DevOps for a data ingestion pipeline](https://docs.microsoft.com/azure/machine-learning/how-to-cicd-data-ingestion)

articles/machine-learning/toc.yml

Lines changed: 3 additions & 0 deletions
@@ -212,6 +212,9 @@
     - name: Create datasets with labels
       displayName: data, labels, torchvision
       href: how-to-use-labeled-dataset.md
+    - name: Data ingestion with Azure Data Factory
+      displayName: data, ingestion, adf
+      href: how-to-data-ingest-adf.md
     - name: DevOps for data ingestion
       displayName: data, ingestion, devops
       href: how-to-cicd-data-ingestion.md
