Event Grid is a fully managed service that you can use to easily manage events across many different Azure services and applications. The service simplifies the way that you build event-driven and serverless applications. This tutorial shows you how to trigger a batch endpoint's job to process files as soon as they're created in a storage account. The architecture uses a logic app workflow to subscribe to those events and trigger the endpoint.
The following diagram shows the architecture for this solution:
:::image type="content" source="./media/how-to-use-event-grid-batch/batch-endpoint-event-grid-arch.png" alt-text="Conceptual diagram shows the components for this architecture.":::
The following steps describe the high-level flow of this solution:
1. When a new blob is created in a specific storage account, a **file created** event is triggered.
1. The event is sent to Event Grid, which delivers it to all subscribers.
1. The logic app workflow subscribes and listens to those events.
   The storage account can contain multiple data assets, so event filtering is applied to react only to events happening in a specific folder in the storage account. Further filtering can be done if needed, for example, based on file extensions.
1. The logic app workflow triggers and performs the following actions:
   1. Gets an authorization token to invoke batch endpoints using the credentials from a service principal.
   1. Triggers the batch endpoint (default deployment) using the newly created file as input.
1. The batch endpoint returns the name of the job that was created to process the file.
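The numbered flow above can be sketched in plain Python. This is an illustration only: the sample event fields follow the Event Grid **Microsoft.Storage.BlobCreated** schema, but the invocation body layout and the input name `input_file` are assumptions for illustration, not the exact REST contract of batch endpoints.

```python
# Hypothetical sketch of the workflow logic, expressed as plain Python.
# The body layout and the "input_file" input name are assumptions.

def build_invoke_body(event: dict) -> dict:
    """Build a batch endpoint invocation body from a BlobCreated event."""
    blob_url = event["data"]["url"]  # URL of the newly created blob
    return {
        "properties": {
            "InputData": {
                "input_file": {  # hypothetical input name
                    "JobInputType": "UriFile",
                    "Uri": blob_url,
                }
            }
        }
    }

# A trimmed sample event with only the fields this sketch uses
sample_event = {
    "eventType": "Microsoft.Storage.BlobCreated",
    "subject": "/blobServices/default/containers/input-data/blobs/heart-data/file1.csv",
    "data": {"url": "https://mystorage.blob.core.windows.net/input-data/heart-data/file1.csv"},
}

invoke_body = build_invoke_body(sample_event)
```

In the actual solution, the workflow sends a body like this in a POST request to the batch endpoint, after attaching the bearer token obtained with the service principal credentials.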
> [!IMPORTANT]
>
> When you use a logic app workflow that connects with Event Grid to invoke a batch endpoint, you generate one job for *each blob file* created in the storage account. Keep in mind that batch endpoints distribute work at the file level, so no parallelization happens. Instead, you take advantage of the batch endpoints' capability to execute multiple jobs on the same compute cluster. If you need to run jobs on entire folders automatically, we recommend that you switch to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
## Prerequisites
* You have a model correctly deployed as a batch endpoint. You can extend this architecture to work with [Pipeline component deployments](concept-endpoints-batch.md?#pipeline-component-deployment) if needed.
* Your batch deployment runs in a compute cluster called `batch-cluster`.
* The logic app that you create communicates with Azure Machine Learning batch endpoints using REST.
  For more information about how to use the REST API for batch endpoints, see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md?tabs=rest).
## Authenticate against batch endpoints
Azure Logic Apps can invoke the REST APIs of batch endpoints by using the [HTTP](../connectors/connectors-native-http.md) action. Batch endpoints support Microsoft Entra ID for authorization, so requests made to the APIs require proper authentication handling.
This tutorial uses a service principal for authentication and interaction with batch endpoints.
1. Create a service principal by following [Register an application with Microsoft Entra ID and create a service principal](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal).
1. Create a secret to use for authentication by following [Option 3: Create a new client secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-client-secret).
1. Make sure to save the generated client secret **Value**, which appears only once.
1. Make sure to save the client ID and the tenant ID from the application's **Overview** pane.
1. Grant your service principal access to your workspace by following [Grant access](../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). For this example, the service principal requires the following permissions:
   - Permission in the workspace to read batch deployments and perform actions on them.
   - Permissions to read and write in data stores.
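With the service principal in place, a client can obtain a token for batch endpoints through the OAuth2 client credentials grant against `login.microsoftonline.com`, using `https://ml.azure.com` as the resource. The following Python sketch builds that request; the tenant ID, client ID, and secret are placeholders.

```python
from urllib.parse import urlencode

def build_token_request(tenant_id: str, client_id: str, client_secret: str) -> tuple[str, str]:
    """Build the OAuth2 client credentials request for a batch endpoint token."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": "https://ml.azure.com",  # audience for Azure Machine Learning
    })
    return url, body

# Placeholder credentials for illustration only
token_url, token_body = build_token_request(
    "00000000-0000-0000-0000-000000000000", "my-client-id", "my-client-secret"
)
```

POST this body with the `Content-Type: application/x-www-form-urlencoded` header; the JSON response includes an `access_token` value that is then sent as a bearer token. This mirrors the expressions that the **Authorize** action uses later in this tutorial.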
## Enable data access
To indicate the input data that you want to send to the deployment job, this tutorial uses cloud URIs provided by Event Grid. Batch endpoints use the identity of the compute to mount the data, while keeping the identity of the job to read the mounted data. So, you have to assign a user-assigned managed identity to the compute cluster, and make sure the cluster has access to mount the underlying data. To ensure data access, follow these steps:
1. Create a [managed identity resource](../active-directory/managed-identities-azure-resources/overview.md):
1. In the [Azure portal](https://portal.azure.com), make sure the managed identity has the correct permissions to read the data.
   To access storage services, you must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
## Create a logic app
1. In the [Azure portal](https://portal.azure.com), on the Azure home page, select **Create a resource**.
1. On the Azure Marketplace menu, select **Integration** > **Logic App**.

1. On the **Create Logic App** pane, on the **Basics** tab, provide the following information about your logic app resource.
| Property | Required | Value | Description |
|----------|----------|-------|-------------|
|**Subscription**| Yes | <*Azure-subscription-name*> | Your Azure subscription name. This example uses **Pay-As-You-Go**. |
|**Resource Group**| Yes |**LA-TravelTime-RG**| The [Azure resource group](../azure-resource-manager/management/overview.md) where you create your logic app resource and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
|**Name**| Yes |**LA-TravelTime**| Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |

1. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Consumption** to show only the settings for a Consumption logic app workflow, which runs in multitenant Azure Logic Apps.
> [!IMPORTANT]
>
> For private-link-enabled workspaces, you need to use the Standard plan for Azure Logic Apps, configured to allow private networking.
The **Plan type** property also specifies the billing model to use.
| Plan type | Description |
|-----------|-------------|
|**Standard**| This logic app type is the default selection. It runs in single-tenant Azure Logic Apps and uses the [Standard pricing model](../logic-apps/logic-apps-pricing.md#standard-pricing). |
|**Consumption**| This logic app type runs in global, multitenant Azure Logic Apps and uses the [Consumption pricing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). |
1. Now continue with the following selections:
| Property | Required | Value | Description |
> [!IMPORTANT]
>
> The **Prefix Filter** property allows Event Grid to notify the workflow only when a blob is created in the specified path. In this case, we assume that files are created by some external process in the folder specified by **<path-to-data-folder>** inside the container **<container-name>**, which is in the selected storage account. Configure this parameter to match the location of your data. Otherwise, the event fires for any file created at any location in the storage account. For more information, see [Event filtering for Event Grid](../event-grid/event-filtering.md).
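Event Grid forms blob event subjects as `/blobServices/default/containers/<container-name>/blobs/<blob-path>`, so a folder-scoped prefix filter takes the same shape. The following Python sketch shows the check that such a filter performs; the container and folder names are placeholders.

```python
def matches_prefix_filter(subject: str, container: str, folder: str) -> bool:
    """Return True when a blob event subject falls under the given folder."""
    prefix = f"/blobServices/default/containers/{container}/blobs/{folder}/"
    return subject.startswith(prefix)

# Placeholder names for illustration
subject = "/blobServices/default/containers/input-data/blobs/heart-data/batch1.csv"
```

With a prefix filter of `/blobServices/default/containers/input-data/blobs/heart-data/`, this sample subject matches, while blobs created elsewhere in the account do not.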
The following example shows how the trigger appears:
| Property | Value | Notes |
|----------|-------|-------|
|**Method**|`POST`| The HTTP method |
|**URI**|`concat('https://login.microsoftonline.com/', parameters('tenant_id'), '/oauth2/token')`| To enter this expression, select inside the **URI** box. From the options that appear, select the expression editor (formula icon). |
|**Headers**|`Content-Type` with value `application/x-www-form-urlencoded`||
|**Body**|`concat('grant_type=client_credentials&client_id=', parameters('client_id'), '&client_secret=', parameters('client_secret'), '&resource=https://ml.azure.com')`| To enter this expression, select inside the **Body** box. From the options that appear, select the expression editor (formula icon). |
The following example shows a sample **Authorize** action:
:::image type="content" source="./media/how-to-use-event-grid-batch/authorize.png" alt-text="Screenshot shows sample Authorize action in the logic app workflow.":::
1. Under the **Authorize** action, add another **HTTP** action, and rename the title to **Invoke**.
| Property | Value | Notes |
|----------|-------|-------|
|**Method**|`POST`| The HTTP method |
|**URI**|`endpoint_uri`| Select inside the **URI** box, and then under **Parameters**, select **endpoint_uri**. |
|**Headers**|`Content-Type` with value `application/json`||
|**Headers**|`Authorization` with value `concat('Bearer ', body('Authorize')['access_token'])`| To enter this expression, select inside the **Headers** box. From the options that appear, select the expression editor (formula icon). |
1. Select inside the **Body** box, and from the options that appear, select the expression editor (formula icon) to enter the following expression: