articles/data-factory/better-understand-different-integration-runtime-charges.md
@@ -18,7 +18,7 @@ In this article, we'll illustrate the pricing model using different integration
The integration runtime, which is serverless in Azure and self-hosted in hybrid scenarios, provides the compute resources used to execute the activities in a pipeline. Integration runtime charges are prorated by the minute and rounded up.
> [!NOTE]
> The prices used in the following example are hypothetical and aren't intended to imply actual pricing.
* The ForEach activity iterates over a specified collection and executes activities in a loop.
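As a sketch, a trimmed pipeline JSON definition of a ForEach activity might look like the following (the activity names and the `fileNames` parameter are illustrative, and non-essential properties are omitted):

```json
{
  "name": "IterateFiles",
  "type": "ForEach",
  "typeProperties": {
    "items": {
      "value": "@pipeline().parameters.fileNames",
      "type": "Expression"
    },
    "activities": [
      {
        "name": "CopyOneFile",
        "type": "Copy"
      }
    ]
  }
}
```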
- Trigger-based flows: Pipelines can be triggered on demand, by wall-clock time, or in response to Event Grid topics.
- Delta flows: Parameters can be used to define your high-water mark for delta copy while moving dimension or reference tables from a relational store, either on-premises or in the cloud, to load the data into the lake.
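As a sketch of the high-water mark pattern, a Copy activity source could filter on a pipeline parameter that holds the last watermark (the table, column, and `watermark` parameter names here are illustrative):

```json
{
  "source": {
    "type": "AzureSqlSource",
    "sqlReaderQuery": "SELECT * FROM dbo.Orders WHERE LastModifiedTime > '@{pipeline().parameters.watermark}'"
  }
}
```

After each successful run, the pipeline would persist the new maximum `LastModifiedTime` to use as the watermark for the next run.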
@@ -223,14 +223,18 @@ sections:
### How do I gracefully handle null values in an activity output?
You can use the `@coalesce` construct in expressions to handle null values gracefully.
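For example, `coalesce` returns the first non-null argument, so a hypothetical expression that falls back to a default when an activity output property is null might look like this (the `LookupConfig` activity and `filePath` property are illustrative):

```
@coalesce(activity('LookupConfig').output.firstRow.filePath, '/data/default')
```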
### How many pipeline activities can be executed simultaneously?
A maximum of 50 concurrent pipeline activities is allowed. The 51st pipeline activity is queued until a free slot opens. A maximum of 800 concurrent external activities is allowed, after which they're queued in the same way.
- question: |
Mapping data flows
answer: |
### I need help troubleshooting my data flow logic. What info do I need to provide to get help?
When Microsoft provides help or troubleshooting with data flows, please provide the ADF pipeline support files.
This ZIP file contains the code-behind script from your data flow graph. From the ADF UI, select **...** next to the pipeline, and then select **Download support files**.
### How do I access data by using the other 90 dataset types in Data Factory?
@@ -240,23 +244,23 @@ sections:
### Is the self-hosted integration runtime available for data flows?
Self-hosted IR is an ADF pipeline construct that you can use with the Copy activity to acquire or move data to and from on-premises or VM-based data sources and sinks. The virtual machines that you use for a self-hosted IR can also be placed inside the same VNET as your protected data stores for access to those data stores from ADF. With data flows, you achieve these same end results by using the Azure IR with managed VNET instead.
### Does the data flow compute engine serve multiple tenants?
Clusters are never shared; we guarantee isolation for each job run in production. In the debug scenario, each user gets their own cluster, and all debug runs initiated by that user go to that cluster.
### Is there a way to write attributes in Cosmos DB in the same order as specified in the sink in ADF data flow?
For Cosmos DB, the underlying format of each document is a JSON object, which is an unordered set of name/value pairs, so the order can't be preserved.
### Why is a user unable to use data preview in data flows?
Check the permissions granted to your custom role. Data preview in data flows involves multiple actions; start by inspecting the network traffic while debugging in your browser, and make sure your role allows all of the required actions. For details, see [Resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftdatafactory).
### In ADF, can I calculate the value of a new column from an existing column in mapping?
You can use the derive transformation in mapping data flow to create a new column based on the logic you want. When creating a derived column, you can either generate a new column or update an existing one. In the Column textbox, enter the name of the column you're creating. To override an existing column in your schema, use the column dropdown. To build the derived column's expression, select the Enter expression textbox. You can either start typing your expression or open the expression builder to construct your logic.
### Why is mapping data flow preview failing with a gateway timeout?
@@ -279,7 +283,7 @@ sections:
Data Factory is available in the following [regions](https://azure.microsoft.com/global-infrastructure/services/?products=data-factory).
The Power Query feature is available in all data flow regions. If the feature isn't available in your region, check with support.
### What is the difference between mapping data flow and the Power Query activity (data wrangling)?
Mapping data flows provide a way to transform data at scale without any coding required. You can design a data transformation job in the data flow canvas by constructing a series of transformations. Start with any number of source transformations followed by data transformation steps. Complete your data flow with a sink to land your results in a destination. Mapping data flow is great at mapping and transforming data with both known and unknown schemas in the sinks and sources.