File: articles/data-factory/how-to-data-flow-error-rows.md (1 addition, 1 deletion)
@@ -20,7 +20,7 @@ There are two primary methods to graceful handle errors when writing data to you
* Alternatively, use the following steps to provide logging of columns that don't fit into a target string column, allowing your data flow to continue.

> [!NOTE]
- > When enabling automatic error row handling, as opposed to the following method of writing your own error handling logic, there will be a small performance penalty incurred by and additional step taken by ADF to perform a 2-phase operation to trap errors.
+ > When enabling automatic error row handling, as opposed to the following method of writing your own error handling logic, there will be a small performance penalty incurred by an additional step taken by the data factory to perform a 2-phase operation to trap errors.
File: articles/data-factory/how-to-sqldb-to-cosmosdb.md (6 additions, 6 deletions)
@@ -14,9 +14,9 @@ This guide explains how to take an existing normalized database schema in Azure
SQL schemas are typically modeled using third normal form, resulting in normalized schemas that provide high levels of data integrity and fewer duplicate data values. Queries can join entities together across tables for reading. Azure Cosmos DB is optimized for super-quick transactions and querying within a collection or container via denormalized schemas with data self-contained inside a document.

- Using Azure Data Factory, we build a pipeline that uses a single Mapping Data Flow to read from two Azure SQL Database normalized tables that contain primary and foreign keys as the entity relationship. ADF will join those tables into a single stream using the data flow Spark engine, collect joined rows into arrays and produce individual cleansed documents for insert into a new Azure Cosmos DB container.
+ Using Azure Data Factory, we build a pipeline that uses a single Mapping Data Flow to read from two Azure SQL Database normalized tables that contain primary and foreign keys as the entity relationship. Data factory will join those tables into a single stream using the data flow Spark engine, collect joined rows into arrays and produce individual cleansed documents for insert into a new Azure Cosmos DB container.

- This guide builds a new container on the fly called "orders" that will use the ```SalesOrderHeader``` and ```SalesOrderDetail``` tables from the standard SQL Server [Adventure Works sample database](/sql/samples/adventureworks-install-configure?tabs=ssms). Those tables represent sales transactions joined by ```SalesOrderID```. Each unique detail records have its own primary key of ```SalesOrderDetailID```. The relationship between header and detail is ```1:M```. We join on ```SalesOrderID``` in ADF and then roll each related detail record into an array called "detail".
+ This guide builds a new container on the fly called "orders" that will use the ```SalesOrderHeader``` and ```SalesOrderDetail``` tables from the standard SQL Server [Adventure Works sample database](/sql/samples/adventureworks-install-configure?tabs=ssms). Those tables represent sales transactions joined by ```SalesOrderID```. Each unique detail record has its own primary key of ```SalesOrderDetailID```. The relationship between header and detail is ```1:M```. We join on ```SalesOrderID``` in ADF and then roll each related detail record into an array called "detail".

The representative SQL query for this guide is:
@@ -33,7 +33,7 @@ The representative SQL query for this guide is:
FROM SalesLT.SalesOrderHeader o;
```

- The resulting Azure Cosmos DB container embeds the inner query into a single document and look like this:
+ The resulting Azure Cosmos DB container embeds the inner query into a single document and looks like this:
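Both the full query and the article's sample output document fall outside the lines shown in these hunks. As a rough, illustrative sketch of the join-and-nest pattern the guide describes, and not the article's exact query, something along these lines would emit one JSON fragment per order with the related detail rows collected into a nested array (column choices are illustrative picks from the Adventure Works ```SalesLT``` schema):

```sql
-- Illustrative sketch only (not the article's exact query): join each sales
-- order header to its detail rows and nest the details as a JSON array.
SELECT
    o.SalesOrderID,
    o.OrderDate,
    o.TotalDue,
    (SELECT d.SalesOrderDetailID, d.OrderQty, d.UnitPrice
     FROM SalesLT.SalesOrderDetail d
     WHERE d.SalesOrderID = o.SalesOrderID
     FOR JSON AUTO) AS detail
FROM SalesLT.SalesOrderHeader o;
```

In the mapping data flow itself, the same shape is produced without hand-written SQL: the join on ```SalesOrderID``` plus an aggregate that collects the detail columns into the "detail" array plays the role of the correlated subquery above.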
@@ -61,7 +61,7 @@ The resulting Azure Cosmos DB container embeds the inner query into a single doc
9. Now, let's go to the sales header source. Add a Join transformation. For the right-side select "MakeStruct". Leave it set to inner join and choose ```SalesOrderID``` for both sides of the join condition.

- 10. Select on the Data Preview tab in the new join that you added so that you can see your results up to this point. You should see all of the header rows joined with the detail rows. This is the result of the join being formed from the ```SalesOrderID```. Next, we combine the details from the common rows into the details struct and aggregate the common rows.
+ 10. Select the Data Preview tab in the new join that you added so that you can see your results up to this point. You should see all of the header rows joined with the detail rows. This is the result of the join being formed from the ```SalesOrderID```. Next, we combine the details from the common rows into the details struct and aggregate the common rows.
File: articles/data-factory/pipeline-trigger-troubleshoot-guide.md (10 additions, 10 deletions)
@@ -21,11 +21,11 @@ Pipeline runs are typically instantiated by passing arguments to parameters that
### An Azure Functions app pipeline throws an error with private endpoint connectivity

- You have Data Factory and a function app running on a private endpoint in Azure. You're trying to run a pipeline that interacts with the function app. You've tried three different methods, but one returns error "Bad Request," and the other two methods return "103 Error Forbidden."
+ You have a data factory and a function app running on a private endpoint in Azure. You're trying to run a pipeline that interacts with the function app. You've tried three different methods, but one returns error "Bad Request," and the other two methods return "103 Error Forbidden."

**Cause**

- Data Factory currently doesn't support a private endpoint connector for function apps. Azure Functions rejects calls because it's configured to allow only connections from a private link.
+ Azure Data Factory currently doesn't support a private endpoint connector for function apps. Azure Functions rejects calls because it's configured to allow only connections from a private link.

**Resolution**
@@ -45,7 +45,7 @@ Refresh the browser and apply the correct monitoring filters.
**Cause**

- If a folder you're copying contains files with different schemas, such as variable number of columns, different delimiters, quote char settings, or some data issue, the Data Factory pipeline might throw this error:
+ If a folder you're copying contains files with different schemas, such as variable number of columns, different delimiters, quote char settings, or some data issue, the pipeline might throw this error:

`
Operation on target Copy_sks failed: Failure happened on 'Sink' side.

- Select the **Binary Copy** option while creating the Copy activity. This way, for bulk copies or migrating your data from one data lake to another, Data Factory won't open the files to read the schema. Instead, Data Factory treats each file as binary and copy it to the other location.
+ Select the **Binary Copy** option while creating the Copy activity. This way, for bulk copies or migrating your data from one data lake to another, Data Factory won't open the files to read the schema. Instead, Azure Data Factory treats each file as binary and copies it to the other location.

### A pipeline run fails when you reach the capacity limit of the integration runtime for data flow
@@ -115,7 +115,7 @@ Azure Data Factory evaluates the outcome of all leaf-level activities. Pipeline
**Cause**

- You might need to monitor failed Data Factory pipelines in intervals, say 5 minutes. You can query and filter the pipeline runs from a data factory by using the endpoint.
+ You might need to monitor failed Azure Data Factory pipelines in intervals, say 5 minutes. You can query and filter the pipeline runs from a data factory by using the endpoint.

**Resolution**
* You can set up an Azure logic app to query all of the failed pipelines every 5 minutes, as described in [Query By Factory](/rest/api/datafactory/pipelineruns/querybyfactory). Then, you can report incidents to your ticketing system.
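For orientation, the body accepted by that endpoint is a time window plus optional run filters. A minimal sketch, assuming the documented ```RunFilterParameters``` shape for the Query By Factory API and using placeholder timestamps for a five-minute window, that returns only failed pipeline runs might look like:

```json
{
  "lastUpdatedAfter": "2024-01-01T00:00:00Z",
  "lastUpdatedBefore": "2024-01-01T00:05:00Z",
  "filters": [
    {
      "operand": "Status",
      "operator": "Equals",
      "values": [ "Failed" ]
    }
  ]
}
```

A logic app on a five-minute recurrence can POST a body like this to the factory's ```queryPipelineRuns``` endpoint and forward any returned runs to your ticketing system.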
@@ -152,7 +152,7 @@ Known Facts about *ForEach*
**Resolution**

* **Concurrency Limit:** If your pipeline has a concurrency policy, verify that there are no old pipeline runs in progress.
- * **Monitoring limits**: Go to the ADF authoring canvas, select your pipeline, and determine if it has a concurrency property assigned to it. If it does, go to the Monitoring view, and make sure there's nothing in the past 45 days that's in progress. If there's something in progress, you can cancel it and the new pipeline run should start.
+ * **Monitoring limits**: Go to the authoring canvas, select your pipeline, and determine if it has a concurrency property assigned to it. If it does, go to the Monitoring view, and make sure there's nothing in the past 45 days that's in progress. If there's something in progress, you can cancel it and the new pipeline run should start.

* **Transient Issues:** It's possible that your run was impacted by a transient network issue, credential failures, services outages etc. If this happens, Azure Data Factory has an internal recovery process that monitors all the runs and starts them when it notices something went wrong. You can rerun pipelines and activities as described [here.](monitor-visually.md#rerun-pipelines-and-activities). You can rerun activities if you had canceled activity or had a failure as per [Rerun from activity failures.](monitor-visually.md#rerun-from-failed-activity) This process happens every one hour, so if your run is stuck for more than an hour, create a support case.
@@ -199,7 +199,7 @@ It's a user error because JSON payload that hits management.azure.com is corrupt
**Resolution**

- Perform network tracing of your API call from ADF portal using Microsoft Edge/Chrome browser **Developer tools**. You'll see offending JSON payload, which could be due to a special character(for example $), spaces, and other types of user input. Once you fix the string expression, you'll proceed with rest of ADF usage calls in the browser.
+ Perform network tracing of your API call from the ADF portal using Microsoft Edge/Chrome browser **Developer tools**. You'll see the offending JSON payload, which could be due to a special character (for example, ```$```), spaces, and other types of user input. Once you fix the string expression, you'll proceed with the rest of the ADF usage calls in the browser.

### ForEach activities don't run in parallel mode
@@ -272,8 +272,8 @@ Input **execute pipeline** activity for pipeline parameter as *@createArray(
For more troubleshooting help, try these resources: