Commit 8082bae

Merge branch 'master' of https://github.com/MicrosoftDocs/azure-docs-pr into heidist-fresh2

2 parents 48255ac + 6c5d8fa

3 files changed: +23 −11 lines changed

articles/data-factory/author-global-parameters.md

Lines changed: 5 additions & 2 deletions
@@ -6,7 +6,7 @@ ms.subservice: authoring
 ms.topic: conceptual
 author: joshuha-msft
 ms.author: joowen
-ms.date: 05/12/2021
+ms.date: 01/31/2022
 ms.custom: devx-track-azurepowershell
 ---

@@ -30,6 +30,8 @@ After a global parameter is created, you can edit it by clicking the parameter's
 
 :::image type="content" source="media/author-global-parameters/create-global-parameter-3.png" alt-text="Create global parameters":::
 
+Global parameters are stored as part of the /factory/{factory_name}-arm-template parameters.json.
+
 ## Using global parameters in a pipeline
 
 Global parameters can be used in any [pipeline expression](control-flow-expression-language-functions.md). If a pipeline is referencing another resource such as a dataset or data flow, you can pass down the global parameter value via that resource's parameters. Global parameters are referenced as `pipeline().globalParameters.<parameterName>`.
@@ -93,7 +95,8 @@ $globalParametersJson = Get-Content $globalParametersFilePath
 Write-Host "Parsing JSON..."
 $globalParametersObject = [Newtonsoft.Json.Linq.JObject]::Parse($globalParametersJson)
 
-foreach ($gp in $globalParametersObject.GetEnumerator()) {
+foreach ($gp in $factoryFileObject.properties.globalParameters.GetEnumerator()) {
+# foreach ($gp in $globalParametersObject.GetEnumerator()) {
     Write-Host "Adding global parameter:" $gp.Key
     $globalParameterValue = $gp.Value.ToObject([Microsoft.Azure.Management.DataFactory.Models.GlobalParameterSpecification])
     $newGlobalParameters.Add($gp.Key, $globalParameterValue)
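For context, here is a minimal sketch of how the corrected loop sits in the article's deployment script. It assumes, as the surrounding script does, that $factoryFileObject holds the factory JSON parsed with [Newtonsoft.Json.Linq.JObject]::Parse and that $newGlobalParameters is the dictionary the script builds up; both names come from the article, but the exact surrounding lines are an assumption here, not part of this commit.

```powershell
# Sketch only: $factoryFileObject and $newGlobalParameters follow the names used
# in the article's script; the exact surrounding lines are assumed, not quoted.
$newGlobalParameters = New-Object 'System.Collections.Generic.Dictionary[string,Microsoft.Azure.Management.DataFactory.Models.GlobalParameterSpecification]'

# Enumerate the globalParameters node of the parsed factory file instead of the
# standalone global-parameters JSON ($globalParametersObject), which is the
# change this commit makes.
foreach ($gp in $factoryFileObject.properties.globalParameters.GetEnumerator()) {
    Write-Host "Adding global parameter:" $gp.Key
    $globalParameterValue = $gp.Value.ToObject([Microsoft.Azure.Management.DataFactory.Models.GlobalParameterSpecification])
    $newGlobalParameters.Add($gp.Key, $globalParameterValue)
}
```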

articles/data-factory/data-factory-troubleshoot-guide.md

Lines changed: 11 additions & 2 deletions
@@ -7,7 +7,7 @@ ms.service: data-factory
 ms.subservice: troubleshooting
 ms.custom: synapse
 ms.topic: troubleshooting
-ms.date: 09/30/2021
+ms.date: 01/28/2022
 ms.author: abnarain
 ---

@@ -1021,6 +1021,15 @@ For more information, see [Getting started with Fiddler](https://docs.telerik.co
 
 ## General
 
+### REST continuation token NULL error
+
+**Error message:** {\"token\":null,\"range\":{\"min\":\..}
+
+**Cause:** When a query spans multiple partitions or pages, the backend service returns the continuation token in JObject format with three properties: **token, min and max key ranges**, for instance, {\"token\":null,\"range\":{\"min\":\"05C1E9AB0DAD76\",\"max\":\"05C1E9CD673398\"}}. Depending on the source data, a query can return a null token value even though there is more data to fetch.
+
+**Recommendation:** When the continuationToken is non-null, even as the string {\"token\":null,\"range\":{\"min\":\"05C1E9AB0DAD76\",\"max\":\"05C1E9CD673398\"}}, call the queryActivityRuns API again and pass the full continuation-token string from the previous response. The remaining activities are returned in the subsequent pages of the query result. Ignore an empty activity array on a page; as long as the full continuationToken value is non-null, keep querying. For more details, see [REST API for pipeline run query](/rest/api/datafactory/activity-runs/query-by-pipeline-run).
+
+
 ### Activity stuck issue
 
 When you observe that the activity is running much longer than your normal runs with barely any progress, it may be stuck. You can try canceling it and retrying to see if that helps. If it's a copy activity, you can learn about performance monitoring and troubleshooting from [Troubleshoot copy activity performance](copy-activity-performance-troubleshooting.md); if it's a data flow, learn from [Mapping data flows performance](concepts-data-flow-performance.md) and tuning guide.
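A minimal PowerShell sketch of the paging pattern the new section describes. The subscription, resource group, factory, run ID, and bearer-token variables are hypothetical placeholders, not values from this commit; the request and response shape follows the documented queryActivityRuns REST API.

```powershell
# Sketch only: $subscriptionId, $resourceGroup, $factoryName, $runId, and $token
# are hypothetical placeholders you must supply yourself.
$uri = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup" +
       "/providers/Microsoft.DataFactory/factories/$factoryName/pipelineruns/$runId" +
       "/queryActivityRuns?api-version=2018-06-01"
$body = @{
    lastUpdatedAfter  = '2022-01-01T00:00:00Z'
    lastUpdatedBefore = '2022-01-31T00:00:00Z'
}
$activityRuns = @()
do {
    $response = Invoke-RestMethod -Method Post -Uri $uri `
        -Headers @{ Authorization = "Bearer $token" } `
        -Body ($body | ConvertTo-Json) -ContentType 'application/json'
    $activityRuns += $response.value   # a page may be empty; keep paging anyway
    # Pass the full token string back, even when it reads {"token":null,...}.
    $body.continuationToken = $response.continuationToken
} while ($response.continuationToken)
```

The loop keeps calling the API while the full continuationToken string is non-null, even when a page's value array is empty, which is exactly the behavior the recommendation calls for.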
@@ -1051,4 +1060,4 @@ For more troubleshooting help, try these resources:
 * [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
 * [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
 * [Azure videos](https://azure.microsoft.com/resources/videos/index/)
-* [Microsoft Q&A question page](/answers/topics/azure-data-factory.html)
+* [Microsoft Q&A question page](/answers/topics/azure-data-factory.html)

articles/data-factory/data-flow-troubleshoot-errors.md

Lines changed: 7 additions & 7 deletions
@@ -7,7 +7,7 @@ ms.reviewer: daperlov
 ms.service: data-factory
 ms.subservice: data-flows
 ms.topic: troubleshooting
-ms.date: 10/01/2021
+ms.date: 01/21/2022
 ---
 
 # Common error codes and messages
@@ -33,7 +33,7 @@ This article lists common error codes and messages reported by mapping data flow
 
 If you're running the data flow in a debug test execution from a debug pipeline run, you might run into this condition more frequently. That's because Azure Data Factory throttles the broadcast timeout to 60 seconds to maintain a faster debugging experience. You can extend the timeout to the 300-second timeout of a triggered run. To do so, you can use the **Debug** > **Use Activity Runtime** option to use the Azure IR defined in your Execute Data Flow pipeline activity.
 
-- **Message**: Broadcast join timeout error, you can choose 'Off' of broadcast option in join/exists/lookup transformation to avoid this issue. If you intend to broadcast join option to improve performance then make sure broadcast stream can produce data within 60 secs in debug runs and 300 secs in job runs.
+- **Message**: Broadcast join timeout error, you can choose 'Off' of broadcast option in join/exists/lookup transformation to avoid this issue. If you intend to broadcast join option to improve performance, then make sure broadcast stream can produce data within 60 secs in debug runs and 300 secs in job runs.
 - **Cause**: Broadcast has a default timeout of 60 seconds in debug runs and 300 seconds in job runs. On the broadcast join, the stream chosen for broadcast is too large to produce data within this limit. If a broadcast join isn't used, the default broadcast by dataflow can reach the same limit.
 - **Recommendation**: Turn off the broadcast option or avoid broadcasting large data streams for which the processing can take more than 60 seconds. Choose a smaller stream to broadcast. Large Azure SQL Data Warehouse tables and source files aren't typically good choices. In the absence of a broadcast join, use a larger cluster if this error occurs.

@@ -49,7 +49,7 @@ This article lists common error codes and messages reported by mapping data flow
 - **Recommendation**: Set an alias if you're using a SQL function like min() or max().
 
 ## Error code: DF-Executor-DriverError
-- **Message**: INT96 is legacy timestamp type which is not supported by ADF Dataflow. Please consider upgrading the column type to the latest types.
+- **Message**: INT96 is legacy timestamp type, which is not supported by ADF Dataflow. Please consider upgrading the column type to the latest types.
 - **Cause**: Driver error.
 - **Recommendation**: INT96 is a legacy timestamp type that's not supported by Azure Data Factory data flow. Consider upgrading the column type to the latest type.

@@ -59,7 +59,7 @@ This article lists common error codes and messages reported by mapping data flow
 - **Recommendation**: Contact the Microsoft product team for more details about this problem.
 
 ## Error code: DF-Executor-PartitionDirectoryError
-- **Message**: The specified source path has either multiple partitioned directories (for e.g. &lt;Source Path&gt;/<Partition Root Directory 1>/a=10/b=20, &lt;Source Path&gt;/&lt;Partition Root Directory 2&gt;/c=10/d=30) or partitioned directory with other file or non-partitioned directory (for example &lt;Source Path&gt;/&lt;Partition Root Directory 1&gt;/a=10/b=20, &lt;Source Path&gt;/Directory 2/file1), remove partition root directory from source path and read it through separate source transformation.
+- **Message**: The specified source path has either multiple partitioned directories (for example, &lt;Source Path&gt;/<Partition Root Directory 1>/a=10/b=20, &lt;Source Path&gt;/&lt;Partition Root Directory 2&gt;/c=10/d=30) or partitioned directory with other file or non-partitioned directory (for example &lt;Source Path&gt;/&lt;Partition Root Directory 1&gt;/a=10/b=20, &lt;Source Path&gt;/Directory 2/file1), remove partition root directory from source path and read it through separate source transformation.
 - **Cause**: The source path has either multiple partitioned directories or a partitioned directory that has another file or non-partitioned directory.
 - **Recommendation**: Remove the partitioned root directory from the source path and read it through separate source transformation.

@@ -125,7 +125,7 @@ This article lists common error codes and messages reported by mapping data flow
 ## Error code: InvalidTemplate
 - **Message**: The pipeline expression cannot be evaluated.
 - **Cause**: The pipeline expression passed in the Data Flow activity isn't being processed correctly because of a syntax error.
-- **Recommendation**: Check your activity in activity monitoring to verify the expression.
+- **Recommendation**: Check the data flow activity name. Check the expressions in activity monitoring to verify them. For example, a data flow activity name can't contain a space or a hyphen.
 
 ## Error code: 2011
 - **Message**: The activity was running on Azure Integration Runtime and failed to decrypt the credential of data store or compute connected via a Self-hosted Integration Runtime. Please check the configuration of linked services associated with this activity, and make sure to use the proper integration runtime type.
@@ -248,7 +248,7 @@ This article lists common error codes and messages reported by mapping data flow
 ## Error code: DF-Hive-InvalidBlobStagingConfiguration
 - **Message**: Blob storage staging properties should be specified.
 - **Cause**: An invalid staging configuration is provided in the Hive.
-- **Recommendation**: Please check if the account key, account name and container are set properly in the related Blob linked service which is used as staging.
+- **Recommendation**: Please check if the account key, account name and container are set properly in the related Blob linked service, which is used as staging.
 
 ## Error code: DF-Hive-InvalidGen2StagingConfiguration
 - **Message**: ADLS Gen2 storage staging only support service principal key credential.
@@ -567,4 +567,4 @@ For more help with troubleshooting, see these resources:
 - [Data Factory feature requests](/answers/topics/azure-data-factory.html)
 - [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory)
 - [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
-- [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
+- [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
