
Commit 0be8187

Merge pull request #304063 from MicrosoftDocs/main
Auto Publish – main to live - 2025-08-11 11:00 UTC
2 parents 0ab6ef3 + ffc7eab commit 0be8187

14 files changed: +114 −41 lines changed

articles/backup/backup-azure-files-faq.yml

Lines changed: 4 additions & 0 deletions
@@ -108,6 +108,10 @@ sections:
       answer: |
         If you are restoring from snapshot backup, for Original Location Recovery (OLR) the total restore time depends on number of files and directories in the share. When you restore to alternate location, the restore time depends on number of files and directories in the share to be restored and available IOPS on source and target storage account.

+  - question: |
+      Will the retention policy delete a recovery point selected for restore before the restore operation completes?
+    answer: |
+      No. Azure Backup takes a lease on the recovery point during restore, ensuring it isn't deleted before completion.


   - name: Manage backup

articles/backup/troubleshoot-azure-files.md

Lines changed: 8 additions & 1 deletion
@@ -2,7 +2,7 @@
 title: Troubleshoot Azure Files backup
 description: This article is troubleshooting information about issues occurring when protecting your Azure Files.
 ms.service: azure-backup
-ms.date: 04/30/2025
+ms.date: 08/11/2025
 ms.topic: troubleshooting
 author: AbhishekMallick-MS
 ms.author: v-mallicka
@@ -307,6 +307,13 @@ Recommended Actions: Ensure that the following configurations in the storage acc

 **Recommended action**: The next backup will be automatically triggered with increased vault storage.

+### UserErrorStorageKeyBasedAuthenticationNotPermitted
+
+**Error code**: `UserErrorStorageKeyBasedAuthenticationNotPermitted`
+
+**Error message**: Storage account does not support key based authentication required for Azure Backup integration.
+
+**Recommended action**: Enable storage key-based authentication on the storage account and retry the operation.

 ## Common policy modification errors
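
An illustrative note on that recommended action (not part of this commit): shared key (storage key based) authentication is governed by the storage account's `allowSharedKeyAccess` property. A minimal ARM template fragment that re-enables it might look like the following sketch; the account name, location, SKU, and API version are placeholder assumptions, not values from the article.

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2023-01-01",
  "name": "<storage account name>",
  "location": "<region>",
  "sku": { "name": "Standard_LRS" },
  "kind": "StorageV2",
  "comments": "Hypothetical example: allowSharedKeyAccess must be true for Azure Backup of Azure Files to use key based authentication.",
  "properties": {
    "allowSharedKeyAccess": true
  }
}
```

After updating the account (via template, portal, or CLI), retry the backup or restore operation.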

articles/data-factory/connector-azure-database-for-postgresql.md

Lines changed: 69 additions & 13 deletions
@@ -25,11 +25,12 @@ This connector is specialized for the [Azure Database for PostgreSQL service](/a

 This Azure Database for PostgreSQL connector is supported for the following capabilities:

-| Supported capabilities|IR | Managed private endpoint|
+| Supported capabilities | IR | Managed private endpoint | Connector supported versions |
 |---------| --------| --------|
-|[Copy activity](copy-activity-overview.md) (source/sink)|① ②||
-|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① ||
-|[Lookup activity](control-flow-lookup-activity.md)|① ②||
+|[Copy activity](copy-activity-overview.md) (source/sink)|① ②||1.0 & 2.0 |
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|① ||1.0 & 2.0 |
+|[Lookup activity](control-flow-lookup-activity.md)|① ②||1.0 & 2.0 |
+|[Script activity](transform-data-using-script.md)|① ②||2.0 |

 *① Azure integration runtime ② Self-hosted integration runtime*

@@ -474,17 +475,20 @@ To copy data from Azure Database for PostgreSQL, set the source type in the copy

 ### Azure Database for PostgreSQL as sink

-To copy data to Azure Database for PostgreSQL, the following properties are supported in the copy activity **sink** section:
+To copy data to Azure Database for PostgreSQL, set the sink type in the copy activity to **AzurePostgreSQLSink**. The following properties are supported in the copy activity **sink** section:

-| Property | Description | Required |
-|:--- |:--- |:--- |
-| type | The type property of the copy activity sink must be set to **AzurePostgreSqlSink**. | Yes |
-| preCopyScript | Specify a SQL query for the copy activity to execute before you write data into Azure Database for PostgreSQL in each run. You can use this property to clean up the preloaded data. | No |
-| writeMethod | The method used to write data into Azure Database for PostgreSQL.<br>Allowed values are: **CopyCommand** (default, which is more performant), **BulkInsert**. | No |
-| writeBatchSize | The number of rows loaded into Azure Database for PostgreSQL per batch.<br>Allowed value is an integer that represents the number of rows. | No (default is 1,000,000) |
-| writeBatchTimeout | Wait time for the batch insert operation to complete before it times out.<br>Allowed values are Timespan strings. An example is 00:30:00 (30 minutes). | No (default is 00:30:00) |
+| Property | Description | Required | Connector support version |
+|:--- |:--- |:--- |:--- |
+| type | The type property of the copy activity sink must be set to **AzurePostgreSQLSink**. | Yes | Version 1.0 & Version 2.0 |
+| preCopyScript | Specify a SQL query for the copy activity to execute before you write data into Azure Database for PostgreSQL in each run. You can use this property to clean up the preloaded data. | No | Version 1.0 & Version 2.0 |
+| writeMethod | The method used to write data into Azure Database for PostgreSQL.<br>Allowed values are: **CopyCommand** (default, which is more performant), **BulkInsert**, and **Upsert** (Version 2.0 only). | No | Version 1.0 & Version 2.0 |
+| upsertSettings | Specify the group of settings for the write behavior. <br/> Applies when `writeMethod` is `Upsert`. | No | Version 2.0 |
+| ***Under `upsertSettings`:*** | | | |
+| keys | Specify the column names for unique row identification. Either a single key or a series of keys can be used. Keys must be a primary key or unique column. If not specified, the primary key is used. | No | Version 2.0 |
+| writeBatchSize | The number of rows loaded into Azure Database for PostgreSQL per batch.<br>Allowed value is an integer that represents the number of rows. | No (default is 1,000,000) | Version 1.0 & Version 2.0 |
+| writeBatchTimeout | Wait time for the batch insert operation to complete before it times out.<br>Allowed values are Timespan strings. An example is 00:30:00 (30 minutes). | No (default is 00:30:00) | Version 1.0 & Version 2.0 |

-**Example**:
+**Example 1: Copy Command**

 ```json
 "activities":[
@@ -518,6 +522,47 @@ To copy data to Azure Database for PostgreSQL, the following properties are supp
 ]
 ```

+**Example 2: Upsert data**
+
+```json
+"activities":[
+    {
+        "name": "CopyToAzureDatabaseForPostgreSQL",
+        "type": "Copy",
+        "inputs": [
+            {
+                "referenceName": "<input dataset name>",
+                "type": "DatasetReference"
+            }
+        ],
+        "outputs": [
+            {
+                "referenceName": "<Azure PostgreSQL output dataset name>",
+                "type": "DatasetReference"
+            }
+        ],
+        "typeProperties": {
+            "source": {
+                "type": "<source type>"
+            },
+            "sink": {
+                "type": "AzurePostgreSQLSink",
+                "writeMethod": "Upsert",
+                "upsertSettings": {
+                    "keys": [
+                        "<column name>"
+                    ]
+                }
+            }
+        }
+    }
+]
+```
+
+### Upsert data
+
+Copy activity natively supports upsert operations. To perform an upsert, provide key columns that are either primary keys or unique columns. If you don't provide key columns, the primary key columns of the sink table are used. Copy activity updates the non-key columns in the sink table where the key column values match those in the source; otherwise, it inserts new rows.
+
 ## Parallel copy from Azure Database for PostgreSQL

 The Azure Database for PostgreSQL connector in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.
@@ -641,6 +686,17 @@ IncomingStream sink(allowSchemaDrift: true,
     skipDuplicateMapOutputs: true) ~> AzurePostgreSqlSink
 ```

+## Script activity
+
+> [!IMPORTANT]
+> Script activity is only supported in the version 2.0 connector.
+
+> [!IMPORTANT]
+> Multi-query statements using output parameters are not supported. It is recommended that you split any output queries into separate script blocks within the same or different script activity.
+>
+> Multi-query statements using positional parameters are not supported. It is recommended that you split any positional queries into separate script blocks within the same or different script activity.
+
+For more information about script activity, see [Script activity](transform-data-using-script.md).
+
 ## Lookup activity properties

 For more information about the properties, see [Lookup activity](control-flow-lookup-activity.md).
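
For readers of this diff, here's a minimal illustrative sketch of what a Script activity against this connector could look like. It isn't part of this commit; the activity name, linked service reference, table names, and timeout are placeholders, and the JSON shape follows the Script activity article linked above.

```json
{
    "name": "RunPostgreSqlScript",
    "type": "Script",
    "description": "Hypothetical example: run a non-query statement, then a query, as separate script blocks.",
    "linkedServiceName": {
        "referenceName": "<Azure Database for PostgreSQL linked service, version 2.0>",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "scripts": [
            {
                "type": "NonQuery",
                "text": "TRUNCATE TABLE staging_orders;"
            },
            {
                "type": "Query",
                "text": "SELECT COUNT(*) AS row_count FROM orders;"
            }
        ],
        "scriptBlockExecutionTimeout": "02:00:00"
    }
}
```

Keeping each statement in its own script block also sidesteps the multi-query limitations called out in the IMPORTANT notes above.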

articles/data-factory/connector-teradata.md

Lines changed: 5 additions & 3 deletions
@@ -6,7 +6,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 06/06/2025
+ms.date: 08/05/2025
 ms.author: jianleishen
 ---

@@ -34,7 +34,8 @@ For a list of data stores that are supported as sources/sinks by the copy activi

 Specifically, this Teradata connector supports:

-- Teradata **version 14.10, 15.0, 15.10, 16.0, 16.10, and 16.20**.
+- Teradata Vantage versions **17.0, 17.10, 17.20, and 20.0** for version 2.0.
+- Teradata Vantage versions **14.10, 15.0, 15.10, 16.0, 16.10, and 16.20** for version 1.0.
 - Copying data by using **Basic**, **Windows**, or **LDAP** authentication.
 - Parallel copying from a Teradata source. See the [Parallel copy from Teradata](#parallel-copy-from-teradata) section for details.

@@ -44,7 +45,8 @@ Specifically, this Teradata connector supports:

 ### For version 2.0

-You need to [install .NET Data Provider](https://downloads.teradata.com/download/connectivity/net-data-provider-teradata) with version 20.00.03.00 or above on your self-hosted integration runtime if you use it.
+If your self-hosted integration runtime version is earlier than 5.56.9318.1, you need to [install .NET Data Provider](https://downloads.teradata.com/download/connectivity/net-data-provider-teradata) version 20.00.03.00 or above on the machine that runs the self-hosted integration runtime. Manual installation of the Teradata driver isn't required with self-hosted integration runtime version 5.56.9318.1 or above, because those versions provide a built-in driver.
+
 ### For version 1.0

 If you use Self-hosted Integration Runtime, note it provides a built-in Teradata driver starting from version 3.18. You don't need to manually install any driver. The driver requires "Visual C++ Redistributable 2012 Update 4" on the self-hosted integration runtime machine. If you don't yet have it installed, download it from [here](https://www.microsoft.com/en-sg/download/details.aspx?id=30679).

articles/data-factory/transform-data-using-script.md

Lines changed: 9 additions & 8 deletions
@@ -6,7 +6,7 @@ ms.topic: conceptual
 author: nabhishek
 ms.author: abnarain
 ms.custom: synapse
-ms.date: 10/03/2024
+ms.date: 08/01/2025
 ms.subservice: orchestration
 ---

@@ -20,9 +20,10 @@ Using the script activity, you can execute common operations with Data Manipulat

 You can use the Script activity to invoke a SQL script in one of the following data stores in your enterprise or on an Azure virtual machine (VM):

+- Azure Database for PostgreSQL (Version 2.0)
 - Azure SQL Database
-- Azure Synapse Analytics
-- SQL Server Database. If you are using SQL Server, install Self-hosted integration runtime on the same machine that hosts the database or on a separate machine that has access to the database. Self-Hosted integration runtime is a component that connects data sources on-premises/on Azure VM with cloud services in a secure and managed way. See the [Self-hosted integration runtime](create-self-hosted-integration-runtime.md) article for details.
+- Azure Synapse Analytics
+- SQL Server Database. If you're using SQL Server, install Self-hosted integration runtime on the same machine that hosts the database or on a separate machine that has access to the database. Self-Hosted integration runtime is a component that connects data sources on-premises/on Azure VM with cloud services in a secure and managed way. See the [Self-hosted integration runtime](create-self-hosted-integration-runtime.md) article for details.
 - Oracle
 - Snowflake

@@ -36,7 +37,7 @@ The script can contain either a single SQL statement or multiple SQL statements

 ## Syntax details

-Here is the JSON format for defining a Script activity:
+Here's the JSON format for defining a Script activity:

 ```json
 {
@@ -142,7 +143,7 @@ Sample output:
 |Property name |Description |Condition |
 |---------|---------|---------|
 |resultSetCount |The count of result sets returned by the script. |Always |
-|resultSets |The array which contains all the result sets. |Always |
+|resultSets |The array that contains all the result sets. |Always |
 |resultSets.rowCount |Total rows in the result set. |Always |
 |resultSets.rows |The array of rows in the result set. |Always |
 |recordsAffected |The row count of affected rows by the script. |If scriptType is NonQuery |
@@ -153,10 +154,10 @@ Sample output:

 > [!NOTE]
 > - The output is collected every time a script block is executed. The final output is the merged result of all script block outputs. The output parameter with same name in different script block will get overwritten.
-> - Since the output has size / rows limitation, the output will be truncated in following order: logs -> parameters -> rows. Note, this applies to a single script block, which means the output rows of next script block won’t evict previous logs.
+> - Since the output has size / rows limitation, the output will be truncated in the following order: logs -> parameters -> rows. This applies to a single script block, which means the output rows of next script block won’t evict previous logs.
 > - Any error caused by log won’t fail the activity.
-> - For consuming activity output resultSets in down stream activity please refer to the [Lookup activity result documentation](control-flow-lookup-activity.md#use-the-lookup-activity-result).
-> - Use outputLogs when you are using 'PRINT' statements for logging purpose. If query returns resultSets, it will be available in the activity output and will be limited to 5000 rows/ 4MB size limit.
+> - For consuming activity output resultSets in down stream activity, refer to the [Lookup activity result documentation](control-flow-lookup-activity.md#use-the-lookup-activity-result).
+> - Use outputLogs when you're using 'PRINT' statements for logging purpose. If query returns resultSets, it will be available in the activity output and will be limited to 5000 rows/ 4MB size limit.

 ## Configure the Script activity using UI
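
As an illustration of the note above about consuming `resultSets` in a downstream activity (not part of this commit), a hypothetical Set Variable activity could read the first row of the first result set returned by a Script activity named `Script1`. The activity, variable, and column names are placeholders.

```json
{
    "name": "CaptureRowCount",
    "type": "SetVariable",
    "description": "Hypothetical example: copy a value from the Script activity output into a pipeline variable.",
    "dependsOn": [
        {
            "activity": "Script1",
            "dependencyConditions": [ "Succeeded" ]
        }
    ],
    "typeProperties": {
        "variableName": "rowCount",
        "value": {
            "value": "@{activity('Script1').output.resultSets[0].rows[0].row_count}",
            "type": "Expression"
        }
    }
}
```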

articles/hdinsight/hdinsight-autoscale-clusters.md

Lines changed: 5 additions & 6 deletions
@@ -75,13 +75,12 @@ It's recommended that Ambari DB is sized correctly to reap the benefits of autos
 The following table describes the cluster types and versions that are compatible with the Autoscale feature.

 | Version | Spark | Hive | Interactive Query | HBase | Kafka |
-|---|---|---|---|---|---|---|
-| HDInsight 4.0 without ESP | Yes | Yes | Yes* | No | No |
-| HDInsight 4.0 with ESP | Yes | Yes | Yes* | No | No |
-| HDInsight 5.0 without ESP | Yes | Yes | Yes* | No | No |
-| HDInsight 5.0 with ESP | Yes | Yes | Yes* | No | No |
+|---|---|---|---|---|---|
+| HDInsight 5.1 without ESP | Yes | Yes | Yes* | No | No |
+| HDInsight 5.1 with ESP | Yes | Yes | Yes* | No | No |

-\* Interactive Query clusters can only be configured for schedule-based scaling, not load-based.
+> [!NOTE]
+> Interactive Query clusters can only be configured for schedule-based scaling. Load-based Autoscale is not supported.

 ## Get started

articles/hdinsight/hdinsight-component-retirements-and-action-required.md

Lines changed: 3 additions & 2 deletions
@@ -3,7 +3,7 @@ title: Azure HDInsight component retirements and action required
 description: Learn about HDInsight retirement versions and its components in Azure HDInsight clusters.
 ms.service: azure-hdinsight
 ms.topic: conceptual
-ms.date: 09/30/2024
+ms.date: 08/11/2025
 author: anuj1905
 ms.author: anujsharda
 ms.reviewer: hgowrisankar
@@ -42,8 +42,9 @@ HDInsight bundles open-source components and HDInsight platform into a package t

 |Retirement Item | Retirement Date | Action Required by Customers| Cluster creation required?|
 |-|-|-|-|
-|[Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/) |August 31, 2024 |[Av1-series retirement - Azure Virtual Machines](/azure/virtual-machines/sizes/migration-guides/av1-series-retirement) |N|
+|[Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/) |August 31, 2024 |[Av1-series retirement - Azure Virtual Machines](/azure/virtual-machines/sizes/migration-guides/av1-series-retirement) |Y|
 |[Azure Monitor experience (preview)](https://azure.microsoft.com/updates/v2/hdinsight-azure-monitor-experience-retirement/) | February 01, 2025 |[Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters](./azure-monitor-agent.md) |Y|
+|[Enterprise Security Package](https://azure.microsoft.com/updates?id=497263) | July 31, 2026 | Migrate to alternative Azure offerings such as Microsoft Fabric |N/A|


 ## Next steps

articles/hdinsight/interactive-query/llap-schedule-based-autoscale-best-practices.md

Lines changed: 2 additions & 2 deletions
@@ -16,9 +16,9 @@ This document provides the onboarding steps to enable schedule-based autosca
 ## **Supportability**

 - Autoscale isn't supported in HDI 3.6 Interactive Query(LLAP) cluster.
-- HDI 4.0 Interactive Query Cluster supports only Schedule-Based Autoscale.
+- Interactive Query Cluster supports only Schedule-Based Autoscale.

-Feature Supportability with HDInsight 4.0 Interactive Query(LLAP) Autoscale
+Feature Supportability with Interactive Query(LLAP) Autoscale

 | Feature | Schedule-Based Autoscale |
 |:---:|:---:|

articles/migrate/common-questions-server-migration.md

Lines changed: 1 addition & 1 deletion
@@ -156,7 +156,7 @@ The **Migration and modernization** tool migrates all the UEFI-based machines to
 > [!NOTE]
 > If a major version of an operating system is supported in agentless migration, all minor versions and kernels are automatically supported.

-[!CAUTION]
+> [!CAUTION]
 > This article references Windows Server versions that have reached End of Support (EOS).Microsoft has officially ended support for the following operating systems:
 > - Windows Server 2003
 > - Windows Server 2008 (including SP2 and R2 SP1)

articles/sentinel/sap/deploy-data-connector-agent-container.md

Lines changed: 3 additions & 0 deletions
@@ -370,6 +370,9 @@ At this stage, the system's **Health** status is **Pending**. If the agent is up

 1. Select **Connect**.

+> [!IMPORTANT]
+> The initial connection might take some time. For more information about verifying the connector, see [Verify the codeless connector](/azure/sentinel/create-codeless-connector#verify-the-codeless-connector).
+
 ## Customize data connector behavior (optional)

 If you have an SAP agentless data connector for Microsoft Sentinel, you can use the SAP Integration Suite to customize how the agentless data connector ingests data from your SAP system into Microsoft Sentinel.
