articles/backup/backup-azure-files-faq.yml (4 additions & 0 deletions)
@@ -108,6 +108,10 @@ sections:
answer: |
  If you're restoring from a snapshot backup, the total restore time for Original Location Recovery (OLR) depends on the number of files and directories in the share. When you restore to an alternate location, the restore time depends on the number of files and directories in the share to be restored and on the available IOPS on the source and target storage accounts.
+ - question: |
+     Will the retention policy delete a recovery point selected for restore before the restore operation completes?
+   answer: |
+     No. Azure Backup takes a lease on the recovery point during restore, ensuring it isn't deleted before completion.
@@ -474,17 +475,20 @@ To copy data from Azure Database for PostgreSQL, set the source type in the copy
### Azure Database for PostgreSQL as sink
- To copy data to Azure Database for PostgreSQL, the following properties are supported in the copy activity **sink** section:
+ To copy data to Azure Database for PostgreSQL, set the sink type in the copy activity to **AzurePostgreSQLSink**. The following properties are supported in the copy activity **sink** section:
- | Property | Description | Required |
- |:--- |:--- |:--- |
- | type | The type property of the copy activity sink must be set to **AzurePostgreSqlSink**. | Yes |
- | preCopyScript | Specify a SQL query for the copy activity to execute before you write data into Azure Database for PostgreSQL in each run. You can use this property to clean up the preloaded data. | No |
- | writeMethod | The method used to write data into Azure Database for PostgreSQL.<br>Allowed values are: **CopyCommand** (default, which is more performant), **BulkInsert**. | No |
- | writeBatchSize | The number of rows loaded into Azure Database for PostgreSQL per batch.<br>Allowed value is an integer that represents the number of rows. | No (default is 1,000,000) |
- | writeBatchTimeout | Wait time for the batch insert operation to complete before it times out.<br>Allowed values are Timespan strings. An example is 00:30:00 (30 minutes). | No (default is 00:30:00) |
+ | Property | Description | Required | Connector support version |
+ |:--- |:--- |:--- |:--- |
+ | type | The type property of the copy activity sink must be set to **AzurePostgreSQLSink**. | Yes | Version 1.0 & Version 2.0 |
+ | preCopyScript | Specify a SQL query for the copy activity to execute before you write data into Azure Database for PostgreSQL in each run. You can use this property to clean up the preloaded data. | No | Version 1.0 & Version 2.0 |
+ | writeMethod | The method used to write data into Azure Database for PostgreSQL.<br>Allowed values are: **CopyCommand** (default, which is more performant), **BulkInsert**, and **Upsert** (Version 2.0 only). | No | Version 1.0 & Version 2.0 |
+ | upsertSettings | Specify the group of settings for write behavior.<br/>Applies when the `writeMethod` option is **Upsert**. | No | Version 2.0 |
+ | ***Under `upsertSettings`:*** | | | |
+ | keys | Specify the column names for unique row identification. Either a single key or a series of keys can be used. Keys must be a primary key or a unique column. If not specified, the primary key is used. | No | Version 2.0 |
+ | writeBatchSize | The number of rows loaded into Azure Database for PostgreSQL per batch.<br>Allowed value is an integer that represents the number of rows. | No (default is 1,000,000) | Version 1.0 & Version 2.0 |
+ | writeBatchTimeout | Wait time for the batch insert operation to complete before it times out.<br>Allowed values are Timespan strings. An example is 00:30:00 (30 minutes). | No (default is 00:30:00) | Version 1.0 & Version 2.0 |
- **Example**:
+ **Example 1: Copy Command**
```json
"activities":[
@@ -518,6 +522,47 @@ To copy data to Azure Database for PostgreSQL, the following properties are supp
Copy activity natively supports upsert operations. To perform an upsert, provide key column(s) that are either primary keys or unique columns. If you don't provide key column(s), the primary key column(s) in the sink table are used. Copy activity updates the non-key column(s) in the sink table where the key column value(s) match those in the source table; otherwise, it inserts new data.
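As a minimal sketch (not taken from the article's examples), the sink portion of a copy activity configured for upsert might look like the following; the key column `id` is a hypothetical stand-in for your own primary key or unique column:

```json
"sink": {
    "type": "AzurePostgreSQLSink",
    "writeMethod": "Upsert",
    "upsertSettings": {
        "keys": [ "id" ]
    }
}
```

If `keys` is omitted, the primary key column(s) of the sink table are used, as described above.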
## Parallel copy from Azure Database for PostgreSQL
The Azure Database for PostgreSQL connector in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.
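As an illustrative sketch only, a dynamic-range partitioned source might be configured along the following lines. The property names (`partitionOption`, `partitionSettings`, `partitionColumnName`, and the bounds) follow the pattern used by other copy activity sources and are assumptions here, as is the `id` column; check the connector's source-properties table for the exact names and allowed values.

```json
"source": {
    "type": "AzurePostgreSqlSource",
    "partitionOption": "DynamicRange",
    "partitionSettings": {
        "partitionColumnName": "id",
        "partitionLowerBound": "1",
        "partitionUpperBound": "1000000"
    }
}
```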
> Script activity is only supported in the version 2.0 connector.
+ > [!IMPORTANT]
+ > Multi-query statements using output parameters are not supported. It's recommended that you split any output queries into separate script blocks within the same or a different script activity (see the sketch following this note).
+ >
+ > Multi-query statements using positional parameters are not supported. It's recommended that you split any positional queries into separate script blocks within the same or a different script activity.
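A minimal sketch of the splitting approach described in the note above, assuming a hypothetical linked service named `AzurePostgreSqlLinkedService` and hypothetical tables; each statement runs in its own script block inside a single Script activity:

```json
{
    "name": "SplitScriptBlocks",
    "type": "Script",
    "linkedServiceName": {
        "referenceName": "AzurePostgreSqlLinkedService",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "scripts": [
            {
                "type": "NonQuery",
                "text": "TRUNCATE TABLE staging_orders;"
            },
            {
                "type": "Query",
                "text": "SELECT COUNT(*) AS row_count FROM orders;"
            }
        ]
    }
}
```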
For more information about script activity, see [Script activity](transform-data-using-script.md).
## Lookup activity properties
For more information about the properties, see [Lookup activity](control-flow-lookup-activity.md).
articles/data-factory/connector-teradata.md (5 additions & 3 deletions)
@@ -6,7 +6,7 @@ author: jianleishen
ms.subservice: data-movement
ms.custom: synapse
ms.topic: conceptual
- ms.date: 06/06/2025
+ ms.date: 08/05/2025
ms.author: jianleishen
---
@@ -34,7 +34,8 @@ For a list of data stores that are supported as sources/sinks by the copy activi
Specifically, this Teradata connector supports:
- - Teradata **version 14.10, 15.0, 15.10, 16.0, 16.10, and 16.20**.
+ - Teradata Vantage versions **17.0, 17.10, 17.20, and 20.0** for version 2.0.
+ - Teradata Vantage versions **14.10, 15.0, 15.10, 16.0, 16.10, and 16.20** for version 1.0.
- Copying data by using **Basic**, **Windows**, or **LDAP** authentication.
- Parallel copying from a Teradata source. See the [Parallel copy from Teradata](#parallel-copy-from-teradata) section for details.
@@ -44,7 +45,8 @@ Specifically, this Teradata connector supports:
### For version 2.0
- You need to [install .NET Data Provider](https://downloads.teradata.com/download/connectivity/net-data-provider-teradata) with version 20.00.03.00 or above on your self-hosted integration runtime if you use it.
+ You need to [install .NET Data Provider](https://downloads.teradata.com/download/connectivity/net-data-provider-teradata) version 20.00.03.00 or above on the machine running the self-hosted integration runtime if its version is earlier than 5.56.9318.1. Manual installation of the Teradata driver isn't required when you use self-hosted integration runtime version 5.56.9318.1 or above, because these versions provide a built-in driver.
### For version 1.0
If you use the self-hosted integration runtime, note that it provides a built-in Teradata driver starting from version 3.18. You don't need to manually install any driver. The driver requires "Visual C++ Redistributable 2012 Update 4" on the self-hosted integration runtime machine. If you don't have it installed yet, download it from [here](https://www.microsoft.com/en-sg/download/details.aspx?id=30679).
articles/data-factory/transform-data-using-script.md (9 additions & 8 deletions)
@@ -6,7 +6,7 @@ ms.topic: conceptual
author: nabhishek
ms.author: abnarain
ms.custom: synapse
- ms.date: 10/03/2024
+ ms.date: 08/01/2025
ms.subservice: orchestration
---
@@ -20,9 +20,10 @@ Using the script activity, you can execute common operations with Data Manipulat
You can use the Script activity to invoke a SQL script in one of the following data stores in your enterprise or on an Azure virtual machine (VM):
+ - Azure Database for PostgreSQL (Version 2.0)
- Azure SQL Database
- - Azure Synapse Analytics
- - SQL Server Database. If you are using SQL Server, install Self-hosted integration runtime on the same machine that hosts the database or on a separate machine that has access to the database. Self-Hosted integration runtime is a component that connects data sources on-premises/on Azure VM with cloud services in a secure and managed way. See the [Self-hosted integration runtime](create-self-hosted-integration-runtime.md) article for details.
+ - Azure Synapse Analytics
+ - SQL Server Database. If you're using SQL Server, install the self-hosted integration runtime on the same machine that hosts the database or on a separate machine that has access to the database. The self-hosted integration runtime is a component that connects data sources on-premises/on an Azure VM with cloud services in a secure and managed way. See the [Self-hosted integration runtime](create-self-hosted-integration-runtime.md) article for details.
- Oracle
- Snowflake
@@ -36,7 +37,7 @@ The script can contain either a single SQL statement or multiple SQL statements
## Syntax details
- Here is the JSON format for defining a Script activity:
+ Here's the JSON format for defining a Script activity:
```json
{
@@ -142,7 +143,7 @@ Sample output:
|Property name |Description |Condition |
|---------|---------|---------|
|resultSetCount |The count of result sets returned by the script. |Always |
- |resultSets |The array which contains all the result sets. |Always |
+ |resultSets |The array that contains all the result sets. |Always |
|resultSets.rowCount |Total rows in the result set. |Always |
|resultSets.rows |The array of rows in the result set. |Always |
|recordsAffected |The row count of affected rows by the script. |If scriptType is NonQuery |
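For orientation, a sketch of the kind of activity output these properties describe; the values are illustrative, and the real output can contain additional fields beyond the ones listed in this excerpt:

```json
{
    "resultSetCount": 1,
    "resultSets": [
        {
            "rowCount": 2,
            "rows": [
                { "id": 1, "status": "loaded" },
                { "id": 2, "status": "loaded" }
            ]
        }
    ],
    "recordsAffected": 0
}
```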
@@ -153,10 +154,10 @@ Sample output:
> [!NOTE]
> - The output is collected every time a script block is executed. The final output is the merged result of all script block outputs. An output parameter with the same name in different script blocks will get overwritten.
- > - Since the output has size / rows limitation, the output will be truncated in following order: logs -> parameters -> rows. Note, this applies to a single script block, which means the output rows of next script block won’t evict previous logs.
+ > - Since the output has a size / rows limitation, the output will be truncated in the following order: logs -> parameters -> rows. This applies to a single script block, which means the output rows of the next script block won’t evict previous logs.
> - Any error caused by logging won’t fail the activity.
- > - For consuming activity output resultSets in down stream activity please refer to the [Lookup activity result documentation](control-flow-lookup-activity.md#use-the-lookup-activity-result).
- > - Use outputLogs when you are using 'PRINT' statements for logging purpose. If query returns resultSets, it will be available in the activity output and will be limited to 5000 rows/ 4MB size limit.
+ > - To consume the activity output resultSets in a downstream activity, refer to the [Lookup activity result documentation](control-flow-lookup-activity.md#use-the-lookup-activity-result).
+ > - Use outputLogs when you're using 'PRINT' statements for logging purposes. If the query returns resultSets, they're available in the activity output and limited to 5,000 rows / 4 MB.
description: Learn about HDInsight retirement versions and its components in Azure HDInsight clusters.
ms.service: azure-hdinsight
ms.topic: conceptual
- ms.date: 09/30/2024
+ ms.date: 08/11/2025
author: anuj1905
ms.author: anujsharda
ms.reviewer: hgowrisankar
@@ -42,8 +42,9 @@ HDInsight bundles open-source components and HDInsight platform into a package t
|Retirement Item | Retirement Date | Action Required by Customers| Cluster creation required?|
|-|-|-|-|
- |[Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/)|August 31, 2024 |[Av1-series retirement - Azure Virtual Machines](/azure/virtual-machines/sizes/migration-guides/av1-series-retirement)|N|
+ |[Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/)|August 31, 2024 |[Av1-series retirement - Azure Virtual Machines](/azure/virtual-machines/sizes/migration-guides/av1-series-retirement)|Y|
|[Azure Monitor experience (preview)](https://azure.microsoft.com/updates/v2/hdinsight-azure-monitor-experience-retirement/)| February 01, 2025 |[Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters](./azure-monitor-agent.md)|Y|
+ |[Enterprise Security Package](https://azure.microsoft.com/updates?id=497263)| July 31, 2026 | Migrate to alternative Azure offerings such as Microsoft Fabric |N/A|
articles/migrate/common-questions-server-migration.md (1 addition & 1 deletion)
@@ -156,7 +156,7 @@ The **Migration and modernization** tool migrates all the UEFI-based machines to
> [!NOTE]
> If a major version of an operating system is supported in agentless migration, all minor versions and kernels are automatically supported.
- [!CAUTION]
+ > [!CAUTION]
> This article references Windows Server versions that have reached End of Support (EOS). Microsoft has officially ended support for the following operating systems:
> - Windows Server 2003
> - Windows Server 2008 (including SP2 and R2 SP1)
articles/sentinel/sap/deploy-data-connector-agent-container.md (3 additions & 0 deletions)
@@ -370,6 +370,9 @@ At this stage, the system's **Health** status is **Pending**. If the agent is up
1. Select **Connect**.
+ > [!IMPORTANT]
+ > There might be some wait time on the initial connection. For details on verifying the connector, see [Verify the codeless connector](/azure/sentinel/create-codeless-connector#verify-the-codeless-connector).
## Customize data connector behavior (optional)
If you have an SAP agentless data connector for Microsoft Sentinel, you can use the SAP Integration Suite to customize how the agentless data connector ingests data from your SAP system into Microsoft Sentinel.
0 commit comments