
Commit 9e85104

Merge pull request #303310 from MicrosoftDocs/main
Auto Publish – main to live - 2025-07-25 11:00 UTC
2 parents: 0779eae + f96fe37

18 files changed (+229, -110 lines)

articles/backup/faq-backup-sql-server.yml

Lines changed: 10 additions & 6 deletions
@@ -5,7 +5,7 @@ metadata:
   ms.reviewer: vijayts
   ms.topic: faq
   ms.service: azure-backup
-  ms.date: 06/24/2025
+  ms.date: 07/25/2025
   author: AbhishekMallick-MS
   ms.author: v-mallicka

@@ -43,22 +43,26 @@ sections:
     1. On the SQL Server instance, in the *C:\Program Files\Azure Workload Backup\bin* folder, create or edit the **ExtensionSettingsOverrides.json** file.
     1. In the **ExtensionSettingsOverrides.json** file, set `{"EnableAutoHealer": false}`.
     1. Save the changes and close the file.
-    1. On the SQL Server instance, open **Task Manage**, and then restart the **AzureWLBackupCoordinatorSvc** service.
+    1. On the **SQL Server instance**, open **Task Manager**, stop the `AzureWLBackupPluginSvc` and `AzureWLBackupInquirySvc` services, and then restart the `AzureWLBackupCoordinatorSvc` service.

+       The `AzureWLBackupPluginSvc` and `AzureWLBackupInquirySvc` services auto-start when new tasks arrive. Avoid restarting `AzureWLBackupCoordinatorSvc` during active backups; otherwise, it aborts them and might trigger full remedial backups.
+
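For reference, a minimal sketch of what the **ExtensionSettingsOverrides.json** file described in the steps above could contain after this change. The `EnableAutoHealer` key and value come straight from the step; treating the file as a single flat JSON object is an assumption.

```json
{
  "EnableAutoHealer": false
}
```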

  - question: |
      Can I control how many concurrent backups run on the SQL server?
    answer: |
      Yes. You can throttle the rate at which the backup policy runs to minimize the impact on a SQL Server instance. To change the setting:

      1. On the SQL Server instance, in the *C:\Program Files\Azure Workload Backup\bin* folder, create the *ExtensionSettingsOverrides.json* file.
-     2. In the **ExtensionSettingsOverrides.json** file, change the `DefaultBackupTasksThreshold` setting to a lower value (for example, 5). <br>
+     1. In the **ExtensionSettingsOverrides.json** file, change the `DefaultBackupTasksThreshold` setting to a lower value (for example, 5). <br>
         `{"DefaultBackupTasksThreshold": 5}`
         <br>
         The default value of DefaultBackupTasksThreshold is **20**.
-     3. Save your changes and close the file.
-     4. On the SQL Server instance, open **Task Manager**. Restart the **AzureWLBackupCoordinatorSvc** service.<br/> <br/>
-        While this method helps if the backup application is consuming a large quantity of resources, SQL Server [Resource Governor](/sql/relational-databases/resource-governor/resource-governor) is a more generic way to specify limits on the amount of CPU, physical IO, and memory that incoming application requests can use.
+     1. Save your changes and close the file.
+     1. On the **SQL Server instance**, open **Task Manager**, stop the `AzureWLBackupPluginSvc` and `AzureWLBackupInquirySvc` services, and then restart the `AzureWLBackupCoordinatorSvc` service.
+
+        The `AzureWLBackupPluginSvc` and `AzureWLBackupInquirySvc` services auto-start when new tasks arrive. Avoid restarting `AzureWLBackupCoordinatorSvc` during active backups; otherwise, it aborts them and might trigger full remedial backups. For more generic control over CPU, I/O, and memory usage by backup applications, use **SQL Server Resource Governor**.
+

      > [!NOTE]
      > In the UX, you can still schedule as many backups as you want at any given time. However, they're processed in a sliding window of, say, 5, according to the above example.
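If you apply both overrides covered in this FAQ, the same file would presumably carry both keys, as in the sketch below. The individual settings are taken from the steps above; combining them in one object is an assumption, since the FAQ edits the file one setting at a time.

```json
{
  "EnableAutoHealer": false,
  "DefaultBackupTasksThreshold": 5
}
```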

articles/backup/quick-backup-vm-portal.md

Lines changed: 16 additions & 31 deletions
@@ -1,7 +1,7 @@
 ---
 title: Quickstart - Back up a VM with the Azure portal by using Azure Backup
 description: In this Quickstart, learn how to create a Recovery Services vault, enable protection on an Azure VM, and back up the VM, with the Azure portal.
-ms.date: 01/30/2025
+ms.date: 07/25/2025
 ms.topic: quickstart
 ms.devlang: azurecli
 ms.custom: mvc, mode-ui, engagement-fy24

@@ -32,28 +32,31 @@ Sign in to the [Azure portal](https://portal.azure.com).

 To apply a backup policy to your Azure VMs, follow these steps:

-1. Go to **Backup center** and select **+Backup** from the **Overview** tab.
+1. Go to **Business Continuity Center** and select **+ Configure protection**.

-   ![Screenshot showing the Backup button.](./media/backup-azure-arm-vms-prepare/backup-button.png)
+   :::image type="content" source="./media/backup-azure-arm-vms-prepare/configure-protection.png" alt-text="Screenshot shows how to start configuring system backup." lightbox="./media/backup-azure-arm-vms-prepare/configure-protection.png":::

-1. On the **Start: Configure Backup** blade, select **Azure Virtual machines** as the **Datasource type** and select the vault you have created. Then select **Continue**.
+1. On the **Configure protection** pane, select **Resource managed by** as **Azure**, **Datasource type** as **Azure Virtual machines**, **Solution** as **Azure Backup**, and then select **Continue**.

-   ![Screenshot showing Backup and Backup Goal blades.](./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png)
+   :::image type="content" source="./media/backup-azure-arm-vms-prepare/configure-system-protection.png" alt-text="Screenshot shows how to set the system backup." lightbox="./media/backup-azure-arm-vms-prepare/configure-system-protection.png":::

-1. Assign a Backup policy.

-   - The default policy backs up the VM once a day. The daily backups are retained for _30 days_. Instant recovery snapshots are retained for two days.
+1. On the **Start: Configure Backup** pane, select **Azure Virtual machines** as the **Datasource type** and select the vault you have created. Then select **Continue**.

-   ![Screenshot showing the default backup policy.](./media/backup-azure-arm-vms-prepare/default-policy.png)
+   :::image type="content" source="./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png" alt-text="Screenshot showing Backup and Backup Goal panes." lightbox="./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png":::

-   - If you don't want to use the default policy, select **Create New**, and create a custom policy as described in the next procedure.
+1. On the **Configure backup** pane, select the **Policy sub type**: **Enhanced** or **Standard**.

-   > [!Note]
-   > With Enhanced policy, you can now back up Azure VMs multiple times a day that helps to perform hourly backups. [Learn more](backup-azure-vms-enhanced-policy.md).
+   - **Enhanced Backup policy**: [This policy](backup-azure-vms-enhanced-policy.md) allows multiple daily backups, enabling hourly backups. To enable Azure Backup on Azure VMs in Azure Extended Zones, you can only use the Enhanced policy.
+   - **Standard Backup policy**: [This policy](backup-instant-restore-capability.md) allows VM backup once a day. The daily backups are retained for 30 days. Instant recovery snapshots are retained for two days.
+
+   :::image type="content" source="./media/backup-azure-arm-vms-prepare/default-policy.png" alt-text="Screenshot showing the default backup policy." lightbox="./media/backup-azure-arm-vms-prepare/default-policy.png":::
+
+   If you don't want to use the default policy, select **Create New**, and create a custom policy as described in the next procedure.

 ## Select a VM to back up

-Create a simple scheduled daily backup to a Recovery Services vault.
+Create a scheduled daily backup to a Recovery Services vault.

 1. Under **Virtual Machines**, select **Add**.

@@ -159,25 +162,7 @@ Azure Backup backs up Azure VMs by installing an extension to the Azure VM agent

 When no longer needed, you can disable protection on the VM, remove the restore points and Recovery Services vault, then delete the resource group and associated VM resources.

-If you're going to continue on to a Backup tutorial that explains how to restore data for your VM, skip the steps in this section and go to [Next steps](#next-steps).
-
-1. Select the **Backup** option for your VM.
-
-2. Choose **Stop backup**.
-
-   ![Screenshot showing to stop VM backup from the Azure portal.](./media/quick-backup-vm-portal/stop-backup.png)
-
-3. Select **Delete Backup Data** from the drop-down menu.
-
-4. In the **Type the name of the Backup item** dialog, enter your VM name, such as *myVM*. Select **Stop Backup**.
-
-   Once the VM backup has been stopped and recovery points removed, you can delete the resource group. If you used an existing VM, you may want to leave the resource group and VM in place.
-
-5. In the menu on the left, select **Resource groups**.
-6. From the list, choose your resource group. If you used the sample VM quickstart commands, the resource group is named *myResourceGroup*.
-7. Select **Delete resource group**. To confirm, enter the resource group name, then select **Delete**.
-
-   ![Screenshot showing to delete the resource group from the Azure portal.](./media/quick-backup-vm-portal/delete-resource-group-from-portal.png)
+Learn how to [stop protection and delete VM backups](backup-azure-manage-vms.md#stop-protection-and-delete-backup-data).

 ## Next steps
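To make the **Enhanced** versus **Standard** policy split above more concrete, here is a rough sketch of what an Enhanced VM backup policy could look like as a Recovery Services backup policy definition in JSON. The property names and values are illustrative assumptions based on the behavior described in this quickstart (hourly backups, 30-day daily retention, two-day instant recovery snapshots), not content from this commit; check the Azure Backup policy reference before relying on them.

```json
{
  "properties": {
    "backupManagementType": "AzureIaasVM",
    "policyType": "V2",
    "schedulePolicy": {
      "schedulePolicyType": "SimpleSchedulePolicyV2",
      "scheduleRunFrequency": "Hourly",
      "hourlySchedule": {
        "interval": 4,
        "scheduleWindowStartTime": "2025-07-25T08:00:00Z",
        "scheduleWindowDuration": 12
      }
    },
    "retentionPolicy": {
      "retentionPolicyType": "LongTermRetentionPolicy",
      "dailySchedule": {
        "retentionDuration": {
          "count": 30,
          "durationType": "Days"
        }
      }
    },
    "instantRpRetentionRangeInDays": 2,
    "timeZone": "UTC"
  }
}
```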

articles/data-factory/connector-lifecycle-overview.md

Lines changed: 5 additions & 49 deletions
@@ -24,11 +24,11 @@ Connector upgrades are essential to evolve innovation in a fast manner, maintain
 - **New feature enhancements such as security, performance, etc.**

-  While the service actively evolves to provide the most secure and reliable features in the connector, leveraging the connector lifecycle is an efficient approach to ensure that users can take full advantage of the new enhancements at their manageable pace without business interruption.
+  While the service actively evolves to provide the most secure and reliable features in the connector, applying the connector lifecycle is an efficient approach to ensure that users can take full advantage of the new enhancements at a manageable pace without business interruption.

 - **Protocol change introduced by external data source vendors leading to potential behavior changes**

-  These changes aren't always exhaustively predictable and arise due to incompatibility brought by individual data source vendor itself. Given these uncertainties, versioning ensures that users can adopt the updated connector (e.g. version 2.0) while maintaining a fallback option in a period. This empowers users to well plan for a version upgrade to accommodate potential differences while providing users with a clear transition path.
+  These changes aren't always exhaustively predictable and arise from incompatibility introduced by the individual data source vendor. Given these uncertainties, versioning ensures that users can adopt the updated connector (for example, version 2.0) while keeping a fallback option for a period. This empowers users to plan well for a version upgrade that accommodates potential differences, while giving them a clear transition path.

 - **Fixing unintended behaviors**

@@ -48,14 +48,14 @@ A connector lifecycle includes multiple stages with thorough and measurable asse
 | Public Preview | This stage marks the initial release of a new connector version to all users publicly. During this phase, users are encouraged to try the latest connector version and provide feedback. For newly created connections, it defaults to the latest connector version. Users can switch back to the previous version. | 1 month or above* |
 | General Availability | Once a connector version meets the General Availability (GA) criteria, it's released to the public and is suitable for production workloads. To reach this stage, the new connector version must meet the requirements in terms of performance, reliability, and its capability to meet business needs. | 12 months or above* |
 | End-of-Support (EOS) announced | When a connector version reaches its EOS, it won't receive any further updates or support. A six-month notice is announced before the EOS date of this version. This is documented together with the removal date. | 6 months before the end-of-support date* |
-| End-of-Support (EOS) | Once the previously announced EOS date arrives, the connector version becomes officially unsupported. This implies that it won't receive any updates or bug fixes, and no official support will be provided. Users won't be able to create new workloads on a version that is under EOS stage. Using an unsupported connector version is at the user's own risk. The workload running on EOS version may not fail immediately, the service might expedite moving into the final stage at any time, at Microsoft's discretion due to outstanding security issues, or other factors. | / |
-| Version removed | Once the connector version passes its EOS date, the service will remove all related components associated with this connector version. This implies that pipelines using this connector version will discontinue to execute. | 1-12 months after the end of support date* |
+| End-of-Support (EOS) | Once the previously announced EOS date arrives, the connector version becomes officially unsupported. This implies that it won't receive any updates or bug fixes, and no official support will be provided. Users won't be able to create new workloads on a version that is in the EOS stage. Using an unsupported connector version is at the user's own risk. A workload running on an EOS version might not fail immediately. The service might expedite moving into the final stage at any time, at Microsoft's discretion, due to outstanding security issues or other factors. | / |
+| Version removed | Once the connector version passes its EOS date, the service removes all related components associated with this connector version. This implies that pipelines using this connector version stop executing. | 1-12 months after the end of support date* |

 *\* These timelines are provided as an example and might vary depending on various factors. Lifecycle timelines are subject to change at Microsoft discretion.*

 ## Understanding connector versions

-To manage connection updates effectively, it's important to understand versioning and how to interpret the change. Connectors in Azure Data Factory generally follow versioning Major.Minor (e.g., 1.2):
+To manage connection updates effectively, it's important to understand versioning and how to interpret the change. Connectors in Azure Data Factory generally follow Major.Minor versioning (for example, 1.2):

 - **Major updates (x.0):** These are significant changes that require review of the changes before upgrade.
 - **Minor updates (1.x):** These might introduce new features or fixes, but with minor changes to the existing behavior.
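
As a hedged illustration of the Major.Minor versioning described above: for versioned connectors, the version is typically pinned on the linked service payload, roughly as sketched below. The `version` property and the `Teradata` type reflect the versioned-connector pattern these articles describe, but the exact payload shape is an assumption and varies per connector; `ExampleLinkedService` and the placeholder connection details are hypothetical, so consult the specific connector article.

```json
{
  "name": "ExampleLinkedService",
  "properties": {
    "type": "Teradata",
    "version": "2.0",
    "typeProperties": {
      "server": "<server name>"
    }
  }
}
```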
@@ -70,50 +70,6 @@ When a version reaches its end-of-support date, users are no longer allowed to c

 In addition to major and minor version updates, the service also delivers new features and bug fixes that are fully backward compatible with your existing setup. These changes don't require a version update to the connector. Depending on the nature of the change, users may either receive the improvements automatically or have the option to enable new features as needed. This approach ensures a seamless experience while maintaining stability and flexibility.

-## Automatic connector upgrade
-
-In addition to providing [tools](connector-upgrade-advisor.md) and [best practices](connector-upgrade-guidance.md) to help users manually upgrade their connectors, the service now also provides a more streamlined upgrade process for some cases where applicable. This is designed to help users adopt the most reliable and supported connector versions with minimal disruption.
-
-The following section outlines the general approach that the service takes for automatic upgrades. While this provides a high-level overview, it's strongly recommended to review the documentation specific to each connector to understand which scenarios are supported and how the upgrade process applies to your workloads.
-
-In cases where certain scenarios running on the latest GA connector version are fully backward compatible with the previous version, the service will automatically upgrade existing workloads (such as Copy, Lookup, and Script activities) to a compatibility mode that preserves the behavior of the earlier version.
-
-These auto-upgraded workloads aren't affected by the announced removal date of the older version, giving users additional time to evaluate and transition to the latest GA version without facing immediate failures.
-
-You can identify which activities have been automatically upgraded by inspecting the activity output, where relevant upgraded information is recorded.
-
-**Example:**
-
-Copy activity output
-
-```json
-"source": {
-    "type": "AmazonS3",
-    "autoUpgrade": "true",
-}
-
-"sink": {
-    "type": "AmazonS3",
-    "autoUpgrade": "true",
-}
-```
-
-> [!NOTE]
-> While compatibility mode offers flexibility, we strongly encourage users to upgrade to the latest GA version as soon as possible to benefit from ongoing improvements, optimizations, and full support.
-
-You can find more details from the table below on the connector list that is planned for the automatic upgrade.
-
-| Connector | Scenario |
-|------------------|----------|
-| [Amazon Redshift](connector-amazon-redshift.md) | Scenario that doesn't rely on below capability in Amazon Redshift (version 1.0):<br><br>• Linked service that uses Azure integration runtime.<br>• Use [UNLOAD](connector-amazon-redshift.md#use-unload-to-copy-data-from-amazon-redshift).<br><br>Automatic upgrade is only applicable when the driver is installed in your machine that installs the self-hosted integration runtime.<br><br> For more information, go to [Install Amazon Redshift ODBC driver for the version 2.0](connector-amazon-redshift.md#install-amazon-redshift-odbc-driver-for-the-version-20).|
-| [Google BigQuery](connector-google-bigquery.md) | Scenario that doesn't rely on below capability in Google BigQuery V1:<br><br> • Use `trustedCertsPath`, `additionalProjects`, `requestgoogledrivescope` connection properties.<br> • Set `useSystemTrustStore` connection property as `false`.<br> • Use **STRUCT** and **ARRAY** data types. <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above. |
-| [Hive](connector-hive.md) | Scenario that doesn't rely on below capability in Hive (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• Username<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• HiveServer1<br>• Service discovery mode: True<br>• Use native query: True <br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above.|
-| [Impala](connector-impala.md) | Scenario that doesn't rely on below capability in Impala (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• SASL Username<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above. |
-| [Spark](connector-spark.md) | Scenario that doesn't rely on below capability in Spark (version 1.0):<br><br>• Authentication types:<br>&nbsp;&nbsp;• Username<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SASL<br>&nbsp;&nbsp;• Binary<br>• Thrift transport protocol:<br>&nbsp;&nbsp;• SharkServer<br>&nbsp;&nbsp;• SharkServer2<br><br>If your pipeline runs on self-hosted integration runtime, it requires SHIR version 5.55 or above.|
-| [Teradata](connector-teradata.md) | Scenario that doesn't rely on below capability in Teradata (version 1.0):<br><br> • Set below value for **CharacterSet**:<br>&nbsp;&nbsp;• BIG5 (TCHBIG5_1R0)<br>&nbsp;&nbsp;• EUC (Unix compatible, KANJIEC_0U)<br>&nbsp;&nbsp;• GB (SCHGB2312_1T0)<br>&nbsp;&nbsp;• IBM Mainframe (KANJIEBCDIC5035_0I)<br>&nbsp;&nbsp;• NetworkKorean (HANGULKSC5601_2R4)<br>&nbsp;&nbsp;• Shift-JIS (Windows, DOS compatible, KANJISJIS_0S)|
-| [Vertica](connector-vertica.md) | Scenario that doesn't rely on below capability in Vertica (version 1.0):<br><br>• Linked service that uses Azure integration runtime.<br><br>Automatic upgrade is only applicable when the driver is installed in your machine that installs the self-hosted integration runtime (version 5.55 or above).<br><br> For more information, go to [Install Vertica ODBC driver for the version 2.0](connector-vertica.md#install-vertica-odbc-driver-for-the-version-20). |
-
 ## Related content

 - [Connector overview](connector-overview.md)
