`articles/azure-arc/servers/manage-vm-extensions-cli.md` (1 addition, 1 deletion)
For the `--extension-targets` parameter, you need to specify the extension and the latest version available.

To upgrade the Log Analytics agent extension for Windows that has a newer version available, run the following command:

```azurecli
az connectedmachine upgrade-extension --machine-name "myMachineName" --resource-group "myResourceGroup" --extension-targets '{"Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent":{"targetVersion":"1.0.18053.0"}}'
```

You can review the version of installed VM extensions at any time by running the command [az connectedmachine extension list](/cli/azure/connectedmachine/extension#az-connectedmachine-extension-list). The `typeHandlerVersion` property value represents the version of the extension.
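Because the CLI returns JSON, the installed versions can be filtered out with a small script. A minimal sketch, assuming the simplified (made-up) output shape below; the real `az connectedmachine extension list` output contains many more properties per extension:

```bash
# Made-up sample mimicking the shape of `az connectedmachine extension list`
# output; the values and the flattened structure are illustrative only.
sample_output='[{"name":"MicrosoftMonitoringAgent","typeHandlerVersion":"1.0.18053.0"}]'

# Print each extension name with its installed version.
echo "$sample_output" | python3 -c '
import json, sys
for ext in json.load(sys.stdin):
    print(ext["name"], ext["typeHandlerVersion"])
'
```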
The same Client Access URL can be generated by using the Web PubSub server SDK.

---

In real-world code, we usually have a server side to host the logic that generates the Client Access URL. When a client request comes in, the server side can use the general authentication/authorization workflow to validate the client request. Only valid client requests can get the Client Access URL back.

## Generate from REST API `:generateToken`

You can also use Microsoft Entra ID and generate the token by invoking the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token).
> [!NOTE]
> Web PubSub doesn't recommend that you create Microsoft Entra ID tokens for Microsoft Entra ID service principals manually, because each Microsoft Entra ID token is short-lived, typically expiring within one hour. After this time, you must manually generate a replacement Microsoft Entra ID token. Instead, use [our SDKs](#generate-from-service-sdk) that automatically generate and replace expired Microsoft Entra ID tokens for you.
1. Follow [Authorize from application](./howto-authorize-from-application.md#add-a-client-secret) to enable Microsoft Entra ID and add a client secret.

1. Gather the following information:

   | Value name | How to get the value |
   |---|---|
   | TenantId | TenantId is the value of **Directory (tenant) ID** on the **Overview** pane of the application you registered. |
   | ClientId | ClientId is the value of **Application (client) ID** from the **Overview** pane of the application you registered. |
   | ClientSecret | ClientSecret is the value of the client secret you added in step 1. |

1. Get the Microsoft Entra ID token from the Microsoft identity platform.

   We use the [curl](https://curl.se/) tool to show how to invoke the REST APIs. The tool is bundled with Windows 10/11, and you can install it by following [Install curl](https://curl.se/download.html).
   ```bash
   # Set necessary values; replace the placeholders with your actual values.
   export TenantId=<your_tenant_id>
   export ClientId=<your_client_id>
   export ClientSecret=<your_client_secret>

   # Request body reconstructed here as a standard client-credentials grant;
   # the scope value assumes the Web PubSub resource.
   curl -X POST "https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token" \
     -H "Content-Type: application/x-www-form-urlencoded" \
     -d "grant_type=client_credentials&client_id=$ClientId&client_secret=$ClientSecret&scope=https://webpubsub.azure.com/.default"
   ```

   The preceding curl command sends a POST request to the Microsoft identity platform endpoint to get the [Microsoft Entra ID token](/entra/identity-platform/id-tokens) back. In the response, you see the Microsoft Entra ID token in the `access_token` field. Copy and store it for later use.
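   The token response is JSON, so the `access_token` field can be extracted with a small script. A minimal sketch; the sample response below is made up and shows only the standard fields:

   ```bash
   # Made-up, truncated sample of the token response shape;
   # a real response carries a full JWT in access_token.
   response='{"token_type":"Bearer","expires_in":3599,"access_token":"eyJ0eXAi.sample.token"}'

   # Extract the access_token field using python3.
   AccessToken=$(echo "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["access_token"])')
   echo "$AccessToken"
   ```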
1. Use the Microsoft Entra ID token to invoke `:generateToken`.

   ```bash
   # Replace the values in {} with your actual values.
   # (Invocation reconstructed; the api-version value is an assumption.)
   curl -X POST "https://{Endpoint}/api/hubs/{Hub}/:generateToken?api-version=2023-07-01" \
     -H "Authorization: Bearer {MicrosoftEntraIdToken}"
   ```
`articles/backup/backup-azure-linux-database-consistent-enhanced-pre-post.md` (10 additions, 10 deletions)
---
title: Database consistent snapshots using enhanced prepost script framework
description: Learn how Azure Backup allows you to take database consistent snapshots, leveraging Azure Virtual Machine (VM) backup and using packaged prepost scripts
ms.topic: how-to
ms.custom: linux-related-content
ms.date: 09/11/2024
author: AbhishekMallick-MS
ms.author: v-abhmallick
---

# Enhanced prepost scripts for database consistent snapshot

The Azure Backup service already provides a [_prepost_ script framework](./backup-azure-linux-app-consistent.md) to achieve application consistency in Linux VMs using Azure Backup. This process involves invoking a pre-script (to quiesce the applications) before taking a snapshot of disks, and calling a post-script (commands to unfreeze the applications) after the snapshot is completed to return the applications to normal mode.

Authoring, debugging, and maintaining pre/post scripts can be challenging. To remove this complexity, Azure Backup provides a simplified pre/post script experience for marquee databases to get an application consistent snapshot with the least overhead.
The new _enhanced_ pre-post script framework has the following key benefits:

- These pre-post scripts are directly installed in Azure VMs along with the backup extension, which eliminates the need to author them or download them from an external location.
- You can view the definition and content of the pre-post scripts in [GitHub](https://github.com/Azure/azure-linux-extensions/tree/master/VMBackup/main/workloadPatch/DefaultScripts), and submit suggestions and changes via GitHub, which will be triaged and added to benefit the broader community.
- You can even add new pre-post scripts for other databases via [GitHub](https://github.com/Azure/azure-linux-extensions/tree/master/VMBackup/main/workloadPatch/DefaultScripts), _which will be triaged and addressed to benefit the broader community_.
- The robust framework can efficiently handle scenarios such as pre-script execution failures or crashes. In any event, the post-script automatically runs to roll back all changes done in the pre-script.
The following table describes the parameters:

|Parameter |Mandatory |Explanation |
|---------|---------|---------|
|workload_name | Yes | The name of the database for which you need application consistent backup. The currently supported values are `oracle` and `mysql`. |
|command_path/configuration_path || The path to the workload binary. This isn't a mandatory field if the workload binary is set as a path variable. |
|linux_user | Yes | The username of the Linux user with access to the database user login. If this value isn't set, root is considered the default user. |
|credString || The credential string to connect to the database. This contains the entire login string. |
|ipc_folder || The workload can only write to certain file system paths. Provide this folder path here so that the pre-script can write the states to it. |
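As an illustration only (the parameter values are hypothetical, and the exact file layout may differ from the shipped configuration file), a configuration using these parameters could look like:

```ini
[workload]
; Hypothetical values for a MySQL database; adjust for your environment.
workload_name = mysql
command_path = /usr/bin/
linux_user = mysqluser
credString = credfile
ipc_folder = /home/mysqluser/ipc
```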
Therefore, a daily snapshot + logs with occasional full backup for long-term retention.

### Log backup strategy

The enhanced pre-post script framework is built on Azure VM backup, which schedules backup once per day. So, the data loss window, with a Recovery Point Objective (RPO) of 24 hours, isn't suitable for production databases. This solution is complemented with a log backup strategy where log backups are streamed out explicitly.

[NFS on blob](../storage/blobs/network-file-system-protocol-support.md) and [NFS on AFS (Preview)](../storage/files/files-nfs-protocol.md) help in easily mounting volumes directly on database VMs and using database clients to transfer log backups. The data loss window, that is, the RPO, falls to the frequency of log backups. Also, NFS targets don't need to be highly performant, because you might not need to trigger regular streaming (full and incremental) for operational backups after you have database consistent snapshots.

>[!NOTE]
>The enhanced pre-script usually takes care to flush all the log transactions in transit to the log backup destination before quiescing the database to take a snapshot. Therefore, the snapshots are database consistent and reliable during recovery.
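Because the RPO falls to the log backup frequency, a quick staleness check against the log backup mount can confirm the cadence is being met. A minimal sketch; the helper name, mount path, and 15-minute RPO are assumptions, not part of Azure Backup:

```bash
# Hypothetical helper: report whether the newest file in a log backup
# directory (for example, an NFS mount) is within the RPO window.
check_log_backup_rpo() {
  local dir="$1" rpo_seconds="$2" newest age
  newest=$(ls -t "$dir" 2>/dev/null | head -n 1)
  if [ -z "$newest" ]; then
    echo "STALE: no log backups found in $dir"
    return 1
  fi
  age=$(( $(date +%s) - $(stat -c %Y "$dir/$newest") ))
  if [ "$age" -le "$rpo_seconds" ]; then
    echo "OK: $newest is ${age}s old, within the ${rpo_seconds}s RPO"
  else
    echo "STALE: $newest is ${age}s old, exceeds the ${rpo_seconds}s RPO"
    return 1
  fi
}

# Example: check a 15-minute RPO against an assumed NFS mount path.
check_log_backup_rpo /mnt/logbackups 900 || true
```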
0 commit comments