
Commit 44b45dc

Merge pull request #87467 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)
2 parents a0b27c4 + 297eb56 commit 44b45dc

10 files changed: +80 −50 lines

articles/active-directory/reports-monitoring/reference-sign-ins-error-codes.md

Lines changed: 2 additions & 0 deletions
@@ -150,6 +150,8 @@ You can also programmatically access the sign-in data using the [reporting API](
 |70018|Invalid verification code due to User typing in wrong user code for device code flow. Authorization is not approved.|
 |70019|Verification code expired. Have the user retry the sign-in.|
 |70037|Incorrect challenge response provided. Remote auth session denied.|
+|70043|Azure Conditional Access session management forces the session to expire.|
+|70044|Azure Conditional Access session management forces the session to expire.|
 |75001|An error occurred during SAML message binding.|
 |75003|The application returned an error related to unsupported Binding (SAML protocol response cannot be sent via bindings other than HTTP POST). Contact the application owner.|
 |75005|Azure AD doesn’t support the SAML Request sent by the application for Single Sign-on. Contact the application owner.|
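The two new rows matter to client code: codes 70043 and 70044 mean Conditional Access forced the session to expire, so the right reaction is a fresh interactive sign-in rather than a silent retry. A hedged JavaScript sketch (a hypothetical helper, not part of any Microsoft SDK) of routing on the codes in this table:

```javascript
// Hypothetical helper (not a Microsoft API): map sign-in error codes from the
// table above to a suggested client reaction.
function suggestedAction(errorCode) {
  switch (errorCode) {
    case 70018: // wrong user code entered in the device code flow
    case 70019: // verification code expired
      return "retry-sign-in";
    case 70043: // Conditional Access session management forced expiry
    case 70044: // Conditional Access session management forced expiry
      return "reauthenticate";
    default:
      return "unknown";
  }
}
```

For example, `suggestedAction(70043)` returns `"reauthenticate"`, signaling that cached tokens should be discarded.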

articles/azure-functions/durable/durable-functions-bindings.md

Lines changed: 31 additions & 2 deletions
@@ -46,7 +46,7 @@ Here are some notes about the orchestration trigger:
 * **Return values** - Return values are serialized to JSON and persisted to the orchestration history table in Azure Table storage. These return values can be queried by the orchestration client binding, described later.
 
 > [!WARNING]
-> Orchestrator functions should never use any input or output bindings other than the orchestration trigger binding. Doing so has the potential to cause problems with the Durable Task extension because those bindings may not obey the single-threading and I/O rules.
+> Orchestrator functions should never use any input or output bindings other than the orchestration trigger binding. Doing so has the potential to cause problems with the Durable Task extension because those bindings may not obey the single-threading and I/O rules. If you'd like to use other bindings, add them to an activity function called from your orchestrator function.
 
 > [!WARNING]
 > JavaScript orchestrator functions should never be declared `async`.
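The guidance added to the first warning, delegating all other bindings to an activity, follows from how orchestrators execute: they are replayed deterministically, so side effects must live in activities. A minimal self-contained JavaScript simulation (plain generators, not the `durable-functions` SDK; the activity name and host loop are illustrative only) of that division of labor:

```javascript
// Self-contained simulation of the pattern the warning describes (NOT the
// durable-functions SDK): the orchestrator performs no I/O itself and only
// *describes* work; a host loop executes the side effects.
// "SendToEventHub" and the host loop are illustrative, not real API names.
function* orchestrator(input) {
  // Yield a request instead of touching a binding directly.
  const result = yield { activity: "SendToEventHub", input };
  return result;
}

// Hypothetical activity implementations; a real activity would use an
// output binding (for example, an Event Hub) to do the actual I/O.
const activities = {
  SendToEventHub: (msg) => `sent:${msg}`,
};

// Toy host: drives the generator, running each requested activity and
// feeding its result back in, the way the Durable Task runtime would.
function runOrchestration(gen, input) {
  const it = gen(input);
  let step = it.next();
  while (!step.done) {
    const req = step.value;
    step = it.next(activities[req.activity](req.input));
  }
  return step.value;
}
```

Because the orchestrator only yields descriptions of work, the runtime can replay it safely; all non-deterministic I/O stays inside the activity.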
@@ -237,6 +237,35 @@ public static async Task<dynamic> Mapper([ActivityTrigger] DurableActivityContex
 }
 ```
 
+### Using input and output bindings
+
+You can use regular input and output bindings in addition to the activity trigger binding. For example, you can take the input to your activity binding and send a message to an Event Hub by using the Event Hub output binding:
+
+```json
+{
+  "bindings": [
+    {
+      "name": "message",
+      "type": "activityTrigger",
+      "direction": "in"
+    },
+    {
+      "type": "eventHub",
+      "name": "outputEventHubMessage",
+      "connection": "EventhubConnectionSetting",
+      "eventHubName": "eh_messages",
+      "direction": "out"
+    }
+  ]
+}
+```
+
+```javascript
+module.exports = async function (context) {
+    context.bindings.outputEventHubMessage = context.bindings.message;
+};
+```
+
 ## Orchestration client
 
 The orchestration client binding enables you to write functions which interact with orchestrator functions. For example, you can act on orchestration instances in the following ways:
@@ -360,4 +389,4 @@ More details on starting instances can be found in [Instance management](durable
 ## Next steps
 
 > [!div class="nextstepaction"]
-> [Learn about checkpointing and replay behaviors](durable-functions-checkpointing-and-replay.md)
+> [Learn about checkpointing and replay behaviors](durable-functions-checkpointing-and-replay.md)

articles/data-explorer/data-lake-query-data.md

Lines changed: 1 addition & 0 deletions
@@ -45,6 +45,7 @@ Azure Data Explorer integrates with Azure Blob Storage and Azure Data Lake Stora
 > * Increased performance is expected with more granular partitioning. For example, queries over external tables with daily partitions, will have better performance than those queries with monthly partitioned tables.
 > * When you define an external table with partitions, the storage structure is expected to be identical.
 For example, if the table is defined with a DateTime partition in yyyy/MM/dd format (default), the URI storage file path should be *container1/yyyy/MM/dd/all_exported_blobs*.
+> * If the external table is partitioned by a datetime column, always include a time filter for a closed range in your query. For example, the query `ArchivedProducts | where Timestamp between (ago(1h) .. 10m)` should perform better than the open-range query `ArchivedProducts | where Timestamp > ago(1h)`.
 
 1. The external table is visible in the left pane of the Web UI
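The closed-range advice can be illustrated by what a planner could prune: with daily `yyyy/MM/dd` partitions, a bounded time filter maps to a finite list of storage folders to read. A hedged JavaScript sketch (illustrative only, not the Azure Data Explorer engine; the `container1` prefix follows the example above):

```javascript
// Illustrative sketch (NOT the ADX query planner): enumerate the daily
// yyyy/MM/dd partition folders a closed [startUtc, endUtc] time filter
// touches. An open-range filter has no end bound, so no finite list exists.
function partitionsForRange(startUtc, endUtc) {
  const partitions = [];
  // Start at midnight UTC of the range's first day.
  const day = new Date(Date.UTC(
    startUtc.getUTCFullYear(), startUtc.getUTCMonth(), startUtc.getUTCDate()));
  while (day <= endUtc) {
    const mm = String(day.getUTCMonth() + 1).padStart(2, "0");
    const dd = String(day.getUTCDate()).padStart(2, "0");
    partitions.push(`container1/${day.getUTCFullYear()}/${mm}/${dd}`);
    day.setUTCDate(day.getUTCDate() + 1); // advance one day
  }
  return partitions;
}
```

An open-range filter such as `Timestamp > ago(1h)` gives the planner no end bound, so every partition up to the present must be considered instead.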

articles/data-factory/create-self-hosted-integration-runtime.md

Lines changed: 3 additions & 0 deletions
@@ -74,6 +74,9 @@ Here is a high-level data flow for the summary of steps for copying with a self-
 - If the host machine hibernates, the self-hosted integration runtime does not respond to data requests. Configure an appropriate power plan on the computer before you install the self-hosted integration runtime. If the machine is configured to hibernate, the self-hosted integration runtime installation prompts a message.
 - You must be an administrator on the machine to install and configure the self-hosted integration runtime successfully.
 - Copy activity runs happen on a specific frequency. Resource usage (CPU, memory) on the machine follows the same pattern with peak and idle times. Resource utilization also depends heavily on the amount of data being moved. When multiple copy jobs are in progress, you see resource usage go up during peak times.
+- Tasks may fail when extracting data in Parquet, ORC, or Avro formats. File creation runs on the self-hosted integration runtime machine and requires the following prerequisites to work as expected (see [Parquet format in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/format-parquet#using-self-hosted-integration-runtime)):
+  - The [Visual C++ 2010 Redistributable](https://download.microsoft.com/download/3/2/2/3224B87F-CFA0-4E70-BDA3-3DE650EFEBA5/vcredist_x64.exe) package (x64)
+  - Java Runtime (JRE) version 8 from a JRE provider such as [AdoptOpenJDK](https://adoptopenjdk.net/), with the `JAVA_HOME` environment variable set
 
 ## Installation best practices
 You can install the self-hosted integration runtime by downloading an MSI setup package from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=39717). See [Move data between on-premises and cloud article](tutorial-hybrid-copy-powershell.md) for step-by-step instructions.

articles/key-vault/key-vault-overview-storage-keys-powershell.md

Lines changed: 1 addition & 1 deletion
@@ -106,7 +106,7 @@ Using the same PowerShell session, update the Key Vault access policy for manage
 ```azurepowershell-interactive
 # Give your user principal access to all storage account permissions, on your Key Vault instance
 
-Set-AzKeyVaultAccessPolicy -VaultName $keyVaultName -UserPrincipalName $userId -PermissionsToStorage get, list, listsas, delete, set, update, regeneratekey, recover, backup, restore, purge
+Set-AzKeyVaultAccessPolicy -VaultName $keyVaultName -UserPrincipalName $userId -PermissionsToStorage get, list, delete, set, update, regeneratekey, getsas, listsas, deletesas, setsas, recover, backup, restore, purge
 ```
 
 Note that permissions for storage accounts aren't available on the storage account "Access policies" page in the Azure portal.
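The corrected cmdlet adds the four SAS-specific storage permissions (`getsas`, `listsas`, `deletesas`, `setsas`). A hedged JavaScript sketch (plain code, not an Azure SDK call) that checks a proposed permission list still covers them:

```javascript
// Illustrative helper (not an Azure SDK API): report which SAS-specific
// Key Vault storage permissions are missing from a proposed grant list.
const REQUIRED_SAS_PERMISSIONS = ["getsas", "listsas", "deletesas", "setsas"];

function missingSasPermissions(granted) {
  const have = new Set(granted.map((p) => p.toLowerCase()));
  return REQUIRED_SAS_PERMISSIONS.filter((p) => !have.has(p));
}
```

A list built from the old cmdlet, for example, would come back missing `getsas`, `deletesas`, and `setsas`, which is exactly what this commit fixes.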

articles/service-health/alerts-activity-log-service-notifications.md

Lines changed: 2 additions & 3 deletions
@@ -56,8 +56,8 @@ For information on how to configure service health notification alerts by using
 
 ![The "Add activity log alert" dialog box](media/alerts-activity-log-service-notifications/activity-log-alert-new-ux.png)
 
-> [!NOTE]
-> This subscription is used to save the activity log alert. The alert resource is deployed to this subscription and monitors events in the activity log for it.
+> [!NOTE]
+> This subscription is used to save the activity log alert. The alert resource is deployed to this subscription and monitors events in the activity log for it.
 
 1. Choose the **Event types** you want to be alerted for: *Service issue*, *Planned maintenance*, and *Health advisories*
 
@@ -86,7 +86,6 @@ Learn how to [Configure webhook notifications for existing problem management sy
 >[!NOTE]
 >The action group defined in these steps is reusable as an existing action group for all future alert definitions.
 >
->
 
 ## Alert with existing action group using Azure portal

articles/site-recovery/azure-to-azure-tutorial-failback.md

Lines changed: 7 additions & 14 deletions
@@ -40,26 +40,19 @@ After VMs are reprotected, you can fail back to the primary region as needed.
 
 ![Failback to primary](./media/site-recovery-azure-to-azure-failback/azure-to-azure-failback.png)
 
-3. Select **Test failover** to perform a test failover back to the primary region.
-4. Select the recovery point and virtual network for the test failover, and then select **OK**. You can review the test VM created in the primary region.
-5. After the test failover finishes successfully, select **Cleanup test failover** to clean up resources created in the source region for the test failover.
-6. In **Replicated items**, select the VM, and then select **Failover**.
-7. In **Failover**, select a recovery point to fail over to:
+2. In **Replicated items**, select the VM, and then select **Failover**.
+3. In **Failover**, select a recovery point to fail over to:
    - **Latest (default)**: Processes all the data in the Site Recovery service and provides the lowest recovery point objective (RPO).
    - **Latest processed**: Reverts the VM to the latest recovery point that has been processed by Site Recovery.
    - **Custom**: Fails over to a particular recovery point. This option is useful for performing a test failover.
-
-8. Select **Shut down machine before beginning failover** if you want Site Recovery to attempt a shutdown of source VMs before triggering the failover. The failover continues even if shutdown fails. Note that Site Recovery doesn't clean up the source after failover.
-9. Follow the failover progress on the **Jobs** page.
-10. After the failover is complete, validate the VM by logging in to it. You can change the recovery point as needed.
-11. After you've verified the failover, select **Commit the failover**. Committing deletes all the available recovery points. The change recovery point option is no longer available.
-12. The VM should show as failed over and failed back.
+4. Select **Shut down machine before beginning failover** if you want Site Recovery to attempt a shutdown of VMs in the DR region before triggering the failover. The failover continues even if shutdown fails.
+5. Follow the failover progress on the **Jobs** page.
+6. After the failover is complete, validate the VM by logging in to it. You can change the recovery point as needed.
+7. After you've verified the failover, select **Commit the failover**. Committing deletes all the available recovery points. The change recovery point option is no longer available.
+8. The VM should show as failed over and failed back.
 
 ![VM at primary and secondary regions](./media/site-recovery-azure-to-azure-failback/azure-to-azure-failback-vm-view.png)
 
-> [!NOTE]
-> The disaster recovery VMs will remain in the shutdown/deallocated state. This is by design because Site Recovery saves the VM information, which might be useful for failover from the primary to the secondary region later. You aren't charged for the deallocated VMs, so they should be kept as they are.
-
 ## Next steps
 
 [Learn more](azure-to-azure-how-to-reprotect.md#what-happens-during-reprotection) about the reprotection flow.

articles/site-recovery/site-recovery-failover.md

Lines changed: 2 additions & 2 deletions
@@ -22,8 +22,8 @@ Use the following table to know about the failover options provided by Azure Sit
 
 | Scenario | Application recovery requirement | Workflow for Hyper-V | Workflow for VMware |
 |---|--|--|--|
-|Planned failover due to an upcoming datacenter downtime| Zero data loss for the application when a planned activity is performed| For Hyper-V, ASR replicates data at a copy frequency that is specified by the user. Planned Failover is used to override the frequency and replicate the final changes before a failover is initiated. <br/> <br/> 1. Plan a maintenance window as per your business's change management process. <br/><br/> 2.Notify users of upcoming downtime. <br/><br/> 3. Take the user-facing application offline.<br/><br/>4.Initiate Planned Failover using the ASR portal. The on-premises virtual machine is automatically shut-down.<br/><br/>Effective application data loss = 0 <br/><br/>A journal of recovery points is also provided in a retention window for a user who wants to use an older recovery point. (24 hours retention for Hyper-V). If replication has been stopped beyond the time frame of the retention window, customers may still be able to failover using the latest available recovery points. | For VMware, ASR replicates data continually using CDP. Failover gives the user the option to failover to the Latest data (including post application shut-down)<br/><br/> 1. Plan a maintenance window as per the change management process <br/><br/>2.Notify users of upcoming downtime <br/><br/>3. Take the user-facing application offline. <br/><br/>4. Initiate a Planned Failover using ASR portal to the Latest point after the application is offline. Use the "Unplanned Failover" option on the portal and select the Latest point to failover. The on-premises virtual machine is automatically shut-down.<br/><br/>Effective application data loss = 0 <br/><br/>A journal of recovery points in a retention window is provided for a customer who wants to use an older recovery point. (72 hours of retention for VMware). If replication has been stopped beyond the time frame of the retention window, customers may still be able to failover using the latest available recovery points.
-|Failover due to an unplanned datacenter downtime (natural or IT disaster) | Minimal data loss for the application | 1.Initiate the organization’s BCP plan <br/><br/>2. Initiate Unplanned Failover using ASR portal to the Latest or a point from the retention window (journal).| 1. Initiate the organization’s BCP plan. <br/><br/>2. Initiate unplanned Failover using ASR portal to the Latest or a point from the retention window (journal).
+|Planned failover due to an upcoming datacenter downtime| Zero data loss for the application when a planned activity is performed| For Hyper-V, ASR replicates data at a copy frequency that is specified by the user. Planned Failover is used to override the frequency and replicate the final changes before a failover is initiated. <br/> <br/> 1. Plan a maintenance window as per your business's change management process. <br/><br/> 2. Notify users of upcoming downtime. <br/><br/> 3. Take the user-facing application offline.<br/><br/>4. Initiate Planned Failover using the ASR portal. The on-premises virtual machine is automatically shut-down.<br/><br/>Effective application data loss = 0 <br/><br/>A journal of recovery points is also provided in a retention window for a user who wants to use an older recovery point. (24 hours retention for Hyper-V). If replication has been stopped beyond the time frame of the retention window, customers may still be able to failover using the latest available recovery points. | For VMware, ASR replicates data continually using CDP. Failover gives the user the option to failover to the Latest data (including post application shut-down)<br/><br/> 1. Plan a maintenance window as per the change management process. <br/><br/>2. Notify users of upcoming downtime. <br/><br/>3. Take the user-facing application offline.<br/><br/>4. Initiate a Planned Failover using ASR portal to the Latest point after the application is offline. Use the "Planned Failover" option on the portal and select the Latest point to failover. The on-premises virtual machine is automatically shut-down.<br/><br/>Effective application data loss = 0 <br/><br/>A journal of recovery points in a retention window is provided for a customer who wants to use an older recovery point. (72 hours of retention for VMware). If replication has been stopped beyond the time frame of the retention window, customers may still be able to failover using the latest available recovery points.
+|Failover due to an unplanned datacenter downtime (natural or IT disaster) | Minimal data loss for the application | 1. Initiate the organization’s BCP plan. <br/><br/>2. Initiate Unplanned Failover using ASR portal to the Latest or a point from the retention window (journal).| 1. Initiate the organization’s BCP plan. <br/><br/>2. Initiate Unplanned Failover using ASR portal to the Latest or a point from the retention window (journal).
 
 
 ## Run a failover

articles/virtual-machines/extensions/agent-windows.md

Lines changed: 3 additions & 0 deletions
@@ -63,6 +63,9 @@ The VM Agent can be installed by double-clicking the Windows installer file. For
 msiexec.exe /i WindowsAzureVmAgent.2.7.1198.778.rd_art_stable.160617-1120.fre /quiet
 ```
 
+### Prerequisites
+The Windows VM Agent requires at least Windows Server 2008 R2 (64-bit) and the .NET Framework 4.0.
+
 ## Detect the VM Agent
 
 ### PowerShell
