Commit 1db0413

Merge pull request #274045 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents da5227a + 47a16b5 commit 1db0413

File tree

12 files changed (+32, −22 lines)

articles/app-service/manage-scale-up.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ This article shows you how to scale your app in Azure App Service. There are two
  like dedicated virtual machines (VMs), custom domains and certificates, staging slots, autoscaling, and more. You scale up by changing the pricing tier of the
  App Service plan that your app belongs to.
 * [Scale out](https://en.wikipedia.org/wiki/Scalability#Horizontal_and_vertical_scaling): Increase the number of VM instances that run your app.
-  You can scale out to as many as 30 instances, depending on your pricing tier. [App Service Environments](environment/intro.md)
+  Basic, Standard and Premium service plans scale out to as many as 3, 10 and 30 instances respectively. [App Service Environments](environment/intro.md)
  in **Isolated** tier further increases your scale-out count to 100 instances. For more information about scaling out, see
  [Scale instance count manually or automatically](../azure-monitor/autoscale/autoscale-get-started.md). There, you find out how
  to use autoscaling, which is to scale instance count automatically based on predefined rules and schedules.
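The per-tier scale-out limits introduced by this change can be summarized in a small sketch (illustrative only; the tier names and instance caps come from the diff text above, and the lookup helper is hypothetical):

```python
# Maximum scale-out instance counts per App Service plan tier, as stated in
# the updated doc line; the lookup helper itself is purely illustrative.
MAX_INSTANCES = {
    "Basic": 3,
    "Standard": 10,
    "Premium": 30,
    "Isolated": 100,  # App Service Environment (Isolated tier)
}

def can_scale_to(tier: str, requested: int) -> bool:
    """Return True if the requested instance count fits within the tier's limit."""
    return 0 < requested <= MAX_INSTANCES.get(tier, 0)

print(can_scale_to("Basic", 5))     # → False (Basic caps at 3 instances)
print(can_scale_to("Premium", 30))  # → True
```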

articles/azure-monitor/alerts/alerts-processing-rules.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ For those alert types, you can use alert processing rules to add action groups.
 
 This section describes the scope and filters for alert processing rules.
 
-Each alert processing rule has a scope. A scope is a list of one or more specific Azure resources, a specific resource group, or an entire subscription. The alert processing rule applies to alerts that fired on resources within that scope. You cannot create an alert processing rule on a resource from a different subsciption.
+Each alert processing rule has a scope. A scope is a list of one or more specific Azure resources, a specific resource group, or an entire subscription. The alert processing rule applies to alerts that fired on resources within that scope. You cannot create an alert processing rule on a resource from a different subscription.
 
 You can also define filters to narrow down which specific subset of alerts are affected within the scope. The available filters are described in the following table.
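The scoping rule this hunk corrects can be sketched roughly as resource-ID prefix matching (a hedged illustration, not the service's actual implementation; the helper name and sample IDs are hypothetical):

```python
# Hedged sketch of alert processing rule scoping: a rule applies to an alert
# when the alert's resource ID equals, or sits underneath, one of the scope IDs
# (a specific resource, a resource group, or a whole subscription).
def in_scope(alert_resource_id: str, scope_ids: list) -> bool:
    alert = alert_resource_id.lower()
    return any(
        alert == scope.lower() or alert.startswith(scope.lower() + "/")
        for scope in scope_ids
    )

SUB = "/subscriptions/0000-aaaa"
RG = SUB + "/resourceGroups/prod-rg"
VM = RG + "/providers/Microsoft.Compute/virtualMachines/vm1"

print(in_scope(VM, [RG]))                        # → True: VM is inside the scoped group
print(in_scope(VM, ["/subscriptions/1111-bb"]))  # → False: different subscription
```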

articles/azure-signalr/signalr-concept-serverless-development-config.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ A client application requires a valid access token to connect to Azure SignalR S
 
 Use an HTTP-triggered Azure Function and the `SignalRConnectionInfo` input binding to generate the connection information object. The function must have an HTTP route that ends in `/negotiate`.
 
-With [class-based model](#class-based-model) in C#, you don't need the `SignalRConnectionInfo` input binding and can add custom claims much more easily. For more information, see [Negotiation experience in class-based model](#negotiation-experience-in-class-based-model).
+With [class-based model](#class-based-model) in C#, you don't need the `SignalRConnectionInfo` input binding and can add custom claims much more easily. For more information, see [Negotiation experience in class-based model](#negotiation-experience-in-class-based-model-1).
 
 For more information about the `negotiate` function, see [Azure Functions development](#negotiation-function).
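For context on the hunk above: the "connection information object" the `/negotiate` route returns carries the service endpoint URL and an access token. A minimal sketch (the field names follow the SignalR Service client negotiation response; the helper function is hypothetical):

```python
# Minimal sketch of the connection information object a /negotiate endpoint
# returns; field names follow the SignalR Service client negotiation response.
def make_negotiate_response(service_url: str, access_token: str) -> dict:
    return {"url": service_url, "accessToken": access_token}

info = make_negotiate_response(
    "https://contoso.service.signalr.net/client/?hub=chat",
    "<jwt-access-token>",
)
print(sorted(info))  # → ['accessToken', 'url']
```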

articles/azure-vmware/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -115,7 +115,7 @@ sections:
 Remember that you can only resize a VM within the same series or to an available series in the same Azure region. Ensure the new plan supports your VM's storage and networking configurations.
 
 - question: Is VMware HCX supported on VPNs?
-  answer: Yes, provided VMware HCX [Network Underlay Minimum Requirements](https://docs.vmware.com/en/VMware-HCX/4.2/hcx-user-guide/GUID-8128EB85-4E3F-4E0C-A32C-4F9B15DACC6D.html) are met.
+  answer: Yes, provided VMware HCX [Network Underlay Minimum Requirements](https://docs.vmware.com/en/VMware-HCX/4.9/hcx-user-guide/GUID-8128EB85-4E3F-4E0C-A32C-4F9B15DACC6D.html) are met.
 
 - question: What versions of VMware software are used in private clouds?
   answer: |

articles/azure-vmware/migrate-sql-server-always-on-availability-group.md

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ The following table indicates the estimated downtime for migration of each SQL S
 |:---|:-----|:-----|
 | **SQL Server standalone instance** | Low | Migration is done using VMware vMotion, the database is available during migration time, but it isn't recommended to commit any critical data during it. |
 | **SQL Server Always On Availability Group** | Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-| **SQL Server Always On Failover Customer Instance** | High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
+| **SQL Server Always On Failover Cluster Instance** | High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
 
 ## Windows Server Failover Cluster quorum considerations

articles/azure-vmware/migrate-sql-server-failover-cluster.md

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ The following table indicates the estimated downtime for migration of each SQL S
 |:---|:-----|:-----|
 | **SQL Server standalone instance** | Low | Migration is done using VMware vMotion, the database is available during migration time, but it isn't recommended to commit any critical data during it. |
 | **SQL Server Always On Availability Group** | Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-| **SQL Server Always On Failover Customer Instance** | High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
+| **SQL Server Always On Failover Cluster Instance** | High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
 
 ## Windows Server Failover Cluster quorum considerations

articles/azure-vmware/migrate-sql-server-standalone-cluster.md

Lines changed: 1 addition & 1 deletion
@@ -76,7 +76,7 @@ The following table indicates the estimated downtime for migration of each SQL S
 |:---|:-----|:-----|
 | **SQL Server standalone instance** | Low | Migration is done using VMware vMotion, the database is available during migration time, but it isn't recommended to commit any critical data during it. |
 | **SQL Server Always On Availability Group** | Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-| **SQL Server Always On Failover Customer Instance** | High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
+| **SQL Server Always On Failover Cluster Instance** | High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
 
 ## Executing the migration

articles/cloud-shell/vnet/deployment.md

Lines changed: 2 additions & 2 deletions
@@ -30,7 +30,7 @@ Cloud Shell needs access to certain Azure resources. You make that access availa
 resource providers. The following resource providers must be registered in your subscription:
 
 - **Microsoft.CloudShell**
-- **Microsoft.ContainerInstances**
+- **Microsoft.ContainerInstance**
 - **Microsoft.Relay**
 
 Depending on when your tenant was created, some of these providers might already be registered.

@@ -44,7 +44,7 @@ To see all resource providers and the registration status for your subscription:
 1. In the search box, enter `cloudshell` to search for the resource provider.
 1. Select the **Microsoft.CloudShell** resource provider from the provider list.
 1. Select **Register** to change the status from **unregistered** to **registered**.
-1. Repeat the previous steps for the **Microsoft.ContainerInstances** and **Microsoft.Relay**
+1. Repeat the previous steps for the **Microsoft.ContainerInstance** and **Microsoft.Relay**
    resource providers.
 
 [![Screenshot of selecting resource providers in the Azure portal.][98a]][98b]
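The corrected provider list in the hunks above can be sanity-checked with a small sketch (the helper is hypothetical; `"Registered"` matches the state string the portal and CLI display):

```python
# The three resource providers the section says must be registered, plus a
# hypothetical helper that flags any still missing from a status listing.
REQUIRED_PROVIDERS = {
    "Microsoft.CloudShell",
    "Microsoft.ContainerInstance",  # note: singular, per the corrected name
    "Microsoft.Relay",
}

def unregistered(status_by_provider: dict) -> list:
    """Return required providers whose state isn't 'Registered', sorted."""
    return sorted(
        p for p in REQUIRED_PROVIDERS
        if status_by_provider.get(p) != "Registered"
    )

status = {"Microsoft.Relay": "Registered", "Microsoft.CloudShell": "NotRegistered"}
print(unregistered(status))  # → ['Microsoft.CloudShell', 'Microsoft.ContainerInstance']
```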

articles/cosmos-db/nosql/change-feed-processor.md

Lines changed: 10 additions & 10 deletions
@@ -26,9 +26,9 @@ The change feed processor has four main components:
 
 * **The monitored container**: The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.
 
-* **The lease container**: The lease container acts as state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
+* **The lease container**: The lease container acts as state storage and coordinates the processing of the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
 
-* **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it might be represented by a virtual machine (VM), a kubernetes pod, an Azure App Service instance, or an actual physical machine. The compute instance has a unique identifier that's called the *instance name* throughout this article.
+* **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it might be represented by a virtual machine (VM), a Kubernetes pod, an Azure App Service instance, or an actual physical machine. The compute instance has a unique identifier that's called the *instance name* throughout this article.
 
 * **The delegate**: The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.

@@ -72,7 +72,7 @@ The normal life cycle of a host instance is:
 The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread that is processing that particular batch of changes stops, and a new thread is eventually created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values. The new thread restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly, and it's the reason the change feed processor has an "at least once" guarantee.
 
 > [!NOTE]
-> In only one scenario, a batch of changes is not retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry uses the [initial starting configuration](#starting-time), which might or might not include the last batch.
+> In only one scenario, a batch of changes is not retried. If the failure happens on the first-ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry uses the [initial starting configuration](#starting-time), which might or might not include the last batch.
 
 To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store doesn't matter. You simply want the unprocessed changes to be persisted.
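The "at least once" retry and errored-message behavior described in this hunk can be sketched roughly as follows (hypothetical names throughout; the real processor retries via lease checkpoints, and the attempt cap stands in for the user-supplied dead-letter logic):

```python
# Hedged sketch of "at least once" delivery: a failed batch is re-delivered to
# the delegate from the last saved checkpoint; user code can divert a batch
# that keeps failing into an errored-message store so processing can continue.
def process_with_retry(batches, delegate, dead_letter, max_attempts=3):
    checkpoint = 0                                # last lease checkpoint
    while checkpoint < len(batches):
        batch = batches[checkpoint]               # re-read from the checkpoint
        for _ in range(max_attempts):
            try:
                delegate(batch)                   # user delegate code
                break                             # success: advance checkpoint
            except Exception:
                continue                          # same batch is retried
        else:
            dead_letter.append(batch)             # persist unprocessed changes
        checkpoint += 1
    return checkpoint

calls = {"n": 0}
def flaky_delegate(batch):
    calls["n"] += 1
    if calls["n"] == 1:                           # fail once, then succeed
        raise RuntimeError("transient failure")

dead = []
print(process_with_retry([["doc1"], ["doc2"]], flaky_delegate, dead))  # → 2
print(dead)                                       # → []
```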

@@ -104,13 +104,13 @@ As mentioned earlier, within a deployment unit, you can have one or more compute
 
 If these three conditions apply, then the change feed processor distributes all the leases that are in the lease container across all running instances of that deployment unit, and it parallelizes compute by using an equal-distribution algorithm. A lease is owned by one instance at any time, so the number of instances shouldn't be greater than the number of leases.
 
-The number of instances can grow and shrink. The change feed processor dynamically adjusts the load by redistributing accordingly.
+The number of instances can grow and shrink. The change feed processor dynamically adjusts the load by redistributing it accordingly.
 
 Moreover, the change feed processor can dynamically adjust a container's scale if the container's throughput or storage increases. When your container grows, the change feed processor transparently handles the scenario by dynamically increasing the leases and distributing the new leases among existing instances.
 
 ## Starting time
 
-By default, when a change feed processor starts for the first time, it initializes the leases container and start its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor is initialized for the first time aren't detected.
+By default, when a change feed processor starts for the first time, it initializes the lease container and starts its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor is initialized for the first time aren't detected.
 
 ### Reading from a previous date and time

@@ -142,13 +142,13 @@ For full working samples, see [here](https://github.com/Azure-Samples/azure-cosm
 >
 > [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=ChangeFeedProcessorOptions)]
 
-The delegate implementation for reading the change feed in [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview) is similar, but instead of calling `.handleChanges()`, call `.handleAllVersionsAndDeletesChanges()`. All versions and deletes mode is in preview and is available in Java SDK version >= `4.42.0`.
+The delegate implementation for reading the change feed in [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview) is similar, but instead of calling `.handleChanges()`, call `.handleAllVersionsAndDeletesChanges()`. The All versions and deletes mode is in preview and is available in Java SDK version >= `4.42.0`.
 
 Here's an example:
 
 [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessorForAllVersionsAndDeletesMode.java?name=Delegate)]
 
-In either change feed mode, you can assign it to `changeFeedProcessorInstance` and pass the parameters of compute instance name (`hostName`), the monitored container (here called `feedContainer`), and the `leaseContainer`. Then start the change feed processor:
+In either change feed mode, you can assign it to `changeFeedProcessorInstance` and pass the parameters of the compute instance name (`hostName`), the monitored container (here called `feedContainer`), and the `leaseContainer`. Then start the change feed processor:
 
 [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java?name=StartChangeFeedProcessor)]

@@ -166,10 +166,10 @@ The normal life cycle of a host instance is:
 
 ## Error handling
 
-The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread that's processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values, and it restart from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly. It's the reason why the change feed processor has an "at least once" guarantee.
+The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread that's processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values, and it restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly. It's the reason why the change feed processor has an "at least once" guarantee.
 
 > [!NOTE]
-> In only one scenario is a batch of changes not retried. If the failure happens on the first ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry uses the [initial starting configuration](#starting-time), which might or might not include the last batch.
+> In only one scenario is a batch of changes not retried. If the failure happens on the first-ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry uses the [initial starting configuration](#starting-time), which might or might not include the last batch.
 
 To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store doesn't matter. You simply want the unprocessed changes to be persisted.

@@ -191,7 +191,7 @@ As mentioned earlier, within a deployment unit, you can have one or more compute
 
 If these three conditions apply, then the change feed processor distributes all the leases in the lease container across all running instances of that deployment unit, and it parallelizes compute by using an equal-distribution algorithm. A lease is owned by one instance at any time, so the number of instances shouldn't be greater than the number of leases.
 
-The number of instances can grow and shrink. The change feed processor dynamically adjusts the load by redistributing accordingly. Deployment units can share the same lease container, but they should each have a different `leasePrefix` value.
+The number of instances can grow and shrink. The change feed processor dynamically adjusts the load by redistributing it accordingly. Deployment units can share the same lease container, but they should each have a different `leasePrefix` value.
 
 Moreover, the change feed processor can dynamically adjust a container's scale if the container's throughput or storage increases. When your container grows, the change feed processor transparently handles the scenario by dynamically increasing the leases and distributing the new leases among existing instances.
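The equal-distribution idea in the hunk above can be sketched as a simple round-robin assignment (illustrative only; the SDK's actual lease-balancing algorithm is more involved):

```python
# Hedged sketch: leases from the lease container are spread evenly across the
# running compute instances of a deployment unit; each lease is owned by
# exactly one instance at a time.
def distribute(leases, instances):
    owners = {instance: [] for instance in instances}
    for n, lease in enumerate(leases):
        owners[instances[n % len(instances)]].append(lease)  # round-robin
    return owners

leases = [f"lease-{i}" for i in range(8)]
result = distribute(leases, ["host-A", "host-B", "host-C"])
print({host: len(owned) for host, owned in result.items()})
# → {'host-A': 3, 'host-B': 3, 'host-C': 2}
```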

articles/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@ sections:
 Create and capture a VM image, and then use that as the source for your scale set. For a tutorial on how to create and use a custom VM image, you can use the [Azure CLI](tutorial-use-custom-image-cli.md) or [Azure PowerShell](tutorial-use-custom-image-powershell.md).
 
 - question: |
-    What is the difference betweeen OS Image Upgrade and Reimage?
+    What is the difference between OS Image Upgrade and Reimage?
   answer: |
     OS Image Upgrade is a gradual and non-disruptive process that updates the OS image for the entire Virtual Machine Scale Set over time, ensuring minimal impact on running workloads.
