**articles/app-service/manage-scale-up.md** (1 addition, 1 deletion)

```diff
@@ -15,7 +15,7 @@ This article shows you how to scale your app in Azure App Service. There are two
 like dedicated virtual machines (VMs), custom domains and certificates, staging slots, autoscaling, and more. You scale up by changing the pricing tier of the
 App Service plan that your app belongs to.
 * [Scale out](https://en.wikipedia.org/wiki/Scalability#Horizontal_and_vertical_scaling): Increase the number of VM instances that run your app.
-You can scale out to as many as 30 instances, depending on your pricing tier. [App Service Environments](environment/intro.md)
+Basic, Standard and Premium service plans scale out to as many as 3, 10 and 30 instances respectively. [App Service Environments](environment/intro.md)
 in **Isolated** tier further increases your scale-out count to 100 instances. For more information about scaling out, see
 [Scale instance count manually or automatically](../azure-monitor/autoscale/autoscale-get-started.md). There, you find out how
 to use autoscaling, which is to scale instance count automatically based on predefined rules and schedules.
```
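Both operations are updates on the App Service plan resource. A minimal sketch using the Azure Resource Manager fluent SDK for Java (the resource group and plan names are placeholders, and the exact fluent methods are assumptions based on the `com.azure.resourcemanager` library):

```java
import com.azure.core.management.AzureEnvironment;
import com.azure.core.management.profile.AzureProfile;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.resourcemanager.AzureResourceManager;
import com.azure.resourcemanager.appservice.models.AppServicePlan;
import com.azure.resourcemanager.appservice.models.PricingTier;

public class ScalePlan {
    public static void main(String[] args) {
        // Authenticate with the default credential chain (environment, managed identity, CLI, ...).
        AzureResourceManager azure = AzureResourceManager
                .authenticate(new DefaultAzureCredentialBuilder().build(),
                              new AzureProfile(AzureEnvironment.AZURE))
                .withDefaultSubscription();

        // Hypothetical resource group and plan names.
        AppServicePlan plan = azure.appServicePlans()
                .getByResourceGroup("my-rg", "my-plan");

        plan.update()
                .withPricingTier(PricingTier.STANDARD_S1) // scale up: change the pricing tier
                .withCapacity(3)                          // scale out: 3 instances (the Basic tier maximum)
                .apply();
    }
}
```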
**articles/azure-monitor/alerts/alerts-processing-rules.md** (1 addition, 1 deletion)

```diff
@@ -54,7 +54,7 @@ For those alert types, you can use alert processing rules to add action groups.
 
 This section describes the scope and filters for alert processing rules.
 
-Each alert processing rule has a scope. A scope is a list of one or more specific Azure resources, a specific resource group, or an entire subscription. The alert processing rule applies to alerts that fired on resources within that scope. You cannot create an alert processing rule on a resource from a different subsciption.
+Each alert processing rule has a scope. A scope is a list of one or more specific Azure resources, a specific resource group, or an entire subscription. The alert processing rule applies to alerts that fired on resources within that scope. You cannot create an alert processing rule on a resource from a different subscription.
 
 You can also define filters to narrow down which specific subset of alerts are affected within the scope. The available filters are described in the following table.
```
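A scope at each of the three granularities is a plain Azure resource ID; for illustration (the subscription ID and names are placeholders):

```text
/subscriptions/00000000-0000-0000-0000-000000000000                           # entire subscription
/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg     # one resource group
/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm   # one specific resource
```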
**articles/azure-signalr/signalr-concept-serverless-development-config.md** (1 addition, 1 deletion)

```diff
@@ -38,7 +38,7 @@ A client application requires a valid access token to connect to Azure SignalR Service.
 
 Use an HTTP-triggered Azure Function and the `SignalRConnectionInfo` input binding to generate the connection information object. The function must have an HTTP route that ends in `/negotiate`.
 
-With [class-based model](#class-based-model) in C#, you don't need the `SignalRConnectionInfo` input binding and can add custom claims much more easily. For more information, see [Negotiation experience in class-based model](#negotiation-experience-in-class-based-model).
+With [class-based model](#class-based-model) in C#, you don't need the `SignalRConnectionInfo` input binding and can add custom claims much more easily. For more information, see [Negotiation experience in class-based model](#negotiation-experience-in-class-based-model-1).
 
 For more information about the `negotiate` function, see [Azure Functions development](#negotiation-function).
```
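A negotiate function of this shape can be sketched in Java as follows (the hub name, the anonymous auth level, and the `com.microsoft.azure.functions.signalr` binding package are assumptions; the same pattern applies in the other supported languages):

```java
import java.util.Optional;

import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;
import com.microsoft.azure.functions.signalr.SignalRConnectionInfo;
import com.microsoft.azure.functions.signalr.annotation.SignalRConnectionInfoInput;

public class NegotiateFunction {
    // The function name doubles as the default HTTP route, so this is served at /api/negotiate.
    @FunctionName("negotiate")
    public SignalRConnectionInfo negotiate(
            @HttpTrigger(
                name = "req",
                methods = { HttpMethod.POST },
                authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> req,
            @SignalRConnectionInfoInput(
                name = "connectionInfo",
                hubName = "chat") SignalRConnectionInfo connectionInfo) {
        // The input binding produces the service URL plus an access token; return it to the client.
        return connectionInfo;
    }
}
```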
**articles/azure-vmware/faq.yml** (1 addition, 1 deletion)

```diff
@@ -115,7 +115,7 @@ sections:
 Remember that you can only resize a VM within the same series or to an available series in the same Azure region. Ensure the new plan supports your VM's storage and networking configurations.
```
**articles/azure-vmware/migrate-sql-server-always-on-availability-group.md** (1 addition, 1 deletion)

```diff
@@ -51,7 +51,7 @@ The following table indicates the estimated downtime for migration of each SQL S
 |:---|:-----|:-----|
 |**SQL Server standalone instance**| Low | Migration is done using VMware vMotion, the database is available during migration time, but it isn't recommended to commit any critical data during it. |
 |**SQL Server Always On Availability Group**| Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-|**SQL Server Always On Failover Customer Instance**| High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
+|**SQL Server Always On Failover Cluster Instance**| High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
 
 ## Windows Server Failover Cluster quorum considerations
```
**articles/azure-vmware/migrate-sql-server-failover-cluster.md** (1 addition, 1 deletion)

```diff
@@ -57,7 +57,7 @@ The following table indicates the estimated downtime for migration of each SQL S
 |:---|:-----|:-----|
 |**SQL Server standalone instance**| Low | Migration is done using VMware vMotion, the database is available during migration time, but it isn't recommended to commit any critical data during it. |
 |**SQL Server Always On Availability Group**| Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-|**SQL Server Always On Failover Customer Instance**| High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
+|**SQL Server Always On Failover Cluster Instance**| High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
 
 ## Windows Server Failover Cluster quorum considerations
```
**articles/azure-vmware/migrate-sql-server-standalone-cluster.md** (1 addition, 1 deletion)

```diff
@@ -76,7 +76,7 @@ The following table indicates the estimated downtime for migration of each SQL S
 |:---|:-----|:-----|
 |**SQL Server standalone instance**| Low | Migration is done using VMware vMotion, the database is available during migration time, but it isn't recommended to commit any critical data during it. |
 |**SQL Server Always On Availability Group**| Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-|**SQL Server Always On Failover Customer Instance**| High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
+|**SQL Server Always On Failover Cluster Instance**| High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
```
**articles/cosmos-db/nosql/change-feed-processor.md** (10 additions, 10 deletions)

```diff
@@ -26,9 +26,9 @@ The change feed processor has four main components:
 * **The monitored container**: The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.
 
-* **The lease container**: The lease container acts as state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
+* **The lease container**: The lease container acts as state storage and coordinates the processing of the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
 
-* **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it might be represented by a virtual machine (VM), a kubernetes pod, an Azure App Service instance, or an actual physical machine. The compute instance has a unique identifier that's called the *instance name* throughout this article.
+* **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it might be represented by a virtual machine (VM), a Kubernetes pod, an Azure App Service instance, or an actual physical machine. The compute instance has a unique identifier that's called the *instance name* throughout this article.
 
 * **The delegate**: The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.
```
```diff
@@ -72,7 +72,7 @@ The normal life cycle of a host instance is:
 The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread that is processing that particular batch of changes stops, and a new thread is eventually created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values. The new thread restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly, and it's the reason the change feed processor has an "at least once" guarantee.
 
 > [!NOTE]
-> In only one scenario, a batch of changes is not retried. If the failure happens on the firstever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry uses the [initial starting configuration](#starting-time), which might or might not include the last batch.
+> In only one scenario, a batch of changes is not retried. If the failure happens on the first-ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry uses the [initial starting configuration](#starting-time), which might or might not include the last batch.
 
 To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store doesn't matter. You simply want the unprocessed changes to be persisted.
```
```diff
@@ -104,13 +104,13 @@ As mentioned earlier, within a deployment unit, you can have one or more compute
 
 If these three conditions apply, then the change feed processor distributes all the leases that are in the lease container across all running instances of that deployment unit, and it parallelizes compute by using an equal-distribution algorithm. A lease is owned by one instance at any time, so the number of instances shouldn't be greater than the number of leases.
 
-The number of instances can grow and shrink. The change feed processor dynamically adjusts the load by redistributing accordingly.
+The number of instances can grow and shrink. The change feed processor dynamically adjusts the load by redistributing it accordingly.
 
 Moreover, the change feed processor can dynamically adjust a container's scale if the container's throughput or storage increases. When your container grows, the change feed processor transparently handles the scenario by dynamically increasing the leases and distributing the new leases among existing instances.
 
 ## Starting time
 
-By default, when a change feed processor starts for the first time, it initializes the leases container and start its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor is initialized for the first time aren't detected.
+By default, when a change feed processor starts for the first time, it initializes the lease container and starts its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor is initialized for the first time aren't detected.
 
 ### Reading from a previous date and time
```
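To read from a previous date and time, the Java SDK exposes a start time on `ChangeFeedProcessorOptions`; a minimal fragment (assuming the fluent `setStartTime` setter and the builder's `options(...)` method):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

import com.azure.cosmos.models.ChangeFeedProcessorOptions;

// Start reading changes from 24 hours ago instead of from "now".
ChangeFeedProcessorOptions options = new ChangeFeedProcessorOptions()
        .setStartTime(Instant.now().minus(24, ChronoUnit.HOURS));
// Pass the options via `.options(options)` on the ChangeFeedProcessorBuilder shown below.
```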
```diff
@@ -142,13 +142,13 @@ For full working samples, see [here](https://github.com/Azure-Samples/azure-cosm
-The delegate implementation for reading the change feed in [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview) is similar, but instead of calling `.handleChanges()`, call `.handleAllVersionsAndDeletesChanges()`. All versions and deletes mode is in preview and is available in Java SDK version >= `4.42.0`.
+The delegate implementation for reading the change feed in [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview) is similar, but instead of calling `.handleChanges()`, call `.handleAllVersionsAndDeletesChanges()`. The All versions and deletes mode is in preview and is available in Java SDK version >= `4.42.0`.
-In either change feed mode, you can assign it to `changeFeedProcessorInstance` and pass the parameters of compute instance name (`hostName`), the monitored container (here called `feedContainer`), and the `leaseContainer`. Then start the change feed processor:
+In either change feed mode, you can assign it to `changeFeedProcessorInstance` and pass the parameters of the compute instance name (`hostName`), the monitored container (here called `feedContainer`), and the `leaseContainer`. Then start the change feed processor:
```
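For example, a minimal Java SDK v4 sketch following the article's `hostName`, `feedContainer`, and `leaseContainer` names, with a delegate that simply logs each change:

```java
import java.util.List;

import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.fasterxml.jackson.databind.JsonNode;

ChangeFeedProcessor changeFeedProcessorInstance = new ChangeFeedProcessorBuilder()
        .hostName(hostName)              // unique name of this compute instance
        .feedContainer(feedContainer)    // the monitored container
        .leaseContainer(leaseContainer)  // the lease container
        .handleChanges((List<JsonNode> docs) -> {
            for (JsonNode document : docs) {
                System.out.println("Change received: " + document);
            }
        })
        .buildChangeFeedProcessor();

// start() is asynchronous and returns a Mono<Void>; subscribe to kick it off.
changeFeedProcessorInstance.start().subscribe();
```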
```diff
@@ -166,10 +166,10 @@ The normal life cycle of a host instance is:
 
 ## Error handling
 
-The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread that's processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values, and it restart from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly. It's the reason why the change feed processor has an "at least once" guarantee.
+The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread that's processing that particular batch of changes is stopped, and a new thread is created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values, and it restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly. It's the reason why the change feed processor has an "at least once" guarantee.
 
 > [!NOTE]
-> In only one scenario is a batch of changes is not retried. If the failure happens on the firstever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry uses the [initial starting configuration](#starting-time), which might or might not include the last batch.
+> In only one scenario is a batch of changes not retried. If the failure happens on the first-ever delegate execution, the lease store has no previous saved state to be used on the retry. In those cases, the retry uses the [initial starting configuration](#starting-time), which might or might not include the last batch.
 
 To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to an errored-message queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The errored-message queue might be another Azure Cosmos DB container. The exact data store doesn't matter. You simply want the unprocessed changes to be persisted.
```
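One way to implement that guard in a Java delegate is to catch per-document failures and persist the offending document to a dead-letter container before moving on (`deadLetterContainer` and the `process` helper are illustrative, not part of the SDK):

```java
import java.util.List;
import java.util.function.Consumer;

import com.fasterxml.jackson.databind.JsonNode;

// deadLetterContainer is a hypothetical, separately created Cosmos DB container (CosmosAsyncContainer).
Consumer<List<JsonNode>> delegate = docs -> {
    for (JsonNode document : docs) {
        try {
            process(document); // your business logic; assumed to throw on failure
        } catch (Exception e) {
            // Persist the unprocessed change instead of letting the whole batch retry forever.
            deadLetterContainer.createItem(document).block();
        }
    }
};
```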
```diff
@@ -191,7 +191,7 @@ As mentioned earlier, within a deployment unit, you can have one or more compute
 
 If these three conditions apply, then the change feed processor distributes all the leases in the lease container across all running instances of that deployment unit, and it parallelizes compute by using an equal-distribution algorithm. A lease is owned by one instance at any time, so the number of instances shouldn't be greater than the number of leases.
 
-The number of instances can grow and shrink. The change feed processor dynamically adjusts the load by redistributing accordingly. Deployment units can share the same lease container, but they should each have a different `leasePrefix` value.
+The number of instances can grow and shrink. The change feed processor dynamically adjusts the load by redistributing it accordingly. Deployment units can share the same lease container, but they should each have a different `leasePrefix` value.
 
 Moreover, the change feed processor can dynamically adjust a container's scale if the container's throughput or storage increases. When your container grows, the change feed processor transparently handles the scenario by dynamically increasing the leases and distributing the new leases among existing instances.
```
**articles/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml** (1 addition, 1 deletion)

```diff
@@ -52,7 +52,7 @@ sections:
 Create and capture a VM image, and then use that as the source for your scale set. For a tutorial on how to create and use a custom VM image, you can use the [Azure CLI](tutorial-use-custom-image-cli.md) or [Azure PowerShell](tutorial-use-custom-image-powershell.md).
 
 - question: |
-    What is the difference betweeen OS Image Upgrade and Reimage?
+    What is the difference between OS Image Upgrade and Reimage?
   answer: |
     OS Image Upgrade is a gradual and non-disruptive process that updates the OS image for the entire Virtual Machine Scale Set over time, ensuring minimal impact on running workloads.
```