
Commit c503e12

Author: Alfredo Santamaria Gomez
Committed: improve Acrolinx score 2
1 parent 65bea8f commit c503e12

1 file changed

articles/service-fabric/how-to-managed-cluster-availability-zones.md

Lines changed: 13 additions & 13 deletions
@@ -46,16 +46,16 @@ Sample node list depicting FD/UD formats in a virtual machine scale set spanning
![Sample node list depicting FD/UD formats in a virtual machine scale set spanning zones.][sfmc-multi-az-nodes]

**Distribution of Service replicas across zones**:
-When a service is deployed on the node types that are spanning zones, the replicas are placed to ensure they land up in separate zones. This separation is ensured as the fault domain’s on the nodes present in each of these node types are configured with the zone information (i.e FD = fd:/zone1/1 etc.). For example: for five replicas or instances of a service, the distribution is 2-2-1 and runtime tries to ensure equal distribution across AZs.
+When a service is deployed on node types that span zones, the replicas are placed to ensure they land in separate zones. This separation is ensured because the fault domains on the nodes in each of these node types are configured with the zone information (for example, FD = fd:/zone1/1). For five replicas or instances of a service, the distribution is 2-2-1, and the runtime tries to ensure equal distribution across AZs.

**User Service Replica Configuration**:
Stateful user services deployed on the cross-availability zone node types should be configured with target replica count = 9 and min replica count = 5. This configuration keeps the service working even when one zone goes down, since six replicas are still up in the other two zones. An application upgrade in such a scenario also goes through.
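For orientation, a configuration like this might be declared on a stateful service deployed through an ARM template to a managed cluster. This is a minimal sketch, not a snippet from this article: the cluster, application, service, and service type names and the apiVersion are illustrative assumptions.

```json
{
  "apiVersion": "2022-01-01",
  "type": "Microsoft.ServiceFabric/managedclusters/applications/services",
  "name": "mysfcluster/myapp/mystatefulservice",
  "properties": {
    "serviceKind": "Stateful",
    "serviceTypeName": "MyStatefulServiceType",
    "hasPersistedState": true,
    "partitionDescription": { "partitionScheme": "Singleton" },
    // target = 9: three replicas per zone across three zones
    "targetReplicaSetSize": 9,
    // min = 5: the six replicas surviving a zone loss still satisfy the minimum
    "minReplicaSetSize": 5
  }
}
```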
**Zone down scenario**:
-When a zone goes down, all the nodes in that zone appear as down. Service replicas on these nodes will also be down. Since there are replicas in the other zones, the service continues to be responsive with primary replicas failing over to the zones that are functioning. The services will appear in warning state as the target replica count is not met and the virtual machine (VM) count is still more than the defined min target replica size. As a result, Service Fabric load balancer brings up replicas in the working zones to match the configured target replica count. At this point, the services should appear healthy. When the zone that was down comes back up, the load balancer will again spread all the service replicas evenly across all the zones.
+When a zone goes down, all the nodes in that zone appear as down. Service replicas on these nodes will also be down. Since there are replicas in the other zones, the service remains responsive, with primary replicas failing over to the zones that are still functioning. The services appear in a warning state because the target replica count isn't met and the virtual machine (VM) count is still more than the defined min target replica size. As a result, the Service Fabric load balancer brings up replicas in the working zones to match the configured target replica count. At this point, the services should appear healthy. When the zone that was down comes back up, the load balancer again spreads all the service replicas evenly across all the zones.
## Networking Configuration
-For more information, see [Configure network settings for Service Fabric managed clusters](./how-to-managed-cluster-networking.md)
+For more information, see [Configure network settings for Service Fabric managed clusters](./how-to-managed-cluster-networking.md).

## Enabling a zone resilient Azure Service Fabric managed cluster
To enable a zone resilient Azure Service Fabric managed cluster, you must include the following **ZonalResiliency** property, which specifies whether the cluster is zone resilient.
@@ -73,10 +73,10 @@ To enable a zone resilient Azure Service Fabric managed cluster, you must includ
```

## Migrate an existing nonzone resilient cluster to Zone Resilient (Preview)
-Existing Service Fabric managed clusters that are not spanned across availability zones can now be migrated in-place to span availability zones. Supported scenarios include clusters created in regions that have three availability zones and clusters in regions where three availability zones are made available post-deployment.
+Existing Service Fabric managed clusters that aren't spanned across availability zones can now be migrated in-place to span availability zones. Supported scenarios include clusters created in regions that have three availability zones and clusters in regions where three availability zones are made available post-deployment.

Requirements:
-* Standard SKU cluster
+* Standard SKU cluster (see the fragment after this list).
* Three [availability zones in the region](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
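As a quick check for the Standard SKU requirement, the SKU appears on the managed cluster resource itself; schematically (the resource name and apiVersion here are placeholders, not values from this article):

```json
{
  "apiVersion": "2022-01-01",
  "type": "Microsoft.ServiceFabric/managedclusters",
  "name": "mysfcluster",
  "sku": {
    // the in-place migration requires the Standard SKU
    "name": "Standard"
  }
}
```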

>[!NOTE]
@@ -116,7 +116,7 @@ Requirements:
}
```

-If the Public IP resource is not zone resilient, migration of the cluster will cause a brief loss of external connectivity. This connection loss is due to the migration setting up new Public IP and updating the cluster Fully qualified domain name (FQDN) to the new IP. If the Public IP resource is zone resilient, migration will not modify the Public IP resource nor the FQDN, and there will be no external connectivity impact.
+If the Public IP resource isn't zone resilient, migration of the cluster will cause a brief loss of external connectivity. This connection loss is due to the migration setting up a new Public IP and updating the cluster fully qualified domain name (FQDN) to the new IP. If the Public IP resource is zone resilient, migration won't modify the Public IP resource or the FQDN, and there will be no external connectivity impact.
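For comparison, a zone-resilient Public IP is a Standard SKU resource spanning all three zones. A minimal sketch, with an assumed resource name and apiVersion:

```json
{
  "apiVersion": "2022-07-01",
  "type": "Microsoft.Network/publicIPAddresses",
  "name": "mysfcluster-pip",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "Standard"
  },
  // anchoring the IP to all three zones keeps it reachable during a single-zone outage
  "zones": [ "1", "2", "3" ],
  "properties": {
    "publicIPAllocationMethod": "Static"
  }
}
```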
2) Initiate conversion of the underlying storage account created for the managed cluster from locally redundant storage (LRS) to zone-redundant storage (ZRS) using [customer-initiated conversion](../storage/common/redundancy-migration.md#customer-initiated-conversion). The resource group of the storage account that needs to be migrated is of the form "SFC_ClusterId" (for example, SFC_9240df2f-71ab-4733-a641-53a8464d992d) under the same subscription as the managed cluster resource.
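After the conversion completes, the underlying storage account carries the zone-redundant SKU. Schematically, with a placeholder account name and apiVersion:

```json
{
  "apiVersion": "2023-01-01",
  "type": "Microsoft.Storage/storageAccounts",
  "name": "sfcstoragexxxx",
  "kind": "StorageV2",
  "sku": {
    // Standard_ZRS replicates data synchronously across three availability zones
    "name": "Standard_ZRS"
  }
}
```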
@@ -162,13 +162,13 @@ Requirements:
4) Scale Node types to add **Zonal** nodes and remove **Regional** nodes

-At this stage, the VMSS is marked as zone-resilient. So, when scaling up, newly added nodes will be zonal, and when scaling down, regional nodes will be removed. This approach provides the flexibility to scale in any order that aligns with your capacity requirements by adjusting the `vmInstanceCount` property on the node types.
+At this stage, the virtual machine scale set is marked as zone-resilient. So, when scaling up, newly added nodes will be zonal, and when scaling down, regional nodes will be removed. This approach provides the flexibility to scale in any order that aligns with your capacity requirements by adjusting the `vmInstanceCount` property on the node types.

-For example, if the initial vmInstanceCount is set to 6 (indicating 6 regional nodes), you can perform 2 deployments:
+For example, if the initial vmInstanceCount is set to 6 (indicating six regional nodes), you can perform two deployments (see the sketch after this list):
- First deployment: Increase the vmInstanceCount to 12 to add 6 **Zonal** nodes.
- Second deployment: Decrease the vmInstanceCount to 6 to remove all **Regional** nodes.
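Here's a rough sketch of the node type fragment those two deployments would adjust; only `vmInstanceCount` changes between them. The node type name, VM size, disk size, and apiVersion are illustrative assumptions:

```json
{
  "apiVersion": "2022-01-01",
  "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
  "name": "mysfcluster/nodetype1",
  "properties": {
    "isPrimary": true,
    "vmSize": "Standard_D2s_v3",
    "dataDiskSizeGB": 128,
    // first deployment: 6 -> 12 (adds six zonal nodes);
    // second deployment: 12 -> 6 (removes the six regional nodes)
    "vmInstanceCount": 12
  }
}
```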
-Throughout the process, you can check the `getazresiliencystatus` API to retrieve the progress status, as illustrated below. The process is considered complete once each node type has a minimum of 6 zonal nodes and 0 regional nodes.
+Throughout the process, you can check the `getazresiliencystatus` API to retrieve the progress status, as illustrated below. The process is considered complete once each node type has a minimum of six zonal nodes and zero regional nodes.

```json
{
@@ -204,7 +204,7 @@ Requirements:
5) Mark the cluster resilient to zone failures

-This step helps in future deployments, since it ensures all future deployments of node types span across availability zones and thus cluster remains tolerant to AZ failures. Set `zonalResiliency: true` in the cluster ARM template and do a deployment to mark cluster as zone resilient and ensure all new node type deployments span across availability zones. This update is only allowed if all node types have at least 6 zonal nodes and 0 regional nodes.
+This step helps in future deployments, since it ensures that all future deployments of node types span across availability zones, and thus the cluster remains tolerant to AZ failures. Set `zonalResiliency: true` in the cluster ARM template and run a deployment to mark the cluster as zone resilient and ensure that all new node type deployments span across availability zones. This update is only allowed if all node types have at least six zonal nodes and zero regional nodes.

```json
{
@@ -255,9 +255,9 @@ Requirements:
If you run into any problems, reach out to support for assistance.

## Enable FastZonalUpdate on Service Fabric managed clusters (preview)
-Service Fabric managed clusters support faster cluster and application upgrades by reducing the max upgrade domains per availability zone. The default configuration right now can have at most 15 upgrade domains (UDs) in multiple AZ nodetype. This huge number of UDs reduced the upgrade velocity. The new configuration reduces the max UDs, which results in faster updates, keeping the safety of the upgrades intact.
+Service Fabric managed clusters support faster cluster and application upgrades by reducing the max upgrade domains per availability zone. The default configuration can currently have at most 15 upgrade domains (UDs) in a multiple-AZ node type. This large number of UDs reduces the upgrade velocity. The new configuration reduces the max UDs, which results in faster updates while keeping the safety of the upgrades intact.

-The update should be done via ARM template by setting the zonalUpdateMode property to fast and then modifying a node type attribute, such as adding a node and then removing the node to each nodetype (see required steps 2 and 3). The Service Fabric managed cluster resource apiVersion should be 2022-10-01-preview or later.
+The update should be done via ARM template by setting the zonalUpdateMode property to "fast" and then modifying a node type attribute, such as adding a node to and then removing it from each node type (see required steps 2 and 3). The Service Fabric managed cluster resource apiVersion should be 2022-10-01-preview or later.

1. Modify the ARM template with the new property zonalUpdateMode.
```json
@@ -269,7 +269,7 @@ The update should be done via ARM template by setting the zonalUpdateMode proper
"properties": {
...
"zonalResiliency": true,
-"zonalUpdateMode": fast,
+"zonalUpdateMode": "fast",
...
}
}]
