Commit fd41003

Merge pull request #204778 from tomvcassidy/metadata2
Mass Metadata Changes - SF
2 parents cdef159 + 240751f commit fd41003

121 files changed: +888 additions, -528 deletions

articles/service-fabric/service-fabric-cluster-nodetypes.md

Lines changed: 7 additions & 3 deletions

@@ -2,9 +2,13 @@
 title: Node types and virtual machine scale sets
 description: Learn how Azure Service Fabric node types relate to virtual machine scale sets and how to remotely connect to a scale set instance or cluster node.
 ms.topic: conceptual
-ms.date: 03/23/2018
-ms.author: pepogors
+ms.author: tomcassidy
+author: tomvcassidy
+ms.service: service-fabric
+services: service-fabric
+ms.date: 07/14/2022
 ---
+
 # Azure Service Fabric node types and virtual machine scale sets

 [Virtual machine scale sets](../virtual-machine-scale-sets/index.yml) are an Azure compute resource. You can use scale sets to deploy and manage a collection of virtual machines as a set. Each node type that you define in an Azure Service Fabric cluster sets up exactly one scale set: multiple node types cannot be backed by the same scale set and one node type should not be backed by multiple scale sets.

@@ -78,7 +82,7 @@ The following are the property descriptions:
 | nicPrefixOverride | string | Subnet Prefix like "10.0.0.0/24" |
 | commonNames | string[] | Common Names of installed cluster certificates |
 | x509StoreName | string | Name of Store where installed cluster certificate is located |
-| typeHandlerVersion | 1.1 | Version of Extension. 1.0 classic version of extension are recommended to upgrade to 1.1 |
+| typeHandlerVersion | 1.1 | Version of Extension. 1.0 classic versions of extension are recommended to upgrade to 1.1 |
 | dataPath | string | Path to the drive used to save state for Service Fabric system services and application data.

 ## Next steps
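
The files shown in this commit receive similar treatment: the per-article author metadata is replaced with a common block. For reference, the front matter of the file above would read roughly as follows once the hunks are applied (a reconstruction assembled from the unchanged and added lines of the first hunk, not a verbatim copy of the repository file):

---
# Reconstructed from the hunk above; values come from the unchanged and added (+) lines
title: Node types and virtual machine scale sets
description: Learn how Azure Service Fabric node types relate to virtual machine scale sets and how to remotely connect to a scale set instance or cluster node.
ms.topic: conceptual
ms.author: tomcassidy
author: tomvcassidy
ms.service: service-fabric
services: service-fabric
ms.date: 07/14/2022
---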

articles/service-fabric/service-fabric-cluster-programmatic-scaling.md

Lines changed: 6 additions & 6 deletions

@@ -1,12 +1,12 @@
 ---
 title: Azure Service Fabric Programmatic Scaling
 description: Scale an Azure Service Fabric cluster in or out programmatically, according to custom triggers
-author: mjrousos
-
-ms.topic: conceptual
-ms.date: 01/23/2018
-ms.author: mikerou
-ms.custom: devx-track-csharp, devx-track-azurepowershell
+ms.topic: how-to
+ms.author: tomcassidy
+author: tomvcassidy
+ms.service: service-fabric
+services: service-fabric
+ms.date: 07/14/2022
 ---

 # Scale a Service Fabric cluster programmatically

articles/service-fabric/service-fabric-cluster-region-move.md

Lines changed: 9 additions & 6 deletions

@@ -1,9 +1,12 @@
 ---
 title: Move an Azure Service Fabric cluster to a new region
 description: How to migrate an Azure Service Fabric cluster and applications to another region.
-ms.topic: conceptual
-ms.date: 07/20/2021
-ms.author: micraft
+ms.topic: how-to
+ms.author: tomcassidy
+author: tomvcassidy
+ms.service: service-fabric
+services: service-fabric
+ms.date: 07/14/2022
 ---

 # Move an Azure Service Fabric cluster to a new region

@@ -33,20 +36,20 @@ Before engaging in any regional migration, we recommend establishing a testbed a
 - For all services:
   * <p>Ensure that any communication stages between clients and the services are configured similarly to the source cluster. For example, this validation may include ensuring that intermediaries like Event Hubs, Network Load Balancers, App Gateways, or API Management are set up with the rules necessary to allow traffic to flow to the cluster.</p>

-3. Redirect traffic from the old region to the new region. We recommend using [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) for migration as it offers a range of [routing methods](../traffic-manager/traffic-manager-routing-methods.md). How exactly you update your traffic routing rules will depend on whether you desire to keep the existing region or deprecate it, and will also depend on how traffic flows within your application. You may need to investigate whether private/public IPs or DNS names can be moved between different Azure resources in different regions. Service Fabric is not aware of this part of your system, so please investigate and if necessary involve the Azure teams involved in your traffic flow, particularly if it is more complex or if your workload is latency-critical. Documents such as [Configure Custom Domain](../api-management/configure-custom-domain.md), [Public IP Addresses](../virtual-network/ip-services/public-ip-addresses.md), and [DNS Zones and Records](../dns/dns-zones-records.md) may be useful, and are examples of the information you will need depending on your traffic flows and protocols. Here are two example scenarios demonstrating how one could approach updating traffic routing:
+3. Redirect traffic from the old region to the new region. We recommend using [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) for migration as it offers a range of [routing methods](../traffic-manager/traffic-manager-routing-methods.md). How exactly you update your traffic routing rules will depend on whether you desire to keep the existing region or deprecate it, and will also depend on how traffic flows within your application. You may need to investigate whether private/public IPs or DNS names can be moved between different Azure resources in different regions. Service Fabric is not aware of this part of your system, so please investigate and if necessary involve the Azure teams involved in your traffic flow, particularly if it is more complex or if your workload is latency-critical. Documents such as [Configure Custom Domain](../api-management/configure-custom-domain.md), [Public IP Addresses](../virtual-network/ip-services/public-ip-addresses.md), and [DNS Zones and Records](../dns/dns-zones-records.md) may be useful to review. Here are two example scenarios demonstrating how one could approach updating traffic routing:
   * If you do not plan to keep the existing source region and you have a DNS/CNAME associated with the public IP of a Network Load Balancer that is delivering calls to your original source cluster. Update the DNS/CNAME to be associated with a new public IP of the new network load balancer in the new region. Completing that transfer would cause clients using the existing cluster to switch to using the new cluster.

   * If you do plan to keep the existing source region and you have a DNS/CNAME associated with the public IP of a Network Load Balancer that was delivering calls to your original source cluster. Set up an instance of Azure Traffic Manager and then associate the DNS name with that Azure Traffic Manager Instance. The Azure Traffic Manager could be configured to then route to the individual Network Load Balancers within each region.

-4. If you do plan to keep both regions, then you will usually have some sort of “back sync”, where the source of truth is kept in some remote store, such as SQL, CosmosDB, or Blob or File Storage, which is then synced between the regions. If this applies to your workload, then it is recommended to confirm that data is flowing between the regions as expected.
+4. If you do plan to keep both regions, then you will usually have some sort of “back sync”, where the source of truth is kept in some remote store, such as SQL, Cosmos DB, or Blob or File Storage, which is then synced between the regions. If this applies to your workload, then it is recommended to confirm that data is flowing between the regions as expected.

 ## Final Validation
 1. As a final validation, verify that traffic is flowing as expected and that the services in the new region (and potentially the old region) are operating as expected.

 2. If you do not plan to keep the original source region, then at this point the resources in that region can be removed. We recommend waiting for some time before deleting resources, in case some issue is discovered that requires a rollback to the original source region.

 ## Next Steps
-Now that you've moved your cluster and applications to a new region you should validate backups are setup to protect any required data.
+Now that you've moved your cluster and applications to a new region you should validate backups are set up to protect any required data.

 > [!div class="nextstepaction"]
 > [Set up backups after migration](service-fabric-backuprestoreservice-quickstart-azurecluster.md)

articles/service-fabric/service-fabric-cluster-remote-connect-to-azure-cluster-node.md

Lines changed: 7 additions & 3 deletions

@@ -1,10 +1,14 @@
 ---
 title: Remote connect to an Azure Service Fabric cluster node
 description: Learn how to remotely connect to a scale set instance (a Service Fabric cluster node).
-
-ms.topic: conceptual
-ms.date: 03/23/2018
+ms.topic: how-to
+ms.author: tomcassidy
+author: tomvcassidy
+ms.service: service-fabric
+services: service-fabric
+ms.date: 07/14/2022
 ---
+
 # Remote connect to a virtual machine scale set instance or a cluster node
 In a Service Fabric cluster running in Azure, each cluster node type that you define [sets up a separate virtual machine scale set](service-fabric-cluster-nodetypes.md). You can remote connect to specific scale set instances (cluster nodes). Unlike single-instance VMs, scale set instances don't have their own virtual IP addresses. This can be challenging when you are looking for an IP address and port that you can use to remotely connect to a specific instance.

articles/service-fabric/service-fabric-cluster-resource-manager-advanced-placement-rules-affinity.md

Lines changed: 6 additions & 6 deletions

@@ -1,15 +1,15 @@
 ---
 title: Service Fabric Cluster Resource Manager - Affinity
 description: Overview of service affinity for Azure Service Fabric services and guidance on service affinity configuration.
-services: service-fabric
 documentationcenter: .net
-author: masnider
-
 ms.topic: conceptual
-ms.date: 08/18/2017
-ms.author: masnider
-ms.custom: devx-track-csharp
+ms.author: tomcassidy
+author: tomvcassidy
+ms.service: service-fabric
+services: service-fabric
+ms.date: 07/14/2022
 ---
+
 # Configuring and using service affinity in Service Fabric
 Affinity is a control that is provided mainly to help ease the transition of larger monolithic applications into the cloud and microservices world. It is also used as an optimization for improving the performance of services, although doing so can have side effects.

articles/service-fabric/service-fabric-cluster-resource-manager-advanced-placement-rules-placement-policies.md

Lines changed: 7 additions & 6 deletions

@@ -1,13 +1,14 @@
 ---
 title: Service Fabric Cluster Resource Manager - Placement Policies
 description: Overview of additional placement policies and rules for Service Fabric Services
-author: masnider
-
 ms.topic: conceptual
-ms.date: 08/18/2017
-ms.author: masnider
-ms.custom: devx-track-csharp
+ms.author: tomcassidy
+author: tomvcassidy
+ms.service: service-fabric
+services: service-fabric
+ms.date: 07/14/2022
 ---
+
 # Placement policies for service fabric services
 Placement policies are additional rules that can be used to govern service placement in some specific, less-common scenarios. Some examples of those scenarios are:

@@ -99,7 +100,7 @@ Replicas are _normally_ distributed across fault and upgrade domains when the cl
 > For more information on constraints and constraint priorities generally, check out [this topic](service-fabric-cluster-resource-manager-management-integration.md#constraint-priorities).
 >

-If you've ever seen a health message such as "`The Load Balancer has detected a Constraint Violation for this Replica:fabric:/<some service name> Secondary Partition <some partition ID> is violating the Constraint: FaultDomain`", then you've hit this condition or something like it. Usually only one or two replicas are packed together temporarily. So long as there are fewer than a quorum of replicas in a given domain, you're safe. Packing is rare, but it can happen, and usually these situations are transient since the nodes come back. If the nodes do stay down and the Cluster Resource Manager needs to build replacements, usually there are other nodes available in the ideal fault domains.
+If you've ever seen a health message such as "`The Load Balancer has detected a Constraint Violation for this Replica:fabric:/<some service name> Secondary Partition <some partition ID> is violating the Constraint: FaultDomain`", then you've hit this condition or something like it. Usually only one or two replicas are packed together temporarily. So long as there is fewer than a quorum of replicas in a given domain, you're safe. Packing is rare, but it can happen, and usually these situations are transient since the nodes come back. If the nodes do stay down and the Cluster Resource Manager needs to build replacements, usually there are other nodes available in the ideal fault domains.

 Some workloads would prefer always having the target number of replicas, even if they are packed into fewer domains. These workloads are betting against total simultaneous permanent domain failures and can usually recover local state. Other workloads would rather take the downtime earlier than risk correctness or loss of data. Most production workloads run with more than three replicas, more than three fault domains, and many valid nodes per fault domain. Because of this, the default behavior allows domain packing by default. The default behavior allows normal balancing and failover to handle these extreme cases, even if that means temporary domain packing.

articles/service-fabric/service-fabric-cluster-resource-manager-advanced-throttling.md

Lines changed: 5 additions & 4 deletions

@@ -1,11 +1,12 @@
 ---
 title: Throttling in the Service Fabric cluster resource manager
 description: Learn to configure the throttles provided by the Service Fabric Cluster Resource Manager.
-author: masnider
-
 ms.topic: conceptual
-ms.date: 08/18/2017
-ms.author: masnider
+ms.author: tomcassidy
+author: tomvcassidy
+ms.service: service-fabric
+services: service-fabric
+ms.date: 07/14/2022
 ---

 # Throttling the Service Fabric Cluster Resource Manager

articles/service-fabric/service-fabric-cluster-resource-manager-application-groups.md

Lines changed: 6 additions & 4 deletions

@@ -1,12 +1,14 @@
 ---
 title: Service Fabric Cluster Resource Manager - Application Groups
 description: Overview of the Application Group functionality in the Service Fabric Cluster Resource Manager
-author: masnider
-
 ms.topic: conceptual
-ms.date: 08/18/2017
-ms.author: masnider
+ms.author: tomcassidy
+author: tomvcassidy
+ms.service: service-fabric
+services: service-fabric
+ms.date: 07/14/2022
 ---
+
 # Introduction to Application Groups
 Service Fabric's Cluster Resource Manager typically manages cluster resources by spreading the load (represented via [Metrics](service-fabric-cluster-resource-manager-metrics.md)) evenly throughout the cluster. Service Fabric manages the capacity of the nodes in the cluster and the cluster as a whole via [capacity](service-fabric-cluster-resource-manager-cluster-description.md). Metrics and capacity work great for many workloads, but patterns that make heavy use of different Service Fabric Application Instances sometimes bring in additional requirements. For example you may want to:

articles/service-fabric/service-fabric-cluster-resource-manager-architecture.md

Lines changed: 6 additions & 4 deletions

@@ -1,12 +1,14 @@
 ---
 title: Resource Manager Architecture
 description: An overview of and architectural information about the Azure Service Fabric Cluster Resource Manager service.
-author: masnider
-
 ms.topic: conceptual
-ms.date: 08/18/2017
-ms.author: masnider
+ms.author: tomcassidy
+author: tomvcassidy
+ms.service: service-fabric
+services: service-fabric
+ms.date: 07/14/2022
 ---
+
 # Cluster resource manager architecture overview
 The Service Fabric Cluster Resource Manager is a central service that runs in the cluster. It manages the desired state of the services in the cluster, particularly with respect to resource consumption and any placement rules.

articles/service-fabric/service-fabric-cluster-resource-manager-autoscaling.md

Lines changed: 6 additions & 5 deletions

@@ -1,13 +1,14 @@
 ---
 title: Azure Service Fabric Auto Scaling Services and Containers
 description: Azure Service Fabric allows you to set auto scaling policies for services and containers.
-author: radicmilos
-
 ms.topic: conceptual
-ms.date: 04/17/2018
-ms.author: miradic
-ms.custom: devx-track-csharp
+ms.author: tomcassidy
+author: tomvcassidy
+ms.service: service-fabric
+services: service-fabric
+ms.date: 07/14/2022
 ---
+
 # Introduction to Auto Scaling
 Auto scaling is an additional capability of Service Fabric to dynamically scale your services based on the load that services are reporting, or based on their usage of resources. Auto scaling gives great elasticity and enables provisioning of additional instances or partitions of your service on demand. The entire auto scaling process is automated and transparent, and once you set up your policies on a service there is no need for manual scaling operations at the service level. Auto scaling can be turned on either at service creation time, or at any time by updating the service.
