Commit 93c5d56

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into fd-server

2 parents: c7b8359 + 95357e5

File tree: 5 files changed (+118, −11 lines)


articles/defender-for-cloud/concept-data-security-posture-prepare.md

Lines changed: 1 addition & 1 deletion

@@ -26,7 +26,7 @@ Sensitive data discovery is available in the Defender CSPM, Defender for Storage
 
 ## What's supported
 
-The table summarizes support for data-aware posture management.
+The table summarizes availability and supported scenarios for sensitive data discovery.
 
 |**Support** | **Details**|
 |--- | ---|

articles/network-watcher/traffic-analytics-schema.md

Lines changed: 7 additions & 7 deletions

@@ -6,7 +6,7 @@ author: halkazwini
 ms.author: halkazwini
 ms.service: network-watcher
 ms.topic: concept-article
-ms.date: 05/03/2024
+ms.date: 05/07/2024
 
 #CustomerIntent: As a administrator, I want learn about traffic analytics schema so I can easily use the queries and understand their output.
 ---
@@ -198,8 +198,8 @@ The following table lists the fields in the schema and what they signify for vir
 > | **DestPort** | Destination Port | Port at which traffic is incoming. |
 > | **L4Protocol** | - T <br> - U | Transport Protocol. **T** = TCP <br> **U** = UDP |
 > | **L7Protocol** | Protocol Name | Derived from destination port. |
-> | **FlowDirection** | - **I** = Inbound <br> - **O** = Outbound | Direction of the flow: in or out of the network security group per flow log. |
-> | **FlowStatus** | - **A** = Allowed <br> - **D** = Denied | Status of flow: allowed or denied by network security group per flow log. |
+> | **FlowDirection** | - **I** = Inbound <br> - **O** = Outbound | Direction of the flow: in or out of the target resource per flow log. |
+> | **FlowStatus** | - **A** = Allowed <br> - **D** = Denied | Status of flow: allowed or denied by target resource per flow log. |
 > | **NSGList** | \<SUBSCRIPTIONID\>/\<RESOURCEGROUP_NAME\>/\<NSG_NAME\> | Network security group associated with the flow. |
 > | **NSGRule** | NSG_RULENAME | Network security group rule that allowed or denied the flow. |
 > | **NSGRuleType** | - User Defined <br> - Default | The type of network security group rule used by the flow. |
@@ -232,10 +232,10 @@ The following table lists the fields in the schema and what they signify for vir
 > | **DeniedInFlows** | - | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). |
 > | **AllowedOutFlows** | - | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). |
 > | **DeniedOutFlows** | - | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). |
-> | **PacketsDestToSrc** | Represents packets sent from the destination to the source of the flow | Populated only for the Version 2 of network security group flow log schema. |
-> | **PacketsSrcToDest** | Represents packets sent from the source to the destination of the flow | Populated only for the Version 2 of network security group flow log schema. |
-> | **BytesDestToSrc** | Represents bytes sent from the destination to the source of the flow | Populated only for the Version 2 of network security group flow log schema. |
-> | **BytesSrcToDest** | Represents bytes sent from the source to the destination of the flow | Populated only for the Version 2 of network security group flow log schema. |
+> | **PacketsDestToSrc** | - | Represents packets sent from the destination to the source of the flow. |
+> | **PacketsSrcToDest** | - | Represents packets sent from the source to the destination of the flow. |
+> | **BytesDestToSrc** | - | Represents bytes sent from the destination to the source of the flow. |
+> | **BytesSrcToDest** | - | Represents bytes sent from the source to the destination of the flow. |
 > | **CompletedFlows** | - | Populated with nonzero value only for the Version 2 of network security group flow log schema. |
 > | **SrcPublicIPs** | \<SOURCE_PUBLIC_IP\>\|\<FLOW_STARTED_COUNT\>\|\<FLOW_ENDED_COUNT\>\|\<OUTBOUND_PACKETS\>\|\<INBOUND_PACKETS\>\|\<OUTBOUND_BYTES\>\|\<INBOUND_BYTES\> | Entries separated by bars. |
 > | **DestPublicIPs** | \<DESTINATION_PUBLIC_IP\>\|\<FLOW_STARTED_COUNT\>\|\<FLOW_ENDED_COUNT\>\|\<OUTBOUND_PACKETS\>\|\<INBOUND_PACKETS\>\|\<OUTBOUND_BYTES\>\|\<INBOUND_BYTES\> | Entries separated by bars. |

articles/virtual-machines/TOC.yml

Lines changed: 2 additions & 0 deletions

@@ -1256,6 +1256,8 @@
       displayName: Performance, cost, perf, disks, disk
     - name: Deploy a ZRS disk
       href: disks-deploy-zrs.md
+    - name: Best practices for achieving high availability
+      href: disks-high-availability.md
     - name: Share a disk between VMs
       items:
         - name: About shared disks
Lines changed: 105 additions & 0 deletions

@@ -0,0 +1,105 @@
---
title: Best practices for high availability with Azure VMs and managed disks
description: Learn the steps you can take to get the best availability with your Azure virtual machines and managed disks.
author: roygara
ms.author: rogarana
ms.date: 05/07/2024
ms.topic: conceptual
ms.service: azure-disk-storage
---

# Best practices for achieving high availability with Azure virtual machines and managed disks

Azure offers several configuration options for ensuring high availability of Azure virtual machines (VMs) and Azure managed disks. This article covers the default availability and durability of managed disks and provides recommendations to further increase your application's availability and resiliency.
## At a glance

|Configuration |Recommendation |Benefits |
|---------|---------|---------|
|[Applications running on a single VM](#recommendations-for-applications-running-on-a-single-vm) |[Use Ultra Disks, Premium SSD v2, or Premium SSD disks](#use-ultra-disks-premium-ssd-v2-or-premium-ssd). |Single VMs using only Ultra Disks, Premium SSD v2, or Premium SSD disks have the highest uptime service level agreement (SLA), and these disk types offer the best performance. |
| |[Use zone-redundant storage (ZRS) disks](#use-zone-redundant-storage-disks). |Access to your data even if an entire zone experiences an outage. |
|[Applications running on multiple VMs](#recommendations-for-applications-running-on-multiple-vms) |Deploy VMs and disks across multiple availability zones using a [zone-redundant Virtual Machine Scale Set with flexible orchestration mode](#use-zone-redundant-virtual-machine-scale-sets-with-flexible-orchestration) or by deploying VMs and disks across [three availability zones](#deploy-vms-and-disks-across-three-availability-zones). |Multiple VMs have the highest uptime SLA when deployed across multiple zones. |
| |Deploy VMs and disks across multiple fault domains with either [regional Virtual Machine Scale Sets with flexible orchestration mode](#use-regional-virtual-machine-scale-sets-with-flexible-orchestration) or [availability sets](#use-availability-sets). |Multiple VMs have the second highest uptime SLA when deployed across fault domains. |
| |[Use ZRS disks when sharing disks between VMs](#use-zrs-disks-when-sharing-disks-between-vms). |Prevents a shared disk from becoming a single point of failure. |
## Availability and durability of managed disks

Before going over recommendations for achieving higher availability, you should understand the default availability and durability of managed disks.

Managed disks are designed for 99.999% availability and provide at least 99.999999999% (11 9's) of durability. With managed disks, your data is replicated three times. If one of the three copies becomes unavailable, Azure automatically spawns a new copy of the data in the background. This ensures the persistence of your data and high fault tolerance.

Locally redundant storage (LRS) disks provide at least 99.999999999% (11 9's) of durability over a given year, and zone-redundant storage (ZRS) disks provide at least 99.9999999999% (12 9's) of durability over a given year. This architecture helps Azure consistently deliver enterprise-grade durability for infrastructure as a service (IaaS) disks, with an industry-leading zero percent [annualized failure rate](https://en.wikipedia.org/wiki/Annualized_failure_rate).

## Recommendations for applications running on a single VM

Legacy applications, traditional web servers, line-of-business applications, development and testing environments, and small workloads are all examples of applications that may run on a single VM. These applications can't benefit from replication across multiple VMs, but the data on the disks is still replicated three times, and you can take the following steps to further increase availability.

### Use Ultra Disks, Premium SSD v2, or Premium SSD

Single VMs using only [Ultra Disks](disks-types.md#ultra-disks), [Premium SSD v2](disks-types.md#premium-ssd-v2), or [Premium SSD disks](disks-types.md#premium-ssds) have the [highest single-VM uptime SLA](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1), and these disk types offer the best performance.

### Use zone-redundant storage disks

Zone-redundant storage (ZRS) disks synchronously replicate data across three availability zones, which are separated groups of data centers in a region that have independent power, cooling, and networking infrastructure. With ZRS disks, your data is accessible even in the event of a zonal outage. ZRS disks have limitations; see [Zone-redundant storage for managed disks](disks-redundancy.md#zone-redundant-storage-for-managed-disks) for details.
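As a minimal Azure CLI sketch of creating such a disk (the resource group, disk name, and size below are placeholder values, not from this article):

```shell
# Create a managed data disk backed by zone-redundant storage (ZRS).
# Premium_ZRS and StandardSSD_ZRS are the ZRS-backed disk SKUs;
# all names and sizes here are example values.
az disk create \
  --resource-group myResourceGroup \
  --name myZRSDisk \
  --size-gb 128 \
  --sku Premium_ZRS
```

The resulting disk can then be attached to a VM with `az vm disk attach`, the same as an LRS disk.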
## Recommendations for applications running on multiple VMs

Quorum-based applications, clustered databases (SQL, MongoDB), enterprise-grade web applications, and gaming applications are all examples of applications running on multiple VMs. Applications running on multiple VMs can designate a primary VM and multiple secondary VMs and replicate data across these VMs. This setup enables failover to a secondary VM if the primary VM goes down.

Multiple VMs have the highest uptime service level agreement (SLA) when deployed across multiple availability zones, and they have the second highest uptime SLA when deployed across multiple storage and compute fault domains.

### Deploy VMs and disks across multiple availability zones

Availability zones are separated groups of data centers within a region that have independent power, cooling, and networking infrastructure. They're close enough to have low-latency connections to other availability zones but far enough apart to reduce the possibility that more than one is affected by local outages or weather. See [What are availability zones?](../reliability/availability-zones-overview.md) for details.

Multiple VMs have the highest [SLA](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1) when deployed across three availability zones. For VMs and disks deployed across multiple availability zones, each disk is collocated in the same zone as its parent VM, which prevents multiple VMs from going down even if an entire zone experiences an outage. Availability zones aren't currently available in every region; see [Azure regions with availability zone support](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support).

VMs deployed across multiple availability zones may have higher network latency than VMs deployed in a single availability zone, which could be a concern for workloads that require ultra-low latency. If low latency is your top priority, consider the methods described in [Deploy VMs and disks across multiple fault domains](#deploy-vms-and-disks-across-multiple-fault-domains).

To deploy resources across availability zones, you can either use [zone-redundant Virtual Machine Scale Sets](#use-zone-redundant-virtual-machine-scale-sets-with-flexible-orchestration) or [deploy resources across availability zones](#deploy-vms-and-disks-across-three-availability-zones) manually.

#### Use zone-redundant Virtual Machine Scale Sets with flexible orchestration

[Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md) let you create and manage a group of load-balanced VMs. The number of VM instances can automatically adjust in response to demand or follow a schedule you define. A zone-redundant Virtual Machine Scale Set is a Virtual Machine Scale Set that has been deployed across multiple availability zones. See [Zone redundant or zone spanning](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-redundant-or-zone-spanning).

With zone-redundant Virtual Machine Scale Sets using the flexible orchestration mode, VMs and their disks are replicated to one or more zones within the region they're deployed in to improve the resiliency and availability of your applications and data. This configuration spreads VMs across the selected zones in a best-effort approach by default but also lets you specify strict zone balance in the deployment.
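One possible Azure CLI sketch of such a deployment, assuming placeholder names, image, and VM size:

```shell
# Create a Virtual Machine Scale Set in flexible orchestration mode,
# spread across three availability zones. The resource group, scale
# set name, image, and VM size are example values.
az vmss create \
  --resource-group myResourceGroup \
  --name myZonalScaleSet \
  --orchestration-mode Flexible \
  --zones 1 2 3 \
  --image Ubuntu2204 \
  --vm-sku Standard_D2s_v5 \
  --instance-count 3 \
  --admin-username azureuser \
  --generate-ssh-keys
```

Each instance's managed disks are created in the same zone as the instance itself.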
#### Deploy VMs and disks across three availability zones

Another method to spread VMs and disks across availability zones is to manually deploy the VMs and disks across three availability zones. This deployment provides redundancy in VMs and disks across multiple data centers in a region, allowing you to fail over to another zone if there's a data center or zonal outage.
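A minimal Azure CLI sketch of this manual pattern, with placeholder names and an example image:

```shell
# Create three VMs, one per availability zone. Each VM's OS disk is
# created in the same zone as its VM. Names and image are examples.
for zone in 1 2 3; do
  az vm create \
    --resource-group myResourceGroup \
    --name myVM$zone \
    --zone $zone \
    --image Ubuntu2204 \
    --admin-username azureuser \
    --generate-ssh-keys
done
```

Application-level replication between the three VMs (for example, database replication) is still your responsibility; Azure only provides the zonal placement.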
### Deploy VMs and disks across multiple fault domains

If you can't deploy your VMs and disks across availability zones or have ultra-low latency requirements, you can deploy them across fault domains instead. Fault domains define groups of VMs that share a common power source and network switch. For details, see [How do availability sets work?](availability-set-overview.md#how-do-availability-sets-work).

For VMs and disks deployed across fault domains via the following methods, the storage fault domains of the disks are aligned with the compute fault domains of their respective parent VMs, which prevents multiple VMs from going down if a single storage fault domain experiences an outage.

Multiple VMs have the second highest uptime SLA when deployed across fault domains. To learn more, see the Virtual Machines section of the [SLA](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1).

To deploy resources across multiple fault domains, you can either use [regional Virtual Machine Scale Sets](#use-regional-virtual-machine-scale-sets-with-flexible-orchestration) or [availability sets](#use-availability-sets).

#### Use regional Virtual Machine Scale Sets with flexible orchestration

A regional Virtual Machine Scale Set is a Virtual Machine Scale Set that has no explicitly defined availability zones. With regional Virtual Machine Scale Sets, VM resources are replicated across fault domains within the region they're deployed in to improve the resiliency and availability of applications and data. This configuration spreads VMs across fault domains by default but also lets you assign fault domains on VM creation. See [Regional scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#regional) for details.

Regional Virtual Machine Scale Sets don't protect against large-scale outages like a data center or region outage, and they don't currently support Ultra Disks or Premium SSD v2 disks.
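An Azure CLI sketch of a regional scale set spread across fault domains (names, image, size, and fault domain count are placeholder values):

```shell
# Create a regional flexible scale set: no --zones argument, so
# instances are spread across fault domains within the region.
# All names and sizes are example values.
az vmss create \
  --resource-group myResourceGroup \
  --name myRegionalScaleSet \
  --orchestration-mode Flexible \
  --platform-fault-domain-count 3 \
  --image Ubuntu2204 \
  --vm-sku Standard_D2s_v5 \
  --instance-count 3 \
  --admin-username azureuser \
  --generate-ssh-keys
```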
#### Use availability sets

[Availability sets](availability-set-overview.md) are logical groupings of VMs that place VMs in different fault domains to limit the chance of correlated failures bringing related VMs down at the same time. Availability sets also offer lower VM-to-VM latency than availability zones.

Availability sets don't let you select the fault domains for your VMs, can't be used with availability zones, don't protect against data center or region-wide outages, and don't currently support Ultra Disks or Premium SSD v2 disks.
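A hedged Azure CLI sketch of this approach (resource names, domain counts, and image are example values):

```shell
# Create an availability set, then place a VM in it. Azure assigns
# each VM in the set to a fault domain automatically.
az vm availability-set create \
  --resource-group myResourceGroup \
  --name myAvailabilitySet \
  --platform-fault-domain-count 2 \
  --platform-update-domain-count 5

az vm create \
  --resource-group myResourceGroup \
  --name myVM1 \
  --availability-set myAvailabilitySet \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```

Repeating the `az vm create` step for each VM in the set spreads the VMs, and their aligned disks, across the set's fault domains.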
### Use ZRS disks when sharing disks between VMs

You should use ZRS when sharing a disk between multiple VMs. If you use LRS, the shared disk becomes a single point of failure for your clustered application: if the shared LRS disk experiences an outage, all the VMs to which the disk is attached experience downtime. Using a ZRS disk mitigates this risk, because the disk's data is stored in three different availability zones. To learn more about shared disks, see [Share an Azure managed disk](disks-shared.md).
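As a sketch of creating a shared ZRS disk with Azure CLI (the names, size, and share count below are placeholder values):

```shell
# Create a ZRS data disk that can be attached to two VMs at once.
# --max-shares enables the shared-disk feature; names and sizes
# are example values.
az disk create \
  --resource-group myResourceGroup \
  --name mySharedZRSDisk \
  --size-gb 1024 \
  --sku Premium_ZRS \
  --max-shares 2
```

The disk can then be attached to each clustered VM in turn with `az vm disk attach`.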
## Next steps

- [Zone-redundant storage for managed disks](disks-redundancy.md#zone-redundant-storage-for-managed-disks)
- [What are availability zones?](../reliability/availability-zones-overview.md)
- [Create a Virtual Machine Scale Set that uses Availability Zones](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md)

includes/expressroute-gateway-performance-include.md

Lines changed: 3 additions & 3 deletions

@@ -5,7 +5,7 @@
 author: duongau
 ms.service: expressroute
 ms.topic: include
-ms.date: 02/26/2024
+ms.date: 05/7/2024
 ms.author: duau
 ms.custom: include file
 ---
@@ -30,8 +30,8 @@ This table applies to both the Azure Resource Manager and classic deployment mod
 
 | Gateway SKU | Connections per second | Mega-Bits per second | Packets per second | Supported number of VMs in the virtual network <sup>1</sup> | Flow count limit |
 |--|--|--|--|--|--|
-| **Standard/ERGw1Az** | 7,000 | 1,000 | 100,000 | 2,000 | 100,000 |
-| **High Performance/ERGw2Az** | 14,000 | 2,000 | 200,000 | 4,500 | 200,000 |
+| **Standard/ERGw1Az** | 7,000 | 1,000 | 100,000 | 2,000 | 200,000 |
+| **High Performance/ERGw2Az** | 14,000 | 2,000 | 200,000 | 4,500 | 400,000 |
 | **Ultra Performance/ErGw3Az** | 16,000 | 10,000 | 1,000,000 | 11,000 | 1,000,000 |
 | **ErGwScale (per scale unit)** | N/A | 1,000 | 100,000 | 2,000 | 100,000 per scale unit |
