Commit 3cc84e0

committed
fixing broken bookmarks
1 parent 16b861f commit 3cc84e0

4 files changed: +14 −14 lines changed


articles/azure-vmware/azure-vmware-solution-known-issues.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ Refer to the table to find details about resolution dates or possible workaround
 | [VMSA-2024-0006](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24266) ESXi Use-after-free and Out-of-bounds write vulnerability | March 2024 | For ESXi 7.0, Microsoft worked with Broadcom on an AVS specific hotfix as part of the [ESXi 7.0U3o](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/release-notes/esxi-update-and-patch-release-notes/vsphere-esxi-70u3o-release-notes.html) rollout. For the 8.0 rollout, Azure VMware Solution is deploying [vCenter Server 8.0 U2b & ESXi 8.0 U2b](architecture-private-clouds.md#vmware-software-versions) which is not vulnerable. | August 2024 - Resolved in [ESXi 7.0U3o](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/release-notes/esxi-update-and-patch-release-notes/vsphere-esxi-70u3o-release-notes.html) and [vCenter Server 8.0 U2b & ESXi 8.0 U2b](architecture-private-clouds.md#vmware-software-versions) |
 | VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://knowledge.broadcom.com/external/article?legacyId=96352) | Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you're using NE appliances in a HA configuration. | Feb 2024 - Resolved in [VMware HCX 4.8.2](https://techdocs.broadcom.com/us/en/vmware-cis/hcx/vmware-hcx/4-8/hcx-48-release-notes/Chunk371499710.html#Chunk371499710) |
 | When I run the VMware HCX Service Mesh Diagnostic wizard, all diagnostic tests will be passed (green check mark), yet failed probes will be reported. See [HCX - Service Mesh diagnostics test returns 2 failed probes](https://knowledge.broadcom.com/external/article?legacyId=96708) | 2024 | Fixed in 4.9+. | Resolved in [HCX 4.9.2](https://docs.vmware.com/en/VMware-HCX/4.9.2/rn/vmware-hcx-492-release-notes/index.html#What's%20New) |
-| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | The AV64 SKU now supports 7 Fault Domains and all vSAN storage policies. For more information, see [AV64 supported Azure regions](architecture-private-clouds.md#azure-region-availability-zone-az-to-sku-mapping-table) | June 2024 |
+| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | The AV64 SKU now supports 7 Fault Domains and all vSAN storage policies. For more information, see [AV64 supported Azure regions](architecture-private-clouds.md#azure-region-availability-zone-to-version-mapping-table) | June 2024 |
 | [VMSA-2023-023](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/23677) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | A risk assessment of CVE-2023-03048 was conducted and it was determined that sufficient controls are in place within Azure VMware Solution to reduce the risk of CVE-2023-03048 from a CVSS Base Score of 9.8 to an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 aren't exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. Azure VMware Solution is currently rolling out [7.0U3o](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/release-notes/vcenter-server-update-and-patch-releases/vsphere-vcenter-server-70u3o-release-notes.html) to address this issue. | March 2024 - Resolved in [ESXi 7.0U3o](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/release-notes/esxi-update-and-patch-release-notes/vsphere-esxi-70u3o-release-notes.html) |
 | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://techdocs.broadcom.com/us/en/vmware-cis/nsx/vmware-nsx/3-2/release-notes/vmware-nsxt-data-center-322-release-notes.html), the NSX-T Manager **DNS - Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](architecture-design-public-internet-access.md), alarm is raised because NSX-T Manager can't access the configured CloudFlare DNS server. Otherwise, [change the default DNS zone to point to a valid and reachable DNS server.](configure-dns-azure-vmware-solution.md) | February 2023 |
 | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://techdocs.broadcom.com/us/en/vmware-cis/nsx/vmware-nsx/3-2/release-notes/vmware-nsxt-data-center-322-release-notes.html), the NSX-T Manager **Capacity - Maximum Capacity Threshold** alarm is raised | 2023 | Alarm raised because there are more than four clusters in the private cloud with the medium form factor for the NSX-T Data Center Unified Appliance. The form factor needs to be scaled up to large. This issue should get detected through Microsoft, however you can also open a support request. | 2023 |

articles/azure-vmware/includes/azure-vmware-solutions-limits.md

Lines changed: 3 additions & 3 deletions
@@ -22,12 +22,12 @@ The following table describes the maximum limits for Azure VMware Solution.
 | Maximum number of vCenter Servers per private cloud | 1 (hard limit) |
 | Maximum number of HCX site pairings | 25 (any edition) |
 | Maximum number of HCX service meshes | 10 (any edition) |
-| Maximum number of Azure VMware Solution private clouds linked Azure ExpressRoute from a single location to a single virtual network gateway | 4<br />The virtual network gateway used determines the actual maximum number of linked private clouds. For more information, see [About ExpressRoute virtual network gateways](../../expressroute/expressroute-about-virtual-network-gateways.md)<br />If you exceed this threshold, use [Azure VMware Solution Interconnect](../connect-multiple-private-clouds-same-region.md) to aggregate private cloud connectivity within the Azure region. |
-| Maximum Azure VMware Solution ExpressRoute throughput | 10 Gbps (use Ultra Performance Gateway version with FastPath enabled)**<br />The virtual network gateway that's used determines the actual bandwidth. For more information, see [About ExpressRoute virtual network gateways](../../expressroute/expressroute-about-virtual-network-gateways.md)<br />Azure VMware Solution ExpressRoutes don't have any port speed limitations and perform above 10 Gbps. Rates over 10 Gbps aren't guaranteed because of quality of service.|
+| Maximum number of Azure VMware Solution private clouds linked Azure ExpressRoute from a single location to a single virtual network gateway | 4<br />The virtual network gateway used determines the actual maximum number of linked private clouds. For more information, see [About ExpressRoute virtual network gateways](../../expressroute/expressroute-about-virtual-network-gateways.md).<br />If you exceed this threshold, use [Azure VMware Solution Interconnect](../connect-multiple-private-clouds-same-region.md) to aggregate private cloud connectivity within the Azure region. |
+| Maximum Azure VMware Solution ExpressRoute throughput | 10 Gbps. (Use Ultra Performance Gateway version with FastPath enabled.)**<br />The virtual network gateway that's used determines the actual bandwidth. For more information, see [About ExpressRoute virtual network gateways](../../expressroute/expressroute-about-virtual-network-gateways.md).<br />Azure VMware Solution ExpressRoutes don't have any port speed limitations and perform above 10 Gbps. Rates over 10 Gbps aren't guaranteed because of quality of service.|
 | Maximum number of Azure Public IPv4 addresses assigned to NSX | 2,000 |
 | Maximum number of Azure VMware Solution Interconnects per private cloud | 10 |
 | Maximum number of Azure ExpressRoute Global Reach connections per Azure VMware Solution private cloud | 8 |
-| vSAN capacity limits | 75% of total usable (keep 25% available for service-level agreement) |
+| vSAN capacity limits | 75% of total usable. (Keep 25% available for service-level agreement.) |
 | VMware Site Recovery Manager - Maximum number of protected virtual machines | 3,000 |
 | VMware Site Recovery Manager - Maximum number of virtual machines per recovery plan | 2,000 |
 | VMware Site Recovery Manager - Maximum number of protection groups per recovery plan | 250 |
Lines changed: 8 additions & 8 deletions
@@ -1,5 +1,5 @@
 ---
-title: Hardware Specifications of the Hosts
+title: Hardware specifications of the hosts
 description: Hosts used to build or scale clusters come from an isolated pool of hosts.
 ms.topic: include
 ms.service: azure-vmware
@@ -15,17 +15,17 @@ Azure VMware Solution clusters are based on a hyperconverged infrastructure. The
 
 | Host type | CPU (cores/GHz) | RAM (GB) | vSAN cache tier (TB, raw***) | vSAN capacity tier (TB, raw***) | Regional availability |
 | :--- | :---: | :---: | :---: | :---: | :---: |
-| AV36 | Dual Intel Xeon Gold 6140 CPUs (Skylake microarchitecture) with 18 cores/CPU @ 2.3 GHz, total 36 physical cores (72 logical cores with hyperthreading) | 576 | 3.2 (NVMe) | 15.20 (SSD) | Selected regions (*) |
-| AV36P | Dual Intel Xeon Gold 6240 CPUs (Cascade Lake microarchitecture) with 18 cores/CPU @ 2.6 GHz/3.9 GHz Turbo, total 36 physical cores (72 logical cores with hyperthreading) | 768 | 1.5 (Intel cache) | 19.20 (NVMe) | Selected regions (*) |
-| AV52 | Dual Intel Xeon Platinum 8270 CPUs (Cascade Lake microarchitecture) with 26 cores/CPU @ 2.7 GHz/4.0 GHz Turbo, total 52 physical cores (104 logical cores with hyperthreading) | 1,536 | 1.5 (Intel cache) | 38.40 (NVMe) | Selected regions (*) |
-| AV64 | Dual Intel Xeon Platinum 8370C CPUs (Ice Lake microarchitecture) with 32 cores/CPU @ 2.8 GHz/3.5 GHz Turbo, total 64 physical cores (128 logical cores with hyperthreading) | 1,024 | 3.84 (NVMe) | 15.36 (NVMe) | Selected regions (**) |
+| AV36 | Dual Intel Xeon Gold 6140 CPUs (Skylake microarchitecture) with 18 cores/CPU @ 2.3 GHz, total 36 physical cores (72 logical cores with hyperthreading) | 576 | 3.2 (NVMe) | 15.20 (SSD) | Selected regions* |
+| AV36P | Dual Intel Xeon Gold 6240 CPUs (Cascade Lake microarchitecture) with 18 cores/CPU @ 2.6 GHz/3.9 GHz Turbo, total 36 physical cores (72 logical cores with hyperthreading) | 768 | 1.5 (Intel cache) | 19.20 (NVMe) | Selected regions* |
+| AV52 | Dual Intel Xeon Platinum 8270 CPUs (Cascade Lake microarchitecture) with 26 cores/CPU @ 2.7 GHz/4.0 GHz Turbo, total 52 physical cores (104 logical cores with hyperthreading) | 1,536 | 1.5 (Intel cache) | 38.40 (NVMe) | Selected regions* |
+| AV64 | Dual Intel Xeon Platinum 8370C CPUs (Ice Lake microarchitecture) with 32 cores/CPU @ 2.8 GHz/3.5 GHz Turbo, total 64 physical cores (128 logical cores with hyperthreading) | 1,024 | 3.84 (NVMe) | 15.36 (NVMe) | Selected regions** |
 
 An Azure VMware Solution cluster requires a minimum number of three hosts. You can use hosts of the same type only in a single Azure VMware Solution private cloud. Hosts used to build or scale clusters come from an isolated pool of hosts. Those hosts passed hardware tests and had all data securely deleted before being added to a cluster.
 
 All of the preceding host types have 100-Gbps network interface throughput.
 
-(*) Details are available via the Azure pricing calculator.
+*Details are available via the Azure pricing calculator.
 
-(**) AV64 prerequisite: An Azure VMware Solution private cloud deployed with AV36, AV36P, or AV52 is required before adding AV64.
+**AV64 prerequisite: An Azure VMware Solution private cloud deployed with AV36, AV36P, or AV52 is required before adding AV64.
 
-(***) Raw is based on [International Standard of Units (SI)](https://en.wikipedia.org/wiki/International_System_of_Units) reported by disk manufacturers. Example: 1 TB Raw = 1000000000000 bytes. Space calculated by a computer in binary (1 TB binary = 1099511627776 bytes binary) equals 931.3 gigabytes converted from the raw decimal.
+***Raw is based on [International Standard of Units (SI)](https://en.wikipedia.org/wiki/International_System_of_Units) reported by disk manufacturers. Example: 1 TB Raw = 1000000000000 bytes. Space calculated by a computer in binary (1 TB binary = 1099511627776 bytes binary) equals 931.3 gigabytes converted from the raw decimal.
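The raw-versus-binary capacity footnote above can be checked with a quick calculation (a minimal sketch; the constants follow the SI and binary definitions quoted in the diff):

```python
# SI (decimal) terabyte, as disk manufacturers report capacity
SI_TB = 10**12             # 1 TB raw = 1,000,000,000,000 bytes
GIB = 2**30                # 1 binary gigabyte = 1,073,741,824 bytes

# Convert 1 TB raw (decimal) into binary gigabytes
binary_gb = SI_TB / GIB
print(f"{binary_gb:.1f}")  # -> 931.3, matching the footnote
```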

articles/azure-vmware/introduction.md

Lines changed: 2 additions & 2 deletions
@@ -60,7 +60,7 @@ When a customer has a deployed Azure VMware Solution private cloud, they can sca
 
 The traditional Azure VMware Solution host clusters don't have explicit vSAN FD configuration. The reasoning is the host allocation logic ensures, within clusters, that no two hosts reside in the same physical fault domain within an Azure region. This feature inherently brings resilience and high availability for storage, which the vSAN FD configuration is supposed to bring. More information on vSAN FD can be found in the [VMware documentation](https://techdocs.broadcom.com/us/en/vmware-cis/vsan/vsan/8-0/vsan-administration/expanding-and-managing-a-vsan-cluster/managing-fault-domains-in-vsan-clusters.html).
 
-The Azure VMware Solution AV64 host clusters have an explicit vSAN fault domain (FD) configuration. Azure VMware Solution control plane configures seven vSAN fault domains (FDs) for AV64 clusters. Hosts are balanced evenly across the seven FDs as users scale up the hosts in a cluster from three nodes to 16 nodes. Some Azure regions still support a maximum of five FDs as part of the initial release of the AV64 SKU. Refer to the [Azure Region Availability Zone (AZ) to SKU mapping table](architecture-private-clouds.md#azure-region-availability-zone-az-to-sku-mapping-table) for more information.
+The Azure VMware Solution AV64 host clusters have an explicit vSAN fault domain (FD) configuration. Azure VMware Solution control plane configures seven vSAN fault domains (FDs) for AV64 clusters. Hosts are balanced evenly across the seven FDs as users scale up the hosts in a cluster from three nodes to 16 nodes. Some Azure regions still support a maximum of five FDs as part of the initial release of the AV64 SKU. Refer to the [Azure Region Availability Zone (AZ) to SKU mapping table](architecture-private-clouds.md#azure-region-availability-zone-to-version-mapping-table) for more information.
 
 ### Cluster size recommendation
 
@@ -93,7 +93,7 @@ The following three scenarios show examples of instances that normally error out
 
 ### AV64 supported RAID configuration
 
-This table provides the list of RAID configuration supported and host requirements in AV64 clusters. The RAID-6 FTT2 and RAID-1 FTT3 policies are supported with the AV64 SKU in some regions. In Azure regions that are currently constrained to five FDs, Microsoft allows customers to use the RAID-5 FTT1 vSAN storage policy for AV64 clusters with six or more nodes to meet the service level agreement (SLA). Refer to the [Azure Region Availability Zone (AZ) to SKU mapping table](architecture-private-clouds.md#azure-region-availability-zone-az-to-sku-mapping-table) for more information.
+This table provides the list of RAID configuration supported and host requirements in AV64 clusters. The RAID-6 FTT2 and RAID-1 FTT3 policies are supported with the AV64 SKU in some regions. In Azure regions that are currently constrained to five FDs, Microsoft allows customers to use the RAID-5 FTT1 vSAN storage policy for AV64 clusters with six or more nodes to meet the service level agreement (SLA). Refer to the [Azure Region Availability Zone (AZ) to SKU mapping table](architecture-private-clouds.md#azure-region-availability-zone-to-version-mapping-table) for more information.
 
 | RAID configuration | Failures to tolerate (FTT) | Minimum hosts required |
 |-------------------|--------------------------|------------------------|
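The RAID table referenced above relates each storage policy to a minimum host count. As an illustrative aid only (these are the general vSAN minimums, not an Azure VMware Solution commitment; as the changed text notes, five-FD regions need six or more nodes for RAID-5 FTT1 to meet the SLA):

```python
def vsan_min_hosts(raid: str, ftt: int) -> int:
    """General vSAN minimum host counts per storage policy (illustrative).

    Azure VMware Solution may require more hosts than these minimums,
    e.g. six or more for RAID-5 FTT1 in five-FD regions, per the SLA.
    """
    if raid == "RAID-1":                 # mirroring: 2*FTT + 1 hosts
        return 2 * ftt + 1
    if raid == "RAID-5" and ftt == 1:    # 3+1 erasure coding
        return 4
    if raid == "RAID-6" and ftt == 2:    # 4+2 erasure coding
        return 6
    raise ValueError(f"unsupported policy: {raid} FTT{ftt}")

print(vsan_min_hosts("RAID-1", 1))  # -> 3
print(vsan_min_hosts("RAID-6", 2))  # -> 6
```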
