**articles/azure-vmware/azure-vmware-solution-known-issues.md** (1 addition, 1 deletion)
Refer to the table to find details about resolution dates or possible workarounds.
| Issue | Date discovered | Workaround | Date resolved |
| --- | --- | --- | --- |
|[VMSA-2024-0006](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24266) ESXi Use-after-free and Out-of-bounds write vulnerability | March 2024 | For ESXi 7.0, Microsoft worked with Broadcom on an AVS specific hotfix as part of the [ESXi 7.0U3o](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/release-notes/esxi-update-and-patch-release-notes/vsphere-esxi-70u3o-release-notes.html) rollout. For the 8.0 rollout, Azure VMware Solution is deploying [vCenter Server 8.0 U2b & ESXi 8.0 U2b](architecture-private-clouds.md#vmware-software-versions) which is not vulnerable. | August 2024 - Resolved in [ESXi 7.0U3o](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/release-notes/esxi-update-and-patch-release-notes/vsphere-esxi-70u3o-release-notes.html) and [vCenter Server 8.0 U2b & ESXi 8.0 U2b](architecture-private-clouds.md#vmware-software-versions)|
| VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://knowledge.broadcom.com/external/article?legacyId=96352)| Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you're using NE appliances in an HA configuration. | Feb 2024 - Resolved in [VMware HCX 4.8.2](https://techdocs.broadcom.com/us/en/vmware-cis/hcx/vmware-hcx/4-8/hcx-48-release-notes/Chunk371499710.html#Chunk371499710)|
| When I run the VMware HCX Service Mesh Diagnostic wizard, all diagnostic tests pass (green check mark), yet failed probes are reported. See [HCX - Service Mesh diagnostics test returns 2 failed probes](https://knowledge.broadcom.com/external/article?legacyId=96708)| 2024 | Fixed in HCX 4.9 and later. | Resolved in [HCX 4.9.2](https://docs.vmware.com/en/VMware-HCX/4.9.2/rn/vmware-hcx-492-release-notes/index.html#What's%20New)|
| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration)| Nov 2023 | The AV64 SKU now supports 7 Fault Domains and all vSAN storage policies. For more information, see [AV64 supported Azure regions](architecture-private-clouds.md#azure-region-availability-zone-to-version-mapping-table)| June 2024 |
| [VMSA-2023-023](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/23677) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | A risk assessment of CVE-2023-34048 was conducted and it was determined that sufficient controls are in place within Azure VMware Solution to reduce the risk of CVE-2023-34048 from a CVSS Base Score of 9.8 to an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 aren't exposed via any interactive network path) and the multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. Azure VMware Solution is currently rolling out [7.0U3o](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/release-notes/vcenter-server-update-and-patch-releases/vsphere-vcenter-server-70u3o-release-notes.html) to address this issue. | March 2024 - Resolved in [ESXi 7.0U3o](https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/release-notes/esxi-update-and-patch-release-notes/vsphere-esxi-70u3o-release-notes.html) |
| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://techdocs.broadcom.com/us/en/vmware-cis/nsx/vmware-nsx/3-2/release-notes/vmware-nsxt-data-center-322-release-notes.html), the NSX-T Manager **DNS - Forwarder Upstream Server Timeout** alarm is raised | February 2023 | The alarm is raised because NSX-T Manager can't access the configured CloudFlare DNS server. [Enable private cloud internet access](architecture-design-public-internet-access.md), or [change the default DNS zone to point to a valid and reachable DNS server](configure-dns-azure-vmware-solution.md). | February 2023 |
| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://techdocs.broadcom.com/us/en/vmware-cis/nsx/vmware-nsx/3-2/release-notes/vmware-nsxt-data-center-322-release-notes.html), the NSX-T Manager **Capacity - Maximum Capacity Threshold** alarm is raised | 2023 | The alarm is raised because there are more than four clusters in the private cloud with the medium form factor for the NSX-T Data Center Unified Appliance. The form factor needs to be scaled up to large. Microsoft should detect this issue automatically; however, you can also open a support request. | 2023 |
**articles/azure-vmware/includes/azure-vmware-solutions-limits.md** (3 additions, 3 deletions)
The following table describes the maximum limits for Azure VMware Solution.
| Resource | Limit |
| --- | --- |
| Maximum number of vCenter Servers per private cloud | 1 (hard limit) |
| Maximum number of HCX site pairings | 25 (any edition) |
| Maximum number of HCX service meshes | 10 (any edition) |
| Maximum number of Azure VMware Solution private clouds linked via Azure ExpressRoute from a single location to a single virtual network gateway | 4<br />The virtual network gateway used determines the actual maximum number of linked private clouds. For more information, see [About ExpressRoute virtual network gateways](../../expressroute/expressroute-about-virtual-network-gateways.md).<br />If you exceed this threshold, use [Azure VMware Solution Interconnect](../connect-multiple-private-clouds-same-region.md) to aggregate private cloud connectivity within the Azure region. |
| Maximum Azure VMware Solution ExpressRoute throughput | 10 Gbps. (Use Ultra Performance Gateway version with FastPath enabled.)**<br />The virtual network gateway that's used determines the actual bandwidth. For more information, see [About ExpressRoute virtual network gateways](../../expressroute/expressroute-about-virtual-network-gateways.md).<br />Azure VMware Solution ExpressRoutes don't have any port speed limitations and perform above 10 Gbps. Rates over 10 Gbps aren't guaranteed because of quality of service.|
| Maximum number of Azure Public IPv4 addresses assigned to NSX | 2,000 |
| Maximum number of Azure VMware Solution Interconnects per private cloud | 10 |
| Maximum number of Azure ExpressRoute Global Reach connections per Azure VMware Solution private cloud | 8 |
| vSAN capacity limits | 75% of total usable. (Keep 25% available for service-level agreement.) |
| VMware Site Recovery Manager - Maximum number of protected virtual machines | 3,000 |
| VMware Site Recovery Manager - Maximum number of virtual machines per recovery plan | 2,000 |
| VMware Site Recovery Manager - Maximum number of protection groups per recovery plan | 250 |
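The vSAN capacity limit above (consume at most 75% of total usable capacity, keeping 25% free for the service-level agreement) reduces to a small calculation. The function name and cluster figures below are hypothetical illustrations, not values from this table:

```python
def usable_vsan_capacity_tb(total_capacity_tb: float, slack_fraction: float = 0.25) -> float:
    """Capacity that may be consumed while keeping the SLA-required slack free."""
    return total_capacity_tb * (1.0 - slack_fraction)

# Hypothetical example: 6 hosts, each contributing 15.36 TB to the capacity tier.
total_tb = 6 * 15.36                     # 92.16 TB total usable capacity
consumable_tb = usable_vsan_capacity_tb(total_tb)
print(round(consumable_tb, 2))           # 75% of 92.16 TB; 25% stays free
```

The slack fraction is a parameter so the same sketch works if the SLA threshold ever changes.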
| Host type | CPU | RAM (GB) | vSAN cache tier (TB, raw***) | vSAN capacity tier (TB, raw***) | Regions |
| --- | --- | --- | --- | --- | --- |
| AV64 | Dual Intel Xeon Platinum 8370C CPUs (Ice Lake microarchitecture) with 32 cores/CPU @ 2.8 GHz/3.5 GHz Turbo, total 64 physical cores (128 logical cores with hyperthreading) | 1,024 | 3.84 (NVMe) | 15.36 (NVMe) | Selected regions**|
An Azure VMware Solution cluster requires a minimum of three hosts. You can use only hosts of the same type in a single Azure VMware Solution private cloud. Hosts used to build or scale clusters come from an isolated pool of hosts. Those hosts have passed hardware tests and have had all data securely deleted before being added to a cluster.
All of the preceding host types have 100-Gbps network interface throughput.
*Details are available via the Azure pricing calculator.
**AV64 prerequisite: An Azure VMware Solution private cloud deployed with AV36, AV36P, or AV52 is required before adding AV64.
***Raw is based on the [International System of Units (SI)](https://en.wikipedia.org/wiki/International_System_of_Units) as reported by disk manufacturers. Example: 1 TB raw = 1000000000000 bytes. The same space calculated by a computer in binary (1 TB binary = 1099511627776 bytes) equals 931.3 gigabytes converted from the raw decimal value.
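The decimal-to-binary conversion in the note above can be verified with a few lines of arithmetic (a sketch; the constant and helper names are mine, not from the article):

```python
SI_TB = 10 ** 12      # 1 TB as disk manufacturers report it (SI, decimal)
GIB = 1024 ** 3       # 1 binary gigabyte (GiB) = 2^30 bytes

def raw_tb_to_binary_gb(raw_tb: float) -> float:
    """Convert manufacturer-reported (decimal) terabytes to binary gigabytes."""
    return raw_tb * SI_TB / GIB

print(round(raw_tb_to_binary_gb(1.0), 1))  # → 931.3
```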
**articles/azure-vmware/introduction.md** (2 additions, 2 deletions)
The traditional Azure VMware Solution host clusters don't have an explicit vSAN FD configuration. The reason is that the host allocation logic ensures that, within a cluster, no two hosts reside in the same physical fault domain within an Azure region. This inherently provides the storage resilience and high availability that a vSAN FD configuration is intended to provide. More information on vSAN FDs can be found in the [VMware documentation](https://techdocs.broadcom.com/us/en/vmware-cis/vsan/vsan/8-0/vsan-administration/expanding-and-managing-a-vsan-cluster/managing-fault-domains-in-vsan-clusters.html).
The Azure VMware Solution AV64 host clusters have an explicit vSAN fault domain (FD) configuration. The Azure VMware Solution control plane configures seven vSAN FDs for AV64 clusters. Hosts are balanced evenly across the seven FDs as users scale up the hosts in a cluster from 3 nodes to 16 nodes. Some Azure regions still support a maximum of five FDs as part of the initial release of the AV64 SKU. For more information, see the [Azure Region Availability Zone (AZ) to SKU mapping table](architecture-private-clouds.md#azure-region-availability-zone-to-version-mapping-table).
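A minimal sketch of what "balanced evenly" across seven FDs could look like. The actual control-plane placement logic isn't documented here; `assign_hosts_to_fds` is a hypothetical round-robin illustration, not the service's algorithm:

```python
def assign_hosts_to_fds(host_count: int, fd_count: int = 7) -> list[list[int]]:
    """Round-robin hosts across fault domains so FD sizes differ by at most one."""
    fds: list[list[int]] = [[] for _ in range(fd_count)]
    for host in range(host_count):
        fds[host % fd_count].append(host)
    return fds

# Scaling a cluster from 3 to 16 hosts keeps the spread even:
for n in (3, 16):
    sizes = [len(fd) for fd in assign_hosts_to_fds(n)]
    print(n, sizes)
# 3 hosts  → one host in each of three FDs, four FDs empty
# 16 hosts → two FDs hold 3 hosts, five FDs hold 2
```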
### Cluster size recommendation
### AV64 supported RAID configuration
This table lists the supported RAID configurations and host requirements for AV64 clusters. The RAID-6 FTT2 and RAID-1 FTT3 policies are supported with the AV64 SKU in some regions. In Azure regions that are currently constrained to five FDs, Microsoft allows customers to use the RAID-5 FTT1 vSAN storage policy for AV64 clusters with six or more nodes to meet the service-level agreement (SLA). For more information, see the [Azure Region Availability Zone (AZ) to SKU mapping table](architecture-private-clouds.md#azure-region-availability-zone-to-version-mapping-table).
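As a rough cross-check of these policies, the general vSAN minimum host counts can be encoded as a lookup. These figures are standard vSAN rules (RAID-1 FTTn needs 2n+1 hosts; RAID-5 FTT1 needs 4; RAID-6 FTT2 needs 6), not AVS-specific guidance, and AVS SLA requirements such as the six-node RAID-5 FTT1 case above can be stricter:

```python
# General vSAN minimum host counts per storage policy (assumed from vSAN
# documentation, not from this article; AVS may impose higher minimums).
VSAN_POLICY_MIN_HOSTS = {
    "RAID-1 FTT1": 3,
    "RAID-5 FTT1": 4,
    "RAID-1 FTT2": 5,
    "RAID-6 FTT2": 6,
    "RAID-1 FTT3": 7,
}

def supported_policies(host_count: int) -> list[str]:
    """Policies whose general vSAN host minimum a cluster of this size meets."""
    return [p for p, n in VSAN_POLICY_MIN_HOSTS.items() if host_count >= n]

print(supported_policies(4))  # → ['RAID-1 FTT1', 'RAID-5 FTT1']
```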