---
title: Install Hybrid Cloud Extension
description: Set up the VMware Hybrid Cloud Extension (HCX) solution for your Azure VMware Solution (AVS) private cloud.
ms.topic: how-to
ms.date: 05/15/2020
---

# HCX installation

In this article, you set up the VMware Hybrid Cloud Extension (HCX) solution for your Azure VMware Solution (AVS) private cloud. The HCX solution supports up to three external enterprise sites, and each external site must have its own HCX Enterprise activation (HEA) key to migrate to the AVS private cloud target. The solution enables hot and cold vMotion migration of VMs between on-premises environments and your AVS private cloud.

**Before you begin:**

* Review the basic AVS Software Defined Datacenter (SDDC) [tutorial series](tutorial-network-checklist.md).
* Review related VMware materials on HCX, such as Megie's VMware vSphere [blog series](https://blogs.vmware.com/vsphere/2019/10/cloud-migration-series-part-1.html) on HCX.
* Order an AVS HCX Enterprise activation through AVS support channels.

Sizing workloads against compute and storage resources is an essential planning step when preparing to use the AVS private cloud HCX solution. Address this sizing step as part of initial private cloud environment planning.

## Software version requirements

Infrastructure components must be running the required minimum version.

| Component type | Source environment requirements | Destination environment requirements |
| --- | --- | --- |
| vCenter Server | 5.1<br/><br/>If using 5.5 U1 or earlier, use the standalone HCX user interface for HCX operations. | 6.0 U2 and above |
| ESXi | 5.0 | 6.0 and above |
| NSX | For HCX Network Extension of logical switches at the source: NSXv 6.2+ or NSX-T 2.4+ | NSXv 6.2+ or NSX-T 2.4+<br/><br/>For HCX Proximity Routing: NSXv 6.4+ (Proximity Routing isn't supported with NSX-T) |
| vCloud Director | Not required; there's no interoperability with vCloud Director at the source site. | When the destination environment is integrated with vCloud Director, the minimum version is 9.1.0.2. |

## Prerequisites

1. ExpressRoute Global Reach should be configured between the on-premises and AVS SDDC ExpressRoute circuits.

2. All required ports should be open between on-premises and AVS SDDC.

3. One IP address is required for the on-premises HCX Manager, and a minimum of two IP addresses are required for the Interconnect (IX) and Network Extension (NE) appliances.

4. The on-premises HCX IX and NE appliances should be able to reach vCenter and the ESXi infrastructure (a connectivity-check sketch follows this list).

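As a sanity check for prerequisites 2 and 4, you can test TCP reachability from the on-premises network to the relevant management endpoints before deploying the appliances. The following is a minimal sketch, not part of the official procedure: the hostnames and the port list (443 for vCenter and HCX, 902 for ESXi, 9443 for HCX Manager) are placeholder assumptions, so confirm the complete list against the VMware HCX port requirements for your version.

```python
import socket

# Placeholder endpoints and ports; adjust to your environment and confirm the
# complete list against the official VMware HCX port requirements.
TARGETS = {
    "on-prem vCenter": ("vcenter.onprem.example.local", [443]),
    "on-prem ESXi host": ("esxi01.onprem.example.local", [443, 902]),
    "on-prem HCX Manager": ("hcx.onprem.example.local", [443, 9443]),
    "AVS HCX Cloud Manager": ("hcx.avs.example.local", [443]),
}

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, ports) in TARGETS.items():
    for port in ports:
        state = "reachable" if port_open(host, port) else "NOT reachable"
        print(f"{name} {host}:{port} -> {state}")
```
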
## Deploy the VMware HCX OVA

1. Sign in to AVS SDDC vCenter and select **HCX**.

    

1. To download the VMware HCX OVA file, select **Administration** > **System Updates** (an optional download-verification sketch follows this procedure).

    

1. Select an OVF template to deploy to on-premises vCenter.

    

1. Select a name and location, select a resource or cluster where HCX will be deployed, and then review the details and required resources.

    

1. Review the license terms and, if you agree, select the required storage and network. Then select **Next**.

1. In **Customize template**, enter all required information.

    

1. Select **Next**, verify the configuration, and select **Finish** to deploy the HCX OVA.

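If you want to confirm that the downloaded OVA is intact before deploying it, you can compare its SHA-256 digest with the checksum published alongside the download. This optional, minimal sketch assumes placeholder values for the file name and expected digest.

```python
import hashlib
from pathlib import Path

# Placeholder values; replace with your downloaded file and the published checksum.
OVA_PATH = Path("VMware-HCX-Connector.ova")
EXPECTED_SHA256 = "replace-with-published-checksum"

def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(OVA_PATH)
print(f"SHA-256: {actual}")
print("Match" if actual == EXPECTED_SHA256 else "MISMATCH: re-download the OVA")
```
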
## Activate HCX

After installation, perform the following steps.

1. Open HCX Manager at `https://HCXManagerIP:9443` and sign in with your username and password (a reachability-check sketch follows this procedure).

1. In **Licensing**, enter your **HCX Advanced Key**.

    

    > [!NOTE]
    > HCX Manager must have open internet access or a proxy configured.

1. Configure your vCenter server.

    

1. In **Datacenter Location**, if needed, edit the datacenter location.

    

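If the HCX Manager sign-in page in step 1 doesn't load, you can confirm that the appliance-management interface is answering on port 9443. The following minimal sketch uses the third-party `requests` library; the IP address is a placeholder, and certificate verification is disabled because the appliance typically presents a self-signed certificate.

```python
import requests
import urllib3

# The appliance usually presents a self-signed certificate, so skip verification
# for this one-off check and silence the associated warning.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

HCX_MANAGER_IP = "192.0.2.10"  # placeholder; use your on-premises HCX Manager IP

try:
    response = requests.get(f"https://{HCX_MANAGER_IP}:9443", verify=False, timeout=10)
    print(f"HCX Manager responded with HTTP {response.status_code}")
except requests.exceptions.RequestException as error:
    print(f"HCX Manager not reachable on 9443: {error}")
```
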
## Configure HCX

1. Sign in to on-premises vCenter, then select **Home** > **HCX**.

    

1. Select **Infrastructure** > **Site Pairing** > **Add a site pairing**.

    

1. Enter the **Remote HCX URL**, **Username**, and **Password**. Then select **Connect**.

    The system shows the connected site.

    

1. Select **Interconnect** > **Multi-Site Service Mesh** > **Network Profiles** > **Create Network Profile**.

    

1. Enter the HCX IX and NE IP address ranges (a minimum of two IP addresses is required for the IX and NE appliances; an IP-pool validation sketch follows this procedure).

    

    > [!NOTE]
    > The network extension appliance (HCX-NE) has a one-to-one relationship with a distributed virtual switch (DVS).
1. Now select **Compute profile** > **Create compute profile**.

1. Enter a compute profile name and select **Continue**.

    

1. Select the services to enable, such as migration, Network Extension, or Disaster Recovery. Select **Continue**.

    

1. In **Select Service Resources**, select one or more service resources for which the selected HCX services should be enabled. Select **Continue**.

    

    > [!NOTE]
    > Select the specific clusters in which source VMs are targeted for migration using HCX.

1. Select **Datastore** and select **Continue**.

    Select each compute and storage resource for deploying the HCX Interconnect appliances. When multiple resources are selected, HCX uses the first selected resource until its capacity is exhausted.

    

1. Select the management network profile created in **Network Profiles** and select **Continue**.

    Select the network profile through which the management interface of vCenter and the ESXi hosts can be reached. If you haven't already defined such a network profile, you can create it here.

    

1. Select **Network Uplink** and select **Continue**.

    Select one or more network profiles such that one of the following is true:

    * The Interconnect appliances on the remote site can be reached via this network.
    * The remote-side appliances can reach the local Interconnect appliances via this network.

    If you have point-to-point networks, such as Direct Connect, that aren't shared across multiple sites, you can skip this step, since compute profiles are shared with multiple sites. In such cases, Uplink Network profiles can be overridden and specified during the creation of the Interconnect Service Mesh.

    

1. Select **vMotion Network Profile** and select **Continue**.

    Select the network profile via which the vMotion interface of the ESXi hosts can be reached. If you haven't already defined such a network profile, you can create it here. If you don't have a vMotion network, select the **Management Network Profile**.

    

1. Select **vSphere Replication Network Profile** and select **Continue**.

    Select a network profile via which the vSphere Replication interface of the ESXi hosts can be reached. In most cases, this profile is the same as the Management Network Profile.

    

1. Select **Distributed Switches for Network Extensions** and select **Continue**.

    Select the distributed virtual switches that carry the networks to which the virtual machines to be migrated are connected.

    

1. Review the connection rules and select **Continue**. Select **Finish** to create the compute profile.

    

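As a planning aid for the network-profile steps above, the following minimal sketch checks that an IP pool entered as a start and end address sits inside the intended subnet and contains at least the number of addresses you plan to consume (for example, two or more for the IX and NE appliances). The range, prefix, and required count are placeholder assumptions.

```python
import ipaddress

# Placeholder values; substitute the pool you plan to enter in the network profile.
POOL_START = "10.10.10.20"
POOL_END = "10.10.10.29"
SUBNET = "10.10.10.0/24"
REQUIRED_ADDRESSES = 2  # at least two for the IX and NE appliances

start = ipaddress.ip_address(POOL_START)
end = ipaddress.ip_address(POOL_END)
network = ipaddress.ip_network(SUBNET)

if end < start:
    raise ValueError("Pool end address is lower than the start address")
if start not in network or end not in network:
    raise ValueError(f"Pool {start}-{end} is not contained in {network}")

pool_size = int(end) - int(start) + 1
print(f"Pool {start}-{end} provides {pool_size} addresses in {network}")
if pool_size < REQUIRED_ADDRESSES:
    print(f"WARNING: fewer than {REQUIRED_ADDRESSES} addresses available")
else:
    print("Pool size meets the minimum requirement")
```
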
## Configure Network Uplink

Now configure the network profile in AVS SDDC for Network Uplink.

1. Sign in to SDDC NSX-T and create a new logical switch, or use an existing logical switch, for Network Uplink between on-premises and AVS SDDC.

1. Create a network profile for the HCX uplink in AVS SDDC that can be used for on-premises to AVS SDDC communication.

    

1. Enter a name for the network profile and at least 4-5 free IP addresses, based on the L2 network extension required (a free-address sketch follows this procedure).

    

1. Select **Create** to complete the AVS SDDC configuration.

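To pick the free addresses for the uplink profile, it can help to enumerate which host addresses in the uplink segment aren't already in use. The following is a minimal sketch, assuming you maintain the list of in-use addresses yourself; the CIDR and used addresses shown are placeholders.

```python
import ipaddress
from itertools import islice

# Placeholder values; replace with your uplink segment and known in-use addresses.
UPLINK_SEGMENT = ipaddress.ip_network("192.168.240.0/28")
IN_USE = {
    ipaddress.ip_address("192.168.240.1"),  # for example, the gateway
    ipaddress.ip_address("192.168.240.2"),
}
NEEDED = 5

free_addresses = (host for host in UPLINK_SEGMENT.hosts() if host not in IN_USE)
candidates = list(islice(free_addresses, NEEDED))

if len(candidates) < NEEDED:
    print(f"Only {len(candidates)} free addresses found; the segment may be too small")
else:
    print("Candidate uplink addresses:", ", ".join(str(ip) for ip in candidates))
```
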
## Configure Service Mesh

Now configure the Service Mesh between on-premises and AVS SDDC.

1. Sign in to AVS SDDC vCenter and select **HCX**.

1. Select **Infrastructure** > **Interconnect** > **Service Mesh** > **Create Service Mesh**. Configure the network and compute profiles created in the previous steps.

    

3. Select **Create Service Mesh** and select **Continue**.

    Select the paired sites between which to enable hybrid mobility.

    

4. Select **Compute profile** and select **Continue**.

    Select one compute profile each in the source and remote sites to enable hybridity services. The selections define the resources where virtual machines will be able to consume HCX services.

    

5. Select the services to be enabled for HCX and select **Continue**.

    

6. In **Advanced Configuration - Override Uplink Network profiles**, select **Continue**.

    Uplink network profiles are used to connect to the network via which the remote site's interconnect appliances can be reached.

    

7. In **Advanced Configuration - Network Extension Appliance Scale Out**, select **Configure the Network Extension Appliance Scale Out**.

    

8. Enter the appliance count corresponding to the DVS switch count (a DVS-count sketch follows this procedure).

    

9. In **Advanced Configuration - Traffic Engineering**, select **Continue**.

    

10. Review the topology preview and select **Continue**. Then enter a user-friendly name for this Service Mesh and select **Finish** to complete it.

    

The Service Mesh is deployed and configured.

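Because each network extension appliance has a one-to-one relationship with a distributed virtual switch, the scale-out count in step 8 usually matches the number of DVSs that carry networks you intend to extend. The sketch below counts the distributed switches visible in on-premises vCenter; it assumes the third-party pyVmomi library (not mentioned in this article), uses placeholder connection details, and disables certificate verification for an informal check.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for the on-premises vCenter.
VCENTER = "vcenter.onprem.example.local"
USERNAME = "administrator@vsphere.local"
PASSWORD = "replace-me"

# Skip certificate verification for this informal, lab-style check.
context = ssl._create_unverified_context()
si = SmartConnect(host=VCENTER, user=USERNAME, pwd=PASSWORD, sslContext=context)
try:
    content = si.RetrieveContent()
    # Collect all distributed virtual switches in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True
    )
    switches = [dvs.name for dvs in view.view]
    print(f"Found {len(switches)} distributed virtual switch(es): {', '.join(switches)}")
    print("Consider one network extension appliance per DVS that carries networks to extend.")
finally:
    Disconnect(si)
```
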

## Check appliance status

To check the status of the appliances, select **Interconnect** > **Appliances**.




## Next steps

When the **Tunnel Status** is **UP** and green, you're ready to migrate VMs and to protect them by using HCX Disaster Recovery.