articles/azure-vmware/migrate-sql-server-failover-cluster.md
27 additions & 26 deletions
@@ -40,9 +40,9 @@ The table below indicates the downtime for each Microsoft SQL Server topology.
|**Scenario**|**Downtime expected**|**Notes**|
|:---|:-----|:-----|
|**Standalone instance**|Low| Migration is done using vMotion. The database remains available during the migration, but committing critical data during that window isn't recommended. |
|**Always-On Availability Group**|Low| The primary replica remains available during the migration of the first secondary replica, and that secondary replica becomes the primary after the initial failover to Azure. |
|**Failover Cluster Instance**|High| All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends on the database size and the private network speed to the Azure cloud. |
## Windows Server Failover Cluster quorum considerations
@@ -73,50 +73,51 @@ For illustration purposes, in this document we're using a two-node cluster with
1. From the vSphere Client, shut down the second node of the cluster.
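If you prefer to script this step, the same guest shutdown can be triggered through the vSphere API. The following is a minimal sketch using pyVmomi, assuming VMware Tools is running in the guest; the vCenter address, credentials, and VM name are placeholders.

```python
# Minimal sketch: gracefully shut down the guest OS of the second cluster node
# through the vSphere API (equivalent to "Shut Down Guest OS" in the vSphere Client).
# Assumes pyVmomi is installed and VMware Tools is running in the VM; the vCenter
# address, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; use a valid certificate in production
si = SmartConnect(host="vcenter.contoso.local", user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
node2 = next(vm for vm in view.view if vm.name == "sqlfci-node2")   # hypothetical VM name

if node2.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
    node2.ShutdownGuest()                        # graceful shutdown via VMware Tools

Disconnect(si)
```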
1. Access the first node of the cluster and open **Failover Cluster Manager**.
- Verify that the second node is in the **Offline** state and that all clustered services and storage are under the control of the first node.
:::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-1.png" alt-text="Diagram showing Windows Server Failover Cluster Manager cluster storage verification." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-1.png":::
- Shut down the cluster.
:::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-2.png" alt-text="Diagram showing a shut down cluster using Windows Server Failover Cluster Manager." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-2.png":::
- Check that all cluster services are successfully stopped without errors.
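These checks can also be run from an elevated PowerShell session on the first node with the FailoverClusters cmdlets. The sketch below simply drives those cmdlets from Python as an illustration; the node names are placeholders, and running `Get-ClusterNode` and `Get-Service` directly in PowerShell works just as well.

```python
# Minimal sketch: confirm from the first node that the second node is Down/Offline
# and that the Cluster Service (ClusSvc) has stopped after the cluster shutdown.
# Assumes it runs elevated on the first node; it only wraps standard cmdlets.
import subprocess

def ps(command: str) -> str:
    out = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=False,
    )
    return out.stdout.strip()

# Before the cluster shutdown: node states as seen by the cluster.
print(ps("Get-ClusterNode | Select-Object Name, State | Format-Table -AutoSize"))

# After the shutdown from Failover Cluster Manager (or Stop-Cluster): the cluster
# service should report 'Stopped' on this node.
print(ps("Get-Service ClusSvc | Select-Object Name, Status | Format-Table -AutoSize"))
```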
1. Shut down the first node of the cluster.
1. From the **vSphere Client**, edit the settings of the second node of the cluster.
- Remove all shared disks from the virtual machine configuration.
- Ensure that the **Delete files from datastore** checkbox isn't selected. Selecting it permanently deletes the disks from the datastore, and you would need to recover the cluster from a previous backup.
- Set **SCSI Bus Sharing** from **Physical** to **None** on the virtual SCSI controllers used for the shared storage. Usually, these controllers are of the VMware Paravirtual type.
1. Edit the first node virtual machine settings. Set **SCSI Bus Sharing** from **Physical** to **None** in the SCSI controllers.
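Setting **SCSI Bus Sharing** to **None** on both nodes can also be scripted against the vSphere API. The following is a minimal sketch using pyVmomi, assuming the virtual machines are powered off; the vCenter address, credentials, and VM name are placeholders, and the same code applies to each node in turn.

```python
# Minimal sketch: flip SCSI Bus Sharing from Physical to None on every paravirtual
# SCSI controller of a powered-off cluster node. Assumes pyVmomi; the vCenter
# address, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="vcenter.contoso.local", user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sqlfci-node2")   # hypothetical VM name

changes = []
for dev in vm.config.hardware.device:
    # Only touch controllers that currently use physical bus sharing (the shared storage).
    if isinstance(dev, vim.vm.device.ParaVirtualSCSIController) and \
       dev.sharedBus == vim.vm.device.VirtualSCSIController.Sharing.physicalSharing:
        dev.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=dev))

if changes:
    task = vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=changes))
    # Wait for the reconfigure task to finish before starting the HCX migration.

Disconnect(si)
```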
1. From the **vSphere Client**, go to the HCX plugin area. Under **Services**, select **Migration** > **Migrate**.
- Select the second node virtual machine.
- Set the vSphere cluster in the remote private cloud that will run the migrated SQL cluster as the **Compute Container**.
- Select the **vSAN Datastore** as remote storage.
- Select a folder if you want to place the virtual machines in a specific folder. This isn't mandatory, but it's recommended to separate the different workloads in your Azure VMware Solution private cloud.
- Keep **Same format as source**.
- Select **Cold migration** as the **Migration profile**.
- In **Extended Options**, select **Migrate Custom Attributes**.
- Verify that the on-premises network segments have the correct remote stretched segments in Azure.
- Select **Validate** and ensure that all checks complete with a pass status. The most common error here is related to the storage configuration; verify again that no SCSI controllers still have a physical sharing setting.
- Select **Go** to initiate the migration.
1. Repeat the same process for the first node.
1. Access the **Azure VMware Solution vSphere Client**, edit the first node settings, and set **SCSI Bus Sharing** back to **Physical** on the SCSI controller(s) managing the shared disks.
1. Edit node 2 settings in **vSphere Client**.
- Set **SCSI Bus Sharing** back to **Physical** on the SCSI controller managing the shared storage.
- Add the cluster shared disks to the node as additional storage. Assign them to the second SCSI controller.
- Ensure that all the storage configuration is the same as the one recorded before the migration.
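Reverting the sharing mode can reuse the same pyVmomi pattern shown earlier, pointed at the Azure VMware Solution vCenter this time. As before, this is only a sketch with placeholder names and credentials, and it assumes the shared-disk controllers are the ones on bus numbers above 0; adjust the filter to your actual layout.

```python
# Minimal sketch: set SCSI Bus Sharing back to Physical on the paravirtual
# controllers that own the shared cluster disks, against the Azure VMware Solution
# vCenter. Addresses, credentials, and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.avs.contoso.com", user="cloudadmin@vsphere.local",
                  pwd="<password>", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sqlfci-node2")   # hypothetical VM name

changes = []
for dev in vm.config.hardware.device:
    # Assumes the boot disk sits on SCSI controller 0 and the shared cluster disks
    # use the additional controller(s); adjust the filter to match your layout.
    if isinstance(dev, vim.vm.device.ParaVirtualSCSIController) and dev.busNumber > 0:
        dev.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.physicalSharing
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=dev))

if changes:
    vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=changes))

Disconnect(si)
```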
1. Power on the first node virtual machine.
1. Access the first node VM with **VMware Remote Console**.
- Verify the virtual machine network configuration and ensure it can reach both on-premises and Azure resources.
- Open **Failover Cluster Manager** and verify the cluster services.
:::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-3.png" alt-text="Diagram showing a cluster summary in Failover Cluster Manager." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-3.png":::
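A quick way to confirm that the node can reach both on-premises and Azure resources is to probe a few TCP endpoints it depends on. Below is a minimal sketch; the host names and ports are placeholders for your own DNS servers, domain controllers, and Azure VMware Solution resources.

```python
# Minimal sketch: verify basic network reachability from the migrated node.
# The endpoints below are placeholders; substitute the services the cluster
# actually depends on (DNS, domain controllers, Azure VMware Solution resources).
import socket

ENDPOINTS = [
    ("dc01.contoso.local", 389),      # hypothetical on-premises domain controller (LDAP)
    ("dns01.contoso.local", 53),      # hypothetical on-premises DNS server
    ("vcsa.avs.contoso.com", 443),    # hypothetical resource in the Azure VMware Solution private cloud
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK      {host}:{port}")
    except OSError as err:
        print(f"FAILED  {host}:{port} -> {err}")
```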
1. Power on the second node virtual machine.
1. Access the second node VM from the **VMware Remote Console**.
- Verify that Windows Server can reach the storage.
- In **Failover Cluster Manager**, verify that the second node appears with an **Online** status.
:::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-4.png" alt-text="Diagram showing a cluster node status in Failover Cluster Manager." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-4.png":::
1. Using **SQL Server Management Studio**, connect to the SQL Server cluster resource network name and check that the database is online and accessible.
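If you'd rather script this final check, any SQL client pointed at the cluster resource network name works. The following is a minimal sketch using Python and `pyodbc`; the network name, credentials, and database filter are placeholders.

```python
# Minimal sketch: connect to the SQL Server FCI through its cluster network name,
# show which node currently owns the instance, and confirm the user databases are
# ONLINE. The network name and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlfci.contoso.local;"        # hypothetical cluster resource network name
    "DATABASE=master;UID=dbadmin;PWD=<password>"
)

# Instance name and the physical node currently hosting it.
print(conn.execute(
    "SELECT @@SERVERNAME, SERVERPROPERTY('ComputerNamePhysicalNetBIOS');").fetchone())

# State of the user databases (database_id > 4 skips the system databases).
for name, state in conn.execute(
        "SELECT name, state_desc FROM sys.databases WHERE database_id > 4;"):
    print(f"{name}: {state}")
```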