Commit 8bf221b

Review comments
1 parent 22c3840 commit 8bf221b


articles/migrate/troubleshoot-replication-vmware.md

Lines changed: 11 additions & 13 deletions
@@ -13,15 +13,14 @@ ms.custom: mvc, engagement-fy23
This article helps you troubleshoot slow replication or stuck migration issues that you might encounter when you replicate on-premises VMware VMs by using the Azure Migrate: Server Migration agentless method.

## Replication is slow or stuck for VM

While replication is in progress, you might observe that replication for a particular VM isn't progressing at the expected pace. Generally, the underlying reason for this issue is the unavailability or scarcity of a resource that replication needs. The resource might be consumed by other VMs that are replicating, or by another process running on the appliance or in the datacenter.

The following are common causes of this issue, along with remediations.

### NFC buffer size low

The Azure Migrate appliance uses a 32-MB NFC buffer to replicate eight disks concurrently on the ESXi host. An NFC buffer size of less than 32 MB might cause slow replication.
You might also get the following exception:
@@ -72,52 +71,51 @@ You can increase the NFC buffer size beyond 32 MB to increase concurrency. The s
- net start asrgwy
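
For reference, here's a minimal sketch of that restart step, run from an elevated prompt on the appliance. It assumes the service can simply be stopped and started in place; `asrgwy` is the gateway service named in the step above.

```powershell
# Restart the Azure Migrate gateway service on the appliance so the
# updated NFC buffer setting takes effect.
net stop asrgwy
net start asrgwy
```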
### ESXi host available RAM low

When the ESXi host that the replicating VM runs on is too busy, the replication process slows down because RAM is unavailable.

#### Remediation

Use vMotion to move the VM with slow replication to an ESXi host that isn't as busy.
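
As an illustrative sketch, you can also script the move with VMware PowerCLI. The server, VM, and host names below are hypothetical placeholders, not values from this article.

```powershell
# Illustrative only: use vMotion to move the slow-replicating VM to a
# less busy ESXi host. All names below are hypothetical.
Connect-VIServer -Server 'vcenter.contoso.local'
$vm = Get-VM -Name 'SlowReplicatingVM'
$lessBusyHost = Get-VMHost -Name 'esxi-02.contoso.local'
Move-VM -VM $vm -Destination $lessBusyHost
```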

### Network bandwidth

Replication might be slow because of low network bandwidth available to the Azure Migrate appliance. Low bandwidth might be caused by other applications consuming the bandwidth, by a bandwidth-throttling application, or by a proxy setting that restricts the bandwidth the replication appliance can use.

#### Remediation

If bandwidth is low, first reduce the number of applications that are using network bandwidth. Check with your network administrator whether any throttling application or proxy setting is present.
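
One quick check you can run on the appliance itself is to inspect the machine-wide WinHTTP proxy setting. This is a standard Windows command, not something specific to Azure Migrate.

```powershell
# Show the machine-wide WinHTTP proxy configuration on the appliance.
# "Direct access (no proxy server)" means no proxy is restricting traffic.
netsh winhttp show proxy
```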

### Disk I/O

Replication can be slow when the server that's being replicated is under heavy load, which causes high I/O on the disks attached to it. It's advised to reduce the load on the server to increase the replication speed. You might also encounter the following error:

The last replication cycle for the virtual machine 'VM Name' failed. Encountered timeout event.

If no action is taken, the replication proceeds and completes with a delay.
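
If the server being replicated runs Windows, the built-in performance counters are a quick way to gauge that disk load. This is standard Windows PowerShell, shown here only as an illustrative check.

```powershell
# Sample total disk throughput and queue length for 10 seconds to see
# whether the server's disks are under heavy I/O load.
Get-Counter -Counter '\PhysicalDisk(_Total)\Disk Bytes/sec',
                     '\PhysicalDisk(_Total)\Avg. Disk Queue Length' `
    -SampleInterval 1 -MaxSamples 10
```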

### Disk write rates

Replication can be slower than expected if the data upload speed is higher than the write speed of the disk that you selected when you enabled replication. To get better speeds at the same upload speed, restart the replication and select **Premium** as the disk type for replication.

> [!Caution]
> The disk type recommended during assessment might not be **Premium** for a particular VM. In this case, switching to a Premium disk to improve replication speed isn't advisable, because the VM might not need a Premium disk attached after migration.

## Migration operation on VM is stuck

While triggering migration for a particular VM, you might observe that the migration is stuck at some stage (queued or delta sync) longer than expected. Generally, the underlying reason for this issue is the unavailability or scarcity of a resource that migration needs. The resource might be consumed by other VMs that are replicating, or by another process running on the appliance or in the datacenter. The following are common causes of this issue, along with remedies.

### NFC buffer size low

If an initial replication (IR) cycle for a server with large disks is in progress when migration is triggered for a second VM, the second VM's migration job can get stuck. Even though migration jobs are given high priority, the NFC buffer might not be available for the migration. In this case, it's recommended to stop or pause the initial replication of the servers with large disks and complete the migration of the second VM.

### Ongoing delta sync cycle isn't complete

If migration is triggered during an ongoing delta replication cycle, the migration is queued. The delta replication cycle on the VM completes first, after which the migration starts. The time required to start the migration depends on the time taken to complete one delta sync cycle.

### Shutdown of on-premises VM taking longer than usual

Try to migrate without shutting down the VM, or turn off the VM manually and then migrate it.
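
If you choose to turn off the VM manually, one way to do it is from VMware PowerCLI. This is illustrative only; the VM name is a hypothetical placeholder.

```powershell
# Illustrative: gracefully shut down the guest OS of the on-premises VM
# before triggering migration. Requires VMware Tools in the guest; use
# Stop-VM instead for a hard power-off.
Get-VM -Name 'AppVM01' | Shutdown-VMGuest -Confirm:$false
```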
