---
title: Linux VMs fail to boot due to Hyper-V driver issues
description: Provides solutions to Azure Linux VM boot failures that occur because Hyper-V drivers are missing or disabled.
ms.date: 02/26/2025
ms.reviewer: divargas, msaenzbo, v-weizhu
ms.custom: sap:My VM is not booting, linux-related-content
---
# Azure Linux virtual machines fail to boot due to Hyper-V driver issues

**Applies to:** :heavy_check_mark: Linux VMs

When you run a Linux virtual machine (VM) on Azure, the Hyper-V drivers, also known as Linux Integration Services (LIS) drivers, are crucial for proper VM operation. These drivers allow the VM to communicate with the underlying Azure hypervisor. If these drivers are missing or not properly loaded, the VM might fail to boot. This article provides solutions to boot failures caused by Hyper-V driver issues in Azure Linux VMs.

## Prerequisites

- Access to the [Azure Command-Line Interface (CLI)](/cli/azure/)
- Ability to create a repair/rescue VM
- Serial console access
- Familiarity with Linux commands and system administration

## Symptoms

Linux VMs might fail to boot because Hyper-V drivers are missing or disabled in one of the following scenarios:

- After you migrate a Linux VM from on-premises to Azure.

  When a VM is migrated to Azure from another hypervisor (such as VMware or Kernel-based Virtual Machine (KVM)), the necessary Hyper-V drivers `hv_vmbus`, `hv_storvsc`, `hv_netvsc`, and `hv_utils` might not be installed or enabled, preventing the VM from detecting storage and network devices.

- After you disable the Hyper-V drivers and reboot the VM.
- When the Hyper-V drivers aren't included in the initramfs.

When you review the serial console logs for various Linux VMs (Red Hat, Oracle, SUSE, or Ubuntu), the following issues are commonly observed:

| Symptom | Description |
|---|---|
| **VM Stuck at dracut/initramfs** | The VM fails to boot and drops into an initramfs shell or emergency mode due to missing storage drivers. |
| **Kernel Panic on Boot** | The system crashes during boot due to missing critical Hyper-V modules. |
| **Disk Not Found Errors** | The boot process fails with errors like `cannot find root device` or `unable to mount root filesystem`. |
| **No Network Connectivity** | Even if the VM boots, network interfaces might not be detected, preventing Secure Shell (SSH) access. |
| **Grub Boot Failure** | The system might fail to load the bootloader due to missing Hyper-V storage drivers. |
| **Slow Boot with ACPI Errors** | The VM might take a long time to boot, with Advanced Configuration and Power Interface (ACPI) related warnings caused by missing Hyper-V support. |
| **Failure to Attach Azure Disks** | Azure managed disks might not be recognized or mounted correctly due to missing storage drivers. |
| **Cloud-Init or Waagent Failures** | Azure provisioning tools (`cloud-init` or `waagent`) might fail to configure the VM properly. |

Here are log entry examples:

- Output 1

  ```output
  [ 201.568597] dracut-initqueue[351]: Warning: dracut-initqueue: starting timeout scripts
  [ 202.086401] dracut-initqueue[351]: Warning: dracut-initqueue: timeout, still waiting for following initqueue hooks:
  [ 202.097772] dracut-initqueue[351]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fmapper\x2frootvg-rootlv.sh: "if ! grep -q After=remote-fs-pre.target /run/systemd/generator/systemd-cryptsetup@*.service 2>/dev/null; then
  [ 202.128885] dracut-initqueue[351]: [ -e "/dev/mapper/rootvg-rootlv" ]
  [ 202.138322] dracut-initqueue[351]: fi"
  [ 202.142466] dracut-initqueue[351]: Warning: dracut-initqueue: starting timeout scripts
  [ 202.674872] dracut-initqueue[351]: Warning: dracut-initqueue: timeout, still waiting for following initqueue hooks:
  [ 202.692200] dracut-initqueue[351]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fmapper\x2frootvg-rootlv.sh: "if ! grep -q After=remote-fs-pre.target /run/systemd/generator/systemd-cryptsetup@*.service 2>/dev/null; then
  [ 202.724308] dracut-initqueue[351]: [ -e "/dev/mapper/rootvg-rootlv" ]
  [ 202.731292] dracut-initqueue[351]: fi"
  [ 202.732288] dracut-initqueue[351]: Warning: dracut-initqueue: starting timeout scripts
  [ 202.740791] dracut-initqueue[351]: Warning: Could not boot.
  Starting Dracut Emergency Shell...
  Warning: /dev/mapper/rootvg-rootlv does not exist
  Generating "/run/initramfs/rdsosreport.txt"
  Entering emergency mode. Exit the shell to continue.
  Type "journalctl" to view system logs.
  You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
  after mounting them and attach it to a bug report.
  dracut:/#
  dracut:/#
  dracut:/#
  ```

- Output 2

  ```output
  Gave up waiting for root file system device. Common problems:
  - Boot args (cat /proc/cmdline)
  - Check rootdelay= (did the system wait long enough?)
  - Missing modules (cat /proc/modules; ls /dev)
  ALERT! UUID=143c811b-9b9c-48f3-b0c8-040f6e65f50aa does not exist. Dropping to a shell!
  BusyBox v1.27.2 (Ubuntu 1:1.27.2-2ubuntu3.4) built-in shell (ash)
  Enter 'help' for a list of built-in commands.
  (initramfs)
  ```

If running `cat /proc/partitions` in the dracut shell doesn't display any storage devices, it indicates that the Hyper-V storage driver `hv_storvsc` is missing or not loaded. Without this driver, the VM can't detect its virtual disks, leading to a boot failure.

```output
dracut:/# cat /proc/partitions
dracut:/#
dracut:/#
```

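
As a quick cross-check from the same rescue or dracut shell (if `grep` is available in it), you can look for the Hyper-V modules in `/proc/modules`. This is a minimal sketch; on a healthy Azure VM the `hv_` entries are listed here unless the drivers are built directly into the kernel:

```shell
# List any loaded Hyper-V modules. An empty result in the rescue shell is
# consistent with the missing-driver causes described in this article.
grep -E 'hv_(vmbus|storvsc|netvsc|utils)' /proc/modules || echo "no hv_ modules loaded"
```
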
## Cause

Here are some reasons for VM boot failures:

- Missing disk and network drivers

  Azure VMs rely on Hyper-V storage and network drivers (for example, `hv_storvsc` and `hv_netvsc`). Without these drivers, the VM can't detect its operating system (OS) disk, leading to a boot failure.

- Lack of synthetic device support

  Azure VMs use synthetic devices (for example, for storage and networking), which require Hyper-V drivers. Without these drivers, the kernel might not recognize critical components.

- Hyper-V VMBus communication failure

  Essential components like `hv_vmbus` handle communication between the VM and the Azure hypervisor. If this driver is missing, the VM can't initialize properly.

- Kernel panic or initramfs prompt

  Without the necessary drivers, the VM might drop to an initramfs shell because it can't mount the root filesystem, causing a boot failure.

## Solution 1: Enable Hyper-V drivers

> [!NOTE]
> This solution applies to the scenario where Hyper-V drivers are disabled.

1. Use [VM repair commands](repair-linux-vm-using-azure-virtual-machine-repair-commands.md) to create a repair VM that has a copy of the affected VM's OS disk attached.

    > [!NOTE]
    > Alternatively, you can create a rescue VM manually by using the Azure portal. For more information, see [Troubleshoot a Linux VM by attaching the OS disk to a recovery VM using the Azure portal](troubleshoot-recovery-disks-portal-linux.md).

2. Mount the copy of the OS file systems in the repair VM by using [chroot](chroot-environment-linux.md).
3. Once the chroot process is completed, go to the */etc/modprobe.d* directory.
4. Identify the file that disables the `hv_` drivers and the corresponding line numbers:

    ```bash
    grep -nr "hv_" /etc/modprobe.d/
    ```

5. Modify the corresponding file and comment out or delete the `hv_` entries:

    ```bash
    vi /etc/modprobe.d/disable.conf
    ```

    > [!NOTE]
    > - The entries that disable drivers are defined by the Linux OS instead of Microsoft.
    > - Replace `disable.conf` with the name of the file in which the `hv_` drivers are disabled.

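
As an alternative to editing the file in `vi`, the offending entries can be commented out with `sed`. The following sketch demonstrates this on a temporary copy with hypothetical contents; on the real system, run the `sed` command against whichever file the `grep` in step 4 identified:

```shell
# Create a sample modprobe.d file that disables Hyper-V drivers
# (hypothetical contents, for demonstration only).
cat > /tmp/disable.conf <<'EOF'
blacklist hv_vmbus
blacklist hv_storvsc
EOF

# Comment out every line that blacklists an hv_ driver.
sed -i 's/^blacklist hv_/# blacklist hv_/' /tmp/disable.conf
cat /tmp/disable.conf
```
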
## Solution 2: Regenerate missing Hyper-V drivers in the initramfs

> [!NOTE]
> This solution applies to the scenario where Hyper-V drivers are missing from the initramfs after the VM is migrated from on-premises to Azure.

1. Use [VM repair commands](repair-linux-vm-using-azure-virtual-machine-repair-commands.md) to create a repair VM that has a copy of the affected VM's OS disk attached.

    > [!NOTE]
    > Alternatively, you can create a rescue VM manually by using the Azure portal. For more information, see [Troubleshoot a Linux VM by attaching the OS disk to a recovery VM using the Azure portal](troubleshoot-recovery-disks-portal-linux.md).

2. Mount the copy of the OS file systems in the repair VM by using [chroot](chroot-environment-linux.md).
3. Once the chroot process is completed, verify whether the `hv_` drivers are missing from the initramfs of the current kernel:

    - For RHEL-based images:

      ```bash
      lsinitrd /boot/initramfs-<kernel-version>.img | grep -i hv_
      ```

    - For SLES-based images:

      ```bash
      lsinitrd /boot/initrd-<kernel-version> | grep -i hv_
      ```

    - For Ubuntu/Debian-based images:

      ```bash
      lsinitramfs /boot/initrd.img-<kernel-version> | grep -i hv_
      ```

    > [!NOTE]
    > The Hyper-V storage driver `hv_storvsc` (or others) might not appear in the initramfs because it's sometimes built directly into the kernel, especially in Azure-optimized kernels. In such cases, the system loads it automatically at boot, ensuring storage functionality. If the VM boots and detects storage correctly, no action is needed.

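
To confirm the built-in case, you can search the kernel's module index, which lists every driver compiled directly into the kernel. This is a sketch that checks the running kernel; inside the chroot, substitute the affected kernel's version for `$(uname -r)`:

```shell
# hv_ entries in modules.builtin explain why the initramfs listing shows
# nothing: the drivers are compiled in, not packaged as separate modules.
grep -i 'hv_' "/lib/modules/$(uname -r)/modules.builtin" 2>/dev/null || echo "hv_ drivers not built in"
```
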
4. Edit the */etc/dracut.conf* or */etc/initramfs-tools/modules* file, depending on your Linux distribution, and add the following lines to the file:

    - For RHEL/SLES-based images:

      ```bash
      vi /etc/dracut.conf
      ```

      ```output
      add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
      ```

    - For Ubuntu/Debian-based images:

      ```bash
      vi /etc/initramfs-tools/modules
      ```

      ```output
      hv_vmbus
      hv_netvsc
      hv_storvsc
      ```

5. Rebuild the initial RAM disk image for the affected kernel by following the steps in [Regenerate missing initramfs manually](kernel-related-boot-issues.md#missing-initramfs-manual).

## Verify the network driver is functional after a fresh boot or reboot

To confirm that the Hyper-V network driver (`hv_netvsc`) is active and functional, check the system logs for the following entry:

```output
hv_vmbus: registering driver hv_netvsc
```

This message indicates that the driver registration process has started. If no further errors are reported after it, the driver loaded successfully, the synthetic network interface provided by Hyper-V was detected, and the driver is properly handling the network connection.
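
For example, the kernel ring buffer can be searched for the registration message. This is a sketch; `dmesg` might require root privileges, and on systemd-based images `journalctl -k` shows the same kernel messages:

```shell
# Look for the hv_netvsc registration message after boot.
dmesg 2>/dev/null | grep -i 'registering driver hv_netvsc' || echo "registration message not found"
```
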

## References

- [Create and upload a generic Linux Virtual Machine to Azure](/azure/virtual-machines/linux/create-upload-generic)
- [A Linux VM doesn't start correctly with kernel 3.10.0-514.16 after an LIS upgrade](linux-vm-not-start-kernel-lis.md)

[!INCLUDE [Azure Help Support](../../../includes/azure-help-support.md)]