---
title: Troubleshooting LIS Driver Issues on Linux Virtual Machines in Azure
description: Provides a solution to Azure Linux VM issues because of missing Hyper-V drivers in Azure.
ms.date: 02/24/2025
ms.author: msaenzbo
ms.reviewer: divargas
ms.topic: troubleshooting-general
ms.workload: infrastructure-services
ms.service: azure-virtual-machines
ms.tgt_pltfrm: vm-linux
ms.custom: sap:My VM is not booting, linux-related-content
---

# Troubleshooting LIS driver issues on Linux Virtual Machines in Azure

**Applies to:** :heavy_check_mark: Linux VMs

When you run a Linux virtual machine (VM) on Microsoft Azure, the Hyper-V drivers (also known as Linux Integration Services, or LIS) are essential for proper VM operation. These drivers enable the VM to communicate with the underlying Azure hypervisor. If they're missing or not correctly loaded, the VM might not start for the following reasons:

- Missing disk and network drivers: Azure VMs rely on Hyper-V storage and network drivers (for example, `hv_storvsc` and `hv_netvsc`). Without these drivers, the VM can't detect its OS disk.

- Lack of synthetic device support: Azure VMs use synthetic devices (for example, for storage and networking) that require Hyper-V drivers. Without these devices, the kernel might not recognize critical components.

- Hyper-V bus communication failure: Essential components such as `hv_vmbus` handle communication between the VM and the Azure hypervisor. If this driver is missing, the VM can't initialize correctly.

- Kernel panic or initramfs prompt: Without the necessary drivers, the VM might drop to an initramfs shell because it can't mount the root file system.
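
On a VM that's still accessible, you can quickly check whether the Hyper-V drivers are loaded by listing the kernel modules. This is a minimal check:

```bash
# List the loaded Hyper-V (hv_*) kernel modules
lsmod | grep hv
```

Typical entries include `hv_vmbus`, `hv_storvsc`, `hv_netvsc`, and `hv_utils`. On some Azure-optimized kernels, these drivers are built directly into the kernel, so they don't appear in the `lsmod` output even when they're working correctly.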

## Prerequisites

- Access to the Azure CLI (`az`)
- Access to create a rescue VM
- Serial console access
- Familiarity with Linux commands and system administration

## Symptoms

The following issues can occur because of missing Hyper-V drivers in Azure:

- After you migrate a Linux VM from on-premises to Azure, the VM doesn't start correctly.

- After you disable the Hyper-V drivers and restart the VM, the VM doesn't start.

- If the Hyper-V drivers aren't included in the initramfs, the VM doesn't start.

When you review the serial console logs for various Linux VMs (Red Hat, Oracle, SUSE, or Ubuntu), you might find entries for any of the following common issues.

| Symptom | Description |
|--------------------------------------|------------------------------------------------------------------|
| **VM stuck at dracut/initramfs** | The VM doesn't start and drops into an initramfs shell or emergency mode because of missing storage drivers. |
| **Kernel panic on boot** | The system doesn't respond during startup because of missing critical Hyper-V modules. |
| **Disk not found errors** | The startup process fails and returns error messages such as "cannot find root device" or "unable to mount root filesystem." |
| **No network connectivity** | Even if the VM boots, network interfaces might not be detected. This situation prevents SSH access. |
| **GRUB boot failure** | The system might not load the bootloader because of missing Hyper-V storage drivers. |
| **Slow boot with ACPI errors** | The VM might take a long time to start and might generate ACPI-related warnings that are caused by missing Hyper-V support. |
| **Failure to attach Azure disks** | Azure managed disks might not be recognized or mounted correctly because of missing storage drivers. |
| **Cloud-init or waagent failures** | Azure provisioning tools (`cloud-init` or `waagent`) might not configure the VM correctly. |

Here are log entry examples:

- Output 1

```output
[ 201.568597] dracut-initqueue[351]: Warning: dracut-initqueue: starting timeout scripts
[ 202.086401] dracut-initqueue[351]: Warning: dracut-initqueue: timeout, still waiting for following initqueue hooks:
[ 202.097772] dracut-initqueue[351]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fmapper\x2frootvg-rootlv.sh: "if ! grep -q After=remote-fs-pre.target /run/systemd/generator/systemd-cryptsetup@*.service 2>/dev/null; then
[ 202.128885] dracut-initqueue[351]:     [ -e "/dev/mapper/rootvg-rootlv" ]
[ 202.138322] dracut-initqueue[351]: fi"
[ 202.142466] dracut-initqueue[351]: Warning: dracut-initqueue: starting timeout scripts
[ 202.674872] dracut-initqueue[351]: Warning: dracut-initqueue: timeout, still waiting for following initqueue hooks:
[ 202.692200] dracut-initqueue[351]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fmapper\x2frootvg-rootlv.sh: "if ! grep -q After=remote-fs-pre.target /run/systemd/generator/systemd-cryptsetup@*.service 2>/dev/null; then
[ 202.724308] dracut-initqueue[351]:     [ -e "/dev/mapper/rootvg-rootlv" ]
[ 202.731292] dracut-initqueue[351]: fi"
[ 202.732288] dracut-initqueue[351]: Warning: dracut-initqueue: starting timeout scripts
[ 202.740791] dracut-initqueue[351]: Warning: Could not boot.
         Starting Dracut Emergency Shell...
Warning: /dev/mapper/rootvg-rootlv does not exist

Generating "/run/initramfs/rdsosreport.txt"


Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.


dracut:/#
dracut:/#
dracut:/#
```

- Output 2

```output
Gave up waiting for root file system device. Common problems:
 - Boot args (cat /proc/cmdline)
 - Check rootdelay= (did the system wait long enough?)
 - Missing modules (cat /proc/modules; ls /dev)
ALERT! UUID=143c811b-9b9c-48f3-b0c8-040f6e65f50aa does not exist. Dropping to a shell!


BusyBox v1.27.2 (Ubuntu 1:1.27.2-2ubuntu3.4) built-in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs)
```

If running `cat /proc/partitions` in the dracut shell doesn't display any storage devices, the Hyper-V storage driver `hv_storvsc` is missing or not loaded. Without this driver, the VM can't detect its virtual disks, which leads to a boot failure.

- Output 3

```output
dracut:/# cat /proc/partitions
dracut:/#
dracut:/#
```

## Cause

When you run a Linux virtual machine on Azure, the Hyper-V drivers are crucial for communication with the underlying hypervisor. If these drivers are missing, disabled, or not correctly loaded, the VM might not start. This issue is common if a VM is migrated to Azure from another hypervisor, such as VMware or KVM. In these cases, the necessary Hyper-V drivers (`hv_vmbus`, `hv_storvsc`, `hv_netvsc`, and `hv_utils`) might not be installed or enabled. This situation prevents the VM from detecting storage and network devices.
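
To check whether the drivers are at least available for the installed kernel (as opposed to loaded, or included in the initramfs), you can query the module metadata. This is a minimal sketch; replace `<kernel-version>` with the kernel of the OS that you're inspecting, which matters inside a chroot because `uname -r` reports the rescue VM's kernel:

```bash
# Confirm that the Hyper-V modules ship with the installed kernel.
# An error such as "modinfo: ERROR: Module hv_vmbus not found" means the driver isn't available.
modinfo -k <kernel-version> hv_vmbus hv_storvsc hv_netvsc | grep filename:
```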

## Resolution

If you're experiencing any of these issues, follow the steps in the relevant scenario. The instructions are divided into two scenarios: one for cases in which the Hyper-V modules are disabled, and one for cases in which the Hyper-V modules are missing from the initramfs after a migration.

### Scenario 1: Hyper-V modules disabled

1. Troubleshoot this issue by using a rescue (also known as "repair" or "recovery") VM. Use [vm repair commands](repair-linux-vm-using-azure-virtual-machine-repair-commands.md) to create a repair VM that has a copy of the affected VM's OS disk attached. Mount the copy of the OS file systems to the repair VM by using [chroot](chroot-environment-linux.md).
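
   As a quick reference, the following is a minimal sketch that uses the Azure CLI `vm repair` extension. The resource group, VM name, and credentials are placeholders, not values from your environment:

   ```bash
   # Create a repair VM that has a copy of the affected VM's OS disk attached
   az vm repair create --resource-group MyResourceGroup --name MyVM \
       --repair-username rescueuser --repair-password 'MyRepairPassw0rd!' --verbose

   # After the fix, swap the repaired OS disk back to the original VM and remove the repair VM
   az vm repair restore --resource-group MyResourceGroup --name MyVM --verbose
   ```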

   > [!NOTE]
   > Alternatively, you can create a rescue VM manually by using the Azure portal. For more information, see [Troubleshoot a Linux VM by attaching the OS disk to a recovery VM using the Azure portal](troubleshoot-recovery-disks-portal-linux.md).

2. After the chroot process is finished, go to the `/etc/modprobe.d` directory, and then locate any lines that disable the `hv_` drivers.

   1. Identify the files that disable the `hv_` drivers and the corresponding line numbers by running the following command:

      ```bash
      grep -nr "hv_" /etc/modprobe.d/
      ```

   2. Modify the corresponding file and comment out or delete the `hv_` entries:

      ```bash
      vi /etc/modprobe.d/disable.conf
      ```

      > [!NOTE]
      > - The Linux operating system, not Microsoft, defines the entries that disable drivers. An example of what such entries can look like follows this note.
      > - Replace `disable.conf` with the name of the file in which the `hv_` drivers are disabled.
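
      For illustration only, entries that disable a driver in a `/etc/modprobe.d/*.conf` file typically use the `blacklist` or `install` directives. A hypothetical `disable.conf` might contain:

      ```output
      blacklist hv_vmbus
      install hv_storvsc /bin/false
      ```

      Lines like these prevent the modules from loading; comment them out or delete them.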

3. Rebuild the initial RAMdisk image for your affected kernel by following the steps in [Regenerate missing initramfs manually](/troubleshoot/azure/virtual-machines/linux/kernel-related-boot-issues#missing-initramfs-manual). A quick reference follows this step.
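
   The linked article contains the full procedure. As a hedged quick reference, the typical commands are as follows; `<kernel-version>` is a placeholder for the affected kernel:

   ```bash
   # RHEL-based images (dracut); on SLES, the image path is /boot/initrd-<kernel-version>
   dracut -f /boot/initramfs-<kernel-version>.img <kernel-version>

   # Ubuntu/Debian-based images (initramfs-tools)
   update-initramfs -u -k <kernel-version>
   ```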

### Scenario 2: Hyper-V modules missing from the initramfs after an OS migration from on-premises to Azure

1. Troubleshoot this issue by using a rescue VM. Use [vm repair commands](repair-linux-vm-using-azure-virtual-machine-repair-commands.md) to create a rescue VM that has a copy of the affected VM's OS disk attached. Mount the copy of the OS file systems to the repair VM by using [chroot](chroot-environment-linux.md).

   > [!NOTE]
   > Alternatively, you can create a rescue VM manually by using the Azure portal. For more information, see [Troubleshoot a Linux VM by attaching the OS disk to a recovery VM using the Azure portal](troubleshoot-recovery-disks-portal-linux.md).

2. After the chroot process is finished, check whether the `hv_` modules are missing from the initramfs of the current kernel:

   - For RHEL-based images:

     ```bash
     lsinitrd /boot/initramfs-<kernel-version>.img | grep -i hv_
     ```

   - For SLES-based images:

     ```bash
     lsinitrd /boot/initrd-<kernel-version> | grep -i hv_
     ```

   - For Ubuntu/Debian-based images:

     ```bash
     lsinitramfs /boot/initrd.img-<kernel-version> | grep -i hv_
     ```

   > [!NOTE]
   > The Hyper-V storage driver, `hv_storvsc` or others, might not appear in the initramfs because it's sometimes built directly into the kernel. This is especially true in Azure-optimized kernels. In such cases, the system loads the driver automatically at startup to ensure storage functionality. If the VM starts and detects storage correctly, no further action is necessary. To check whether a driver is built into the kernel, see the sketch after this note.
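
   For illustration, the `modules.builtin` file lists the drivers that are compiled into the kernel image itself. Replace `<kernel-version>` with the affected kernel:

   ```bash
   # Drivers listed here are built into the kernel and never appear in the initramfs or lsmod output
   grep -i hv_ /lib/modules/<kernel-version>/modules.builtin
   ```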

3. Edit the `/etc/dracut.conf` or `/etc/initramfs-tools/modules` file, depending on your Linux distribution. Then, add the following lines to the file:

   - For RHEL/SLES-based images:

     ```bash
     vi /etc/dracut.conf
     ```

     ```output
     add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
     ```

   - For Ubuntu/Debian-based images:

     ```bash
     vi /etc/initramfs-tools/modules
     ```

     ```output
     hv_vmbus
     hv_netvsc
     hv_storvsc
     ```

4. Rebuild the initial RAMdisk image for your affected kernel by following the steps in [Regenerate missing initramfs manually](/troubleshoot/azure/virtual-machines/linux/kernel-related-boot-issues#missing-initramfs-manual). Then, verify the result as shown after this step.
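
   After the image is rebuilt, you can rerun the appropriate `lsinitrd` or `lsinitramfs` command from step 2 to confirm that the `hv_` drivers are now included. For example, on a RHEL-based image:

   ```bash
   # The output should now list entries such as hv_vmbus, hv_storvsc, and hv_netvsc
   lsinitrd /boot/initramfs-<kernel-version>.img | grep -i hv_
   ```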

## More resources

- [Create and Upload a Generic Linux Virtual Machine to Azure](/azure/virtual-machines/linux/create-upload-generic)

- [A Linux VM doesn't start correctly with kernel 3.10.0-514.16 after an LIS upgrade](/troubleshoot/azure/virtual-machines/linux/linux-vm-not-start-kernel-lis)

## FAQ

### If I'm experiencing an issue, such as a connectivity problem, how can I make sure that the network driver, `hv_netvsc`, is working as expected after a fresh start or restart of the system?

To verify that `hv_netvsc` (the Hyper-V network driver) is active and functional after a system restart, check the system logs, and look for the following entry:

```output
hv_vmbus: registering driver hv_netvsc
```

This message indicates that the driver registration process started. If no additional errors are reported after this line, the driver is successfully loaded and in a functional state. Therefore, the synthetic network interface that's provided by Hyper-V was detected, and the driver is correctly handling the network connection.
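
For illustration, one way to search for this entry (exact log locations vary by distribution):

```bash
# Search the kernel ring buffer for the driver registration message
dmesg | grep -i hv_netvsc

# Or, on systemd-based systems, search the logs from the current boot
journalctl -b | grep -i hv_netvsc
```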

[!INCLUDE [Azure Help Support](../../../includes/azure-help-support.md)]