File: content/getting_started/try_opennebula/opennebula_sandbox_deployment/deploy_opennebula_onprem_with_poc_iso.md
After the route exists, the workstation should be able to reach the virtual machines running on the frontend without further configuration.

## GPU Configuration
If the OpenNebula evaluation involves GPU management, the GPU should be configured in pass-through mode. For the detailed process, check [this guide from the official documentation]({{% relref "/product/cluster_configuration/hosts_and_clusters/nvidia_gpu_passthrough" %}}). Overall, a GPU configuration in OpenNebula consists of two main stages:
- Host preparation and driver configuration
- OpenNebula settings for PCI pass-through devices
### Host Configuration
To prepare the OpenNebula host, complete the following steps:
- Check that IOMMU is enabled on the host using the following command:
```default
...
```
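The exact command is elided in this diff. As a sketch, one common way to confirm that the IOMMU is active is to look for populated IOMMU groups in sysfs (the sysfs path and this particular check are assumptions, not taken from the original guide):

```shell
# Sketch only: check whether the kernel has created any IOMMU groups.
# An empty or missing /sys/kernel/iommu_groups usually means IOMMU is off
# (e.g. intel_iommu=on / amd_iommu=on is missing from the kernel command line).
if [ -d /sys/kernel/iommu_groups ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
    echo "IOMMU enabled"
else
    echo "IOMMU not enabled"
fi
```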
At the next step, the GPU has to be bound to the `vfio` driver. For this, perform the following:

```default
...
Kernel driver in use: vfio-pci
```
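One common way to bind a GPU to `vfio-pci` at boot is a `modprobe` option file. This fragment is a generic sketch, not taken from the original guide; the PCI vendor:device ID is a placeholder to be replaced with the value reported by `lspci -nn` for the GPU:

```default
# /etc/modprobe.d/vfio.conf  (example ID; replace with the GPU's vendor:device pair)
options vfio-pci ids=10de:1db6
# Make sure vfio-pci claims the device before the nvidia driver loads
softdep nvidia pre: vfio-pci
```

After changing the file, the initramfs usually has to be regenerated and the host rebooted for the binding to take effect.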
#### VFIO Device Ownership
For OpenNebula to manage the GPU, the VFIO device files in `/dev/vfio/` must be owned by the `root:kvm` user and group. This is achieved by creating a `udev` rule.
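As a sketch (the rule file name and exact match keys are assumptions, not taken from the original guide), such a rule could look like:

```default
# /etc/udev/rules.d/10-vfio.rules  (file name is an example)
SUBSYSTEM=="vfio", OWNER="root", GROUP="kvm"
```

After adding the rule, reload it with the standard udev commands `udevadm control --reload-rules && udevadm trigger`.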
Once the rule is in place, the device ownership can be verified:

```default
# ls -la /dev/vfio/
crw-rw-rw- 1 root kvm 509, 0 Oct 16 10:00 85
```
### OpenNebula Configuration
To make the GPUs available in OpenNebula, configure the PCI probe on the front-end node to monitor NVIDIA devices.
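As a sketch, on recent OpenNebula versions the PCI probe filter lives in a file such as `/var/lib/one/remotes/etc/im/kvm-probes.d/pci.conf` on the front-end (path and keys should be checked against the installed version; `10de` is NVIDIA's PCI vendor ID):

```default
# /var/lib/one/remotes/etc/im/kvm-probes.d/pci.conf  (path is an assumption)
# Match all NVIDIA devices by vendor ID
:filter: '10de:*'
:short_address: []
:device_name: []
```

After editing, the probes have to be synced to the hosts (e.g. `onehost sync -f`) for the change to take effect.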
After a few moments, you can check if the GPU is being monitored correctly by showing the host information (`onehost show <HOST_ID>`). The GPU should appear in the `PCI DEVICES` section.

### VM with GPU Instantiation
To instantiate a VM with a GPU, log in to the OpenNebula GUI and navigate to the VMs tab. Click “Create”, then select one of the VM templates. On the next screen, enter the VM name and click “Next”.
In the dropdown menu, select the available GPU device which will be attached to the VM.
Click the “Finish” button to start the VM instantiation. After a while, the VM will be instantiated and can be used.
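The same attachment can also be expressed directly in a VM template with a `PCI` attribute. The fragment below is a sketch; the vendor/class values are examples, and the exact attribute set should be checked against the PCI pass-through documentation:

```default
# VM template fragment: request any NVIDIA (vendor 10de) 3D controller (class 0302)
PCI = [
  VENDOR = "10de",
  CLASS  = "0302"
]
```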

### vLLM Appliance Validation
The vLLM appliance is available through the OpenNebula Marketplace. Follow the steps from [this guide from the official documentation]({{% relref "/solutions/deployment_blueprints/ai-ready_opennebula/llm_inference_certification" %}}). To download the vLLM appliance and instantiate it with a GPU in passthrough mode, perform the following steps:
1. Go to the Storage -> Apps section.
   Search for the vLLM appliance and import it, selecting the Datastore where the image will be saved.
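For an unattended setup, the import step above can also be sketched on the CLI; `onemarketapp export` is the standard command for downloading a marketplace app into a datastore, while the appliance name and datastore below are assumptions:

```default
# Find the appliance ID, then export it into an image datastore (names are examples)
onemarketapp list | grep -i vllm
onemarketapp export <APP_ID> vLLM --datastore default
```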