
Commit b42c296

Revise GPU configuration details in documentation
Updated GPU configuration instructions for OpenNebula, including links to official documentation and improved clarity on the setup process.
1 parent bd6b736 commit b42c296

File tree: 1 file changed, +11 -10 lines

content/getting_started/try_opennebula/opennebula_sandbox_deployment/deploy_opennebula_onprem_with_poc_iso.md

Lines changed: 11 additions & 10 deletions
@@ -431,13 +431,14 @@ On a workstation with access to the frontend, a local route to the virtual net c
 
 After the route exists, the workstation should be able to reach the virtual machines running on the frontend without further configuration.
 
-## GPU Configuration - Host
+## GPU Configuration
 
-If the OpenNebula evaluation involves GPU management, GPU should be configured in pass-through mode. For the detailed process check [our official documentation](https://docs.opennebula.io/7.0/product/cluster_configuration/hosts_and_clusters/nvidia_gpu_passthrough/)
-Overall, a GPU configuration in OpenNebula consists from 2 main stages:
+If the OpenNebula evaluation involves GPU management, the GPU should be configured in pass-through mode. For the detailed process, check [this guide from the official documentation]({{% relref "/product/cluster_configuration/hosts_and_clusters/nvidia_gpu_passthrough" %}}). Overall, a GPU configuration in OpenNebula consists of two main stages:
 - Host preparation and driver configuration
 - OpenNebula settings for PCI pass-through devices
 
+### Host Configuration
+
 To prepare the OpenNebula host, complete the following steps:
 - Check that IOMMU is enabled on the host using the following command:
 ```default
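# Hedged illustration only: the exact command sits in the unchanged lines that
# this hunk truncates, so the check below is an assumption, not the guide's command.
dmesg | grep -e DMAR -e IOMMU
# Once the IOMMU is active, device groups should be listed under:
ls /sys/kernel/iommu_groups/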
@@ -479,7 +480,7 @@ At the next step GPU has to be bound to the vfio driver. For this, perform the f
    Kernel driver in use: vfio-pci
 ```
 
-### VFIO Device Ownership
+#### VFIO Device Ownership
 
 For OpenNebula to manage the GPU, the VFIO device files in `/dev/vfio/` must be owned by the `root:kvm` user and group. This is achieved by creating a `udev` rule.
 
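The rule itself lives in the unchanged lines that follow this hunk; as a minimal sketch under assumptions (the file name `99-vfio.rules` and the mode are placeholders, not values from the guide), such a `udev` rule usually amounts to a single line:

```default
# Hypothetical sketch, not the rule shipped with the guide
# /etc/udev/rules.d/99-vfio.rules
SUBSYSTEM=="vfio", OWNER="root", GROUP="kvm", MODE="0660"
```

Reloading the rules with `udevadm control --reload-rules` followed by `udevadm trigger` should then apply the ownership to the existing `/dev/vfio/*` nodes.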
@@ -512,9 +513,7 @@ For OpenNebula to manage the GPU, the VFIO device files in `/dev/vfio/` must be
 # ls -la /dev/vfio/
 crw-rw-rw- 1 root kvm 509, 0 Oct 16 10:00 85
 
-## GPU Configuration - OpenNebula
-
-### Monitoring PCI Devices
+### OpenNebula Configuration
 
 To make the GPUs available in OpenNebula, configure the PCI probe on the front-end node to monitor NVIDIA devices.
 
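The concrete snippet sits in the unchanged lines around this hunk; as a hedged sketch of what a PCI probe filter for NVIDIA devices typically looks like on a stock KVM front-end (the path and values below are assumptions, not copied from the guide):

```default
# Hypothetical sketch: PCI probe filter on the front-end
# /var/lib/one/remotes/etc/im/kvm-probes.d/pci.conf
:filter: '10de:*'        # vendor:device, 10de is the NVIDIA vendor ID
:short_address: []
:device_name: []
```

After editing, syncing the probes to the hosts (for example with `onehost sync --force`) should make the devices appear in monitoring.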
@@ -534,7 +533,7 @@ To make the GPUs available in OpenNebula, configure the PCI probe on the front-e
 
 After a few moments, you can check if the GPU is being monitored correctly by showing the host information (`onehost show <HOST_ID>`). The GPU should appear in the `PCI DEVICES` section.
 
-## VM with GPU instantiation
+### VM with GPU instantiation
 To instantiate a VM with a GPU, log in to the OpenNebula GUI and navigate to the VMs tab. Click “Create”, then select one of the VM templates. On the next screen, enter the VM name and click “Next”.
 
 ![VM Instantiation](/images/ISO/06-vm-instantiate-1.png)
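For reference, the same PCI request can also be expressed directly in a VM template; this is a hedged sketch (the vendor-only match is an assumption, the guide itself walks through the GUI):

```default
# Hypothetical template fragment requesting any NVIDIA device in passthrough
PCI = [
  VENDOR = "10de"
]
```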
@@ -549,8 +548,10 @@ In the dropdown menu select available GPU device which will be attached to the V
 
 Click the “Finish” button to start VM instantiation. After a while, the VM will be instantiated and may be used.
 
-## vLLM appliance validation
-The vLLM appliance is available through the OpenNebula Marketplace. Follow steps from the official documentation page - https://docs.opennebula.io/7.0/solutions/deployment_blueprints/ai-ready_opennebula/llm_inference_certification/. To download vLLM appliance and instantiate with a GPU in passthrough mode, the following steps have to be performed:
+### vLLM appliance validation
+
+The vLLM appliance is available through the OpenNebula Marketplace. Follow the steps from [this guide from the official documentation]({{% relref "/solutions/deployment_blueprints/ai-ready_opennebula/llm_inference_certification" %}}). To download the vLLM appliance and instantiate it with a GPU in passthrough mode, perform the following steps:
+
 1. Go to Storage -> Apps section.
    Search for the vLLM appliance and import it. Select the Datastore where the image will be saved.
 
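The remaining steps continue in the GUI; as a hedged CLI equivalent (the appliance name, datastore ID, and template name are placeholders, not values from the guide), the import and instantiation could look roughly like this:

```default
# Hypothetical CLI sketch, names and IDs are placeholders
onemarketapp list | grep -i vllm                                  # locate the appliance
onemarketapp export "<APPLIANCE_NAME>" vllm --datastore <IMAGE_DS_ID>
onetemplate instantiate "<VLLM_TEMPLATE_ID>" --name vllm-gpu
```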
