# Building an NVIDIA blueprint on OCI: Digital Twins for Fluid Simulation

This tutorial explains how to run the NVIDIA Omniverse Digital Twins for Fluid Simulation blueprint on OCI. The example shows how to study the aerodynamics (drag, downforce, etc.) of a car in a virtual wind tunnel.


## Prerequisites

To run this blueprint, you will need:
- an OCI tenancy with sufficient service limits to use a BM.GPU.L40S-NC.4 shape
- an NVIDIA account for the NGC Catalog
- an NGC API key to download images from the NGC Catalog


## Instance configuration

### Compute part

In the OCI Console, create an instance using:
* a BM.GPU.L40S-NC.4 shape (a bare metal server with 4 x NVIDIA L40S GPUs)
* a native Canonical Ubuntu 22.04 image (the NVIDIA drivers will be installed afterwards)
* a 200 GB boot volume

### Network part

Running this blueprint requires opening several ports for different protocols so that the client machine (where the blueprint will be accessed through a web browser) can communicate with the instance where the blueprint is deployed. In the Virtual Cloud Network where the instance resides, go to the default security list and add the following ingress rules (a scripted example is shown after the list):
- web:
  - 5273/tcp
  - 1024/udp
- kit:
  - 8011/tcp
  - 8111/tcp
  - 47995-48012/tcp
  - 47995-48012/udp
  - 49000-49007/tcp
  - 49100/tcp
  - 49000-49007/udp
- other:
  - 1024/udp
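
If you prefer to script this step, the same rules can be added with the OCI CLI. The sketch below is a minimal example covering only the two web ports, assuming a configured CLI and that `$SECLIST_OCID` holds the OCID of the default security list; note that `update` replaces the whole ingress rule list, so merge these entries with your existing rules before applying.
```
# Ingress rules for the web ports (protocol "6" = TCP, "17" = UDP)
cat > ingress.json <<'EOF'
[
  {"protocol": "6", "source": "0.0.0.0/0",
   "tcpOptions": {"destinationPortRange": {"min": 5273, "max": 5273}}},
  {"protocol": "17", "source": "0.0.0.0/0",
   "udpOptions": {"destinationPortRange": {"min": 1024, "max": 1024}}}
]
EOF

# Apply the merged rule list to the default security list
oci network security-list update --security-list-id $SECLIST_OCID \
  --ingress-security-rules file://ingress.json --force
```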

### Installing NVIDIA drivers

When the instance is up, a specific version of the NVIDIA drivers can be installed. Beforehand, we must install the additional packages needed to build them:
```
sudo apt install -y build-essential
```
Then we can download the NVIDIA driver version 535.161.07 available [here](https://www.nvidia.com/fr-fr/drivers/details/220428/) and install it.
```
wget https://fr.download.nvidia.com/XFree86/Linux-x86_64/535.161.07/NVIDIA-Linux-x86_64-535.161.07.run
chmod +x NVIDIA-Linux-x86_64-535.161.07.run
sudo ./NVIDIA-Linux-x86_64-535.161.07.run
```
The instance must be rebooted for the changes to take effect.
```
sudo reboot
```
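
After the reboot, you can verify that the driver is loaded and that the four L40S GPUs are visible; the output should report driver version 535.161.07:
```
nvidia-smi
```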

### Installing additional packages

As this is a native Ubuntu image, a few additional packages must be installed to clone the repository and to add and configure Docker.
```
sudo apt install -y git-lfs
sudo apt install -y docker.io
sudo apt install -y docker-compose-v2
sudo apt install -y docker-buildx
```
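
If the repository stores large binary assets with Git LFS, it may also be necessary to enable LFS for your user once before cloning; this is the standard one-time setup command:
```
git lfs install
```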

### Installing and configuring NVIDIA Container Toolkit

First of all, we must add the NVIDIA Container Toolkit repository to the repository list:
```
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```
Then, we can update the list of packages from all repositories, install the `nvidia-container-toolkit` package, and configure Docker.
```
sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
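
To confirm that containers can now access the GPUs, you can run `nvidia-smi` inside a throwaway container. The CUDA image tag below is just an example of a public CUDA base image; any recent tag for Ubuntu 22.04 will do:
```
sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```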

## Downloading and building the project

At this stage, you must set your NGC API key as an environment variable to be able to download the right content from the NGC Catalog:
```
echo "export NGC_API_KEY=nvapi-xxx" >> ~/.bashrc
source ~/.bashrc
```
where `nvapi-xxx` is your own NGC API key.
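
Depending on which images the build pulls from the NGC registry, Docker may also need to authenticate against `nvcr.io`. A typical NGC login (the username is literally `$oauthtoken`, quoted so the shell does not expand it) looks like this:
```
echo "$NGC_API_KEY" | sudo docker login nvcr.io --username '$oauthtoken' --password-stdin
```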

Once done, we can clone the repository and build the images:
```
git clone https://github.com/NVIDIA-Omniverse-blueprints/digital-twins-for-fluid-simulation.git $HOME/digital_twins_for_fluid_simulation
cd $HOME/digital_twins_for_fluid_simulation
./build-docker.sh
```
Two files then have to be modified: `.env` and `compose.yml`.

First, create a copy of the environment file template:
```
cp .env_template .env
```
and set `ZMQ_IP` to the instance's private IP address:
```
ZMQ_IP=XXX.XXX.XXX.XXX
```

Then, modify the `compose.yml` file in 3 places (a consolidated sketch of the result follows the list):
1. In the `kit` section, replace the `network_mode: host` line with the following block:
```
networks:
  outside:
    ipv4_address: XXX.XXX.XXX.XXX
```
and set `ipv4_address` to the instance public IP address.

2. In the `aeronim` section, comment out the `network_mode: host` line.

3. At the bottom of the file, add the following block:
```
networks:
  outside:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: XXX.XXX.XXX.0/24
```
where the subnet is your public IP address with the last octet replaced by 0.
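
To see how the three changes fit together, here is a sketch of the modified sections of `compose.yml`, using the made-up public IP address 203.0.113.45 purely for illustration (only the changed parts are shown; every other line of the file stays as generated):
```
services:
  kit:
    # ... existing kit settings unchanged ...
    networks:
      outside:
        ipv4_address: 203.0.113.45

  aeronim:
    # ... existing aeronim settings unchanged ...
    # network_mode: host   <- commented out

networks:
  outside:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 203.0.113.0/24
```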

## Running the blueprint

To start the digital twin, simply run the following command:
```
sudo docker compose up -d
```
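Because the containers run detached, you can track the startup from the repository directory with standard Docker Compose commands:
```
# Check the state of each service
sudo docker compose ps

# Follow the logs while the blueprint initializes (Ctrl+C to stop following)
sudo docker compose logs -f
```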
The blueprint will take some time to initialize. Expect a minimum of 10 minutes before accessing the GUI in a web browser at `http://XXX.XXX.XXX.XXX:5273`, where `XXX.XXX.XXX.XXX` is the public IP address of the instance. When everything is ready, you should see the sports car in the wind tunnel as in the image below.



You can now interactively modify the car setup (rims, mirrors, spoilers, height, etc.) and visualize its impact on the airflow.

To stop the project, simply run `sudo docker compose down`.


## External links

* [Original NVIDIA GitHub repo](https://github.com/NVIDIA-Omniverse-blueprints/digital-twins-for-fluid-simulation)

## License

Copyright (c) 2025 Oracle and/or its affiliates.

Licensed under the Universal Permissive License (UPL), Version 1.0.

See [LICENSE](https://github.com/oracle-devrel/technology-engineering/blob/main/LICENSE) for more details.