9 changes: 9 additions & 0 deletions ai/ai-ecosystem/README.md
@@ -0,0 +1,9 @@
# Generative AI Ecosystem

Reviewed: 02.07.2025

# Team Publications

## GitHub

- [NVIDIA Omniverse Digital Twin](https://github.com/oracle-devrel/technology-engineering/tree/main/ai/ai-ecosystem/nvidia-omniverse-digital-twin)
35 changes: 35 additions & 0 deletions ai/ai-ecosystem/nvidia-omniverse-digital-twin/LICENSE
@@ -0,0 +1,35 @@
Copyright (c) 2025 Oracle and/or its affiliates.

The Universal Permissive License (UPL), Version 1.0

Subject to the condition set forth below, permission is hereby granted to any
person obtaining a copy of this software, associated documentation and/or data
(collectively the "Software"), free of charge and under any and all copyright
rights in the Software, and any and all patent rights owned or freely
licensable by each licensor hereunder covering either (i) the unmodified
Software as contributed to or provided by such licensor, or (ii) the Larger
Works (as defined below), to deal in both

(a) the Software, and
(b) any piece of software and/or hardware listed in the lrgrwrks.txt file if
one is included with the Software (each a "Larger Work" to which the Software
is contributed by such licensors),

without restriction, including without limitation the rights to copy, create
derivative works of, display, perform, and distribute the Software and make,
use, sell, offer for sale, import, export, have made, and have sold the
Software and the Larger Work(s), and to sublicense the foregoing rights on
either these or other terms.

This license is subject to the following condition:
The above copyright notice and either this complete permission notice or at
a minimum a reference to the UPL must be included in all copies or
substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
111 changes: 111 additions & 0 deletions ai/ai-ecosystem/nvidia-omniverse-digital-twin/README.md
@@ -0,0 +1,111 @@
# Digital Twin Example using NVIDIA Omniverse

This solution demonstrates how to run a digital twin of an automobile in a wind
tunnel to evaluate the aerodynamic effect of modifications to the car's
features.
The digital twin runs on the NVIDIA Omniverse software platform and uses GPU
nodes on Oracle Cloud Infrastructure (OCI) both to visualize the airflow over
the car and to run AI inference that quickly assesses how changes to the car
will affect the airflow.

Reviewed: 02.07.2025

# When to use this asset?

This asset is ideal for developers, educators, or any professional looking to:

- Demonstrate Oracle Cloud capabilities: this is a great demo asset to showcase
OCI's ability to run applications built on the NVIDIA Omniverse framework

# How to use this asset?

## Prerequisites

To run this tutorial, you will need:

* An OCI tenancy with service limits set for GPU-based instances, with a minimum of 2 GPUs available, either:
* NVIDIA A10 for a minimum demonstration
* NVIDIA L40S for optimal visualization performance
* An access key to NVIDIA's NGC Catalog
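
To check whether your tenancy already has GPU capacity, a query along these lines may help (a sketch, not a verified command sequence; the compartment OCID is a placeholder, and exact limit names vary by region and tenancy):

```shell
# Hypothetical sketch: list compute service limits and filter for GPU shapes.
# Requires a configured OCI CLI; replace <compartment-ocid> with your tenancy's root compartment.
oci limits value list \
  --compartment-id <compartment-ocid> \
  --service-name compute \
  --all | grep -i "gpu"
```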

The software setup is described in depth [in the NVIDIA Omniverse Blueprint](https://github.com/NVIDIA-Omniverse-blueprints/digital-twins-for-fluid-simulation).

## Deploying the supporting GPU shape

1. Navigate to "Instances" in the Cloud Console and create a new instance:
   - Select "Canonical Ubuntu 24.04" as the image, and at minimum "VM.GPU.A10.2" as the shape
   - Select a public subnet to place the machine in
   - Note the VCN and subnet used
   - Upload or paste your public SSH key
   - Increase the boot volume size to 150 GB
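
If you prefer the CLI over the Console, the same instance can be launched with something like the following (a hedged sketch; all OCIDs are placeholders, and the image OCID must point at a Canonical Ubuntu 24.04 build available in your region):

```shell
# Sketch only: launch a VM.GPU.A10.2 instance with an enlarged boot volume.
oci compute instance launch \
  --availability-domain <ad-name> \
  --compartment-id <compartment-ocid> \
  --shape VM.GPU.A10.2 \
  --image-id <ubuntu-24.04-image-ocid> \
  --subnet-id <public-subnet-ocid> \
  --assign-public-ip true \
  --ssh-authorized-keys-file ~/.ssh/id_rsa.pub \
  --boot-volume-size-in-gbs 150 \
  --display-name omniverse-digital-twin
```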

2. After the instance has been created, navigate to the VCN that the instance uses and create a new security list under the "Security" tab, using the following settings:
<center><img src="files/subnet.png" alt="Security list settings opening ports for Omniverse kit and web applications" style="max-width:90%; height:auto;" /></center>
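
Once the security list is attached to the subnet and the application is running (step 7 below), you can verify from your workstation that the relevant ports accept connections (a quick check, assuming `nc` is installed locally; the public IP is a placeholder):

```shell
# Reachability check from a client machine once the services are up.
nc -vz <public-ip> 5273    # web front end
nc -vz <public-ip> 49100   # Omniverse kit streaming
```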

## Deploying the Digital Twin

1. SSH into the deployed shape and first enable the NVIDIA container toolkit repository:
```console
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```

2. Update the package cache and install the required software:
```console
sudo apt update
sudo apt install -y build-essential git-lfs docker-compose-v2
sudo apt install -y nvidia-driver-570-server nvidia-container-toolkit
```

3. Configure the container runtime, add the current user to the Docker group and reboot:
```console
sudo nvidia-ctk runtime configure --runtime=docker
sudo usermod -aG docker ubuntu
sudo reboot
```
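
After the node comes back up, it is worth confirming that both the driver and the container runtime can see the GPUs before building anything (a quick sanity check; the CUDA image tag is only an example):

```shell
# Driver check: should list the node's GPUs (A10 or L40S).
nvidia-smi

# Container runtime check: the same output, from inside a CUDA base image.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```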

4. Open the ZeroMQ ports required for the web app to communicate with the
inferencing backend:
```console
sudo iptables -I INPUT -p tcp -m multiport --dports 5555:5560 -j ACCEPT
sudo iptables -I INPUT -p tcp -m multiport --sports 5555:5560 -j ACCEPT
```
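
Note that these `iptables` rules do not survive a reboot. If the node may be restarted, one option is to persist them with the `iptables-persistent` package (an optional step, not part of the blueprint):

```shell
sudo apt install -y iptables-persistent
# Save the current IPv4/IPv6 rule sets so they are restored at boot.
sudo netfilter-persistent save
```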

5. Log into the NVIDIA container registry using `$oauthtoken` as user and your
NGC token as password. Then clone the digital twin example and build it:
```console
docker login nvcr.io
git clone https://github.com/NVIDIA-Omniverse-blueprints/digital-twins-for-fluid-simulation.git
cd digital-twins-for-fluid-simulation
./build-docker.sh
```
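
If you script the login, the token can be piped in via stdin instead of typed interactively (assuming the token is exported as `NGC_API_KEY`, which the compose file also expects):

```shell
# Non-interactive login; single quotes keep $oauthtoken literal.
export NGC_API_KEY=<your-ngc-token>
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
```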

6. Copy the [configuration script `setup.sh`](./files/setup.sh) to the node and
run it, then start the digital twin:
```console
bash ./setup.sh
docker compose up
```
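
For longer sessions you may prefer to run the stack detached and inspect it separately (standard Docker Compose usage, not specific to this blueprint):

```shell
docker compose up -d     # start the stack in the background
docker compose ps        # list services and their status
docker compose logs -f   # follow logs from all services
```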

7. You should now be able to navigate to your node's public IP on port 5273 in
a browser and evaluate the digital twin:
<center><img src="files/twin_running.png" alt="A digital twin of a car in a wind tunnel, running in NVIDIA Omniverse" style="max-width:90%; height:auto;" /></center>
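
If the page does not load, a quick check from the node itself tells you whether the front end is listening on the web port used above:

```shell
# From the instance: expect an HTTP status code if the web app is up.
curl -sS -o /dev/null -w "%{http_code}\n" http://localhost:5273/
```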


# Acknowledgments

- **Author** - Matthias Wolf (Generative AI Ecosystem Black Belt)

# External links

* [NVIDIA Omniverse Blueprint for Digital Twins for Fluid Simulation](https://github.com/NVIDIA-Omniverse-blueprints/digital-twins-for-fluid-simulation)

# License

Copyright (c) 2025 Oracle and/or its affiliates.

Licensed under the Universal Permissive License (UPL), Version 1.0.

See [LICENSE](https://github.com/oracle-devrel/technology-engineering/blob/main/LICENSE) for more details.
45 changes: 45 additions & 0 deletions ai/ai-ecosystem/nvidia-omniverse-digital-twin/files/setup.sh
@@ -0,0 +1,45 @@
#!/bin/bash
# Determine the node's private and public IP addresses.
PRIVATE_IP=$(ip route get 1 | sed 's/^.*src \([^ ]*\).*$/\1/;q')
PUBLIC_IP=$(curl -s ipinfo.io/ip)

# Replace the loopback address in the environment template with the private IP.
sed -e "s/127.0.0.1/${PRIVATE_IP}/g" .env_template > .env

# Patch compose.yml so the kit app is reachable on the public IP.
cat <<EOF | git apply
diff --git a/compose.yml b/compose.yml
index b89118a..06bed98 100644
--- a/compose.yml
+++ b/compose.yml
@@ -7,7 +7,9 @@ services:
dockerfile: kit-app/Dockerfile
network: host
privileged: true
- network_mode: host
+ networks:
+ outside:
+ ipv4_address: ${PUBLIC_IP}
ports:
- "1024:1024/udp"
- "49100:49100/tcp"
@@ -52,8 +54,8 @@ services:
NGC_API_KEY: "\${NGC_API_KEY}"
network_mode: host
ipc: host
- ports:
- - "8080:8080"
+ # ports:
+ # - "8080:8080"
zmq:
image: "rtdt-zmq-service:latest"
restart: unless-stopped
@@ -73,3 +75,11 @@ services:
volumes:
ov-cache:
ov-local-share:
+
+networks:
+ outside:
+ driver: bridge
+ ipam:
+ driver: default
+ config:
+ - subnet: ${PUBLIC_IP%.*}.0/24
EOF
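
The two IP manipulations in the script are worth unpacking: the `sed` expression keeps only the word following `src` in the output of `ip route get 1`, and the parameter expansion `${PUBLIC_IP%.*}.0/24` drops the last octet to form a /24 subnet. A small self-contained demonstration (the route line and addresses below are sample data, not real output):

```shell
# Sample 'ip route get 1' output line (placeholder addresses).
route_line="1.0.0.0 via 10.0.0.1 dev ens3 src 10.0.0.5 uid 1000"
# Capture the word after 'src' and quit after the first line.
private_ip=$(echo "$route_line" | sed 's/^.*src \([^ ]*\).*$/\1/;q')
echo "$private_ip"            # 10.0.0.5

# Derive a /24 subnet from an IP by stripping the last octet.
public_ip="203.0.113.42"
echo "${public_ip%.*}.0/24"   # 203.0.113.0/24
```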