diff --git a/ai/ai-ecosystem/README.md b/ai/ai-ecosystem/README.md new file mode 100644 index 000000000..119bbfd76 --- /dev/null +++ b/ai/ai-ecosystem/README.md @@ -0,0 +1,9 @@ +# Generative AI Ecosystem + +Reviewed: 02.07.2025 + +# Team Publications + +## GitHub + +- [NVIDIA Omniverse Digital Twin](https://github.com/oracle-devrel/technology-engineering/tree/main/ai/ai-ecosystem/nvidia-omniverse-digital-twin) diff --git a/ai/ai-ecosystem/nvidia-omniverse-digital-twin/LICENSE b/ai/ai-ecosystem/nvidia-omniverse-digital-twin/LICENSE new file mode 100644 index 000000000..46c0c79d9 --- /dev/null +++ b/ai/ai-ecosystem/nvidia-omniverse-digital-twin/LICENSE @@ -0,0 +1,35 @@ +Copyright (c) 2025 Oracle and/or its affiliates. + +The Universal Permissive License (UPL), Version 1.0 + +Subject to the condition set forth below, permission is hereby granted to any +person obtaining a copy of this software, associated documentation and/or data +(collectively the "Software"), free of charge and under any and all copyright +rights in the Software, and any and all patent rights owned or freely +licensable by each licensor hereunder covering either (i) the unmodified +Software as contributed to or provided by such licensor, or (ii) the Larger +Works (as defined below), to deal in both + +(a) the Software, and +(b) any piece of software and/or hardware listed in the lrgrwrks.txt file if +one is included with the Software (each a "Larger Work" to which the Software +is contributed by such licensors), + +without restriction, including without limitation the rights to copy, create +derivative works of, display, perform, and distribute the Software and make, +use, sell, offer for sale, import, export, have made, and have sold the +Software and the Larger Work(s), and to sublicense the foregoing rights on +either these or other terms. 
+ +This license is subject to the following condition: +The above copyright notice and either this complete permission notice or at +a minimum a reference to the UPL must be included in all copies or +substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/ai/ai-ecosystem/nvidia-omniverse-digital-twin/README.md b/ai/ai-ecosystem/nvidia-omniverse-digital-twin/README.md new file mode 100644 index 000000000..eef64f626 --- /dev/null +++ b/ai/ai-ecosystem/nvidia-omniverse-digital-twin/README.md @@ -0,0 +1,111 @@ +# Digital Twin Example using NVIDIA Omniverse + +This solution demonstrates how to run a digital twin of an automobile in a wind +tunnel to evaluate how modifications to the car's features affect its +aerodynamics. +The digital twin runs on the NVIDIA Omniverse software platform and uses GPU +nodes on Oracle Cloud Infrastructure (OCI) to visualize the airflow over the +car, as well as AI inference to quickly assess how changes to the car will +affect the airflow. + +Reviewed: 02.07.2025 + +# When to use this asset? + +This asset is ideal for developers, educators, or any professional looking to: + +- Demonstrate Oracle Cloud capabilities: This is a great demo asset to showcase + the ability of OCI to run applications built on the NVIDIA Omniverse framework + +# How to use this asset? 
+ +## Prerequisites + +To run this tutorial, you will need: + +* An OCI tenancy with limits set for GPU-based instances, with a minimum of 2 GPUs available, either: + * NVIDIA A10 for a minimal demonstration + * NVIDIA L40S for optimal visualization performance +* An access key for NVIDIA's NGC Catalog + +The software setup is described in depth [in the NVIDIA Omniverse Blueprint](https://github.com/NVIDIA-Omniverse-blueprints/digital-twins-for-fluid-simulation). + +## Deploying the supporting GPU shape + +1. Navigate to "Instances" in the Cloud Console and create a new instance: + - Select "Canonical Ubuntu 24.04" as the image and, at minimum, "VM.GPU.A10.2" as the shape + - Select a public subnet to place the machine in + - Note the VCN and subnet used + - Upload or paste your public SSH key + - Increase the boot volume size to 150 GB + +2. After the instance has been created, navigate to the VCN that the instance uses and create a new security list under the "Security" tab. Use the following settings: +
*Screenshot: security list settings opening the ports for the Omniverse Kit and web applications*
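For reference, the ingress rules in the screenshot above can be sketched as an OCI security-list fragment. Port 5273 (the web frontend) and ports 5555-5560 (ZeroMQ) both appear later in this guide; treat the open-to-the-world source CIDR and the exact port list as assumptions and take the authoritative values from the NVIDIA blueprint documentation.

```json
[
  {
    "description": "Web frontend of the digital twin (port used later in this guide)",
    "protocol": "6",
    "source": "0.0.0.0/0",
    "tcpOptions": { "destinationPortRange": { "min": 5273, "max": 5273 } }
  },
  {
    "description": "ZeroMQ ports between web app and inference backend (assumed to need external exposure)",
    "protocol": "6",
    "source": "0.0.0.0/0",
    "tcpOptions": { "destinationPortRange": { "min": 5555, "max": 5560 } }
  }
]
```

In OCI security rules, `"protocol": "6"` is the IANA protocol number for TCP. For anything beyond a demo, narrow `source` to the CIDR ranges you actually connect from.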
+ +## Deploying the Digital Twin + +1. SSH into the deployed shape and first enable the NVIDIA container toolkit repository: + ```console + curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \ + && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \ + sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \ + sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list + ``` + +2. Update the package cache and install the required software: + ```console + sudo apt update + sudo apt install -y build-essential git-lfs docker-compose-v2 + sudo apt install -y nvidia-driver-570-server nvidia-container-toolkit + ``` + +3. Configure the container runtime, add the current user to the Docker group and reboot: + ```console + sudo nvidia-ctk runtime configure --runtime=docker + sudo usermod -aG docker ubuntu + sudo reboot + ``` + +4. Open the ZeroMQ ports required for the web app to communicate with the + inferencing backend: + ```console + sudo iptables -I INPUT -p tcp -m multiport --dports 5555:5560 -j ACCEPT + sudo iptables -I INPUT -p tcp -m multiport --sports 5555:5560 -j ACCEPT + ``` + +5. Log into the NVIDIA container registry using `$oauthtoken` as user and your + NGC token as password. Then clone the digital twin example and build it: + ```console + docker login nvcr.io + git clone https://github.com/NVIDIA-Omniverse-blueprints/digital-twins-for-fluid-simulation.git + cd digital-twins-for-fluid-simulation + ./build-docker.sh + ``` + +6. Copy the [configuration script `setup.sh`](./files/setup.sh) to the node and + run it, then start the digital twin: + ```console + bash ./setup.sh + docker compose up + ``` + +7. You should now be able to navigate to your node's public IP, on port 5273 in + a browser and evaluate the digital twin: +
*Screenshot: a digital twin of a car in a wind tunnel, running in NVIDIA Omniverse*
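The `sed` rewrite in step 1 of the deployment above can be sanity-checked locally before piping anything into `sudo tee` — here on a single representative line rather than the full upstream list file:

```shell
# One representative line of the upstream nvidia-container-toolkit.list file
line='deb https://nvidia.github.io/libnvidia-container/stable/deb/amd64 /'

# Same substitution as in step 1: inject the signed-by keyring option
echo "$line" | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g'
# → deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://nvidia.github.io/libnvidia-container/stable/deb/amd64 /
```

Using `#` as the `sed` delimiter avoids having to escape the slashes in the URL.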
+ + +# Acknowledgments + +- **Author** - Matthias Wolf (Generative AI Ecosystem Black Belt) + +# External links + +* [NVIDIA Omniverse Blueprint for Digital Twins for Fluid Simulation](https://github.com/NVIDIA-Omniverse-blueprints/digital-twins-for-fluid-simulation) + +# License + +Copyright (c) 2025 Oracle and/or its affiliates. + +Licensed under the Universal Permissive License (UPL), Version 1.0. + +See [LICENSE](https://github.com/oracle-devrel/technology-engineering/blob/main/LICENSE) for more details. diff --git a/ai/ai-ecosystem/nvidia-omniverse-digital-twin/files/setup.sh b/ai/ai-ecosystem/nvidia-omniverse-digital-twin/files/setup.sh new file mode 100644 index 000000000..1b5225525 --- /dev/null +++ b/ai/ai-ecosystem/nvidia-omniverse-digital-twin/files/setup.sh @@ -0,0 +1,45 @@ +PRIVATE_IP=$(ip route get 1 | sed 's/^.*src \([^ ]*\).*$/\1/;q') +PUBLIC_IP=$(curl ipinfo.io/ip) + +sed -e "s/127.0.0.1/${PRIVATE_IP}/g" .env_template > .env + +cat <
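As a side note, the address extraction at the top of `setup.sh` can be checked without a live interface by feeding its `sed` expression a canned `ip route get 1` line (the interface name and addresses below are made up):

```shell
# A canned example of what `ip route get 1` prints (hypothetical values)
route_line='1.0.0.0 via 10.0.0.1 dev ens3 src 10.0.0.5 uid 1000'

# Same sed expression as in setup.sh: keep only the word after "src",
# then quit after the first line
private_ip=$(echo "$route_line" | sed 's/^.*src \([^ ]*\).*$/\1/;q')
echo "$private_ip"   # → 10.0.0.5
```

The back-reference `\1` keeps only the captured address, and `q` stops `sed` after the first line in case the routing output spans several.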