Commit 98ed011

Merge branch 'main' into expensevalidatoros
2 parents f15b0db + ec71e84 commit 98ed011

23 files changed: +646 −14 lines changed

23 files changed

+646
-14
lines changed

ai/generative-ai-service/sentiment+categorization/files/README.md

Lines changed: 53 additions & 7 deletions
@@ -1,4 +1,4 @@
-# Batch Message Analysis and Categorization Demo
+# Customer-Agent Conversation Analysis and Categorization Demo
 
 This demo showcases an AI-powered solution for analyzing batches of customer messages, categorizing them into hierarchical levels, extracting sentiment scores, and generating structured reports.
 
 ## Key Features
@@ -16,12 +16,33 @@ This demo showcases an AI-powered solution for analyzing batches of customer messages
 * Customer messages should be stored in one or more CSV files within a folder named `data`.
 * Each CSV file should contain a column with the message text.
 
+## Python Version
+This project requires **Python 3.13** or later. You can check your current Python version by running:
+```
+python --version
+```
+or
+```
+python3 --version
+```
+
 ## Getting Started
 To run the demo, follow these steps:
 1. Clone the repository using `git clone`.
-2. Place your CSV files containing customer messages in the `data` folder.
-3. Install dependencies using `pip install -r requirements.txt`.
-4. Run the application using `streamlit run app.py`.
+2. *(Optional but recommended)* Create and activate a Python virtual environment:
+   - On Windows:
+     ```
+     python -m venv venv
+     venv\Scripts\activate
+     ```
+   - On macOS/Linux:
+     ```
+     python3 -m venv venv
+     source venv/bin/activate
+     ```
+3. Place your CSV files containing customer messages in the `data` folder. Ensure each file includes a column with the message text.
+4. Install dependencies using `pip install -r requirements.txt`.
+5. Run the application using `streamlit run app.py`.
 
 ## Example Use Cases
 * Analyze customer feedback from surveys, reviews, or social media platforms to identify trends and patterns.
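The CSV prerequisite above can be sanity-checked before launching the app; a minimal sketch (the `data` folder comes from the README, while the column name `message` is an assumption — adjust it to match your files):

```python
import csv
from pathlib import Path


def check_data_folder(data_dir: str = "data", text_column: str = "message") -> list[str]:
    """Return the names of CSV files in data_dir whose header contains text_column."""
    valid = []
    for csv_file in Path(data_dir).glob("*.csv"):
        with open(csv_file, newline="", encoding="utf-8") as f:
            header = next(csv.reader(f), [])  # first row = column names
        if text_column in header:
            valid.append(csv_file.name)
    return sorted(valid)
```

Running `check_data_folder()` before `streamlit run app.py` makes it easy to spot files that are missing the expected message column.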
@@ -34,9 +55,34 @@ To run the demo, follow these steps:
 * All aspects of the demo, including:
 + Hierarchical categorization
 + Sentiment analysis
-+ Structured report generation
-are powered by GenAI, ensuring accurate and efficient analysis of customer messages.
++ Structured report generation are powered by GenAI, ensuring accurate and efficient analysis of customer messages.
+
+
+## Project Structure
+
+The repository is organized as follows:
 
+```plaintext
+│   app.py                  # Main Streamlit application entry point
+│   README.md               # Project documentation
+│   requirements.txt        # Python dependencies
+│
+├───backend
+│   │   feedback_agent.py   # Logic for feedback processing agents
+│   │   feedback_wrapper.py # Wrappers and interfaces for feedback functionalities
+│   │   message_handler.py  # Utilities for handling and preprocessing messages
+│   │
+│   ├───data
+│   │       complaints_messages.csv  # Example dataset of customer messages
+│   │
+│   └───utils
+│           config.py       # Configuration and setup for the project
+│           llm_config.py   # Model- and LLM-related configuration
+│           prompts.py      # Prompt templates for language models
+
+└───pages
+        SentimentByCat.py   # Additional Streamlit page for sentiment by category
+```
 ## Output
 The demo will display an interactive dashboard with the generated report, providing valuable insights into customer messages, including:
 * Category distribution across all three levels
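The category distribution in the report is, at its core, a count of messages per category label at each level; a minimal sketch of that aggregation (the record shape here is an assumption, not the demo's actual data model):

```python
from collections import Counter

# Hypothetical categorized records: one dict per analyzed message.
records = [
    {"level1": "Billing", "level2": "Refund", "level3": "Delay"},
    {"level1": "Billing", "level2": "Refund", "level3": "Delay"},
    {"level1": "Support", "level2": "Login", "level3": "Password"},
]

# Count messages per top-level category and per full 3-level path.
level1_dist = Counter(r["level1"] for r in records)
path_dist = Counter((r["level1"], r["level2"], r["level3"]) for r in records)

print(level1_dist["Billing"])  # → 2
```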
@@ -51,4 +97,4 @@ Copyright (c) 2025 Oracle and/or its affiliates.
 
 Licensed under the Universal Permissive License (UPL), Version 1.0.
 
-See [LICENSE](https://github.com/oracle-devrel/technology-engineering/blob/main/LICENSE) for more details.
+See [LICENSE](https://github.com/oracle-devrel/technology-engineering/blob/main/LICENSE) for more details.

ai/generative-ai-service/sentiment+categorization/files/backend/feedback_agent.py

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@
 
 # Set up logging
 logging.getLogger("oci").setLevel(logging.DEBUG)
-messages_path = "ai/generative-ai-service/sentiment+categorization/demo_code/backend/data/complaints_messages.csv"
+messages_path = "ai/generative-ai-service/sentiment+categorization/files/backend/data/complaints_messages.csv"
 
 
 class AgentState(BaseModel):
Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
langchain-community==0.3.23
langgraph==0.4.1
oci==2.150.3
plotly==6.0.1
streamlit==1.45.0
Lines changed: 159 additions & 0 deletions
@@ -0,0 +1,159 @@
# Building NVIDIA blueprint on OCI: Digital twins for fluid simulation

This tutorial explains how to run the NVIDIA Omniverse Digital Twins for Fluid Simulation blueprint on OCI. This example shows how to study the aerodynamics (drag, downforce, etc.) of a car using a virtual wind tunnel.

## Prerequisites

To run this blueprint, you will need:
- an OCI tenancy with sufficient service limits to use a BM.GPU.L40S-NC.4 shape
- an NVIDIA account for the NGC Catalog
- an NGC API key to download images from the NGC Catalog

## Instance configuration

### Compute part

In the OCI Console, create an instance using:
* a BM.GPU.L40S-NC.4 shape (bare metal server with 4 x NVIDIA L40S GPUs)
* a native Canonical Ubuntu 22.04 image (NVIDIA drivers will be installed afterwards)
* a 200 GB boot volume

### Network part

Running this blueprint requires opening several ports for different protocols so that the client machine (where the blueprint will be accessed through a web browser) can communicate with the instance where the blueprint is deployed. In the Virtual Cloud Network where the instance resides, go to the default security list and add the following ingress rules:
- web:
  - 5273/tcp
  - 1024/udp
- kit:
  - 8011/tcp
  - 8111/tcp
  - 47995-48012/tcp
  - 47995-48012/udp
  - 49000-49007/tcp
  - 49100/tcp
  - 49000-49007/udp
- other:
  - 1024/udp
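Once the ingress rules are in place and the blueprint is running, TCP reachability from the client machine can be spot-checked with a short script (a sketch; the IP address is a placeholder, and the UDP rules cannot be verified this way):

```python
import socket


def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example (run from the client machine; the IP is a placeholder):
# port_reachable("XXX.XXX.XXX.XXX", 5273)
```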
### Installing NVIDIA drivers

When the instance is up, a specific version of the NVIDIA drivers can be installed, but beforehand we must install the additional packages needed to build them:
```
sudo apt install -y build-essential
```
Then we can download NVIDIA driver version 535.161.07, available [here](https://www.nvidia.com/fr-fr/drivers/details/220428/), and install it:
```
wget https://fr.download.nvidia.com/XFree86/Linux-x86_64/535.161.07/NVIDIA-Linux-x86_64-535.161.07.run
chmod +x NVIDIA-Linux-x86_64-535.161.07.run
sudo ./NVIDIA-Linux-x86_64-535.161.07.run
```
The instance must be rebooted for the changes to take effect:
```
sudo reboot
```

### Installing additional packages

As this is a native Ubuntu image, a few additional packages must be installed to clone the repository and to add and configure Docker:
```
sudo apt install -y git-lfs
sudo apt install -y docker.io
sudo apt install -y docker-compose-v2
sudo apt install -y docker-buildx
```

### Installing and configuring NVIDIA Container Toolkit

First of all, we must add the NVIDIA Container Toolkit repository to the repository list:
```
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```
Then, we can update the package lists, install the `nvidia-container-toolkit` package, and configure Docker:
```
sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

## Downloading and building the project

At this stage, you must set your NGC API key as an environment variable to be able to download the right content from the NGC Catalog:
```
echo "export NGC_API_KEY=nvapi-xxx" >> ~/.bashrc
source ~/.bashrc
```
where `nvapi-xxx` is your own NGC API key.

Once done, we can clone the repository and build the images:
```
git clone ssh://github.com/NVIDIA-Omniverse-Blueprints/digital-twins-for-fluid-simulation $HOME/digital_twins_for_fluid_simulation
cd $HOME/digital_twins_for_fluid_simulation
./build-docker.sh
```
Two files then have to be modified, namely `.env` and `compose.yml`.

First, create a copy of the environment file template:
```
cp .env_template .env
```
and set `ZMQ_IP` to the instance's private IP address:
```
ZMQ_IP=XXX.XXX.XXX.XXX
```

Then, modify the `compose.yml` file in 3 different places:
1. In the `kit` section, replace the `network_mode: host` line with the following block:
```
networks:
  outside:
    ipv4_address: XXX.XXX.XXX.XXX
```
and set the `ipv4_address` variable to the instance's public IP address.

2. In the `aeronim` section, comment out the `network_mode: host` line.

3. At the bottom of the file, add the following block:
```
networks:
  outside:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: XXX.XXX.XXX.0/24
```
where the subnet is your public IP address with the last number replaced by 0.
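That subnet derivation (keep the first three octets, replace the last one with 0) can be expressed as a tiny helper; a sketch, not part of the blueprint itself:

```python
def subnet_for(public_ip: str) -> str:
    """Derive the /24 subnet for compose.yml from the instance public IP:
    keep the first three octets and replace the last one with 0."""
    octets = public_ip.split(".")
    return ".".join(octets[:3] + ["0"]) + "/24"


print(subnet_for("203.0.113.57"))  # → 203.0.113.0/24
```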

## Running the blueprint

To start the digital twin, simply run the following command:
```
sudo docker compose up -d
```
The blueprint will take some time to initialize. Expect a minimum of 10 minutes before accessing the GUI in a web browser at `http://XXX.XXX.XXX.XXX:5273`, where `XXX.XXX.XXX.XXX` is the public IP address of the instance. When everything is ready, you should see the sports car in the wind tunnel as in the image below.

![NVIDIA Omniverse Digital Twin for Fluid Simulation Blueprint](assets/images/omniverse-blueprint-digital-twin-gui.png "NVIDIA Omniverse Digital Twin for Fluid Simulation Blueprint")

You can now interactively modify the car setup (rims, mirrors, spoilers, height, etc.) and visualize its impact on the airflow.

To stop the project, simply run `sudo docker compose down`.

## External links

* [Original NVIDIA GitHub repo](https://github.com/NVIDIA-Omniverse-blueprints/digital-twins-for-fluid-simulation)

## License

Copyright (c) 2025 Oracle and/or its affiliates.

Licensed under the Universal Permissive License (UPL), Version 1.0.

See [LICENSE](https://github.com/oracle-devrel/technology-engineering/blob/main/LICENSE) for more details.
@@ -0,0 +1,35 @@
Copyright (c) 2024 Oracle and/or its affiliates.

The Universal Permissive License (UPL), Version 1.0

Subject to the condition set forth below, permission is hereby granted to any
person obtaining a copy of this software, associated documentation and/or data
(collectively the "Software"), free of charge and under any and all copyright
rights in the Software, and any and all patent rights owned or freely
licensable by each licensor hereunder covering either (i) the unmodified
Software as contributed to or provided by such licensor, or (ii) the Larger
Works (as defined below), to deal in both

(a) the Software, and
(b) any piece of software and/or hardware listed in the lrgrwrks.txt file if
one is included with the Software (each a "Larger Work" to which the Software
is contributed by such licensors),

without restriction, including without limitation the rights to copy, create
derivative works of, display, perform, and distribute the Software and make,
use, sell, offer for sale, import, export, have made, and have sold the
Software and the Larger Work(s), and to sublicense the foregoing rights on
either these or other terms.

This license is subject to the following condition:
The above copyright notice and either this complete permission notice or at
a minimum a reference to the UPL must be included in all copies or
substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.