Commit 9b185f1

Merge branch 'main' into draft-public-cloud-lb-aan

2 parents 32b7cf2 + d685846

256 files changed: +138374 −131 lines

.github/pull_request_template.md

Lines changed: 22 additions & 0 deletions

```markdown
<!-- Please set a comprehensive title for your pull request. -->
<!-- Please fill in, if applicable, the following sections. -->

## Kind of the Pull Request

- [ ] ✨ New demos ✨
- [ ] 🐛 Bug 🐛
- [ ] 🌟 Enhancement 🌟
- [ ] 📝 Documentation 📝

## Products targeted

- [ ] 🧠 AI 🧠
- [ ] 💿 DBaaS 💿
- [ ] ⎈ Managed Kubernetes Service / Managed Private Registry / Managed Rancher Service ⎈
- [ ] 🏗️ IaC 🏗️

## Purpose of this Pull Request

<!-- Describe here the purpose of your pull request. -->

- [ ] detailed README provided
```

.github/release.yml

Lines changed: 13 additions & 0 deletions

```yaml
changelog:
  categories:
    - title: ✨ Features ✨
      labels:
        - '✨ New demos ✨'
        - '🌟 Enhancement 🌟'
        - '📝 Documentation 📝'
    - title: 🐛 Bug 🐛
      labels:
        - '🐛 Bug 🐛'
    - title: 🤷‍♂️ Other Changes 🤷‍♂️
      labels:
        - "*"
```

.gitignore

Lines changed: 45 additions & 0 deletions

```
basics/utils/vars.csv
containers-orchestration/managed-kubernetes/create-cluster-with-tf/my_kube_cluster.yml
containers-orchestration/managed-private-registry/create-registry-with-pulumi/ovhcloud-registry-go/Pulumi.dev.yaml
.DS_Store

*.tfvars
kubeconfig.yml
.quarkus/
containers-orchestration/managed-kubernetes/create-cluster-with-cdktf-and-go/kubeconfig.yaml
containers-orchestration/managed-kubernetes/create-cluster-with-cdktf-and-go/generated
containers-orchestration/managed-kubernetes/create-cluster-with-cdktf-and-go/cdktf.out
.vscode/settings.json

# Python
venv/
.venv/
__pycache__/
.rock-paper-scisors

# JS
node_modules/

# Java
target/

# Never commit your credentials
**/ca.pem
**/service.cert
**/service.key
ai/ai-endpoints/java-langchain4j-chatbot/target/test-classes/com/ovhcloud/examples/aiendpoints/AppTest.class

# IDEs
.idea/
.vscode/

# Dot env files
.env
use-cases/kubeflow/ovhrc.sh
use-cases/kubeflow/kubeconfig
my_kube_cluster.yaml
kustomize
containers-orchestration/managed-rancher/create-rancher-with-tf/variables.tf
use-cases/create-and-use-object-storage-as-tf-backend/my-app/backend.tf
use-cases/create-and-use-object-storage-as-tf-backend/object-storage-tf/variables.tf

# LLM Fine Tune data
ai/llm-fine-tune/dataset/docs/*
ai/llm-fine-tune/dataset/generated/*
```

README.md

Lines changed: 24 additions & 7 deletions

````diff
@@ -1,6 +1,19 @@
 # Public cloud examples
 
-![Work in progess](./docs/assets/wip.jpg)
+<table>
+  <tr>
+    <td>
+      <p align="center">
+        <img src="./docs/assets/wip.jpg"/>
+      </p>
+    </td>
+    <td>
+      <b style="font-size:24px">⚠️ This code is only for demonstration purposes </br>and should not be used in production. ⚠️</b>
+    </td>
+  </tr>
+</table>
+
+
 
 Here is a list of examples which use multiple OVHcloud [Public Cloud products](https://www.ovhcloud.com/fr/public-cloud/).
 
@@ -18,15 +31,19 @@ All examples are organized depending on the main used product (Network, AI, ...)
 Here is the several topics:
 ```bash
 .
-├── ai-machine-learning ## Here you find demos about AI Products: AI Notebooks, AI Training and AI Deploy
-├── containers-orchestration ## Here you find demos about Kubernetes, Rancher and Harbor
+├── ai ## Here you find demos about AI Products: AI Notebooks, AI Training, AI Deploy, AI Endpoints...
+│   ├── ai-endpoints
+│   ├── deploy
+│   ├── notebooks
+│   └── training
+├── containers-orchestration ## Here you find demos about Kubernetes, Rancher & Harbor
 │   ├── managed-kubernetes
-│   └── managed-private-registry
+│   ├── managed-private-registry
+│   └── managed-rancher
 ├── databases-analytics ## Here you find demo about databases, data streaming, data integration, ...
-│   └── databases
 ├── iam ## Here you find demo about IAM (roles, identity, ...)
-├── network ## Here you find demo about network (private network, load balancer, gateway, ...)
-├── private-network
+├── network ## Here you find demo about network (private network, load balancer, gateway, VPS ...)
+├── storage ## Here you find demo about storage (object storage, block storage, ...)
 └── use-cases ## Here you find use cases (examples using several services: kubernetes, databases...)
 ```
````

ai/ai-endpoints/README.md

Lines changed: 41 additions & 0 deletions

```markdown
## AI Endpoints demos

Here you find all demos using AI Endpoints.
Each demo uses a specific language (Python, Java, JS, ...) and illustrates a specific notion (streaming chatbot, RAG, ...).

Don't hesitate to use the source code and give us feedback.

### 🛠️ Prerequisites / links 🔗

- 🔗 AI Endpoints: https://endpoints.ai.cloud.ovh.net/
- 🔗 LangChain: https://www.langchain.com/
- 🔗 LangChain4j: https://docs.langchain4j.dev/intro/
- 🔗 Quarkus: https://quarkus.io/
- 💬 Discord dedicated [channel](https://discord.com/channels/850031577277792286/1217892323640344626)

### ☕️ Java demos ☕️

- [MCP server / client](./mcp-quarkus-langchain4j)
- [Function calling with LangChain4j](./function-calling-langchain4j)
- [Simple Structured Output](./structured-output-langchain4j/)
- [Natural Language Processing](./java-nlp)
- [Chatbot with LangChain4j](./java-langchain4j-chatbot/): blocking mode, streaming mode and RAG mode.
- [Blocking chatbot](./quarkus-langchain4j/) with LangChain4j and Quarkus
- [Streaming chatbot](./quarkus-langchain4j-streaming/) with LangChain4j and Quarkus

### 🐍 Python 🐍

- [Podcast audio transcript](./podcast-transcript-whisper/python/)
- [Chatbot with LangChain](./python-langchain-chatbot/): blocking mode, streaming mode, RAG mode.
- [Streaming chatbot](./python-langchain-chatbot/) with LangChain
- [Audio Summarizer Assistant](./audio-summarizer-assistant/) connecting Speech-To-Text and an LLM
- [Audio Virtual Assistant](./audio-virtual-assistant/) putting ASR, an LLM and TTS to work together
- [Conversational Memory for chatbots](./python-langchain-conversational-memory/) using Mistral 7B and the LangChain Memory module
- [Video Translator](./speech-ai-video-translator) with ASR, NMT and TTS to subtitle and dub video voices
- [ASR features](./asr-features) to better understand how Automatic Speech Recognition models work
- [TTS features](./tts-features) to use all Text To Speech models easily
- [Car Damage Verification with VLM](./car-damage-verification-using-vlm/): interactive fact-checking using Vision Language Models to verify car claims against photos

### 🕸️ JavaScript 🕸️

- [Streaming chatbot](./js-langchain-chatbot/) with LangChain
```

ai/ai-endpoints/asr-features/tutorial-asr-diarization.ipynb

Lines changed: 423 additions & 0 deletions (large diff not rendered by default)
Lines changed: 17 additions & 0 deletions

````markdown
This project illustrates how to use Automatic Speech Recognition (ASR) and Large Language Models (LLM) to build an **Audio Summarizer Assistant** with [AI Endpoints](https://endpoints.ai.cloud.ovh.net/).

## How to use the project

- install the required dependencies: `pip install -r requirements.txt`
- install `ffmpeg` and `ffprobe`

- create the `.env` file:
```
ASR_AI_ENDPOINT=https://nvr-asr-en-gb.endpoints.kepler.ai.cloud.ovh.net/api/v1/asr/recognize
LLM_AI_ENDPOINT=https://mixtral-8x7b-instruct-v01.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1
OVH_AI_ENDPOINTS_ACCESS_TOKEN=<ai-endpoints-api-token>
```

- launch the Gradio app: `python audio-summarizer-app.py`

![image](audio-summarizer-web-app.png)
````
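A common failure mode with this kind of setup is launching the app with an incomplete `.env` file. As a minimal sketch, the check below parses the `.env` file from the steps above and reports any of the three expected variables that are missing; the helper names (`parse_dotenv`, `missing_keys`) are ours and not part of the demo:

```python
# Hypothetical helper (not part of the demo): sanity-check the .env file
# described above before launching the Gradio app.
REQUIRED = ("ASR_AI_ENDPOINT", "LLM_AI_ENDPOINT", "OVH_AI_ENDPOINTS_ACCESS_TOKEN")

def parse_dotenv(path=".env"):
    """Parse simple KEY=VALUE lines from a .env file into a dict."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # skip blank lines, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

def missing_keys(values):
    """Return the required variable names that are absent or empty."""
    return [key for key in REQUIRED if not values.get(key)]

if __name__ == "__main__":
    missing = missing_keys(parse_dotenv())
    if missing:
        raise SystemExit(f"Missing in .env: {', '.join(missing)}")
```

This stays stdlib-only on purpose; the app itself uses `python-dotenv` (`load_dotenv()`) to do the actual loading.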
Lines changed: 142 additions & 0 deletions

```python
import os

import gradio as gr
import requests
from dotenv import load_dotenv
from openai import OpenAI
from pydub import AudioSegment


# access the environment variables from the .env file
load_dotenv()
asr_ai_endpoint_url = os.getenv("ASR_AI_ENDPOINT")
llm_ai_endpoint_url = os.getenv("LLM_AI_ENDPOINT")
ai_endpoint_token = os.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN")


# automatic speech recognition
def asr_transcription(audio):

    if audio is None:
        return " "

    # preprocess audio: mono channel, 16 kHz sample rate, WAV format
    processed_audio = "/tmp/my_audio.wav"
    audio_input = AudioSegment.from_file(audio, "mp3")
    process_audio_to_wav = audio_input.set_channels(1)
    process_audio_to_wav = process_audio_to_wav.set_frame_rate(16000)
    process_audio_to_wav.export(processed_audio, format="wav")

    # headers
    headers = {
        "accept": "application/json",
        "Authorization": f"Bearer {ai_endpoint_token}",
    }

    # put the processed audio file as endpoint input
    with open(processed_audio, "rb") as audio_file:
        files = [("audio", audio_file)]

        # get response from endpoint
        response = requests.post(
            asr_ai_endpoint_url,
            files=files,
            headers=headers,
        )

    # return the complete transcription
    resp = ""
    if response.status_code == 200:
        for alternative in response.json():
            resp += alternative["alternatives"][0]["transcript"]
    else:
        print("Error:", response.status_code)

    return resp


# ask Mixtral 8x7B for summarization
def chat_completion(new_message):

    if new_message == " ":
        return "Please send an input audio to get its summary!"

    # auth
    client = OpenAI(
        base_url=llm_ai_endpoint_url,
        api_key=ai_endpoint_token,
    )

    # prompt
    history_openai_format = [
        {
            "role": "user",
            "content": f"Summarize the following text in a few words: {new_message}",
        }
    ]
    # return the summary
    return client.chat.completions.create(
        model="Mixtral-8x7B-Instruct-v0.1",
        messages=history_openai_format,
        temperature=0,
        max_tokens=1024,
    ).choices.pop().message.content


# gradio
with gr.Blocks(theme=gr.themes.Default(primary_hue="blue"), fill_height=True) as demo:

    # add title and description
    with gr.Row():
        gr.HTML(
            """
            <div align="center">
              <h1>Welcome to the Audio Summarizer web app 💬!</h1>
              <i>Transcribe and summarize your broadcasts, meetings, conversations, podcasts and much more...</i>
            </div>
            <br>
            """
        )

    # audio zone for the user question
    gr.Markdown("## Upload your audio file 📢")
    with gr.Row():
        inp_audio = gr.Audio(
            label="Audio file in .wav or .mp3 format:",
            sources=["upload"],
            type="filepath",
        )

    # written transcription of the user question
    with gr.Row():
        inp_text = gr.Textbox(
            label="Audio transcription into text:",
        )

    # chatbot answer
    gr.Markdown("## Chatbot summarization 🤖")
    with gr.Row():
        out_resp = gr.Textbox(
            label="Get a summary of your audio:",
        )

    with gr.Row():
        # clear inputs
        clear = gr.ClearButton([inp_audio, inp_text, out_resp])

    # update functions
    inp_audio.change(
        fn=asr_transcription,
        inputs=inp_audio,
        outputs=inp_text,
    )
    inp_text.change(
        fn=chat_completion,
        inputs=inp_text,
        outputs=out_resp,
    )

if __name__ == "__main__":
    demo.launch(server_name="0.0.0.0", server_port=8000)
```
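The transcription loop in `asr_transcription` can be factored into a small pure function, which also makes the expected payload explicit. A sketch only: the response shape (a list of segments, each carrying ranked `alternatives`) is inferred from the demo code above, not from official API documentation:

```python
# Sketch under an assumed response shape: each ASR segment is a dict whose
# "alternatives" list is ranked best-first, as in the demo's parsing loop.
def join_transcripts(response_data):
    """Concatenate the top-ranked transcript of every ASR segment."""
    return "".join(
        segment["alternatives"][0]["transcript"] for segment in response_data
    )
```

Keeping the parsing separate from the HTTP call makes it testable without hitting the endpoint.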
