Commit 02bb89f

proofreading & duplication
1 parent 3542cf5 commit 02bb89f


60 files changed (+260 additions, −330 deletions)

pages/public_cloud/ai_machine_learning/endpoints_tuto_02_voice_virtual_assistant/guide.de-de.md

Lines changed: 7 additions & 14 deletions
@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Create your own voice assistant
 excerpt: Create a voice-enabled chatbot using ASR, LLM, and TTS endpoints in under 100 lines of code
-updated: 2025-04-28
+updated: 2025-07-31
 ---
 
 > [!primary]
@@ -57,7 +57,7 @@ Then, create a `requirements.txt` file with the following libraries:
 ```bash
 openai==1.68.2
 streamlit==1.36.0
-streamlit-mic-recorder==1.16.0
+streamlit-mic-recorder==0.0.8
 nvidia-riva-client==2.15.1
 python-dotenv==1.0.1
 ```
@@ -101,7 +101,7 @@ First, create the **Automatic Speech Recognition (ASR)** function in order to tr
 def asr_transcription(question):
 
     asr_service = riva.client.ASRService(
-        riva.client.Auth(uri=os.environ.get('ASR_GRPC_ENDPOINT'), use_ssl=True, metadata_args=[["authorization", f"bearer {ai_endpoint_token}"]])
+        riva.client.Auth(uri=os.environ.get('ASR_GRPC_ENDPOINT'), use_ssl=True, metadata_args=[["authorization", f"bearer {os.environ.get('OVH_AI_ENDPOINTS_ACCESS_TOKEN')}"]])
     )
 
     # set up config
@@ -142,7 +142,7 @@ Then, build the **Text To Speech (TTS)** function in order to transform the writ
 def tts_synthesis(response):
 
     tts_service = riva.client.SpeechSynthesisService(
-        riva.client.Auth(uri=os.environ.get('TTS_GRPC_ENDPOINT'), use_ssl=True, metadata_args=[["authorization", f"bearer {ai_endpoint_token}"]])
+        riva.client.Auth(uri=os.environ.get('TTS_GRPC_ENDPOINT'), use_ssl=True, metadata_args=[["authorization", f"bearer {os.environ.get('OVH_AI_ENDPOINTS_ACCESS_TOKEN')}"]])
     )
 
     # set up config
@@ -196,11 +196,11 @@ with st.container():
         user_question = asr_transcription(recording['bytes'])
 
         if prompt := user_question:
-            client = OpenAI(base_url=os.getenv("LLM_AI_ENDPOINT"), api_key=ai_endpoint_token)
+            client = OpenAI(base_url=os.getenv("LLM_AI_ENDPOINT"), api_key=os.environ.get('OVH_AI_ENDPOINTS_ACCESS_TOKEN'))
            st.session_state.messages.append({"role": "user", "content": prompt, "avatar":"👤"})
             messages.chat_message("user", avatar="👤").write(prompt)
             response = client.chat.completions.create(
-                model="Mixtral-8x7B-Instruct-v0.",
+                model="Mixtral-8x7B-Instruct-v0.1",
                 messages=st.session_state.messages,
                 temperature=0,
                 max_tokens=1024,
@@ -214,13 +214,6 @@ with st.container():
         placeholder.audio(audio_samples, sample_rate=sample_rate_hz, autoplay=True)
 ```
 
-Then, you can launch it in the `main`:
-
-```python
-if __name__ == '__main__':
-    demo.launch(server_name="0.0.0.0", server_port=8000)
-```
-
 ### Launch Streamlit web app locally
 
 🚀 That’s it! Now your web app is ready to be used! You can start this Streamlit app locally by launching the following command:
@@ -252,4 +245,4 @@ If you need training or technical assistance to implement our solutions, contact
 
 Please feel free to send us your questions, feedback, and suggestions regarding AI Endpoints and its features:
 
-- In the #ai-endpoints channel of the OVHcloud [Discord server](https://discord.gg/ovhcloud), where you can engage with the community and OVHcloud team members.
+- In the #ai-endpoints channel of the OVHcloud [Discord server](https://discord.gg/ovhcloud), where you can engage with the community and OVHcloud team members.
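
For context, a minimal standalone sketch of the pattern these hunks converge on: the access token is read once from the `OVH_AI_ENDPOINTS_ACCESS_TOKEN` environment variable (loaded from a `.env` file via python-dotenv, which the requirements already include) and passed to both the Riva clients and the OpenAI-compatible client. The standalone structure, the `.env` contents, and the placeholder chat message are assumptions for illustration; only the calls shown in the diff come from the guide.

```python
# Sketch of the post-commit auth pattern, not the full guide app.
import os

import riva.client
from dotenv import load_dotenv
from openai import OpenAI

# Assumed .env providing OVH_AI_ENDPOINTS_ACCESS_TOKEN and the *_ENDPOINT values.
load_dotenv()

token = os.environ.get('OVH_AI_ENDPOINTS_ACCESS_TOKEN')

# Riva ASR service authenticated with a bearer token; the TTS side
# (riva.client.SpeechSynthesisService) uses the same Auth pattern.
asr_service = riva.client.ASRService(
    riva.client.Auth(
        uri=os.environ.get('ASR_GRPC_ENDPOINT'),
        use_ssl=True,
        metadata_args=[["authorization", f"bearer {token}"]],
    )
)

# OpenAI-compatible chat client pointed at the LLM endpoint, using the
# corrected model id from the diff (v0.1, previously truncated to "v0.").
llm_client = OpenAI(base_url=os.getenv("LLM_AI_ENDPOINT"), api_key=token)
response = llm_client.chat.completions.create(
    model="Mixtral-8x7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Hello"}],  # placeholder message
    temperature=0,
    max_tokens=1024,
)
print(response.choices[0].message.content)
```

The deleted `demo.launch` hunk looks like a Gradio-style launcher left over from another tutorial; a Streamlit app is started with `streamlit run` instead, which presumably motivated the removal.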

pages/public_cloud/ai_machine_learning/endpoints_tuto_02_voice_virtual_assistant/guide.en-asia.md

Lines changed: 7 additions & 14 deletions
Same diff as guide.de-de.md above.

pages/public_cloud/ai_machine_learning/endpoints_tuto_02_voice_virtual_assistant/guide.en-au.md

Lines changed: 7 additions & 14 deletions
Same diff as guide.de-de.md above.

pages/public_cloud/ai_machine_learning/endpoints_tuto_02_voice_virtual_assistant/guide.en-ca.md

Lines changed: 7 additions & 14 deletions
Same diff as guide.de-de.md above.

pages/public_cloud/ai_machine_learning/endpoints_tuto_02_voice_virtual_assistant/guide.en-gb.md

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Create your own voice assistant
 excerpt: Create a voice-enabled chatbot using ASR, LLM, and TTS endpoints in under 100 lines of code
-updated: 2025-04-28
+updated: 2025-07-31
 ---
 
 > [!primary]
@@ -245,4 +245,4 @@ If you need training or technical assistance to implement our solutions, contact
 
 Please feel free to send us your questions, feedback, and suggestions regarding AI Endpoints and its features:
 
-- In the #ai-endpoints channel of the OVHcloud [Discord server](https://discord.gg/ovhcloud), where you can engage with the community and OVHcloud team members.
+- In the #ai-endpoints channel of the OVHcloud [Discord server](https://discord.gg/ovhcloud), where you can engage with the community and OVHcloud team members.

pages/public_cloud/ai_machine_learning/endpoints_tuto_02_voice_virtual_assistant/guide.en-ie.md

Lines changed: 7 additions & 14 deletions
Same diff as guide.de-de.md above.
