Commit 3542cf5 (1 parent: 82c375f)

small fixes across endpoints tutorials

File tree: 4 files changed (+11 −16 lines)

pages/public_cloud/ai_machine_learning/endpoints_tuto_02_voice_virtual_assistant/guide.en-gb.md

Lines changed: 5 additions & 12 deletions

@@ -57,7 +57,7 @@ Then, create a `requirements.txt` file with the following libraries:
 ```bash
 openai==1.68.2
 streamlit==1.36.0
-streamlit-mic-recorder==1.16.0
+streamlit-mic-recorder==0.0.8
 nvidia-riva-client==2.15.1
 python-dotenv==1.0.1
 ```
@@ -101,7 +101,7 @@ First, create the **Automatic Speech Recognition (ASR)** function in order to tr
 def asr_transcription(question):

     asr_service = riva.client.ASRService(
-        riva.client.Auth(uri=os.environ.get('ASR_GRPC_ENDPOINT'), use_ssl=True, metadata_args=[["authorization", f"bearer {ai_endpoint_token}"]])
+        riva.client.Auth(uri=os.environ.get('ASR_GRPC_ENDPOINT'), use_ssl=True, metadata_args=[["authorization", f"bearer {os.environ.get('OVH_AI_ENDPOINTS_ACCESS_TOKEN')}"]])
     )

     # set up config
@@ -142,7 +142,7 @@ Then, build the **Text To Speech (TTS)** function in order to transform the writ
 def tts_synthesis(response):

     tts_service = riva.client.SpeechSynthesisService(
-        riva.client.Auth(uri=os.environ.get('TTS_GRPC_ENDPOINT'), use_ssl=True, metadata_args=[["authorization", f"bearer {ai_endpoint_token}"]])
+        riva.client.Auth(uri=os.environ.get('TTS_GRPC_ENDPOINT'), use_ssl=True, metadata_args=[["authorization", f"bearer {os.environ.get('OVH_AI_ENDPOINTS_ACCESS_TOKEN')}"]])
     )

     # set up config
@@ -196,11 +196,11 @@ with st.container():
     user_question = asr_transcription(recording['bytes'])

     if prompt := user_question:
-        client = OpenAI(base_url=os.getenv("LLM_AI_ENDPOINT"), api_key=ai_endpoint_token)
+        client = OpenAI(base_url=os.getenv("LLM_AI_ENDPOINT"), api_key=os.environ.get('OVH_AI_ENDPOINTS_ACCESS_TOKEN'))
         st.session_state.messages.append({"role": "user", "content": prompt, "avatar":"👤"})
         messages.chat_message("user", avatar="👤").write(prompt)
         response = client.chat.completions.create(
-            model="Mixtral-8x7B-Instruct-v0.",
+            model="Mixtral-8x7B-Instruct-v0.1",
             messages=st.session_state.messages,
             temperature=0,
             max_tokens=1024,
@@ -214,13 +214,6 @@ with st.container():
         placeholder.audio(audio_samples, sample_rate=sample_rate_hz, autoplay=True)
 ```

-Then, you can launch it in the `main`:
-
-```python
-if __name__ == '__main__':
-    demo.launch(server_name="0.0.0.0", server_port=8000)
-```
-
 ### Launch Streamlit web app locally

 🚀 That’s it! Now your web app is ready to be used! You can start this Streamlit app locally by launching the following command:
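The recurring change in this file replaces a free-floating `ai_endpoint_token` variable with a token read from the environment. A minimal, illustrative sketch of that pattern, using only the standard library — the variable name `OVH_AI_ENDPOINTS_ACCESS_TOKEN` comes from the diff above, but the `bearer_metadata` helper is an assumption for illustration and not part of the tutorial:

```python
import os

def bearer_metadata() -> list[list[str]]:
    """Build the [key, value] metadata pair passed to riva.client.Auth."""
    token = os.environ.get("OVH_AI_ENDPOINTS_ACCESS_TOKEN")
    if not token:
        raise RuntimeError("Set OVH_AI_ENDPOINTS_ACCESS_TOKEN before starting the app")
    return [["authorization", f"bearer {token}"]]

# Simulate a configured environment, then build the metadata.
os.environ["OVH_AI_ENDPOINTS_ACCESS_TOKEN"] = "example-token"
print(bearer_metadata())  # [['authorization', 'bearer example-token']]
```

Reading the token at call time (rather than hard-coding it) keeps the credential out of the source file and lets the same code run locally and in deployment.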

pages/public_cloud/ai_machine_learning/endpoints_tuto_04_sentiment_analyzer/guide.en-gb.md

Lines changed: 2 additions & 2 deletions

@@ -90,7 +90,7 @@ Start by adding the necessary dependencies in your `pom.xml`:

 ### Define the AI Endpoints client

-We define a REST client interface using MicroProfile annotations to interact with the AI Endpoint.
+We define a REST client interface using MicroProfile annotations to interact with the AI Endpoint, in an `AISentimentService.java` file:

 ```java
 package com.ovhcloud.examples.aiendpoints.nlp.sentiment;
@@ -119,7 +119,7 @@ public interface AISentimentService {

 ### Implement the REST resource

-Now let’s create the actual REST resource that uses the `AISentimentService`:
+Now let’s create the actual REST resource that uses the `AISentimentService`, in a `SentimentsAnalysisResource.java` file:

 ```java
 package com.ovhcloud.examples.aiendpoints.nlp.sentiment;

pages/public_cloud/ai_machine_learning/endpoints_tuto_09_chatbot_memory_langchain/guide.en-gb.md

Lines changed: 1 addition & 0 deletions

@@ -66,6 +66,7 @@ Then, create a `requirements.txt` file with the required libraries:
 python-dotenv==1.0.1
 langchain_openai==0.1.14
 openai==1.68.2
+langchain==0.2.17
 ```

 Then, launch the installation of these dependencies:
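Since the fix above adds the missing `langchain==0.2.17` pin, one way to check that installed packages match such pins is with the standard library's `importlib.metadata`. A small sketch — the `check_pin` helper is illustrative, not part of the tutorial:

```python
from importlib import metadata

def check_pin(requirement: str) -> bool:
    """Return True if the package named in 'name==version' is installed at exactly that version."""
    name, _, wanted = requirement.partition("==")
    try:
        return metadata.version(name) == wanted
    except metadata.PackageNotFoundError:
        return False

# A package that is not installed (or not at this version) fails the check.
print(check_pin("not-a-real-package==1.0.0"))  # False
```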

pages/public_cloud/ai_machine_learning/endpoints_tuto_11_rag_chatbot_langchain/guide.en-gb.md

Lines changed: 3 additions & 2 deletions

@@ -70,6 +70,7 @@ Once this is done, you can create a Python file named `chat-bot-streaming-rag.py
 from dotenv import load_dotenv
 import argparse
 import time
+import os

 from langchain import hub

@@ -124,7 +125,7 @@ def chat_completion(new_message: str):
     # Split documents into chunks and vectorize them
     text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
     splits = text_splitter.split_documents(docs)
-    vectorstore = Chroma.from_documents(documents=splits, embedding=OVHCloudEmbeddings(model_name=_OVH_AI_ENDPOINTS_EMBEDDING_MODEL_NAME))
+    vectorstore = Chroma.from_documents(documents=splits, embedding=OVHCloudEmbeddings(model_name=_OVH_AI_ENDPOINTS_EMBEDDING_MODEL_NAME, access_token=_OVH_AI_ENDPOINTS_ACCESS_TOKEN))

     prompt = hub.pull("rlm/rag-prompt")

@@ -153,7 +154,7 @@ if __name__ == '__main__':

 ### Prepare your knowledge base

-Create a folder named rag-files and place your `.txt`, .`md`, or other text-based documents there. These will be converted into embeddings and used during retrieval.
+Create a folder named `rag-files` and place your `.txt`, `.md`, or other text-based documents there. These will be converted into embeddings and used during retrieval.

 You can find example files in our [public-cloud-examples GitHub repository](https://github.com/ovh/public-cloud-examples/tree/main/ai/ai-endpoints/python-langchain-chatbot/rag-files).
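The added `import os` supports the script's pattern of reading configuration from environment variables after `load_dotenv()` populates them. A minimal sketch of that pattern — the `require_env` helper and its failure message are assumptions for illustration, not the tutorial's code:

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, failing loudly if it is unset."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"Missing environment variable: {name}")
    return value

# Simulate a value load_dotenv() would normally provide, then read it back.
os.environ.setdefault("OVH_AI_ENDPOINTS_ACCESS_TOKEN", "demo-token")
_OVH_AI_ENDPOINTS_ACCESS_TOKEN = require_env("OVH_AI_ENDPOINTS_ACCESS_TOKEN")
print(_OVH_AI_ENDPOINTS_ACCESS_TOKEN)
```

Failing at startup with a clear message is generally preferable to passing `None` as an access token and getting an opaque authentication error later.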