
Commit 65f80c3

fix conflict

2 parents: 4b01822 + 465b5e7

File tree

11 files changed: +77 -124 lines

pages/public_cloud/ai_machine_learning/endpoints_tuto_01_audio_summarizer/guide.en-gb.md

Lines changed: 8 additions & 17 deletions
@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Create your own audio summarizer
 excerpt: Summarize hours of meetings ASR and LLM AI endpoints
-updated: 2025-04-15
+updated: 2025-04-18
 ---
 
 > [!primary]
@@ -21,23 +21,14 @@ In this tutorial, you will create an Audio Summarizer assistant that can not onl
 
 Indeed, thanks to [AI Endpoints](https://endpoints.ai.cloud.ovh.net/), it’s never been easier to create a virtual assistant that can help you stay on top of your meetings and keep track of important information.
 
-This guide will explore how AI APIs can be connected to create an advanced virtual assistant capable of transcribing and summarizing any audio file using **ASR (Automatic Speech Recognition)** technologies and popular **LLMs (Large Language Models)**. We will also build an app to use our assistant!
+This tutorial will explore how AI APIs can be connected to create an advanced virtual assistant capable of transcribing and summarizing any audio file using **ASR (Automatic Speech Recognition)** technologies and popular **LLMs (Large Language Models)**. We will also build an app to use our assistant!
 
 ![connect-ai-apis](images/ai-endpoint-puzzles-connexion.png)
 
 ## Definitions
 
-**Automatic Speech Recognition (ASR)**
-
-Technology that converts spoken language into written text.
-
-ASR will be used in this context to transcribe long audio recordings into text, which will then be summarized using LLMs.
-
-**Large Language Models (LLMs)**
-
-Advanced models trained to understand context and generate human-like responses.
-
-In this use case, the LLM prompt will be designed to generate a summary of the input text based on the output from the ASR endpoint.
+- **Automatic Speech Recognition (ASR)**: Technology that converts spoken language into written text. ASR will be used in this context to transcribe long audio recordings into text, which will then be summarized using LLMs.
+- **Large Language Models (LLMs)**: Advanced models trained to understand context and generate human-like responses. In this use case, the LLM prompt will be designed to generate a summary of the input text based on the output from the ASR endpoint.
 
 ## Requirements
 
@@ -69,15 +60,15 @@ python-dotenv==1.0.1
 
 Then, launch the installation of these dependencies:
 
-```
+```console
 pip install -r requirements.txt
 ```
 
 *Note that Python 3.11 is used in this tutorial.*
 
 ### Importing necessary libraries and variables
 
-Once this is done, you can create a Python file named `audio-summarizer-app.py`, where you will first import Python librairies as follow:
+Once this is done, you can create a Python file named `audio-summarizer-app.py`, where you will first import Python librairies as follows:
 
 ```python
 import gradio as gr
@@ -153,7 +144,7 @@ def asr_transcription(audio):
 
 **In this function:**
 
-- The audio file is preprocessed as follow: `.wav` format, `1` channel, `16000` frame rate
+- The audio file is preprocessed as follows: `.wav` format, `1` channel, `16000` frame rate
 - The transformed audio `processed_audio` is read
 - An API call is made to the ASR endpoint named `nvr-asr-en-gb`
 - The full response is stored in `resp` variable and returned by the function
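For context, a minimal sketch of what such a function can look like, assuming `pydub` for the preprocessing and a hypothetical endpoint URL (the real values come from the model's `documentation` tab on AI Endpoints, not from this commit):

```python
import os

import requests
from pydub import AudioSegment

# Hypothetical URL for the nvr-asr-en-gb endpoint, used here for illustration only.
ASR_ENDPOINT_URL = "https://nvr-asr-en-gb.endpoints.kepler.ai.cloud.ovh.net/api/v1/asr/recognize"


def asr_transcription(audio: str) -> str:
    # Preprocess the audio file: .wav format, 1 channel, 16000 frame rate
    sound = AudioSegment.from_file(audio).set_channels(1).set_frame_rate(16000)
    sound.export("processed_audio.wav", format="wav")

    # Read the transformed audio
    with open("processed_audio.wav", "rb") as f:
        processed_audio = f.read()

    # API call to the ASR endpoint; the full response is stored in `resp`
    resp = requests.post(
        ASR_ENDPOINT_URL,
        headers={
            "Authorization": f"Bearer {os.environ['OVH_AI_ENDPOINTS_ACCESS_TOKEN']}",
            "Content-Type": "audio/wav",
        },
        data=processed_audio,
    )
    return resp.text
```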
@@ -278,7 +269,7 @@ if __name__ == '__main__':
 
 ### Launch Gradio web app locally
 
-🚀 That’s it! Now, your web app is ready to be used! You can you can start this Gradio app locally by launching the following command:
+🚀 That’s it! Now, your web app is ready to be used! You can start this Gradio app locally by launching the following command:
 
 ```python
 python audio-summarizer-app.py

pages/public_cloud/ai_machine_learning/endpoints_tuto_02_voice_virtual_assistant/guide.en-gb.md

Lines changed: 8 additions & 22 deletions
@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Create your own voice assistant
 excerpt: Create a voice-enabled chatbot using ASR, LLM, and TTS endpoints in under 100 lines of code
-updated: 2025-04-15
+updated: 2025-04-18
 ---
 
 > [!primary]
@@ -17,7 +17,7 @@ Imagine having a virtual assistant that listens to your voice, understands your
 
 ## Objective
 
-In this guide, you will learn how to create a fully functional Audio Virtual Assistant that:
+In this tutorial, you will learn how to create a fully functional Audio Virtual Assistant that:
 
 - Accepts voice input from a microphone
 - Transcribes it using ASR (Automatic Speech Recognition)
@@ -30,23 +30,9 @@ All of this is done by connecting **AI Endpoints** like puzzle pieces—allowing
 
 ## Definitions
 
-**Automatic Speech Recognition (ASR)**
-
-Technology that converts spoken language into text.
-
-ASR makes it possible in this context for your assistant to understand voice input.
-
-**Large Language Models (LLMs)**
-
-Advanced models trained to understand context and generate human-like responses.
-
-Here, LLMs will handle the logic and answer your questions intelligently.
-
-**Text-To-Speech (TTS)**
-
-Technology that converts written text into spoken audio.
-
-With TTS, your assistant will respond with natural-sounding speech, completing the conversation loop.
+- **Automatic Speech Recognition (ASR)**: Technology that converts spoken language into text. ASR makes it possible in this context for your assistant to understand voice input.
+- **Large Language Models (LLMs)**: Advanced models trained to understand context and generate human-like responses. Here, LLMs will handle the logic and answer your questions intelligently.
+- **Text-To-Speech (TTS)**: Technology that converts written text into spoken audio. With TTS, your assistant will respond with natural-sounding speech, completing the conversation loop.
 
 ## Requirements
 
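To make the ASR → LLM → TTS loop concrete, here is a minimal sketch of how the three stages can be chained. The `asr_transcription` and `tts_synthesis` helpers are illustrative stubs, and the OpenAI-compatible chat call assumes the `OVH_AI_ENDPOINTS_*` environment variables used elsewhere in this commit, not this tutorial's exact code:

```python
import os

from openai import OpenAI


# Illustrative stubs standing in for the ASR and TTS endpoint calls of the tutorial.
def asr_transcription(audio_path: str) -> str:
    """Transcribe recorded speech to text (stub)."""
    return "What is OVHcloud?"


def tts_synthesis(text: str) -> bytes:
    """Convert the assistant's answer back to speech (stub)."""
    return b""


# The chat endpoints expose an OpenAI-compatible API, so the standard client works.
client = OpenAI(
    base_url=os.environ["OVH_AI_ENDPOINTS_MODEL_URL"],
    api_key=os.environ["OVH_AI_ENDPOINTS_ACCESS_TOKEN"],
)


def answer(audio_path: str) -> bytes:
    question = asr_transcription(audio_path)      # 1. voice -> text
    completion = client.chat.completions.create(  # 2. text -> answer
        model=os.environ["OVH_AI_ENDPOINTS_MODEL_NAME"],
        messages=[{"role": "user", "content": question}],
    )
    return tts_synthesis(completion.choices[0].message.content)  # 3. answer -> voice
```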
@@ -80,15 +66,15 @@ python-dotenv==1.0.1
 
 Then, launch the installation of these dependencies:
 
-```
+```console
 pip install -r requirements.txt
 ```
 
 *Note that Python 3.11 is used in this tutorial.*
 
 ### Importing necessary libraries and variables
 
-Once this is done, you can create a Python file named `audio-virtual-assistant-app.py`, where you will first import Python librairies as follow:
+Once this is done, you can create a Python file named `audio-virtual-assistant-app.py`, where you will first import Python librairies as follows:
 
 ```python
 import os
@@ -239,7 +225,7 @@ if __name__ == '__main__':
 
 ### Launch Streamlit web app locally
 
-🚀 That’s it! Now, your web app is ready to be used! You can you can start this Streamlit app locally by launching the following command:
+🚀 That’s it! Now your web app is ready to be used! You can start this Streamlit app locally by launching the following command:
 
 ```python
 streamlit run audio-virtual-assistant.py

pages/public_cloud/ai_machine_learning/endpoints_tuto_03_code_assistant_continue/guide.en-gb.md

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Create a code assistant with Continue
 excerpt: Build your own code assistant directly in VSCode or JetBrains IDEs using the Continue plugin
-updated: 2025-04-15
+updated: 2025-04-18
 ---
 
 > [!primary]
@@ -15,7 +15,7 @@ updated: 2025-04-15
 
 Want more control over your code assistant? Looking to integrate your own LLM configuration and use models hosted on **[AI Endpoints](https://endpoints.ai.cloud.ovh.net/)**?
 
-This guide shows you how to build your own developer assistant using **[Continue](https://www.continue.dev/)**, an open-source IDE plugin that works with both VSCode and JetBrains IDEs, in combination with OVHcloud.
+This tutorial shows you how to build your own developer assistant using **[Continue](https://www.continue.dev/)**, an open-source IDE plugin that works with both VSCode and JetBrains IDEs, in combination with OVHcloud.
 
 Continue lets you plug in your own LLMs, enabling full control over which models you use and how they interact with your code.
 
pages/public_cloud/ai_machine_learning/endpoints_tuto_04_sentiment_analyzer/guide.en-gb.md

Lines changed: 3 additions & 3 deletions
@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Create a sentiment analyzer
 excerpt: Build a sentiment analyzer with AI Endpoints and Java using Quarkus
-updated: 2025-04-15
+updated: 2025-04-18
 ---
 
 > [!primary]
@@ -13,13 +13,13 @@ updated: 2025-04-15
 
 ## Introduction
 
-In this guide, we’ll show you how to create a sentiment analyzer using **[AI Endpoints](https://endpoints.ai.cloud.ovh.net/)** and Java with **[Quarkus](https://github.com/quarkusio/quarkus)**.
+In this tutorial, we’ll show you how to create a sentiment analyzer using **[AI Endpoints](https://endpoints.ai.cloud.ovh.net/)** and Java with **[Quarkus](https://github.com/quarkusio/quarkus)**.
 
 We'll use a model from the `Natural Language Processing (NLP)` category, specifically the `roberta-base-go_emotions` model. This model can analyze text and return emotions in response.
 
 ### Project setup
 
-To simplify the project, we'll use **[Quarkus](https://github.com/quarkusio/quarkus)**. for fast development and REST exposure.
+To simplify the project, we'll use **[Quarkus](https://github.com/quarkusio/quarkus)** for fast development and REST exposure.
 
 Start by adding the necessary dependencies in your `pom.xml`:
 
pages/public_cloud/ai_machine_learning/endpoints_tuto_05_chatbot_langchain_python/guide.en-gb.md

Lines changed: 9 additions & 16 deletions
@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Build a Python Chatbot with LangChain
 excerpt: Learn how to build a chatbot in Python using LangChain and OVHcloud AI Endpoints
-updated: 2025-04-15
+updated: 2025-04-18
 ---
 
 > [!primary]
@@ -19,7 +19,7 @@ In this tutorial, we’ll use LangChain (Python edition) with OVHcloud **[AI End
 
 ## Objective
 
-This guide demonstrates how to:
+This tutorial demonstrates how to:
 
 - Build a chatbot using LangChain and FastAPI
 - Connect to OVHcloud AI Endpoints to access LLMs
@@ -28,17 +28,10 @@ This tutorial demonstrates how to:
 
 ## Definitions
 
-**Streaming LLM Response**
-Instead of waiting for a full response from the model, streaming allows the application to start processing output tokens as they’re generated. This creates a smoother, faster user experience—especially useful for chatbots.
-
-**[LangChain4j](https://github.com/langchain4j/langchain4j)**
-Java-based framework inspired by [LangChain](https://github.com/langchain-ai/langchain), designed to simplify the integration of LLMs (Large Language Models) into applications. It offers abstractions and annotations for building intelligent agents and chatbots. Note that LangChain4j is not officially maintained by the LangChain team, despite the similar name.
-
-**[Quarkus](https://quarkus.io/)**
-A Kubernetes-native Java framework designed to optimize Java applications for containers and the cloud. In this tutorial we will use the [quarkus-langchain4j](https://github.com/quarkiverse/quarkus-langchain4j/) extension.
-
-**[AI Endpoints](https://endpoints.ai.cloud.ovh.net/)**
-A serverless platform by OVHcloud providing easy access to a variety of world-renowned AI models including Mistral, LLaMA, and more. This platform is designed to be simple, secure, and intuitive, with data privacy as a top priority.
+- **Streaming LLM Response**: Instead of waiting for a full response from the model, streaming allows the application to start processing output tokens as they’re generated. This creates a smoother, faster user experience—especially useful for chatbots.
+- **[LangChain4j](https://github.com/langchain4j/langchain4j)**: Java-based framework inspired by [LangChain](https://github.com/langchain-ai/langchain), designed to simplify the integration of LLMs (Large Language Models) into applications. It offers abstractions and annotations for building intelligent agents and chatbots. Note that LangChain4j is not officially maintained by the LangChain team, despite the similar name.
+- **[Quarkus](https://quarkus.io/)**: A Kubernetes-native Java framework designed to optimize Java applications for containers and the cloud. In this tutorial we will use the [quarkus-langchain4j](https://github.com/quarkiverse/quarkus-langchain4j/) extension.
+- **[AI Endpoints](https://endpoints.ai.cloud.ovh.net/)**: A serverless platform by OVHcloud providing easy access to a variety of world-renowned AI models including Mistral, LLaMA, and more. This platform is designed to be simple, secure, and intuitive, with data privacy as a top priority.
 
 ## Requirements
 
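As a minimal illustration of the streaming definition above (not the tutorial's actual `chat-bot.py`), assuming the environment variables introduced in the Requirements section:

```python
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

# Load OVH_AI_ENDPOINTS_* variables from the .env file described below.
load_dotenv()

llm = ChatOpenAI(
    model=os.getenv("OVH_AI_ENDPOINTS_MODEL_NAME"),
    base_url=os.getenv("OVH_AI_ENDPOINTS_MODEL_URL"),
    api_key=os.getenv("OVH_AI_ENDPOINTS_ACCESS_TOKEN"),
)

# Streaming: print output tokens as they are generated instead of
# waiting for the full response.
for chunk in llm.stream("What is OVHcloud?"):
    print(chunk.content, end="", flush=True)
```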
@@ -60,7 +53,7 @@ _OVH_AI_ENDPOINTS_MODEL_URL=https://mistral-7b-instruct-v0-3.endpoints.kepler.ai
 
 **Make sure to replace the token value (`OVH_AI_ENDPOINTS_ACCESS_TOKEN`) by yours.** If you do not have one yet, follow the instructions in the [AI Endpoints - Getting Started](/pages/public_cloud/ai_machine_learning/endpoints_guide_01_getting_started) guide.
 
-You will also have to set two other environements variables, related to the model you want to use. You can find these model-specific values in the `documentation` tab of each model. For example, if you want to add the `Mistral-7B-Instruct-v0.3` model, the expected environement variables will be:
+You will also have to set two other environments variables, related to the model you want to use. You can find these model-specific values in the `documentation` tab of each model. For example, if you want to add the `Mistral-7B-Instruct-v0.3` model, the expected environment variables will be:
 
 - `OVH_AI_ENDPOINTS_MODEL_NAME`: Mistral-7B-Instruct-v0.3
 - `OVH_AI_ENDPOINTS_MODEL_URL`: https://mistral-7b-instruct-v0-3.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1
@@ -76,7 +69,7 @@ python-dotenv==1.0.1
 
 Then, launch the installation of these dependencies:
 
-```
+```console
 pip install -r requirements.txt
 ```
 
@@ -139,7 +132,7 @@ python3 chat-bot.py --question "What is OVHcloud?"
 
 Which will give you an output similar to:
 
-```
+```console
 🤖: OVHcloud is a global cloud computing company that offers a variety of services such as virtual private servers, dedicated servers,
 storage solutions, and other web services.
 It was founded in France and has since expanded to become a leading provider of cloud infrastructure, with data centers located around the

pages/public_cloud/ai_machine_learning/endpoints_tuto_06_chatbot_langchain_javascript/guide.en-gb.md

Lines changed: 7 additions & 9 deletions
@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Build a JavaScript Chatbot with LangChain
 excerpt: Learn how to build a chatbot using JavaScript, LangChain, and AI Endpoints
-updated: 2025-04-15
+updated: 2025-04-18
 ---
 
 > [!primary]
@@ -15,11 +15,11 @@ updated: 2025-04-15
 
 **[LangChain](https://github.com/langchain-ai/langchain)** is a leading framework for building applications powered by Large Language Models (LLMs). While it's well-known for Python, LangChain also provides support for JavaScript/TypeScript—ideal for frontend and fullstack applications.
 
-In this guide, we'll show you how to build a simple command-line chatbot using LangChain and OVHcloud **[AI Endpoints](https://endpoints.ai.cloud.ovh.net/)**, first in **blocking mode**, and then with **streaming** for real-time responses.
+In this tutorial, we'll show you how to build a simple command-line chatbot using LangChain and OVHcloud **[AI Endpoints](https://endpoints.ai.cloud.ovh.net/)**, first in **blocking mode**, and then with **streaming** for real-time responses.
 
 ## Objective
 
-This guide demonstrates how to:
+This tutorial demonstrates how to:
 
 - Set up a Node.js chatbot using LangChain JS
 - Connect to OVHcloud AI Endpoints to access LLMs
@@ -45,7 +45,7 @@ _OVH_AI_ENDPOINTS_MODEL_URL=https://mistral-7b-instruct-v0-3.endpoints.kepler.ai
 
 **Make sure to replace the token value (`OVH_AI_ENDPOINTS_ACCESS_TOKEN`) by yours.** If you do not have one yet, follow the instructions in the [AI Endpoints - Getting Started](/pages/public_cloud/ai_machine_learning/endpoints_guide_01_getting_started) guide.
 
-You will also have to set two other environements variables, related to the model you want to use. You can find these model-specific values in the `documentation` tab of each model. For example, if you want to add the `Mistral-7B-Instruct-v0.3` model, the expected environement variables will be:
+You will also have to set two other environments variables, related to the model you want to use. You can find these model-specific values in the `documentation` tab of each model. For example, if you want to add the `Mistral-7B-Instruct-v0.3` model, the expected environment variables will be:
 
 - `OVH_AI_ENDPOINTS_MODEL_NAME`: Mistral-7B-Instruct-v0.3
 - `OVH_AI_ENDPOINTS_MODEL_URL`: https://mistral-7b-instruct-v0-3.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1
@@ -138,11 +138,9 @@ You can test your new assistant with the following command:
 node chatbot.js --question "What is OVHcloud?"
 ```
 
+Which will give you an output similar to:
 
-
-Which will give you an output similar to:
-
-```
+```console
 OVHcloud is a global cloud computing company that offers a wide range of services including web hosting,
 virtual private servers, cloud storage, and dedicated servers.
 It was founded in 1999 and is headquartered in Roubaix, France.
@@ -154,7 +152,7 @@ OVHcloud is known for its high-performance, scalable, and secure cloud infrastru
 
 ### Enable streaming mode
 
-As usual, you certainly want a real chatbot with conversational style. To do that let’s add streaming feature with the following code:
+As usual, you certainly want a real chatbot with conversational style. To do that, let’s add a streaming feature with the following code:
 
 ```js
 import { ChatMistralAI } from "@langchain/mistralai";
