
Commit 92aab0b

add new tutorials and reindex tutorial order
1 parent 607d502 commit 92aab0b

34 files changed: +1418 −23 lines changed


pages/index.md

Lines changed: 11 additions & 4 deletions

@@ -1079,10 +1079,17 @@
 + [AI Endpoints - Features, Capabilities and Limitations](public_cloud/ai_machine_learning/endpoints_guide_02_capabilities)
 + [AI Endpoints - Troubleshooting](public_cloud/ai_machine_learning/endpoints_guide_03_troubleshooting)
 + [Tutorials](public-cloud-ai-and-machine-learning-ai-endpointstutorials)
-+ [AI Endpoints - Create your own audio summarizer](public_cloud/ai_machine_learning/endpoints_guide_01_getting_started)
-+ [AI Endpoints - Create your own audio assistant](public_cloud/ai_machine_learning/endpoints_guide_02_capabilities)
-+ [AI Endpoints - Enable conversational memory in your chatbot using LangChain](public_cloud/ai_machine_learning/endpoints_guide_02_capabilities)
-+ [AI Endpoints - Create your own AI chatbot using LangChain4j and Quarkus](public_cloud/ai_machine_learning/endpoints_tuto_04_chatbot_langchain4j_quarkus)
++ [AI Endpoints - Create your own audio summarizer](public_cloud/ai_machine_learning/endpoints_tuto_01_audio_summarizer)
++ [AI Endpoints - Create your own voice assistant](public_cloud/ai_machine_learning/endpoints_tuto_02_voice_virtual_assistant)
++ [AI Endpoints - Create a code assistant with Continue](public_cloud/ai_machine_learning/endpoints_tuto_03_code_assistant_continue)
++ [AI Endpoints - Create a sentiment analyzer](public_cloud/ai_machine_learning/endpoints_tuto_04_sentiment_analyzer)
++ [AI Endpoints - Build a Python Chatbot with LangChain](public_cloud/ai_machine_learning/endpoints_tuto_05_chatbot_langchain_python)
++ [AI Endpoints - Build a JavaScript Chatbot with LangChain](public_cloud/ai_machine_learning/endpoints_tuto_06_chatbot_langchain_javascript)
++ [AI Endpoints - Create your own AI chatbot using LangChain4j and Quarkus](public_cloud/ai_machine_learning/endpoints_tuto_07_chatbot_langchain4j_quarkus)
++ [AI Endpoints - Streaming Chatbot with LangChain4j and Quarkus](public_cloud/ai_machine_learning/endpoints_tuto_08_streaming_chatbot_langchain4j_quarkus)
++ [AI Endpoints - Enable conversational memory in your chatbot using LangChain](public_cloud/ai_machine_learning/endpoints_tuto_09_chatbot_memory_langchain)
++ [AI Endpoints - Create a Memory Chatbot with LangChain4j](public_cloud/ai_machine_learning/endpoints_tuto_10_memory_chatbot_langchain4j)
++ [AI Endpoints - Build a RAG Chatbot with LangChain](public_cloud/ai_machine_learning/endpoints_tuto_11_rag_chatbot_langchain)
 + [AI Partners Ecosystem](products/public-cloud-ai-and-machine-learning-ai-ecosystem)
 + [AI Partners - Guides](public-cloud-ai-and-machine-learning-ai-ecosystem-guides)
 + [AI Partners Ecosystem - Lettria - Models features, capabilities and billing](public_cloud/ai_machine_learning/ecosystem_01_lettria_billing_features_capabilities)

pages/public_cloud/ai_machine_learning/endpoints_tuto_01_audio_summarizer/guide.en-gb.md

Lines changed: 4 additions & 4 deletions

@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Create your own audio summarizer
 excerpt: Summarize hours of meetings using ASR and LLM AI endpoints
-updated: 2025-04-10
+updated: 2025-04-15
 ---
 
 > [!primary]
@@ -75,7 +75,7 @@ pip install -r requirements.txt
 
 *Note that Python 3.11 is used in this tutorial.*
 
-### Python scripts
+### Importing necessary libraries and variables
 
 Once this is done, you can create a Python file named `audio-summarizer-app.py`, where you will first import the Python libraries as follows:
 
@@ -155,7 +155,7 @@ def asr_transcription(audio):
 
 - The audio file is preprocessed as follows: `.wav` format, `1` channel, `16000` frame rate
 - The transformed audio `processed_audio` is read
-- An API call is made to the ASR AI Endpoint named `nvr-asr-en-gb`
+- An API call is made to the ASR endpoint named `nvr-asr-en-gb`
 - The full response is stored in the `resp` variable and returned by the function
 
 🎉 Now that you have this function, you are ready to transcribe audio files.
@@ -199,7 +199,7 @@ def chat_completion(new_message):
 
 ⚡️ You're almost there! The final step is to build your web app, making your solution easy to use with just a few lines of code.
 
-### Build Gradio app
+### Build the app with Gradio
 
 [Gradio](https://www.gradio.app/) is an open-source Python library that allows you to quickly create user interfaces for Machine Learning models and demos.
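The preprocessing contract described in the hunk above (`.wav` format, `1` channel, `16000` frame rate) can be illustrated with Python's standard `wave` module. This is a minimal sketch, not the tutorial's own code: the helper name, file name, and generated tone are illustrative only.

```python
import math
import struct
import wave

def write_mono_16k_wav(path, samples):
    """Write 16-bit PCM samples as a 1-channel, 16000 Hz WAV file,
    the format the ASR endpoint expects according to the tutorial."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)      # mono (1 channel)
        wf.setsampwidth(2)      # 16-bit samples
        wf.setframerate(16000)  # 16 kHz frame rate
        wf.writeframes(struct.pack("<%dh" % len(samples), *samples))

# One second of a 440 Hz tone as stand-in input audio.
tone = [int(10000 * math.sin(2 * math.pi * 440 * t / 16000)) for t in range(16000)]
write_mono_16k_wav("processed_audio.wav", tone)

# Check that the written file matches the expected format.
with wave.open("processed_audio.wav", "rb") as wf:
    print(wf.getnchannels(), wf.getframerate())  # 1 16000
```

In the tutorial itself, the converted file is what gets read and sent to `nvr-asr-en-gb`; any audio library that produces the same mono/16 kHz layout would do.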
pages/public_cloud/ai_machine_learning/endpoints_tuto_02_audio_virtual_assistant/meta.yaml

Lines changed: 0 additions & 2 deletions
This file was deleted.

pages/public_cloud/ai_machine_learning/endpoints_tuto_02_audio_virtual_assistant/guide.en-gb.md renamed to pages/public_cloud/ai_machine_learning/endpoints_tuto_02_voice_virtual_assistant/guide.en-gb.md

Lines changed: 5 additions & 5 deletions

@@ -1,7 +1,7 @@
 ---
-title: AI Endpoints - Create your own audio assistant
+title: AI Endpoints - Create your own voice assistant
 excerpt: Create a voice-enabled chatbot using ASR, LLM, and TTS endpoints in under 100 lines of code
-updated: 2025-04-10
+updated: 2025-04-15
 ---
 
 > [!primary]
@@ -86,7 +86,7 @@ pip install -r requirements.txt
 
 *Note that Python 3.11 is used in this tutorial.*
 
-### Python scripts
+### Importing necessary libraries and variables
 
 Once this is done, you can create a Python file named `audio-virtual-assistant-app.py`, where you will first import the Python libraries as follows:
 
@@ -151,7 +151,7 @@ Then, build the **Text To Speech (TTS)** function in order to transform the writ
 **What to do?**
 
 - The LLM response is retrieved
-- A call is made to the TTS AI Endpoint named `nvr-tts-en-us`
+- A call is made to the TTS AI endpoint named `nvr-tts-en-us`
 - The audio sample and the sample rate are returned to play the audio automatically
 
 ```python
@@ -181,7 +181,7 @@ def tts_synthesis(response):
 
 ### Build the LLM chat app with Streamlit
 
-In this last step, create the Chatbot app using the [Mixtral8x7B](https://endpoints.ai.cloud.ovh.net/models/e2ecb4a7-98d5-420d-9789-e0aa6ddf0ffc) endpoint (or any other model) and [Streamlit](https://streamlit.io/), an open-source Python library that allows you to quickly create user interfaces for Machine Learning models and demos. Here is a working code example:
+In this last step, create the chatbot app using the [Mixtral8x7B](https://endpoints.ai.cloud.ovh.net/models/e2ecb4a7-98d5-420d-9789-e0aa6ddf0ffc) endpoint (or any other model) and [Streamlit](https://streamlit.io/), an open-source Python library that allows you to quickly create user interfaces for Machine Learning models and demos. Here is a working code example:
 
 ```python
 # streamlit interface
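The TTS bullets in the hunk above describe a function that returns the audio sample together with its sample rate so the app can play the answer back automatically. A pure-Python stand-in for that contract (no real call to `nvr-tts-en-us`; the function name and the 22050 Hz rate are assumptions for illustration only) could look like:

```python
import math

def fake_tts_synthesis(response: str):
    """Stand-in for the TTS endpoint call: synthesize a short beep whose
    duration scales with the text length, and return (waveform, sample_rate),
    mirroring the return contract described in the tutorial."""
    sample_rate = 22050  # illustrative; the real endpoint defines its own rate
    duration_s = min(2.0, 0.05 * len(response))
    n = int(sample_rate * duration_s)
    # Float samples in [-1.0, 1.0], the usual range for playback widgets.
    audio = [math.sin(2 * math.pi * 440 * i / sample_rate) for i in range(n)]
    return audio, sample_rate

audio, sample_rate = fake_tts_synthesis("Hello! How can I help you today?")
```

The real `tts_synthesis` would replace the beep with the endpoint's synthesized speech, keeping the same `(audio, sample_rate)` shape for the player.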
Lines changed: 2 additions & 0 deletions

+id: 5cc03bb6-9650-4c10-9aca-e01890825e17
+full_slug: public-cloud-ai-endpoints-voice-virtual-assistant
Lines changed: 128 additions & 0 deletions

---
title: AI Endpoints - Create a code assistant with Continue
excerpt: Build your own code assistant directly in VSCode or JetBrains IDEs using the Continue plugin
updated: 2025-04-15
---

> [!primary]
>
> AI Endpoints is currently in **Beta**. Although we aim to offer a production-ready product even in this testing phase, service availability may not be guaranteed. Please be careful if you use endpoints for production, as the Beta phase is not yet complete.
>
> AI Endpoints is covered by the **[OVHcloud AI Endpoints Conditions](https://storage.gra.cloud.ovh.net/v1/AUTH_325716a587c64897acbef9a4a4726e38/contracts/48743bf-AI_Endpoints-ALL-1.1.pdf)** and the **[OVHcloud Public Cloud Special Conditions](https://storage.gra.cloud.ovh.net/v1/AUTH_325716a587c64897acbef9a4a4726e38/contracts/d2a208c-Conditions_particulieres_OVH_Stack-WE-9.0.pdf)**.
>

## Introduction

Want more control over your code assistant? Looking to integrate your own LLM configuration and use models hosted on **[AI Endpoints](https://endpoints.ai.cloud.ovh.net/)**?

This guide shows you how to build your own developer assistant using **[Continue](https://www.continue.dev/)**, an open-source IDE plugin that works with both VSCode and JetBrains IDEs, in combination with OVHcloud.

Continue lets you plug in your own LLMs, enabling full control over which models you use and how they interact with your code.

## Requirements

- A [Public Cloud project](/links/public-cloud/public-cloud) in your OVHcloud account
- An access token for **OVHcloud AI Endpoints**. To create an API token, follow the instructions in the [AI Endpoints - Getting Started](/pages/public_cloud/ai_machine_learning/endpoints_guide_01_getting_started) guide.

## Instructions

### Install Continue

Continue is distributed as an IDE plugin and supports:

- Visual Studio Code
- JetBrains IDEs (e.g. IntelliJ, PyCharm)

Follow the [official Continue installation instructions](https://docs.continue.dev/docs/getting-started/install) for your IDE.

Once installed, Continue will share the same configuration across your IDEs.

### Configure Continue with AI Endpoints

Continue uses a JSON-based configuration file to manage:

- Chatbot tool models
- Tab autocomplete models

You can customize this configuration file to connect the plugin to AI Endpoints:

```json
{
  "tabAutocompleteModel": {
    "title": "Qwen2.5-Coder-32B-Instruct",
    "model": "Qwen2.5-Coder-32B-Instruct",
    "apiBase": "https://qwen-2-5-coder-32b-instruct.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1",
    "provider": "openai",
    "useLegacyCompletionsEndpoint": true,
    "apiKey": "<your API key>"
  },
  "models": [
    {
      "title": "Meta-Llama-3_3-70B-Instruct",
      "model": "Meta-Llama-3_3-70B-Instruct",
      "apiBase": "https://llama-3-3-70b-instruct.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1",
      "provider": "openai",
      "useLegacyCompletionsEndpoint": false,
      "apiKey": "<your API key>"
    },
    {
      "title": "Qwen2.5-Coder-32B-Instruct",
      "model": "Qwen2.5-Coder-32B-Instruct",
      "apiBase": "https://qwen-2-5-coder-32b-instruct.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1",
      "provider": "openai",
      "useLegacyCompletionsEndpoint": false,
      "apiKey": "<your API key>"
    }
  ]
  // ...
}
```
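Since the `apiBase` URLs above expose an OpenAI-compatible API, a model can be sanity-checked outside the IDE with any OpenAI-style client. The sketch below only assembles the request and makes no network call; it assumes the standard OpenAI chat-completions payload shape (`model`, `messages`) behind the `openai_compat` path, and the token value is a placeholder:

```python
import json

# apiBase taken from the Continue configuration above.
API_BASE = "https://llama-3-3-70b-instruct.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1"

def build_chat_request(api_key: str, model: str, user_message: str):
    """Assemble the URL, headers, and JSON body of an OpenAI-style
    chat-completions request against an AI Endpoints apiBase."""
    url = f"{API_BASE}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # AI Endpoints access token
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_chat_request(
    "<your API key>", "Meta-Llama-3_3-70B-Instruct", "Explain this function."
)
```

Sending the same payload with `requests` or pointing the official `openai` client at the same `apiBase` should behave identically to what Continue sends from the IDE.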
### Tab Completion Configuration

You can define only one model for tab autocomplete. Choose any model from the Code LLM category in AI Endpoints. Here's a quick example:

```json
{
  "tabAutocompleteModel": {
    "title": "Qwen2.5-Coder-32B-Instruct",
    "model": "Qwen2.5-Coder-32B-Instruct",
    "apiBase": "https://qwen-2-5-coder-32b-instruct.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1",
    "provider": "openai",
    "useLegacyCompletionsEndpoint": true,
    "apiKey": "<your API key>"
  }
}
```

### Chatbot Configuration

For the chatbot tool, you can define multiple models. Try out different LLMs and choose the one that best fits your use case. You can switch between them easily in the IDE UI.

### Try It Out

Once Continue is configured with your AI Endpoints, you're ready to test both features:

**Chatbot Tool**

Use the chatbot sidebar to ask for help, generate code, or refactor logic with any of your configured models.

![image](images/chatbot.gif){.thumbnail}

**Tab Completion Tool**

Just start typing in your editor. The autocomplete model will complete code as you go, powered by your custom-configured model from AI Endpoints.

![image](images/tab-completion.gif){.thumbnail}

## Conclusion

By using Continue and AI Endpoints, you now have access to a fully customizable code assistant, support for cutting-edge open-source large language models such as Qwen, Mixtral, and LLaMA 3, and the ability to manage your own configuration and resources on AI Endpoints.

If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project.

## Feedback

Please feel free to send us your questions, feedback, and suggestions regarding AI Endpoints and its features:

- In the #ai-endpoints channel of the OVHcloud [Discord server](https://discord.gg/ovhcloud), where you can engage with the community and OVHcloud team members.
Two binary files added (7.24 MB and 7.43 MB; previews not rendered).

0 commit comments