Commit 489830a

Merge pull request #4394 from ovh/develop (Develop > Master deployment)
2 parents: 020fa58 + 087a02b

146 files changed: +8826 / -1808 lines changed


pages/cloud/private-cloud/eol-storage-migration/guide.fr-fr.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 ---
 title: Gestion de la fin de vie des stockages LV1 and LV2
 slug: eol-storage-migration
-excerpt: Déouvrez la marche à suivre pour effectuer une migration de stockage
+excerpt: Découvrez comment effectuer une migration de stockage
 section: FAQ
 order: 05
 hidden: true
```

pages/index.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -769,6 +769,7 @@
 + [AI Training - Tutorial - Connect to VSCode via remote](platform/ai/training_tuto_04_vscode_remote)
 + [AI Training - Tutorial - Use tensorboard inside a job](platform/ai/training_tuto_05_tensorboard)
 + [AI Training - Tutorial - Compare models with W&B for audio classification task](platform/ai/training_tuto_06_models_comparaison_weights_and_biases)
++ [AI Training - Tutorial - Train a Rasa chatbot with Docker and AI Training](platform/ai/training_tuto_07_train_rasa_chatbot)
 + [AI Deploy](public-cloud-ai-and-machine-learning-ai-deploy)
 + [Guides](public-cloud-ai-and-machine-learning-ai-deploy-guides)
 + [AI Deploy - Capabilities and limitations](platform/ai/deploy_guide_01_capabilities)
@@ -788,6 +789,7 @@
 + [AI Deploy - Tutorial - Deploy and call a spam classifier with FastAPI](platform/ai/deploy_tuto_08_fastapi_spam_classifier)
 + [AI Deploy - Tutorial - Create and deploy a Speech to Text application using Streamlit](platform/ai/deploy_tuto_09_streamlit_speech_to_text_app)
 + [AI Deploy - Tutorial - How to load test your application with Locust](platform/ai/deploy_tuto_10_locust)
++ [AI Deploy - Tutorial - Deploy a Rasa chatbot with a simple Flask app](platform/ai/deploy_tuto_11_rasa_chatbot_flask)
 + [Data Analytics](public-cloud-data-analytics)
 + [Data Processing](public-cloud-data-analytics-data-processing)
 + [Concepts](public-cloud-data-analytics-data-processing-concepts)
```
Lines changed: 265 additions & 0 deletions (new file)
---
title: AI Deploy - Tutorial - Deploy a Rasa chatbot with a simple Flask app
slug: deploy/rasa-chatbot
routes:
    canonical: 'https://docs.ovh.com/gb/en/publiccloud/ai/deploy/rasa-chatbot/'
excerpt: Understand how simple it is to deploy a chatbot with AI Deploy
section: AI Deploy - Tutorials
order: 11
updated: 2023-03-21
---

**Last updated 21st March, 2023.**

## Objective

In a previous tutorial, we created and trained a Rasa chatbot with AI Notebooks: [How to create and train a chatbot on OVHcloud](https://docs.ovh.com/de/publiccloud/ai/notebooks/create-rasa-chatbot). The aim of this tutorial is now to deploy that chatbot with the OVHcloud AI tools. We have also trained our chatbot with AI Training.

We used the popular open-source framework [Rasa](https://rasa.community/) to build the chatbot. To deploy it, we will use the [Flask framework](https://flask.palletsprojects.com/en/2.2.x/) to create a web app.

This tutorial's objectives are to:

1. Secure the Flask application.
2. Deploy the Rasa model with AI Deploy.
3. Deploy the Flask application and converse with the chatbot.

Here is a diagram of how it works:

![image](images/diagramme.png){.thumbnail}

## Requirements

- Access to the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.de/&ovhSubsidiary=de)
- A Public Cloud project created
- The `ovhai` CLI installed on your system (more information [here](https://docs.ovh.com/de/publiccloud/ai/cli/install-client/))
- [Docker](https://www.docker.com/get-started) installed on your local computer
- A [Docker Hub account](https://hub.docker.com/)
- Knowledge about building images with a [Dockerfile](https://docs.docker.com/engine/reference/builder/)

## Instructions

We will create two AI Deploy apps: one for the Rasa model and one for the Flask frontend. First, you will have to create two environment variables for the Flask app.

### Clone our example repository

Please make sure you have cloned the GitHub repository. You can find it [here](https://github.com/ovh/ai-training-examples).

### Create environment variables

The frontend and the backend (the chatbot) have to communicate safely and securely. We will generate security keys for that.

The first variable is the secret key used to sign the JSON Web Tokens (JWT) that grant access to your Rasa chatbot. We use Python to generate this key. If you have Python 3.6+ installed on your machine (the `secrets` module requires it), run Python inside a terminal and then:

```python
import secrets
print(secrets.token_urlsafe())
# Should display something like this:
# dux3BudMxlRSm1GI3IoBEuS7UWVU3nYGJ9l_0Cd3rms
```

The second variable is the algorithm used to sign the JSON Web Tokens: HS256.

You now have your two environment variables. Time to save them! Create a `.env` file inside the `flask_app` folder. Your `.env` should look like this:

```
JWT_SECRET_KEY=your-jwebtoken-generated-before
JWT_ALGORITHM=HS256
```

Your environment variables are saved. One more thing to do: add the same `JWT_SECRET_KEY` value to the Rasa image section of the `docker-compose.yml` file. The value must match the one in the `.env` file; otherwise, your model will not be able to run. Now, let's test our app locally, or deploy our chatbot right away!
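To make the role of these two variables concrete, here is a minimal, dependency-free sketch of HS256 token signing. The actual Flask app may rely on a JWT library such as PyJWT; this stdlib version, including the fallback secret value, is only an illustration:

```python
import base64
import hashlib
import hmac
import json
import os


def b64url(data: bytes) -> str:
    # JWTs use unpadded, URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_jwt_hs256(payload: dict, secret: str) -> str:
    # header.payload are base64url-encoded, then signed with HMAC-SHA256
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    signature = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)


# The secret would normally be read from the .env file (JWT_SECRET_KEY)
secret = os.environ.get("JWT_SECRET_KEY", "example-secret")
print(sign_jwt_hs256({"sub": "flask-frontend"}, secret))
```

A token produced this way has three dot-separated parts; only a client knowing `JWT_SECRET_KEY` can produce a signature the backend will accept.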

If you have already trained a Rasa model with OVHcloud, you should already have an object storage container holding your trained models. If you don't, please follow this section to create one. Otherwise, you can go directly [here](#localtest).

### Add one object storage container

The container we will create holds at least one model. This model will be served with AI Deploy.

When attaching containers, we can specify whether we want them mounted read-only or read-write. Splitting input data from output data is good practice: it speeds up development and avoids the risk of accidentally deleting data.

The point of using object storage rather than the local storage of an AI Notebook is to decouple compute from storage, allowing us to stop or delete the notebook while keeping the data safe.

If you want to know more about the data storage concepts, read this guide: [Create an object container](https://docs.ovh.com/de/storage/object-storage/pcs/create-container/).

For the chatbot deployment, we will create one object storage bucket containing a pretrained model. If you've already trained a model in other tutorials, don't create a new container.

To upload the model to GRA (Gravelines data centre), go into the folder `ai-training-examples/apps/flask/conversational-rasa-chatbot/back-end/models` and type:

```bash
ovhai data upload GRA <model-output-container> 20221220-094914-yellow-foley.tar.gz
```

The model `20221220-094914-yellow-foley.tar.gz` will be added to your container `<model-output-container>`. That's it, now you can deploy your chatbot.
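Rasa model bundles such as `20221220-094914-yellow-foley.tar.gz` are gzipped tar archives, so before uploading you can sanity-check the file locally. A small stdlib sketch (the file name below is just the example model from this tutorial; adapt it to your own):

```python
import tarfile


def is_valid_model_archive(path: str) -> bool:
    # A Rasa model bundle is a gzipped tar archive; reject anything else early
    try:
        with tarfile.open(path, mode="r:gz") as tar:
            return len(tar.getnames()) > 0
    except (tarfile.TarError, OSError):
        return False


# Hypothetical usage; replace with your own model file
print(is_valid_model_archive("20221220-094914-yellow-foley.tar.gz"))
```

Catching the failure locally is cheaper than discovering a corrupt archive when the deployed app tries to load it.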

### Test it locally (optional) <a name="localtest"></a>

A good practice is to test your work locally before going to production.

Open a terminal, move to the project folder (`ai-training-examples/apps/flask/conversational-rasa-chatbot`), then run:

```bash
docker compose -f "flask-docker-compose.yml" up -d --build
```

This command creates two containers: one for the Rasa model backend and one for the frontend server handled by Flask. Once both containers are running (it takes 5 minutes at most), you can browse to [localhost](http://0.0.0.0:5000/) on port 5000, the port of your frontend app.

To stop the containers, run this command:

```bash
docker compose -f flask-docker-compose.yml down
```
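Rather than guessing when the two containers are up, you can poll the frontend from Python. A small sketch using only the standard library; the URL and timings are assumptions matching the compose setup above:

```python
import time
import urllib.request


def wait_until_up(url: str, timeout_s: int = 300, interval_s: int = 5) -> bool:
    # Poll the URL until it answers with HTTP 200 or the timeout expires
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval_s) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # not up yet (connection refused, timeout, ...)
        time.sleep(interval_s)
    return False


# Hypothetical usage once `docker compose ... up -d --build` has been launched:
# wait_until_up("http://localhost:5000/")
```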

### Deploy the Rasa model in the cloud

For simplicity, we will use the `ovhai` CLI. With one command line, you will have your model up and running securely with TLS!

If you have already trained your chatbot with **AI Training** using the same Dockerfile, you don't have to create and push a new image, because the two images are identical. In that case, skip the image creation and go directly to the creation of the app.

We need a Docker image in order to deploy the chatbot. Let's write a Dockerfile, build the image and push it to your Docker Hub account. Here is the Dockerfile:

```docker
FROM python:3.8

WORKDIR /workspace
ADD . /workspace

RUN pip install --no-cache-dir -r requirements_rasa.txt

RUN chown -R 42420:42420 /workspace
ENV HOME=/workspace

# If you deploy the chatbot, you expose port 5005.
EXPOSE 5005

CMD rasa run -m trained-models --cors "*" --debug --connector socketio --credentials "crendentials.yml" --endpoints "endpoints.yml" & rasa run actions
```

This file can be found in the GitHub repository; you don't have to create it. The file is [here](https://github.com/ovh/ai-training-examples/blob/main/apps/flask/conversational-rasa-chatbot/back-end/rasa.Dockerfile).

Now run the following commands in the `back-end` folder to build and push the image:

```bash
docker build . -f rasa.Dockerfile -t <yourdockerhubId>/rasa-chatbot:latest
docker push <yourdockerhubId>/rasa-chatbot:latest
```
Now that your image is pushed, let's run our application and deploy our model!

```console
ovhai app run --name rasa-back \
    --unsecure-http \
    --default-http-port 5005 \
    --cpu 4 \
    --volume <model-output-container>@GRA:/workspace/trained-models:RO \
    -e JWT_SECRET=<JWT_SECRET_KEY> \
    <yourdockerhubId>/rasa-chatbot:latest
```

Explanation of each line:

- Launch an app in AI Deploy with the name "rasa-back".
- Specify that our URL is not secured by OVHcloud. The model will in fact be secured with a JSON Web Token: the only client able to access your model is the Flask frontend application. This is also why you created the environment variables before. If you want to know more about JSON Web Tokens, please refer to <https://jwt.io/>.
- The port of the Rasa model is 5005.
- 4 CPUs are sufficient to deploy the model.
- Add a volume (mounted read-only) to access the model file.
- In the `-e` argument, put the JWT secret key you generated, the one in your `.env` file.
- The last line specifies the Docker image to load; its `CMD` provides the command run inside the container.
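As noted above, the model is protected by a JSON Web Token rather than by OVHcloud. Rasa performs this check itself when JWT auth is configured; purely as an illustration of what that check amounts to (this is not the code Rasa runs), here is a dependency-free HS256 verification sketch:

```python
import base64
import hashlib
import hmac


def _b64url_decode(part: str) -> bytes:
    # Restore the padding stripped by base64url encoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))


def verify_jwt_hs256(token: str, secret: str) -> bool:
    # Recompute the HMAC-SHA256 signature and compare in constant time
    try:
        header_b64, payload_b64, signature_b64 = token.split(".")
    except ValueError:
        return False  # not a three-part token
    signing_input = (header_b64 + "." + payload_b64).encode()
    expected = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return hmac.compare_digest(expected, _b64url_decode(signature_b64))
```

Any request carrying a token not signed with your `JWT_SECRET_KEY` fails this comparison, which is why the secret must be identical on both apps.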

Explanation of the command running the chatbot (you can find it inside the Dockerfile):

- `rasa run`: Run a trained model so it can be used by other applications.
- `-m trained-models`: Specify the path to the models trained before.
- `--cors "*"`: Enable all CORS origins; our frontend application must have access to the model.
- `--debug`: Print all of the logs, including each user connecting and disconnecting.
- `--connector socketio`: Use the Socket.IO connector, which lets a website connect to the bot.
- `--credentials "crendentials.yml"`: Specify the path of the credentials file.
- `--endpoints "endpoints.yml"`: Specify the path of the `endpoints.yml` file.
- `rasa run actions`: Launch the custom actions you made before so the bot can use them.

Now, wait until your app is started, then go to its URL. Nothing special will happen, just a small message saying **hello from Rasa 3.2.0**!

For better interactions, we will now deploy the Flask frontend. For simplicity, everything is in the cloned [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/flask/conversational-rasa-chatbot).
### Deploy the frontend app

- Create the Dockerfile

First, we need a Dockerfile. It is already created, in the `front-end` folder. Here is what it looks like:

```docker
FROM python:3.8

WORKDIR /workspace
ADD . /workspace

RUN pip install --no-cache-dir -r requirements_flask.txt

RUN chown -R 42420:42420 /workspace
ENV HOME=/workspace

EXPOSE 5000

CMD python3 app.py
```

Let's now run the app on AI Deploy! To do so, you need to build a Docker image. Go into the `front-end` folder (`ai-training-examples/apps/flask/conversational-rasa-chatbot/front-end`) and run:

```bash
docker build . -f flask.Dockerfile -t <yourdockerhubId>/flask-app:latest
docker push <yourdockerhubId>/flask-app:latest
```

- Deploy the Docker image

Once built, let's run the frontend application with the `ovhai` CLI.

But first, get the URL of your backend Rasa chatbot. It will look something like this: **https://259b36ff-fc61-46a5-9a25-8d9a7b9f8ff6.app.gra.training.ai.cloud.ovh.net/**. You can retrieve it with the CLI by listing all of your apps and locating the one you want.

Now you can run this command:

```bash
ovhai app run --name flask-app \
    --token <token> \
    --default-http-port 5000 \
    -e API_URL=<RasaURL_Previously_Copied> \
    --cpu 2 \
    <yourdockerhubId>/flask-app:latest
```

That's it! On the URL of this app, you can speak to your chatbot. Try to have a simple conversation! If you reload the page, you will notice that the chatbot starts from scratch: each user session on each machine is independent.

Here is an example of a conversation with the chatbot:

![image](images/result.jpg){.thumbnail}

## Go further

If you want to see how the model is created and trained with AI Notebooks, please follow this tutorial: [How to create and train a Rasa chatbot](https://docs.ovh.com/de/publiccloud/ai/notebooks/create-rasa-chatbot).

If you want to train a Rasa chatbot with AI Training, please refer to this tutorial: [How to train a chatbot with Docker and AI Training](https://docs.ovh.com/de/publiccloud/ai/training/tuto-train-rasa-chatbot).

If you want to use more of Rasa's functionalities, please follow this link (we use Rasa Open Source, not Rasa X): [Rasa Open Source](https://rasa.com/docs/rasa/).

If you want to know more about the Flask framework, please go to this link: [Flask framework](https://flask.palletsprojects.com/en/2.2.x/).

## Feedback

Please send us your questions, feedback and suggestions to improve the service:

- On the OVHcloud [Discord server](https://discord.com/go/ovhcloud)
