---
excerpt: How to build a sign language recognition app with Streamlit
section: AI Deploy - Tutorials
order: 13
updated: 2023-04-03
---

**Last updated 3rd April, 2023.**

## Objective

In order to do this, you will use [Streamlit](https://streamlit.io/), a Python framework.

For more information on how to train YOLOv7 on a custom dataset, refer to the following [documentation](https://docs.ovh.com/gb/en/publiccloud/ai/notebooks/yolov7-sign-language/).

Here is an overview of the Sign Language recognition app:

## Requirements

- Access to the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.co.uk/&ovhSubsidiary=GB)
- An AI Deploy project created inside a Public Cloud project
- A [user for AI Deploy](https://docs.ovh.com/gb/en/publiccloud/ai/users)
- [Docker](https://www.docker.com/get-started) installed on your local computer
- Some knowledge about building an image and a [Dockerfile](https://docs.docker.com/engine/reference/builder/)
- Your weights obtained from training the YOLOv7 model on the [ASL letters dataset](https://public.roboflow.com/object-detection/american-sign-language-letters/1) (refer to the *"Export trained weights for future inference"* part of the [notebook for YOLOv7](https://github.com/ovh/ai-training-examples/blob/main/notebooks/computer-vision/object-detection/miniconda/yolov7/notebook_object_detection_yolov7_asl.ipynb))

## Instructions

You are going to follow different steps to build your Streamlit application.

- More information about Streamlit capabilities can be found [here](https://docs.streamlit.io/en/stable/).
- Direct link to the full Python script can be found [here](https://github.com/ovh/ai-training-examples/blob/main/apps/streamlit/sign-language-recognition-yolov7-app/main.py).

> [!warning]
> **Warning**
> You must have previously created an `asl-volov7-model` Object Storage container when training your model via [AI Notebooks](https://docs.ovh.com/gb/en/publiccloud/ai/notebooks/yolov7-sign-language/).
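The complete application logic lives in the linked `main.py`. As a toy illustration of the kind of post-processing such an app performs on YOLOv7 predictions, here is a hypothetical helper (the class list, function name, and threshold are assumptions for illustration, not the tutorial's actual code) that keeps confident detections and maps class indices to ASL letters:

```python
import string

# Hypothetical class list: one class per ASL letter, indexed 0-25
# (the real index order depends on how the dataset was exported)
ASL_CLASSES = list(string.ascii_uppercase)

def letters_from_detections(detections, threshold=0.5):
    """Map raw (class_id, confidence) pairs to letter labels,
    dropping detections below the confidence threshold."""
    return [
        ASL_CLASSES[class_id]
        for class_id, confidence in detections
        if confidence >= threshold
    ]

# Example: two confident detections and one that is filtered out
print(letters_from_detections([(0, 0.91), (11, 0.78), (3, 0.20)]))
# → ['A', 'L']
```

The Streamlit front end then only has to display the predicted letters next to the annotated frame.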

Launch the following command from the **Dockerfile** directory to build your application:
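The build command itself is not shown in this excerpt; based on the `.` and `-t` arguments described in the notes, it would take this shape (a sketch, run from the **Dockerfile** directory):

```
docker build . -t yolov7-streamlit-asl-recognition:latest
```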

> [!primary]
> **Notes**
>
> - The dot `.` argument indicates that your build context (place of the **Dockerfile** and other needed files) is the current directory.
>
> - The `-t` argument allows you to choose the identifier to give to your image. Usually image identifiers are composed of a **name** and a **version tag**: `<name>:<version>`. For this example we chose **yolov7-streamlit-asl-recognition:latest**.

### Test it locally (optional)

Launch the following **Docker command** to launch your application locally on your computer:

```
docker run --rm -it -p 8501:8501 --user=42420:42420 yolov7-streamlit-asl-recognition:latest
```

> [!primary]
> **Notes**
>
> - The `-p 8501:8501` argument indicates that you want to execute a port redirection from the port **8501** of your local machine into the port **8501** of the Docker container. The port **8501** is the default port used by **Streamlit** applications.
>
> - Don't forget the `--user=42420:42420` argument if you want to simulate the exact same behaviour that will occur on **AI Deploy apps**. It executes the Docker container as the specific OVHcloud user (user **42420:42420**).

Once started, your application should be available on `http://localhost:8501`.
### Push the image into the shared registry
> [!warning]
> **Warning**
> The shared registry of AI Deploy should only be used for testing purposes. Please consider attaching your own Docker registry. More information about this can be found [here](https://docs.ovh.com/gb/en/publiccloud/ai/training/add-private-registry).

Run your app with the following command:

```
ovhai app run <shared-registry-address>/yolov7-streamlit-asl-recognition:latest
```

> [!primary]
> **Notes**
>
> - `--default-http-port 8501` indicates that the port to reach on the app URL is `8501`.
>
> - `--gpu 1` indicates that we request one GPU for that app.
>
> - Consider adding the `--unsecure-http` attribute if you want your application to be reachable without any authentication.

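Combining the arguments described in the notes into one invocation gives a command of this shape (a sketch; substitute your own registry address, and drop `--unsecure-http` if you want authentication to stay on):

```
ovhai app run --gpu 1 --default-http-port 8501 --unsecure-http <shared-registry-address>/yolov7-streamlit-asl-recognition:latest
```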
## Go further

- You can imagine deploying an app using YOLO models with another Python framework: **Flask**. Refer to this [tutorial](https://docs.ovh.com/gb/en/publiccloud/ai/deploy/web-service-yolov5/).
- Feel free to use **Streamlit** for other AI tasks! Deploy a Speech-to-Text app [here](https://docs.ovh.com/gb/en/publiccloud/ai/deploy/tuto-streamlit-speech-to-text-app/).

If you want to run it with the CLI, just follow this [guide](https://docs.ovh.com/gb/en/publiccloud/ai/cli/access-object-storage-data). You have to choose the region and the name of your container, as well as the path where your data is located, and then use the following commands.
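The commands themselves are elided in this excerpt; with the `ovhai` CLI, an upload typically follows this pattern (hypothetical values: `GRA` as the region, the `asl-volov7-model` container, and a local weights folder):

```
# hypothetical example: upload local files to the asl-volov7-model container in GRA
ovhai data upload GRA asl-volov7-model ./yolov7-weights
```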

> [!warning]
>

You can then reach your notebook’s URL once it is running.
### Experimenting with the YOLOv7 notebook

You are now able to train the YOLOv7 model to recognize sign language!

A preview of this notebook can be found on GitHub [here](https://github.com/ovh/ai-training-examples/blob/main/notebooks/computer-vision/object-detection/miniconda/yolov7/notebook_object_detection_yolov7_asl.ipynb).