Commit afa9fff

Update training rock paper scissors (#93)

* feat: ✨ update with workshop code
* feat: ⬆️ upgrade Rock/Paper/Scissors to YOLOv11

1 parent b67a7b7 commit afa9fff

File tree

3 files changed (+27, −25 lines)


ai/rock-paper-scissors/application/Dockerfile

Lines changed: 0 additions & 3 deletions

```diff
@@ -19,6 +19,3 @@ RUN chown -R 42420:42420 /workspace
 
 # run (CMD) the app.py
 CMD [ "streamlit" , "run" , "/workspace/app.py", "--server.address=0.0.0.0" ]
-
-
-#ENV HOME=/workspace
```

ai/rock-paper-scissors/training/README.md

Lines changed: 6 additions & 4 deletions

````diff
@@ -5,12 +5,14 @@
 
 ### Set up
 - install `ovhai` CLI, see [documentation](https://help.ovhcloud.com/csm/en-gb-public-cloud-ai-cli-install-client?id=kb_article_view&sysparm_article=KB0047844)
-- we assume that you have an S3* compatible bucket named `rock-paper-scissors` with all needed data from the demo Create a Notebook to [play to rock/paper/scissors](../../notebooks/YOLOV8/)
+- we assume that you have an S3* compatible bucket named `rock-paper-scissors` with all needed data from the demo Create a Notebook to [play to rock/paper/scissors](../../rock-paper-scissors/notebooks/rock-paper-scissors.ipynb)
 
 ### Image build for AI Training
 
-- build the image with the Python script: `docker build . -t ovhcom/rock-paper-scissors-training-job:1.0.0`
-- push the image to the registry: `docker push ovhcom/rock-paper-scissors-training-job:1.0.0`
+- build the image with the Python script: `docker build . -t <Shared Docker Registries>/rock-paper-scissors-training-job:1.0.0`
+- push the image to the registry: `docker push <Shared Docker Registries>/rock-paper-scissors-training-job:1.0.0`
+
+> ℹ️ see the [documentation](https://help.ovhcloud.com/csm/fr-public-cloud-ai-manage-registries?id=kb_article_view&sysparm_article=KB0057958) for more information about the _Shared Docker Registries_ ℹ️
 
 ### AI Training Job creation
 
@@ -22,7 +24,7 @@ ovhai job run \
   --env NB_OF_EPOCHS=50 \
   --volume rock-paper-scissors-data@S3GRA/:/workspace/data:RW:cache \
   --unsecure-http \
-  ovhcom/rock-paper-scissors-training-job:1.0.0
+  <Shared Docker Registries>/rock-paper-scissors-training-job:1.0.0
 ```
 
 You can follow the training with the logs: `ovhai job logs -f <job id>`
````
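The build and push commands above parameterize the image reference with a registry prefix that each reader substitutes for `<Shared Docker Registries>`. As a hypothetical illustration of that naming pattern (the `image_ref` helper and the example registry host are not part of the repository), it can be sketched in Python:

```python
# Hypothetical helper (not in the repository): assemble the fully qualified
# image reference passed to `docker build -t ...` and `docker push ...`,
# with the shared-registry prefix supplied by the reader.
def image_ref(registry: str,
              name: str = "rock-paper-scissors-training-job",
              tag: str = "1.0.0") -> str:
    """Return '<registry>/<name>:<tag>'."""
    return f"{registry}/{name}:{tag}"

# Placeholder registry host; replace with your actual Shared Docker Registry.
print(image_ref("my-shared-registry.example"))
# → my-shared-registry.example/rock-paper-scissors-training-job:1.0.0
```

The same fully qualified reference must be used in the `docker build`, `docker push`, and `ovhai job run` steps, otherwise the job will pull a different image than the one you built.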

ai/rock-paper-scissors/training/train.py

Lines changed: 21 additions & 18 deletions

```diff
@@ -3,39 +3,42 @@
 import shutil
 import os
 
-#######################################################################################################################
-## 🎯 The aim of this script is to do transfert learning on YOLOv8 model.                                            ##
-## ℹ️ Note on the environments variables:                                                                            ##
-##    - YOLO_MODEL (default value yolo11n.pt) is the YOLO model you want fine tune                                   ##
-##    - NB_OF_EPOCHS (default value: 50) is an environment variable passed to the Docker run command to specify      ##
-##      the number of epochs                                                                                         ##
-##    - DEVICE_TO_USE (default value 0) is to specify to use GPU (0) or CPU (cpu)                                    ##
-##    - PATH_TO_DATASET (default value is '/workspace/data/data.yaml') is to specify the path to the                 ##
-##      training dataset                                                                                             ##
-##    - PATH_TO_EXPORTED_MODEL (default value is '/workspace/data/') is to specify the path where export the         ##
-##      trained model                                                                                                ##
-#######################################################################################################################
+#########################################################################################################################
+## 🎯 The aim of this script is to do transfert learning on YOLOv11 model.                                             ##
+## ℹ️ Note on the environments variables:                                                                              ##
+##    - NB_OF_EPOCHS (default value: 50) is an environment variable passed to the Docker run command to specify        ##
+##      the number of epochs                                                                                           ##
+##    - DEVICE_TO_USE (default value 0) is to specify to use GPU (0) or CPU (cpu)                                      ##
+##    - PATH_TO_DATASET (default value is '/workspace/attendee/data.yaml') is to specify the path to the               ##
+##      training dataset                                                                                               ##
+##    - PATH_TO_EXPORTED_MODEL (default value is '/workspace/attendee/') is to specify the path where export the       ##
+##      trained model                                                                                                  ##
+##    - BATCH specifies the number of images used for one training iteration before updating the model's weights.     ##
+##      A larger batch size can lead to faster training but requires more memory.                                      ##
+##    - FREEZE allows to freeze certain layers of a pre-trained model. This way, these layers are kept unchanged       ##
+##      during training, which allows to preserve knowledge from the pre-trained model.                                ##
+#########################################################################################################################
 
 # ✅ Check configuration
 ultralytics.checks()
 
+# 🧠 Load a pretrained YOLO model
+model = YOLO('yolo11n.pt')
+
 # 🛠 Get configuration from environment variables
-yoloModel = os.getenv('YOLO_MODEL', 'yolo11n.pt')
 nbOfEpochs = os.getenv('NB_OF_EPOCHS', 50)
 deviceToUse = os.getenv('DEVICE_TO_USE', 0)
 pathToDataset = os.getenv('PATH_TO_DATASET', '/workspace/data/data.yaml')
 pathToExportedModel = os.getenv('PATH_TO_EXPORTED_MODEL', '/workspace/data/')
-print('YOLO model to use:', yoloModel)
+batch = os.getenv('BATCH', 64)
+freeze = os.getenv('FREEZE', 10)
 print('Number of epochs to set:', nbOfEpochs)
 print('Device to set:', deviceToUse)
 print('Path to the dataset to set:', pathToDataset)
 print('Path to the exported model to set:', pathToExportedModel)
 
-# 🧠 Load a pretrained YOLO model
-model = YOLO(yoloModel)
-
 # 💪 Train the model with new data ➡️ one GPU / NB_OF_EPOCHS iterations (epochs)
-model.train(data=pathToDataset, device=deviceToUse, epochs=int(nbOfEpochs), verbose=True)
+model.train(data=pathToDataset, device=deviceToUse, epochs=int(nbOfEpochs), verbose=True, batch=batch, freeze=freeze)
 
 # 💾 Save the model
 exportedMetaData = model.export()
```
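One caveat in the configuration block above: `os.getenv` only falls back to the Python default (e.g. `64`) when the variable is unset; whenever `docker run --env BATCH=64` actually sets it, the value arrives as a string, which is why the script already wraps `nbOfEpochs` in `int(...)` and may be worth doing for `batch` and `freeze` as well. A minimal sketch of that pattern (the `env_int` helper is hypothetical, not part of the commit):

```python
import os

# Hypothetical helper: read an environment variable and coerce it to int,
# since os.getenv returns a str whenever the variable is actually set.
def env_int(name: str, default: int) -> int:
    value = os.getenv(name, default)
    return int(value)  # int('64') -> 64; int(64) -> 64

os.environ["BATCH"] = "64"      # simulate `docker run --env BATCH=64`
batch = env_int("BATCH", 32)    # set in the environment -> 64, as an int
freeze = env_int("FREEZE", 10)  # unset -> falls back to the int default 10
print(batch, freeze)            # → 64 10
```

With this pattern, `model.train(..., batch=batch, freeze=freeze)` receives integers regardless of whether the variables were set in the container environment.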
