Commit f54126c
Merge pull request #6 from SonySemiconductorSolutions/release/1.2.0
release/1.2.0
2 parents 526d315 + cb2a6f1 commit f54126c

25 files changed: +2579 -93 lines changed

README.md

Lines changed: 70 additions & 24 deletions
@@ -3,7 +3,7 @@
 This repository provides tutorials on creating and training different machine learning models for deployment on the Raspberry Pi AI Camera, which uses Sony's IMX500 Intelligent Image Sensor.
 Each tutorial is presented as an interactive [Jupyter notebook](https://docs.jupyter.org/) and contains instructions for dataset setup, model creation, training and quantization. The quantization step is done using the [Model Compression Toolkit (MCT)](https://github.com/sony/model_optimization).

-To run the Jupyter notebooks, we recommend using the Google Colab links provided at the beginning of each tutorial and in this README file. If the link does not work, you can download the Jupyter notebook and upload it to Google Colab without any issues. For more advanced usage, or if you prefer to run the tutorials locally, each tutorial includes a Docker image with all necessary dependencies pre-installed.
+To run the Jupyter notebooks, we recommend using the Google Colab links provided at the beginning of each tutorial and in this README file. If the link does not work, you can download the Jupyter notebook and upload it to Google Colab without any issues. For more advanced usage, or if you prefer to run the tutorials locally, each tutorial includes a Docker image with all necessary dependencies pre-installed. More details are at the end of this document.

 ## Notice

@@ -14,31 +14,77 @@ Please read the Site Policy of GitHub and understand the usage conditions.
 ## Running tutorials on Colab

 **Note**: As of July 2025, Google Colab has been updated and uses Python 3.12. To run the following tutorials, you **must change the Runtime from Latest to 2025.07**; this uses the previous version of the virtual machine, with Python 3.11. For instructions, see the [blog post](https://developers.googleblog.com/en/google-colab-adds-more-back-to-school-improvements).

-### [Training mobilenetV2 classifier](./notebooks/mobilenet-rps/custom_mobilenet.ipynb)
+### [Training MobileNetV2 classifier](./notebooks/mobilenet-rps/custom_mobilenet.ipynb)
 * [Notebook file](notebooks/mobilenet-rps/custom_mobilenet.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/SonySemiconductorSolutions/aitrios-rpi-tutorials-ai-model-training/blob/main/notebooks/mobilenet-rps/custom_mobilenet.ipynb)

 ### [Training NanoDet object detector](./notebooks/nanodet-ppe/custom_nanodet.ipynb)
 * [Notebook file](./notebooks/nanodet-ppe/custom_nanodet.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/SonySemiconductorSolutions/aitrios-rpi-tutorials-ai-model-training/blob/main/notebooks/nanodet-ppe/custom_nanodet.ipynb)

-## Running tutorials on Docker
-1. Ensure you have Docker installed on your system. You can download and install it from [docker.com](https://www.docker.com/).
-
-2. Clone this repository
-```
-$ git clone https://github.com/SonySemiconductorSolutions/aitrios-rpi-tutorials-ai-model-training.git
-```
-3. Navigate to repository folder
-```
-$ cd aitrios-rpi-tutorials-ai-model-training
-```
-4. Navigate to the desired tutorial folder
-```
-$ cd notebooks/<tutorial>
-```
-5. Run the docker container
-```
-$ make jupyter-local
-```
-6. Access Jupyter Notebook, fill in the exposed `port`
-* Open your browser and navigate to `http://localhost:<port>`.
-* Use the token provided in the terminal output to log in.
+### [Training Deeplabv3Plus semantic segmentation model](./notebooks/deeplab3-pothole/custom_deeplab.ipynb)
+* [Notebook file](./notebooks/deeplab3-pothole/custom_deeplab.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/SonySemiconductorSolutions/aitrios-rpi-tutorials-ai-model-training/blob/main/notebooks/deeplab3-pothole/custom_deeplab.ipynb)
+
+### [Training PersonLab key-points model](./notebooks/personlab-gauge/custom_personlab.ipynb)
+* [Notebook file](./notebooks/personlab-gauge/custom_personlab.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/SonySemiconductorSolutions/aitrios-rpi-tutorials-ai-model-training/blob/main/notebooks/personlab-gauge/custom_personlab.ipynb)
+
+## Running tutorials on Docker - VS Code and Docker setup instructions for Jupyter notebooks
+
+To run the Jupyter notebooks locally, we recommend using VS Code and connecting to the Jupyter server running inside a Docker container. This lets you edit and run the notebooks in VS Code while all code executes in the containerized environment, avoiding dependency issues.
+
+**Follow these steps:**
+
+1. **Install Prerequisites:** Ensure you have **Docker** installed on your system (download it from the [Docker website](https://docs.docker.com/engine/install/) if needed). Also install [Visual Studio Code](https://code.visualstudio.com/docs/setup/setup-overview) together with the VS Code **Python extension** and **Jupyter extension**; both extensions are provided by Microsoft.
+
+2. **Clone the Repository (if not done already):**
+
+   ```bash
+   $ git clone https://github.com/SonySemiconductorSolutions/aitrios-rpi-tutorials-ai-model-training.git
+   ```
+
+3. **Navigate to the Repository:**
+
+   ```bash
+   $ cd aitrios-rpi-tutorials-ai-model-training
+   ```
+
+4. **Go to the Tutorial Directory:** Decide which tutorial you want to run, and navigate to its folder under `notebooks`. For example, for the MobileNetV2 classifier tutorial:
+
+   ```bash
+   $ cd notebooks/<tutorial-folder>
+   ```
+
+   *(Replace `<tutorial-folder>` with the actual folder name, such as `mobilenet-rps` for the MobileNetV2 classifier or `nanodet-ppe` for the NanoDet detector.)*
+
+5. **Build and Run the Docker Container with Jupyter:** In the tutorial folder, start the Docker container, which launches a Jupyter Notebook server.
+
+   * **Linux/macOS (or Windows using WSL/Git Bash):** Run the provided Makefile command:
+
+     ```bash
+     $ make jupyter-local
+     ```
+
+     This builds the Docker image (if not already built) and runs a container that starts a Jupyter Notebook server on an available port. The terminal shows the container logs, including the URL of the running Jupyter server.
+   * **Windows (without Make installed):** If the `make` command is not available, you have a couple of options:
+
+     * Install a Make tool or use **WSL** to run the above command.
+     * **Or** run the Docker commands manually. For example, you can build the Docker image and run the container yourself:
+
+       ```bash
+       $ docker build -t aitrios_tutorial .
+       $ docker run -p 8888:8888 aitrios_tutorial
+       ```
+
+       The above commands (run from the tutorial folder) build the Docker image (tagged `aitrios_tutorial`) and start a container, mapping port 8888 to your local machine. (Ensure port `8888` is free, or adjust as needed.) The container's startup output includes a URL with a token, similar to the Makefile approach.
+
+6. **Open the Notebook in VS Code:** Launch Visual Studio Code and open the cloned repository folder (or the specific tutorial folder). Then open the Jupyter Notebook file (`.ipynb`) for the tutorial you're running. The notebook opens in VS Code's Notebook Editor view.
+
+7. **Connect VS Code to the Docker's Jupyter Server:** By default, VS Code may try to use a local Python kernel. Instead, point it to the Jupyter server running inside Docker:
+
+   * In the VS Code Notebook Editor, click the **kernel picker** in the top-right corner (it may show text like "Python 3" or "Select Kernel").
+   * In the kernel selection dropdown, choose **"Existing Jupyter Server…"**. (If you don't see this option, open the Command Palette with **Ctrl+Shift+P** (Windows/Linux) or **Cmd+Shift+P** (macOS) and run **"Jupyter: Specify Jupyter Server for Connections"**.)
+   * When prompted for the server URI, enter the URL of the Jupyter server running in the Docker container (with default settings): `http://localhost:8888`.
+   * Press Enter to connect, accept connecting without a token, accept the server display name (localhost), and select the Jupyter kernel. VS Code now attaches to the Jupyter server running in the Docker container; the notebook toolbar or status bar should indicate that you are connected to a remote kernel (for example, "Python 3 (Docker container)" or similar).
+
+8. **Run the Notebook:** With the kernel connected to the Docker container, execute the notebook cells in VS Code as usual (use the Run button to the left of each cell, or Shift+Enter). All code runs inside the Docker container's environment; you can verify this by, for example, printing the Python interpreter path or checking the installed packages from the notebook.
+   Quantized and converted models created inside the Docker container appear in the shared folder mapped to `$HOST_WORKDIR` (by default `tutorial/`). See the `Makefile` for customization.
+
+9. **(Optional) Stop the Container:** When you are done, stop the Docker container to free resources. If you started it via the Makefile in attached mode, press **Ctrl+C** in that terminal to stop the Jupyter server, then run `docker container ls` and `docker container stop <container-id>` if needed. If you started it manually with `docker run` (without the `-d` flag), press Ctrl+C in the terminal streaming the logs, or use `docker ps` / `docker stop`.
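Step 8 of the new README section suggests verifying that cells really execute inside the container. A minimal sketch of such a check, runnable in a notebook cell (the `/.dockerenv` marker is a general Docker convention, not something this repository provides, so treat it as a heuristic):

```python
import os
import sys

def running_in_docker() -> bool:
    """Heuristic: Docker creates /.dockerenv at the container filesystem root."""
    return os.path.exists("/.dockerenv")

# The interpreter path plus the marker file give a quick sanity check:
# inside the tutorial image the interpreter should be the container's Python 3.11.
print("interpreter:", sys.executable)
print("inside docker:", running_in_docker())
```

On the host this prints your local interpreter and `False`; connected to the container's kernel it should report the containerized Python and `True`.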
Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
+.env
+dataset
+local_mct
+nanodet
+workspace
+tmp
+.buildx-cache
+__pycache__
+*.pyc
+*.pyo
+*.pyd
+
+.ipynb_checkpoints
+nanodet_model_best-removed-aux.pth
+annotated.jpg
+nanodet-quant-ppe.keras
+
+**/Makefile_local
+**/content
+**/tutorial
Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
+FROM python:3.11-bookworm
+
+ARG DEBIAN_FRONTEND=noninteractive
+ARG NB_USER=newuser
+ARG NB_UID=1000
+ARG NB_GID=1000
+
+RUN apt-get update && apt-get install -y \
+    build-essential git unzip curl libgl1 nano wget tree \
+    openjdk-17-jdk ffmpeg libsm6 libxext6 \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/*
+
+# Python deps first (layer cache)
+COPY requirements.txt /tmp/requirements.txt
+RUN pip install -U pip && \
+    pip install --no-cache-dir -r /tmp/requirements.txt
+
+RUN mkdir -p /usr/local/lib/python3.11/site-packages/triton/third_party/cuda/nvvm/libdevice && \
+    scp /usr/local/lib/python3.11/site-packages/triton/third_party/cuda/lib/libdevice.10.bc /usr/local/lib/python3.11/site-packages/triton/third_party/cuda/nvvm/libdevice/
+
+# --- ensure user and base dirs exist BEFORE touching /home/${NB_USER} ---
+RUN groupadd -g ${NB_GID} ${NB_USER} || true \
+    && id -u ${NB_UID} >/dev/null 2>&1 || useradd -m -s /bin/bash -u ${NB_UID} -g ${NB_GID} ${NB_USER} \
+    && install -d -o ${NB_UID} -g ${NB_GID} /home/${NB_USER}/tutorial \
+    && install -d -o ${NB_UID} -g ${NB_GID} /home/${NB_USER}/.local/bin \
+    && install -d -o ${NB_UID} -g ${NB_GID} /home/${NB_USER}/.cache
+
+# Clone + install as root into system site-packages
+# RUN --mount=type=cache,target=/root/.cache/git,id=git-cache,sharing=locked
+WORKDIR /home/${NB_USER}
+RUN git clone --depth 1 https://github.com/SonySemiconductorSolutions/aitrios-rpi-training-samples \
+    /home/${NB_USER}/aitrios-rpi-training-samples \
+    && pip install --no-cache-dir --no-deps -e \
+    /home/${NB_USER}/aitrios-rpi-training-samples \
+    && pip install --no-cache-dir -e \
+    /home/${NB_USER}/aitrios-rpi-training-samples/third_party/nanodet/nanodet \
+    && chown -R ${NB_UID}:${NB_GID} /home/${NB_USER}
+
+# Bring in notebooks/Makefile owned by the user
+COPY --chown=${NB_UID}:${NB_GID} *ipynb Makefile ./
+
+# Switch to unprivileged user and set env
+USER ${NB_UID}:${NB_GID}
+ENV XLA_FLAGS=--xla_gpu_cuda_data_dir=/usr/local/lib/python3.11/site-packages/triton/third_party/cuda/
+RUN jupyter notebook --generate-config
+
+ENV PYTHONPATH=""
+ENV PYTHONPATH="/home/${NB_USER}/.local/lib/python3.11/site-packages:${PYTHONPATH}"
+ENV PATH="/home/${NB_USER}/.local/bin:${PATH}"
+WORKDIR /home/${NB_USER}/
+EXPOSE 8888
+
+CMD ["make", "test-github", "TESTDIR=/home/newuser"]
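The Dockerfile above relies on the `install -d -o <uid> -g <gid>` idiom to create directories with the right ownership before dropping privileges. A rough Python sketch of that idiom (`install_dir` is a hypothetical helper for illustration, not part of the repository; the `chown` step is skipped when not running as root, since it would fail):

```python
import os
import tempfile

def install_dir(path: str, uid: int, gid: int) -> None:
    # Equivalent of `install -d -o <uid> -g <gid> <path>`: create the
    # directory (including parents) and, when running as root, set ownership.
    os.makedirs(path, exist_ok=True)
    if os.geteuid() == 0:  # chown requires root privileges
        os.chown(path, uid, gid)

# Demonstrate against a throwaway temp tree instead of /home.
base = tempfile.mkdtemp()
target = os.path.join(base, "home", "newuser", "tutorial")
install_dir(target, 1000, 1000)
print(os.path.isdir(target))
```

Creating these directories with the build-arg UID/GID is what lets the unprivileged `NB_USER` write to them after the `USER` switch.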
Lines changed: 45 additions & 0 deletions
@@ -0,0 +1,45 @@
+TESTDIR := /home/newuser/tutorial
+SRC := custom_deeplab.ipynb
+CHECK_FILE := aitrios-rpi-training-samples/samples/model/deeplab_v3p_pothole/deeplab_v3p_pothole_quantized.keras
+REPORT_FILE_NAME := executed_notebook.ipynb
+REPORT_FILE_PATH := $(REPORT_FILE_NAME)
+IP_ADDR := 127.0.0.1
+PORT := 8888
+ALLOW_ROOT :=
+
+# Host folder that mirrors TESTDIR inside the container
+HOST_WORKDIR ?= $(CURDIR)/tutorial
+
+SHELL := /bin/bash
+
+docker-image:
+	# Build with your host UID/GID so mounted files keep correct ownership
+	docker build \
+		--build-arg NB_UID=$$(id -u) \
+		--build-arg NB_GID=$$(id -g) \
+		-t test-image .
+
+prepare-host-dir:
+	mkdir -p "$(HOST_WORKDIR)"
+
+# Use to run notebook manually (with persistence)
+jupyter-local: docker-container-remove docker-image prepare-host-dir
+	docker run -it -d --name test-container \
+		--gpus all --shm-size=2g \
+		-p $(IP_ADDR):$(PORT):$(PORT) \
+		-v $(HOST_WORKDIR):$(TESTDIR) \
+		test-image bash
+	# Start Jupyter from the work dir so new files land in the mounted folder
+	docker exec -it test-container bash -lc "jupyter notebook --ip=0.0.0.0 $(ALLOW_ROOT) --no-browser --NotebookApp.token=''"
+
+# This is the test to run using github actions
+test-github:
+	@echo "Running tests..."
+	jupyter-nbconvert --to notebook --execute $(SRC) --output $(REPORT_FILE_NAME)
+	# If file absent then make will produce error
+	[ -f $(CHECK_FILE) ]
+	[ -f $(REPORT_FILE_PATH) ]
+	@echo Tests OK
+
+docker-container-remove:
+	docker container rm -f test-container 2>/dev/null || true
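The `docker-image` target above passes the host's UID/GID as build args so that files written to the bind-mounted `tutorial/` folder keep correct ownership on the host. A small sketch of the command that target assembles (the helper name `docker_build_cmd` is illustrative, not from the repository; the command is only constructed, not executed):

```python
import os
import shlex

def docker_build_cmd(image: str = "test-image") -> str:
    # Mirror the Makefile's docker-image target: `id -u` / `id -g`
    # correspond to os.getuid() / os.getgid() on POSIX systems.
    uid, gid = os.getuid(), os.getgid()
    args = [
        "docker", "build",
        "--build-arg", f"NB_UID={uid}",
        "--build-arg", f"NB_GID={gid}",
        "-t", image, ".",
    ]
    return shlex.join(args)

print(docker_build_cmd())
```

Without these build args the container would default to UID 1000, and models written into the mount could end up owned by a different user than the one who ran `make jupyter-local`.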
Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
+# Keras Deeplabv3 training tutorial for IMX500
+
+## Dataset
+https://universe.roboflow.com/sankritya-rai-cldft/roadvis-segmentation/dataset/2
+
+To use Roboflow open-source datasets, you need a Roboflow [public account](https://roboflow.com/pricing) and must accept the Roboflow [Terms of Service](https://roboflow.com/terms).
+
+## Training and quantization
+[custom_deeplab.ipynb](./custom_deeplab.ipynb)
+
+## Tests
+See Makefile.
1.42 MB
