{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1tMAqVl4p58r"
   },
   "source": [
    "## YOLO to Rubik TFLite Conversion"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "nAbygyUYp58s"
   },
   "source": [
    "#### Requirements\n",
    "\n",
    "This notebook can be run on Colab. However, Colab has some incompatibility issues that may require restarting the runtime partway through the run. This is normal; after restarting, rerun the cell below.\n",
    "\n",
    "Before running the notebook, create an account on [Qualcomm's AI Hub](https://app.aihub.qualcomm.com/account/) and obtain your API token. Then replace `<YOUR_API_TOKEN>` in the cell below with your token.\n",
    "\n",
    "Documentation for the Qualcomm AI Hub can be found [here](https://app.aihub.qualcomm.com/docs/index.html).\n",
    "\n",
    "You should also upload the PyTorch model (ending in `.pt`) that you intend to convert to the runtime. After uploading, copy its absolute path by right-clicking on the file, and use it to replace `/PATH/TO/WEIGHTS` in the cell below.\n",
    "\n",
    "**NOTE: your API token will appear in the cell output, so redact it before sharing the output.**\n",
    "\n",
    "At some point while the cell runs, it will ask whether to clone a repository. Click the text box next to the prompt and answer *y/yes*.\n",
    "\n",
    "Once the run has finished, open the AI Hub link and download the TFLite model for the job you just ran (an optional sanity-check cell is sketched below the export cell).\n",
    "\n",
    "If you want to use this notebook to convert a YOLO11 model, replace all instances of `yolov8` in the cell below with `yolov11`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1000
    },
    "id": "aX3JcSFKp58s",
    "outputId": "f2cdadd2-c448-4d8c-c681-c19decef7f3e"
   },
   "outputs": [],
   "source": [
    "# Install the qai-hub-models package with the YOLOv8 detection extras\n",
    "!pip install \"qai-hub-models[yolov8_det]\"\n",
    "# Configure the AI Hub environment with your API token\n",
    "!qai-hub configure --api_token <YOUR_API_TOKEN>\n",
    "# Quantize and export the model so it can run on the RB3 Gen 2\n",
    "!python -m qai_hub_models.models.yolov8_det.export --quantize w8a8 --device=\"RB3 Gen 2 (Proxy)\" --ckpt-name /PATH/TO/WEIGHTS --device-os linux --target-runtime tflite --output-dir .\n"
   ]
  },
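  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*Optional sanity check (a minimal sketch, not part of the original workflow):* after downloading the exported `.tflite` file from AI Hub, you can load it with the TensorFlow Lite interpreter to confirm its input/output shapes and dtypes before deploying to the RB3 Gen 2. The filename `yolov8_det.tflite` below is a placeholder for whatever your job produced, and the cell assumes TensorFlow is available in the runtime (it is preinstalled on Colab)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: inspect the downloaded TFLite model and run one dummy inference.\n",
    "# Assumes TensorFlow is installed; MODEL_PATH is a placeholder filename.\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "\n",
    "MODEL_PATH = \"yolov8_det.tflite\"  # replace with the file downloaded from AI Hub\n",
    "\n",
    "interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)\n",
    "interpreter.allocate_tensors()\n",
    "\n",
    "input_details = interpreter.get_input_details()\n",
    "output_details = interpreter.get_output_details()\n",
    "print(\"Inputs:\", [(d[\"shape\"].tolist(), str(d[\"dtype\"])) for d in input_details])\n",
    "print(\"Outputs:\", [(d[\"shape\"].tolist(), str(d[\"dtype\"])) for d in output_details])\n",
    "\n",
    "# One forward pass on dummy data matching the input spec (a w8a8-quantized\n",
    "# model typically expects uint8 or int8 input).\n",
    "dummy = np.zeros(tuple(input_details[0][\"shape\"]), dtype=input_details[0][\"dtype\"])\n",
    "interpreter.set_tensor(input_details[0][\"index\"], dummy)\n",
    "interpreter.invoke()\n",
    "print(\"First output shape:\", interpreter.get_tensor(output_details[0][\"index\"]).shape)\n"
   ]
  },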
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0I2cXQO4p58s"
   },
   "source": [
    "Modified from https://github.com/ramalamadingdong/yolo-rb3gen2-trainer/blob/main/AI_Hub_Quanitization_RB3Gen2.ipynb"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "name": "python",
   "version": "3.11.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}