nkImageDetector

A tool for detecting specific objects using a re-trained ssd-mobilenet model.

Built By: Nathan D. Keidel

About

nkImageDetector was built using the "jetson-inference" library and uses the "detectNet" method. nkImageDetector provides a more streamlined process for building, training, and exporting your own ML models, while also being more memory efficient.

System Requirements

In order to use "nkImageDetector", you MUST have

  • An NVIDIA Jetson (Nano Developer Kit recommended for better compatibility)

  • A computer running "Ubuntu x86_64" (For Jetsons that ARE NOT a Developer Kit)

  • A microSD card flashed with JetPack and with python3.6 installed

  • A USB Keyboard, Mouse, WiFi adapter, and Webcam (A USB hub is optional, but can help with I/O space)

  • A USB thumb drive with more than 5GB of storage

  • A 5V/3A USB-C power adapter

  • A Google account with access to "Google Drive"

  • (Optional) VSCode downloaded on your personal computer

Step 1: Setting up your Jetson

In order to run "nkImageDetector", you need to flash an OS onto your Jetson. Refer to the documentation from the official "jetson-inference" library to get started with your Jetson.

Step 2: Downloading Requirements

The next step is to download "jetson-inference". See the code provided below to build the project from source. (When building from source, make sure to install PyTorch for Python 3.6, since we are using Python 3.6.)

  • (Note: DO NOT continue setting up your Jetson after you reach the end of the code provided below; you will download your dataset in the next step)

    sudo apt-get update
    sudo apt-get install git cmake libpython3-dev python3-numpy
    cd /home/nvidia
    git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference
    cd jetson-inference
    mkdir build
    cd build
    cmake ../
    make -j$(nproc)
    sudo make install
    sudo ldconfig
    
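As an optional sanity check (not part of the original instructions, and assuming the default install location of the Python bindings), you can confirm that the jetson-inference Python bindings installed correctly before moving on:

    python3 -c "import jetson.inference, jetson.utils; print('jetson-inference OK')"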

Step 3: Download dataset

The next step is to download a dataset. I have made a Google Colab file that allows you to download images and train a model on them using a remote runtime, so you can save memory and keep your Jetson from working too hard.

If you would like to download a dataset and train it ON YOUR JETSON, follow this document:

If you would like to download a dataset on a Google Colab runtime, follow these steps:

For best results, follow these instructions in order and DO NOT skip ahead.

  • First, open the Colab file and run the first cell by pressing the "Play" button. You will see a green checkmark indicating the cell has finished running:

[Screenshot: the first cell with a green checkmark]
  • Next, run the second cell by pressing the "Play" button again. Again, you will see a green checkmark indicating the cell has finished running:
[Screenshot: the second cell with a green checkmark]
  • Go to Open Images --> [https://storage.googleapis.com/openimages/web/visualizer/index.html?]

  • When you follow this link, browse the different classes of images and remember the name of the category/categories you want to download.

  • Now, return to your Colab file and look at the third cell. It should look like this:

    !cd pytorch-detection/ssd; python3 open_images_downloader.py --class-names="INSERT CLASS NAME[S]" --max-images=5000 --data=data/images
    

    In the Colab file, replace "INSERT CLASS NAME[S]" with however many categories of images you want. Make sure the categories are inside the quotes and separated with commas ( , ).

    DO NOT CHANGE THE "--data=data/images" ARGUMENT, IT IS NEEDED LATER ON!
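
    For example (the class names "Apple" and "Orange" here are only an illustration, not part of the original guide), the edited cell could look like this:

    !cd pytorch-detection/ssd; python3 open_images_downloader.py --class-names="Apple,Orange" --max-images=5000 --data=data/images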

  • Run the third cell like the other two cells. Wait until a green checkmark has appeared next to the third cell to continue.

  • Let's examine the fourth cell:

    !cd pytorch-detection/ssd; python3 train_ssd.py --model-dir=models/my-model --data=data/images --batch-size=2 --workers=2 --epochs=30
    

    You can change the "batch-size", "workers", and "epochs" values (batch size is the number of images processed per training step, workers is the number of parallel data-loading processes, and epochs is how many times the training run goes through all downloaded images).

    DO NOT CHANGE "--model-dir=models/my-model" AND "--data=data/images", THEY ARE NEEDED LATER ON!
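
    As an illustration only (these particular numbers are assumptions, not a recommendation from this guide), a run with a larger batch size and more epochs could look like this:

    !cd pytorch-detection/ssd; python3 train_ssd.py --model-dir=models/my-model --data=data/images --batch-size=4 --workers=4 --epochs=50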

  • The fourth cell can take a long time, and can sometimes time out on Google Colab. You can buy "Colab+", which allows tasks to be run 24/7 on the remote runtime. It also provides much faster GPUs and 500 compute units!

  • Once the fourth cell is finished, you should see a green checkmark on the side of the cell.

That's it! You have trained your model. Now, you must export it to your Jetson.

Keep the Colab tab open, as it is needed for "Step 5".

Step 4: Exporting model to .onnx

  • In the Colab file, run the fifth cell. When you see that green checkmark, run the sixth cell. This should download a .zip file of the model.

  • Now, on the computer that is storing the .zip file, find the directory of the downloaded .zip file, open the terminal, and run this command:

    scp /path/to/zip_folder.zip <your Jetson's username>@<your Jetson's IP address>:/home/nvidia
    
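    As a concrete (hypothetical) example, if the .zip landed in your Downloads folder and your Jetson uses the default "nvidia" user at the made-up address 192.168.1.42, the command would look like this:

    scp ~/Downloads/models.zip nvidia@192.168.1.42:/home/nvidia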
  • Now, on your Jetson, open the terminal and run:

    cd /home/nvidia
    unzip models.zip
    rm models.zip
    git clone --recursive https://github.com/NathanK4261/NK-ImageDetector.git
    mv models/* /home/nvidia/NK-ImageDetector/my-project/detection/ssd/models
    rm -rf models/
    cd NK-ImageDetector/my-project/detection/ssd
    python3 onnx_export.py --model-dir=models/my-model
    
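If the export finished without errors, the .onnx model should now sit next to its labels file (a quick optional check; the file names below come from the paths used in "Step 5"):

    ls models/my-model
    # expect to see ssd-mobilenet.onnx and labels.txt in the output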

That's it! You are done exporting your model as a .onnx file and you can now begin to run the code!

Step 5: Run "helper.py"

Now that we have correctly set up the detection folder, open the terminal and run these commands:

  cd /home/nvidia/NK-ImageDetector/my-project/detection/ssd
  python3 helper.py models/my-model/ssd-mobilenet.onnx models/my-model/labels.txt 4

This should run the Python script, which means you are ready to detect objects with your re-trained model!
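
The exact behavior of "helper.py" is defined by this repository, but as a rough sketch of the kind of detectNet loop that jetson-inference programs typically use with a re-trained ONNX model (the camera device, display output, and threshold below are assumptions, not taken from helper.py):

    import jetson.inference
    import jetson.utils

    # load the re-trained SSD-Mobilenet model exported in Step 4
    net = jetson.inference.detectNet(argv=[
        "--model=models/my-model/ssd-mobilenet.onnx",
        "--labels=models/my-model/labels.txt",
        "--input-blob=input_0",
        "--output-cvg=scores",
        "--output-bbox=boxes",
        "--threshold=0.5",                      # assumed value, not taken from helper.py
    ])

    camera = jetson.utils.videoSource("/dev/video0")    # assumed USB webcam device
    display = jetson.utils.videoOutput("display://0")   # assumed on-screen display

    while display.IsStreaming():
        img = camera.Capture()
        if img is None:                         # capture timeout, try again
            continue
        detections = net.Detect(img)            # run inference and overlay results
        display.Render(img)
        display.SetStatus("Detected {:d} objects".format(len(detections)))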
