Commit 861b95a: YOLO Guide fix
1 parent 7decb33
Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
1- …Any oriented image or video can be used for inferencing using the following commands, respectively:…
1+ …Any oriented image or video (at least 416x416 px) can be used for inferencing using the following commands, respectively:…
{"cells":[{"cell_type":"markdown","metadata":{},"source":["# YOLOv3 Object Detector"]},{"cell_type":"markdown","metadata":{"toc":true},"source":["<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n","<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#YOLOv3-Object-Detector\" data-toc-modified-id=\"YOLOv3-Object-Detector-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>YOLOv3 Object Detector</a></span><ul class=\"toc-item\"><li><span><a href=\"#Introduction\" data-toc-modified-id=\"Introduction-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Introduction</a></span><ul class=\"toc-item\"><li><span><a href=\"#Model-Architecture\" data-toc-modified-id=\"Model-Architecture-1.1.1\"><span class=\"toc-item-num\">1.1.1&nbsp;&nbsp;</span>Model Architecture</a></span></li><li><span><a href=\"#Implementation-in-arcgis.learn\" data-toc-modified-id=\"Implementation-in-arcgis.learn-1.1.2\"><span class=\"toc-item-num\">1.1.2&nbsp;&nbsp;</span>Implementation in <code>arcgis.learn</code></a></span></li><li><span><a href=\"#Using-COCO-pretrained-weights\" data-toc-modified-id=\"Using-COCO-pretrained-weights-1.1.3\"><span class=\"toc-item-num\">1.1.3&nbsp;&nbsp;</span>Using COCO pretrained weights</a></span></li><li><span><a href=\"#References\" data-toc-modified-id=\"References-1.1.4\"><span class=\"toc-item-num\">1.1.4&nbsp;&nbsp;</span>References</a></span></li></ul></li></ul></li></ul></div>"]},{"cell_type":"markdown","metadata":{},"source":["## Introduction"]},{"cell_type":"markdown","metadata":{},"source":["**YOLO (You Only Look Once)** is one of the most popular series of object detection models. Its advantage has been in providing real-time detections while approaching the accuracy of state-of-the-art object detection models.\n","\n","In the earlier works for object detection, models used to either use a sliding window technique or region proposal network. Sliding window, as the name suggests choses a Region of Interest (RoI) by sliding a window across the image and then performs classification in the chosen RoI to detect an object. Region proposal networks work in two steps - first, they extract region proposals and then using CNN features, classify the proposed regions. Sliding window method is not very precise and accurate, and though some of the region-based networks can be highly accurate they tend to be slower.\n","\n","Then came along the one-shot object detectors such as [SSD](https://arxiv.org/abs/1512.02325), [YOLO](https://arxiv.org/pdf/1506.02640.pdf) and [RetinaNet](https://arxiv.org/abs/1708.02002). These models detect objects in a single pass of the image and, thus, are considerably faster, and can match up the accuracy of region-based detectors. The [SSD guide](https://developers.arcgis.com/python/guide/how-ssd-works/) explains the essential components of a one-shot object detection model. You can also read up the RetinaNet guide [here](https://developers.arcgis.com/python/guide/how-retinanet-works/). These models are already a part of ArcGIS API for Python and the addition of [**YOLOv3**](https://arxiv.org/abs/1804.02767) provides another tool in our deep learning toolbox.\n","\n","The biggest advantage of YOLOv3 in `arcgis.learn` is that it comes preloaded with weights pretrained on the [COCO dataset](https://cocodataset.org/). This makes it ready-to-use for the 80 common objects (car, truck, person, etc.) 

### Implementation in `arcgis.learn`

You can create a YOLOv3 model in `arcgis.learn` with a single line of code:

```
model = YOLOv3(data)
```

where `data` is the databunch prepared for training with the `prepare_data` method in the earlier steps.

For more information about the API, see the [API reference](https://developers.arcgis.com/python/api-reference/arcgis.learn.toc.html#yolov3).
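
For context, a typical end-to-end training workflow looks like the sketch below. The data path, batch size, epoch count, and saved-model name are placeholders for illustration; the sketch assumes training samples have already been exported to the given folder.

```
from arcgis.learn import prepare_data, YOLOv3

# Build a databunch from exported training samples (placeholder path)
data = prepare_data(r"C:\data\training_samples", batch_size=8)

model = YOLOv3(data)
model.lr_find()   # optional: plot loss vs. learning rate before training
model.fit(10)     # illustrative epoch count
model.save("yolov3-10-epochs")
```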

### Using COCO pretrained weights

To use the model out of the box with COCO pretrained weights, initialize it without providing any `data`: `model = YOLOv3()`. Because we are not training the model and are using the pretrained weights instead, no databunch is required. Any oriented image or video (at least 416x416 px) can then be used for inferencing with the following commands, respectively:

```
model.predict(image_path)

model.predict_video(input_video_path, metadata_file)
```

The following 80 classes are available for object detection in the COCO dataset:

```
'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck',
'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench',
'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra',
'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove',
'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange',
'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse',
'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink',
'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
'hair drier', 'toothbrush'
```
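
Putting the pieces together, the sketch below shows the full out-of-the-box inference flow. The image and video file names are hypothetical placeholders.

```
from arcgis.learn import YOLOv3

# No databunch needed: the model loads its COCO-pretrained weights
model = YOLOv3()

# Detect any of the 80 COCO classes in a single image
model.predict("street.jpg")

# Detect frame by frame in a video; detections are written to the metadata file
model.predict_video("traffic.mp4", "traffic.csv")
```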

### References

* [1] Sik-Ho Tsang, "Review: YOLOv3 — You Only Look Once (Object Detection)", https://towardsdatascience.com/review-yolov3-you-only-look-once-object-detection-eab75d7a1ba6
* [2] Joseph Redmon and Ali Farhadi, "YOLOv3: An Incremental Improvement", 2018, arXiv:1804.02767, https://arxiv.org/abs/1804.02767
* [3] Ayoosh Kathuria, "What’s new in YOLO v3?", https://towardsdatascience.com/yolo-v3-object-detection-53fb7d3bfe6b