We have provided a list of EfficientDet checkpoints and results as follows:

** <em>val</em> denotes validation results, <em>test-dev</em> denotes test-dev2017 results. AP<sup>val</sup> is for validation accuracy; all other AP results in the table are for COCO test-dev2017. All accuracy numbers are for single-model single-scale without ensemble or test-time augmentation. All checkpoints are trained with baseline preprocessing (no autoaugmentation).
** EfficientDet-D0 to D6 are trained for 300 epochs, and EfficientDet-D7 is trained for 500 epochs.

## 3. Export SavedModel, frozen graph, or tflite.

Run the following command line to export models:

    !rm -rf savedmodeldir
    !python model_inspect.py --runmode=saved_model --model_name=efficientdet-d0 \
      --ckpt_path=efficientdet-d0 --saved_model_dir=savedmodeldir \
      --tflite_path=efficientdet-d0.tflite

Then you will get:

 - a saved model under savedmodeldir/
 - a frozen graph named savedmodeldir/efficientdet-d0_frozen.pb
 - a tflite file named efficientdet-d0.tflite

Notably, --tflite_path is optional, and it only works with TensorFlow 2.2.0-rc4 or later.
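
If you want to sanity-check the exported TFLite file, the standard tf.lite.Interpreter API is enough. Below is a minimal sketch: it assumes a single NHWC image input, skips any model-specific preprocessing/normalization, and only prints the raw output shapes, so treat it as a smoke test rather than a reference inference path.

    import numpy as np
    import tensorflow as tf
    from PIL import Image

    # Load the exported TFLite model and allocate its tensors.
    interpreter = tf.lite.Interpreter(model_path='efficientdet-d0.tflite')
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()

    # Resize a test image to whatever input shape the model reports (assumed NHWC).
    _, height, width, _ = input_details['shape']
    image = Image.open('testdata/img1.jpg').resize((width, height))
    data = np.expand_dims(np.asarray(image), 0).astype(input_details['dtype'])

    # Note: no normalization is applied here; this only checks that the model runs.
    interpreter.set_tensor(input_details['index'], data)
    interpreter.invoke()
    for out in output_details:
        print(out['name'], interpreter.get_tensor(out['index']).shape)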
## 4. Benchmark model latency.

There are two types of latency: network latency and end-to-end latency.

(1) To measure the network latency (from the first conv to the last class/box
prediction output), use the following command:

    !python model_inspect.py --runmode=bm --model_name=efficientdet-d0

** Add --hparams="precision=mixed-float16" if running on a V100.

On a single Tesla V100 without TensorRT, our D0 network (no pre/post-processing)
runs at 134 FPS (frames per second) for batch size 1, and 238 FPS for batch size 8.
(2) To measure the end-to-end latency (from the input image to the final rendered
output, including image preprocessing, network, and postprocessing/NMS), first export
a saved model and then benchmark it with the following commands:

    !rm -rf /tmp/benchmark/
    !python model_inspect.py --runmode=saved_model --model_name=efficientdet-d0 \
      --ckpt_path=efficientdet-d0 --saved_model_dir=/tmp/benchmark/

    !python model_inspect.py --runmode=saved_model_benchmark \
      --saved_model_dir=/tmp/benchmark/efficientdet-d0_frozen.pb \
      --model_name=efficientdet-d0 --input_image=testdata/img1.jpg \
      --output_image_dir=/tmp/

On a single Tesla V100 without TensorRT, our end-to-end
latency and throughput are:

** FPS means frames per second (or images/second).
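
If you want to reproduce this kind of measurement for your own setup, a generic timing sketch is below; it is independent of the scripts above, and run_inference, batch size, and the iteration counts are placeholders you supply.

    import time

    def benchmark(run_inference, batch_size, warmup=10, iters=100):
        """Times a callable and reports per-batch latency and FPS (images/second)."""
        for _ in range(warmup):      # warm up to exclude one-time setup costs
            run_inference()
        start = time.perf_counter()
        for _ in range(iters):
            run_inference()
        elapsed = time.perf_counter() - start
        latency_ms = elapsed / iters * 1000
        fps = iters * batch_size / elapsed
        print(f'latency: {latency_ms:.1f} ms/batch, throughput: {fps:.1f} FPS')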
## 5. Inference for images.

    # Step 0: download model and testing image.
    !export MODEL=efficientdet-d0
Here is an example of EfficientDet-D0 visualization: more on [tutorial](tutorial.ipynb)

<p align="center">
<img src="./g3doc/street.jpg" width="800" />
</p>
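
If you prefer to draw boxes yourself rather than rely on the rendered output image, a small PIL sketch is below. The [ymin, xmin, ymax, xmax, score, class] layout is only an assumption for illustration; adapt it to whatever detection format your export actually returns.

    from PIL import Image, ImageDraw

    def draw_detections(image_path, detections, score_thresh=0.4):
        """Draws [ymin, xmin, ymax, xmax, score, class] boxes on an image."""
        image = Image.open(image_path).convert('RGB')
        draw = ImageDraw.Draw(image)
        for ymin, xmin, ymax, xmax, score, cls in detections:
            if score < score_thresh:
                continue
            draw.rectangle([xmin, ymin, xmax, ymax], outline='red', width=3)
            draw.text((xmin, max(ymin - 12, 0)), f'{int(cls)}: {score:.2f}', fill='red')
        return image

    # Example with made-up detections:
    # draw_detections('testdata/img1.jpg', [[50, 60, 200, 220, 0.9, 1]]).save('/tmp/vis.jpg')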
## 6. Inference for videos.

You can run inference for a video and show the results online:
      --saved_model_dir=/tmp/savedmodel --input_video=input.mov \
      --output_video=output.mov
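
Conceptually, video inference is just image inference applied frame by frame. A bare-bones OpenCV loop is sketched below; detect_and_draw is a placeholder for whatever per-frame detection and rendering you use.

    import cv2

    def run_on_video(detect_and_draw, in_path='input.mov', out_path='output.mov'):
        """Reads a video frame by frame, runs detection, and writes the result."""
        cap = cv2.VideoCapture(in_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            writer.write(detect_and_draw(frame))  # raw frame in, annotated frame out
        cap.release()
        writer.release()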
## 7. Eval on COCO 2017 val or test-dev.

    # Download coco data.
    !wget http://images.cocodataset.org/zips/val2017.zip
You can also run eval on test-dev set with the following command:

    # Now you can submit testdev_output/detections_test-dev2017_test_results.json to
    # coco server: https://competitions.codalab.org/competitions/20794#participate
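
For a quick local check of a COCO-format detections JSON against the val2017 annotations (test-dev has no public ground truth), the standard pycocotools workflow looks like the sketch below; both file paths are placeholders.

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    # Placeholders: the val2017 annotation file and a COCO-format detections JSON.
    coco_gt = COCO('annotations/instances_val2017.json')
    coco_dt = coco_gt.loadRes('detections_val2017_results.json')

    coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()  # prints the standard AP/AR table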
## 8. Train on PASCAL VOC 2012 with backbone ImageNet ckpt.

    # Download and convert pascal data.
    !wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
      --hparams="num_classes=20,moving_average_decay=0" \
      --use_tpu=False
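
Before launching training, it can be worth confirming that the converted tfrecord files are actually readable; a small sketch is below (the file pattern is a placeholder for wherever your conversion step wrote its output).

    import tensorflow as tf

    # Placeholder pattern: point this at the tfrecord files produced by the conversion step.
    files = tf.io.gfile.glob('tfrecord/pascal-*.tfrecord')
    dataset = tf.data.TFRecordDataset(files)

    print(len(files), 'files,', sum(1 for _ in dataset), 'examples')

    # Peek at the feature keys of the first example.
    for record in dataset.take(1):
        example = tf.train.Example.FromString(record.numpy())
        print(sorted(example.features.feature.keys()))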
## 9. Finetune on PASCAL VOC 2012 with detector COCO ckpt.

Create a config file for the PASCAL VOC dataset called voc_config.yaml and put the following in it:

    num_classes: 20
If you want to do inference for custom data, you can run the same model_inspect.py inference commands shown in the earlier sections, adding --hparams=voc_config.yaml so that the finetuned 20-class VOC head is used.

You should check the earlier sections for more details on the available runmode options.
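
If training later fails with a class-count mismatch, the config is the usual suspect. A trivial check that voc_config.yaml parses and matches VOC's 20 foreground classes is sketched below (assumes PyYAML is installed):

    import yaml

    with open('voc_config.yaml') as f:
        config = yaml.safe_load(f)

    # PASCAL VOC has 20 foreground classes; num_classes must match the dataset.
    assert config['num_classes'] == 20, config
    print(config)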
## 10. Training EfficientDets on TPUs.

To train this model on Cloud TPU, you will need: