Commit f7137b5 (parent: 0c69c13)

Make exporting tflite optional and add instructions to README.

Related issues: #341 #158 #138 #4

3 files changed: +36, -15 lines

efficientdet/README.md

Lines changed: 29 additions & 11 deletions
@@ -65,16 +65,36 @@ We have provided a list of EfficientDet checkpoints and results as follows:
 ** <em>val</em> denotes validation results, <em>test-dev</em> denotes test-dev2017 results. AP<sup>val</sup> is for validation accuracy, all other AP results in the table are for COCO test-dev2017. All accuracy numbers are for single-model single-scale without ensemble or test-time augmentation. All checkpoints are trained with baseline preprocessing (no autoaugmentation).
 ** EfficientDet-D0 to D6 are trained for 300 epochs, EfficientDet-D7 is trained for 500 epochs.
 
-## 3. Benchmark model latency.
+
+## 3. Export SavedModel, frozen graph, or tflite.
+
+Run the following command line to export models:
+
+!rm -rf savedmodeldir
+!python model_inspect.py --runmode=saved_model --model_name=efficientdet-d0 \
+  --ckpt_path=efficientdet-d0 --saved_model_dir=savedmodeldir \
+  --tflite_path=efficientdet-d0.tflite
+
+Then you will get:
+
+ - saved model under savedmodeldir/
+ - frozen graph with name savedmodeldir/efficientdet-d0_frozen.pb
+ - tflite file with name efficientdet-d0.tflite
+
+Notably, --tflite_path is optional, and it only works with TF 2.2.0-rc4 or later.
+
+
+## 4. Benchmark model latency.
 
 
 There are two types of latency: network latency and end-to-end latency.
 
 (1) To measure the network latency (from the first conv to the last class/box
 prediction output), use the following command:
 
-!python model_inspect.py --runmode=bm --model_name=efficientdet-d0 \
-# --hparams="precision=mixed-float16" # uncomment if on V100
+!python model_inspect.py --runmode=bm --model_name=efficientdet-d0
+
+** add --hparams="precision=mixed-float16" if running on V100.
 
 On single Tesla V100 without TensorRT, our D0 network (no pre/post-processing)
 has 134 FPS (frame per second) for batch size 1, and 238 FPS for batch size 8.
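For a quick sanity check of the exported tflite file, here is a minimal sketch using the standard tf.lite.Interpreter API (the file name comes from the export command above; the all-zeros input is a placeholder, not real image preprocessing):

    # Minimal sketch: load and run the exported efficientdet-d0.tflite.
    # Requires TF >= 2.2.0-rc4, matching the version note above.
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path='efficientdet-d0.tflite')
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a placeholder input with the shape/dtype the model reports.
    dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
    interpreter.set_tensor(input_details[0]['index'], dummy)
    interpreter.invoke()
    print('output shape:', interpreter.get_tensor(output_details[0]['index']).shape)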
@@ -86,13 +106,11 @@ use the following command:
 !rm -rf /tmp/benchmark/
 !python model_inspect.py --runmode=saved_model --model_name=efficientdet-d0 \
   --ckpt_path=efficientdet-d0 --saved_model_dir=/tmp/benchmark/ \
-# --hparams="precision=mixed-float16" # uncomment if on V100
 
 !python model_inspect.py --runmode=saved_model_benchmark \
   --saved_model_dir=/tmp/benchmark/efficientdet-d0_frozen.pb \
   --model_name=efficientdet-d0 --input_image=testdata/img1.jpg \
   --output_image_dir=/tmp/ \
-# --hparams="precision=mixed-float16" # uncomment if on V100
 
 On single Tesla V100 without using TensorRT, our end-to-end
 latency and throughput are:
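The saved_model_benchmark command above consumes the frozen graph directly. As a sketch, the exported .pb can also be loaded by hand (the path is taken from the commands above; using the TF1-style compat API is an assumption about how the GraphDef was serialized):

    # Sketch: parse the exported frozen graph and import it into a graph.
    import tensorflow.compat.v1 as tf

    graph_def = tf.GraphDef()
    with tf.io.gfile.GFile('/tmp/benchmark/efficientdet-d0_frozen.pb', 'rb') as f:
      graph_def.ParseFromString(f.read())

    graph = tf.Graph()
    with graph.as_default():
      tf.import_graph_def(graph_def, name='')
    print('nodes in frozen graph:', len(graph_def.node))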
@@ -109,7 +127,7 @@ latency and throughput are:
 
 ** FPS means frames per second (or images/second).
 
-## 4. Inference for images.
+## 5. Inference for images.
 
 # Step0: download model and testing image.
 !export MODEL=efficientdet-d0
@@ -157,7 +175,7 @@ Here is an example of EfficientDet-D0 visualization: more on [tutorial](tutorial
 <img src="./g3doc/street.jpg" width="800" />
 </p>
 
-## 5. Inference for videos.
+## 6. Inference for videos.
 
 You can run inference for a video and show the results online:
@@ -180,7 +198,7 @@ You can run inference for a video and show the results online:
   --saved_model_dir=/tmp/savedmodel --input_video=input.mov \
   --output_video=output.mov
 
-## 6. Eval on COCO 2017 val or test-dev.
+## 7. Eval on COCO 2017 val or test-dev.
 
 // Download coco data.
 !wget http://images.cocodataset.org/zips/val2017.zip
@@ -227,7 +245,7 @@ You can also run eval on test-dev set with the following command:
 # Now you can submit testdev_output/detections_test-dev2017_test_results.json to
 # coco server: https://competitions.codalab.org/competitions/20794#participate
 
-## 7. Train on PASCAL VOC 2012 with backbone ImageNet ckpt.
+## 8. Train on PASCAL VOC 2012 with backbone ImageNet ckpt.
 
 # Download and convert pascal data.
 !wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
@@ -253,7 +271,7 @@ You can also run eval on test-dev set with the following command:
   --hparams="num_classes=20,moving_average_decay=0" \
   --use_tpu=False
 
-## 8. Finetune on PASCAL VOC 2012 with detector COCO ckpt.
+## 9. Finetune on PASCAL VOC 2012 with detector COCO ckpt.
 
 Create a config file for the PASCAL VOC dataset called voc_config.yaml and put this in it.
 
 num_classes: 20
@@ -289,7 +307,7 @@ If you want to do inference for custom data, you can run
 
 More details of runmode are described in caption-4.
 
-## 9. Training EfficientDets on TPUs.
+## 10. Training EfficientDets on TPUs.
 
 To train this model on Cloud TPU, you will need:
efficientdet/inference.py

Lines changed: 2 additions & 3 deletions
@@ -699,7 +699,7 @@ def to_tflite(self, saved_model_dir):
 
     converter.target_spec.supported_ops = supported_ops
     return converter.convert()
 
-  def export(self, output_dir, frozen_pb=True, tflite=True):
+  def export(self, output_dir, frozen_pb=True, tflite_path=None):
     """Export a saved model."""
     signitures = self.signitures
     signature_def_map = {
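With the new signature, tflite conversion becomes opt-in: callers pass a destination path rather than a boolean. A usage sketch (only build(), export(), and the tflite_path keyword come from this diff; the constructor arguments are assumptions):

    # Sketch of driving the new export() API.
    import inference

    driver = inference.ServingDriver('efficientdet-d0', 'efficientdet-d0')  # assumed args
    driver.build()

    # SavedModel + frozen graph only; no tflite file is written.
    driver.export('/tmp/savedmodeldir')

    # Opt in to tflite by providing an explicit output path.
    driver.export('/tmp/savedmodeldir', tflite_path='/tmp/efficientdet-d0.tflite')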
@@ -729,12 +729,11 @@ def export(self, output_dir, frozen_pb=True, tflite=True):
 
       tf.io.gfile.GFile(pb_path, 'wb').write(graphdef.SerializeToString())
       logging.info('Frozen graph saved at %s', pb_path)
 
-    if tflite:
+    if tflite_path:
       ver = tf.__version__
       if ver < '2.2.0-dev20200501' or ('dev' not in ver and ver < '2.2.0-rc4'):
         raise ValueError('TFLite requires TF 2.2.0rc4 or later version.')
       tflite_model = self.to_tflite(output_dir)
-      tflite_path = os.path.join(output_dir, self.model_name + '.tflite')
       with tf.io.gfile.GFile(tflite_path, 'wb') as f:
         f.write(tflite_model)
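The version gate above relies on plain lexicographic string comparison. A standalone illustration of the same predicate (tflite_supported is a hypothetical helper, not part of the repo):

    # Hypothetical helper mirroring the string-based version gate above.
    def tflite_supported(ver: str) -> bool:
      # True for dev builds >= 2.2.0-dev20200501, or release strings
      # >= 2.2.0-rc4, under lexicographic comparison.
      return not (ver < '2.2.0-dev20200501' or
                  ('dev' not in ver and ver < '2.2.0-rc4'))

    assert not tflite_supported('2.1.0')
    assert tflite_supported('2.2.0-dev20200501')
    assert tflite_supported('2.2.0-rc4')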

efficientdet/model_inspect.py

Lines changed: 5 additions & 1 deletion
@@ -71,6 +71,7 @@
 
 # For saved model.
 flags.DEFINE_string('saved_model_dir', '/tmp/saved_model',
                     'Folder path for saved model.')
+flags.DEFINE_string('tflite_path', None, 'Path for exporting tflite file.')
 
 FLAGS = flags.FLAGS

@@ -86,6 +87,7 @@ def __init__(self,
 
                ckpt_path: Text = None,
                export_ckpt: Text = None,
                saved_model_dir: Text = None,
+               tflite_path: Text = None,
                batch_size: int = 1,
                hparams: Text = ''):
     self.model_name = model_name
@@ -95,6 +97,7 @@ def __init__(self,
 
     self.ckpt_path = ckpt_path
     self.export_ckpt = export_ckpt
     self.saved_model_dir = saved_model_dir
+    self.tflite_path = tflite_path
 
     model_config = hparams_config.get_detection_config(model_name)
     model_config.override(hparams)  # Add custom overrides
@@ -144,7 +147,7 @@ def export_saved_model(self, **kwargs):
 
         model_params=self.model_config.as_dict(),
         **kwargs)
     driver.build()
-    driver.export(self.saved_model_dir)
+    driver.export(self.saved_model_dir, tflite_path=self.tflite_path)
 
   def saved_model_inference(self, image_path_pattern, output_dir, **kwargs):
     """Perform inference for the given saved model."""
@@ -459,6 +462,7 @@ def main(argv):
 
       ckpt_path=FLAGS.ckpt_path,
       export_ckpt=FLAGS.export_ckpt,
      saved_model_dir=FLAGS.saved_model_dir,
+      tflite_path=FLAGS.tflite_path,
       batch_size=FLAGS.batch_size,
       hparams=FLAGS.hparams)
   inspector.run_model(
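Taken together, the new flag flows from the command line through ModelInspector into driver.export(). Mirroring the README example above: omitting --tflite_path (default None) now skips tflite conversion, while passing it exports a tflite file to an explicit destination:

    # tflite conversion skipped (tflite_path defaults to None):
    !python model_inspect.py --runmode=saved_model --model_name=efficientdet-d0 \
      --ckpt_path=efficientdet-d0 --saved_model_dir=/tmp/saved_model

    # tflite conversion enabled:
    !python model_inspect.py --runmode=saved_model --model_name=efficientdet-d0 \
      --ckpt_path=efficientdet-d0 --saved_model_dir=/tmp/saved_model \
      --tflite_path=/tmp/efficientdet-d0.tflite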
