
Where yolov8n.pt is the trained weights, the output format is specified as tflite, and int8=True means the model will be quantized using 8-bit signed integers for both the weights and the activations. #33

Description

@lucker26

The exported model is already an int8 TFLite model; are we then supposed to run the quantization script you provided on it again? Shouldn't the model instead be exported as a float32 TFLite model first and then quantized by running your quantization script?
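For reference, the two workflows being compared look roughly like this in the standard Ultralytics Python API. This is a minimal sketch only; the repo's own quantization script is not reproduced here, and the exact calibration behavior of int8 export may depend on the Ultralytics version:

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # trained weights

    # Path 1 (as in the title): export directly to an int8 TFLite model.
    # int8=True applies full-integer quantization (8-bit signed weights
    # and activations) during export.
    model.export(format="tflite", int8=True)

    # Path 2 (as this question suggests): export a float32 TFLite model
    # first, then run the separate quantization script on the resulting
    # .tflite file.
    model.export(format="tflite")  # int8 defaults to False -> float32 model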
