Incorrect Face Detection Output When Using INT16 Quantization with YuNet on RV1106 #375

@urvi-ai

Description

Hello,
I’m using the YuNet model for face detection on RV1106 hardware, converting the ONNX model to RKNN format with rknn-toolkit2.

When I convert the model with the default (INT8) quantization, everything works correctly: the detection results are accurate.

However, when I convert the ONNX model with INT16 quantization (quantized_dtype set to w16a16i or w16a16i_dfp), the conversion completes successfully, and I can load the resulting .rknn model in my C++ code using the RKNN_TENSOR_INT16 input type.
But the output detections are incorrect: faces are missed or the bounding boxes are invalid.
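
For reference, a minimal sketch of my conversion flow (file paths, the model filename, and the mean/std values are placeholders for my actual setup; the only thing that changes between the working INT8 run and the failing INT16 runs is quantized_dtype):

```python
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# YuNet (OpenCV model zoo) takes raw BGR pixels as far as I know,
# hence mean 0 / std 1; adjust if your preprocessing differs.
rknn.config(
    mean_values=[[0, 0, 0]],
    std_values=[[1, 1, 1]],
    quantized_dtype='w16a16i_dfp',  # 'w8a8' (default) works; this one does not
    target_platform='rv1106',
)

rknn.load_onnx(model='./face_detection_yunet.onnx')        # placeholder path
rknn.build(do_quantization=True, dataset='./dataset.txt')  # calibration list
rknn.export_rknn('./yunet_w16a16i_dfp.rknn')
```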

Details:

Hardware: RV1106
Model: YuNet (ONNX format)
Toolkit: rknn-toolkit2
Quantization modes tested:
✅ INT8 — works correctly
❌ INT16 (w16a16i_dfp, w16a16i) — conversion OK, but wrong output
C++ inference: works correctly with the INT8 model, fails to detect properly with the INT16 model (see the simulator sketch after this list)
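
To separate the converted model from my C++ pre/post-processing, the INT16 build can also be checked in the rknn-toolkit2 simulator. This is a hypothetical sketch that assumes the rknn object from the conversion snippet above, a 320x320 YuNet input, and a placeholder test image:

```python
import cv2
import numpy as np

img = cv2.imread('./test_face.jpg')  # placeholder image, BGR as YuNet expects
img = cv2.resize(img, (320, 320))    # assumed YuNet input size

rknn.init_runtime()                  # no target -> toolkit simulator
outputs = rknn.inference(inputs=[img])
for i, out in enumerate(outputs):
    # Compare shapes/magnitudes between the INT8 and INT16 builds
    print(i, out.shape, float(np.abs(out).max()))
```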

Questions:

Is INT16 quantization fully supported for RV1106 (and for models like YuNet)?
Do I need to modify preprocessing or input normalization when using INT16 models?
Is there any known limitation or bug related to w16a16i / w16a16i_dfp quantization?

Any guidance or clarification would be appreciated.
Thank you!
