
Single class output issue with quantized YOLO models on QCS6490 #278

@freedomlyle

Description

Following up on a previous inquiry regarding custom dataset quantization (which was resolved by modifying the source code), a new issue has emerged.

It appears that models trained and submitted to the Hub for quantization consistently detect only a single class when running on the GenKit QCS6490. Regardless of the dataset used, every detection in the output carries label 0. This behavior persists even when testing the official YOLOv8 pre-trained model through the standard quantization workflow with the COCO dataset; only label 0 (Person) is ever detected.

The environment is set up using the GStreamer sample application via API mapping on the board, following this tutorial:
https://docs.qualcomm.com/doc/80-70018-50SC/topic/gst-ai-object-detection.html

The corresponding source code path is:
gst-plugins-qti-oss-imsdk.lnx.2.0.0.r2-rel\gst-sample-apps\gst-ai-object-detection

After verifying the label files and configuration, a discrepancy was noted in the "constants" configuration compared to the tutorial.

The tutorial documentation shows three values each for q-offsets and q-scales:
"constants": "YOLOv8,q-offsets=<21.0, 0.0, 0.0>,q-scales=<3.0546178817749023, 0.003793874057009816, 1.0>;"

However, the corresponding parameters in my setup contain only two values each:
YOLOv8,q-offsets=<0.0, 0.0>,q-scales=<2.4900553226470947, 0.00390625>;
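For context, each (offset, scale) pair in the constants string typically corresponds to one quantized output tensor of the model, so three pairs versus two suggests the two models expose a different number of output tensors. A minimal sketch of how such pairs are usually applied, assuming the common affine dequantization convention real = (quantized - offset) * scale (the exact sign convention depends on the runtime; the tensor layout below is illustrative, not taken from the actual model):

```python
def dequantize(q_value: float, offset: float, scale: float) -> float:
    """Map a quantized integer value back to a real-valued output,
    assuming the affine convention real = (q - offset) * scale."""
    return (q_value - offset) * scale

# The tutorial's constants list one (offset, scale) pair per output tensor:
offsets = [21.0, 0.0, 0.0]
scales = [3.0546178817749023, 0.003793874057009816, 1.0]

# Illustrative example: a raw uint8 value of 128 read from the first tensor.
real_value = dequantize(128, offsets[0], scales[0])
print(real_value)
```

If the post-processing plugin expects a pair for a class-ID tensor that is missing (or dequantizes it with an identity scale it never received), every decoded class index can collapse to 0, which would match the observed single-class output.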

Could you please advise on the cause of this discrepancy and how to fix the single-output issue?

Metadata

Labels: question (Please ask any questions on Slack. This issue will be closed once responded to.)
