I built a custom model following the tutorial and trained it with distillation, using dinov3/vits16 as the teacher. However, after exporting the model to ONNX and applying int8 quantization, its accuracy dropped significantly.
The accuracy drop does not occur with models trained without distillation.
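For reference, here is a minimal sketch of the export and quantization path I followed. The model below is a placeholder for my actual distilled student (which follows the tutorial's custom-model setup), the input shape and file names are illustrative, and I'm showing onnxruntime's dynamic quantization:

```python
import torch
import torch.nn as nn
from onnxruntime.quantization import quantize_dynamic, QuantType

# Placeholder stand-in for the distilled student model;
# the real architecture follows the tutorial's custom-model setup.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()

# Export the trained student to ONNX (input shape is illustrative).
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "student_fp32.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
)

# Int8 weight quantization of the exported model.
quantize_dynamic(
    "student_fp32.onnx",
    "student_int8.onnx",
    weight_type=QuantType.QInt8,
)
```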