🤔 What’s your question?
I am training a DINOv3 LTDETR model for the medical X-ray domain. I have 10K+ unlabeled images and 3K labeled images.
What I want to do is:
- distill/pretrain the DINO backbone on the 10K unlabeled images
- fine-tune the model with a custom augmentation pipeline
Since the images are radiographs, some augmentations like Solarize don't suit the data during fine-tuning, and this particular problem will break if I add a Flip augmentation (which is enabled by default). Moreover, at deployment time I apply letterbox padding, like YOLO does.
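For concreteness, something like the sketch below is roughly what I have in mind: letterbox-style resize + pad, plus only radiograph-safe photometric transforms. (`IMG_SIZE`, the box format, and the specific photometric transforms here are just placeholders, and note that `PadIfNeeded`'s `value` argument was renamed to `fill` in newer Albumentations releases.)

```python
import cv2
import albumentations as A
from albumentations.pytorch import ToTensorV2

IMG_SIZE = 640  # placeholder; would match the model's training resolution

train_transform = A.Compose(
    [
        # Letterbox-style preprocessing: shrink the longest side to IMG_SIZE,
        # then pad to a square with constant gray, like YOLO's letterboxing.
        A.LongestMaxSize(max_size=IMG_SIZE),
        A.PadIfNeeded(
            min_height=IMG_SIZE,
            min_width=IMG_SIZE,
            border_mode=cv2.BORDER_CONSTANT,
            value=(114, 114, 114),  # `fill` in Albumentations >= 2.0
        ),
        # Radiograph-safe photometric jitter; deliberately no Solarize/Flip.
        A.RandomBrightnessContrast(p=0.5),
        A.GaussNoise(p=0.2),
        A.Normalize(),  # defaults to ImageNet mean/std
        ToTensorV2(),
    ],
    # Boxes follow along through the resize/pad automatically.
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

# Usage: boxes as [x_min, y_min, x_max, y_max] in pixel coordinates
# out = train_transform(image=img, bboxes=boxes, labels=class_ids)
```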
Is there a way I could plug a custom Albumentations pipeline like this into training?