I was able to train MMDET-AITOD on the AI-TOD V2 dataset by referencing the AI-TOD V2 JSON annotations. Everything worked well: the model converges and the inference results look good. However, you describe using the 'generate_aitod_imgs.py' tool to create a combined AI-TOD AND xView dataset. I assume the point is to get a larger dataset? But when I run the tool, it creates many directories, and the only new directory that looks like a full set is 'xview_aitod_sets' inside the xview folder, not the aitod folder. This doesn't match the tool's description. More importantly, while the 'xview_aitod_sets' folder does contain what looks like a full set of splits (train, val, trainval, and test), the annotations are NOT in JSON format like AI-TOD. They are in an images/labels layout (one text file per image), which is more like YOLO. There is no single annotations JSON file combining the AI-TOD and xView labels.
This is NOT what your MMDET-AITOD seems to expect: the V2 config files under 'mmdet-aitod/mmdet-nwdrka/configs_nwdrka/nwd_rka' all point to JSON annotation files.
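As a workaround, I have been sketching a converter from the per-image text labels to a COCO-style JSON like the AI-TOD V2 annotations. Note the assumptions here: I am guessing the text files use a YOLO-style normalized `class cx cy w h` line format, and the category mapping and file names below are placeholders, not what your tool actually writes. Is something like this the intended path, or does the tool already have an option for it?

```python
# Hypothetical sketch: convert YOLO-style label lines into COCO/AI-TOD-style
# annotation dicts. ASSUMPTION: each line is "class cx cy w h" with values
# normalized to [0, 1]; the real format from generate_aitod_imgs.py may differ.
import json


def yolo_labels_to_coco(label_lines, image_id, img_w, img_h, start_ann_id=1):
    """Turn normalized YOLO lines into COCO annotations (bbox is pixel xywh)."""
    anns = []
    for i, line in enumerate(label_lines):
        cls, cx, cy, w, h = line.split()
        w_px = float(w) * img_w
        h_px = float(h) * img_h
        x_px = float(cx) * img_w - w_px / 2  # convert center to top-left corner
        y_px = float(cy) * img_h - h_px / 2
        anns.append({
            "id": start_ann_id + i,
            "image_id": image_id,
            "category_id": int(cls) + 1,  # COCO category ids are usually 1-based
            "bbox": [x_px, y_px, w_px, h_px],
            "area": w_px * h_px,
            "iscrowd": 0,
        })
    return anns


if __name__ == "__main__":
    # Tiny demo with a made-up label and image size.
    demo_lines = ["0 0.5 0.5 0.1 0.2"]
    coco = {
        "images": [{"id": 1, "file_name": "demo.png", "width": 800, "height": 800}],
        "annotations": yolo_labels_to_coco(demo_lines, image_id=1, img_w=800, img_h=800),
        "categories": [{"id": 1, "name": "vehicle"}],  # placeholder category
    }
    print(json.dumps(coco, indent=2))
```

If this is roughly right, I could loop it over every file in the labels directory per split and write one train/val/trainval/test JSON each, but I would rather not reverse-engineer the format if an official converter exists.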
Questions:
- Is AI-TOD just the vehicles from xView, or is it a different dataset that can be COMBINED with the xView vehicles using your tool?
- If it is a different dataset, why is the full set written out by your 'generate_aitod_imgs.py' tool not in the AI-TOD (COCO-style JSON) format, so that it can also be used with MMDET-AITOD?