This work was done in the Omni Lab for Intelligent Visual Engineering and Science (OLIVES) @ Georgia Tech. It has been accepted for publication in the IEEE Open Journal of Signal Processing! Feel free to check our lab's Website and GitHub for other interesting work!
K. Kokilepersaud, M. Prabhushankar, Y. Yarici, G. AlRegib, and A. Mostafa, "Exploiting the Distortion-Semantic Interaction in Fisheye Data," in IEEE Open Journal of Signal Processing, 2023.
@article{kokilepersaud2023exploiting,
  title={Exploiting the Distortion-Semantic Interaction in Fisheye Data},
  author={K. Kokilepersaud and M. Prabhushankar and Y. Yarici and G. AlRegib and A. Mostafa},
  journal={IEEE Open Journal of Signal Processing},
  year={2023}
}
In this work, we present a methodology to shape a fisheye-specific representation space that reflects the interaction between distortion and semantic context present in this data modality. Fisheye data has the advantage of a wider field of view compared with other camera types, but this comes at the expense of high radial distortion. As a result, objects further from the center exhibit deformations that make it difficult for a model to identify their semantic context. While previous work has attempted architectural and training augmentation changes to alleviate this effect, no work has attempted to guide the model towards learning a representation space that reflects this interaction between distortion and semantic context inherent to fisheye data. We introduce an approach to exploit this relationship by first extracting distortion class labels based on an object's distance from the center of the image. We then shape a backbone's representation space with a weighted contrastive loss that constrains objects of the same semantic class and distortion class to be close to each other within a lower-dimensional embedding space. This backbone, trained with both semantic and distortion information, is then fine-tuned within an object detection setting to empirically evaluate the quality of the learned representation. We show this method leads to performance improvements of as much as 1.1% mean average precision over standard object detection strategies and 0.6% over other state-of-the-art representation learning approaches.
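The weighted contrastive objective described above can be sketched as follows. This is a minimal NumPy illustration, not the exact loss from the paper: the function name, the weighting scheme (a fixed larger weight for positives that share both the semantic and the distortion class), and all parameter values are illustrative assumptions.

```python
import numpy as np

def weighted_supcon_loss(z, sem, reg, temp=0.1, w_same_region=1.0, w_diff_region=0.5):
    """Illustrative weighted supervised contrastive loss.

    z   : (N, D) embeddings (L2-normalized inside the function)
    sem : (N,) semantic class labels
    reg : (N,) distortion/region class labels
    Positives are samples sharing the anchor's semantic class; positives
    that also share the region class receive a larger weight.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temp
    n = len(sem)
    mask_self = np.eye(n, dtype=bool)
    # Log-softmax over all samples except the anchor itself.
    logits = np.where(mask_self, -np.inf, sim)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive mask: same semantic class, excluding self.
    sem_pos = (sem[:, None] == sem[None, :]) & ~mask_self
    # Weight positives more heavily when the region class also matches.
    same_reg = reg[:, None] == reg[None, :]
    weights = np.where(same_reg, w_same_region, w_diff_region) * sem_pos
    # Weighted mean negative log-likelihood of positives per anchor.
    per_anchor = -(weights * np.where(sem_pos, log_prob, 0.0)).sum(axis=1)
    denom = weights.sum(axis=1)
    valid = denom > 0  # skip anchors with no positives
    return (per_anchor[valid] / denom[valid]).mean()
```

In the repo, the relative weights between semantic-only and semantic-plus-distortion positives are configurable; the fixed 1.0 / 0.5 split above is just a placeholder.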
The data for this work can be found in the WoodScape Fisheye Object Detection Dataset.
This dataset needs to be processed to extract a patch for every object. An example of the output of this extraction process is shown in "./csv_ford/csv_ford/patches_train_5_classes_small_area.csv." This file is the result of extracting every object as a patch based on the ground-truth bounding box coordinates. Each patch image file is named after the object's index and the corresponding source image file. The box coordinates are used to derive the region class and the distance from the center of the image. The region labels are generated by discretizing the distance from the center; this discretization is binary in the base form of the method.
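The binary region labeling step can be sketched as follows. The function name and the threshold (half of the image's half-diagonal) are illustrative assumptions and not necessarily the repo's exact values; the patch itself would simply be the image cropped to the box coordinates.

```python
import math

def region_label(box, img_w, img_h, threshold_frac=0.5):
    """Illustrative binary distortion-region label for one object.

    box: (x_min, y_min, x_max, y_max) ground-truth coordinates.
    The object's center distance from the image center is compared
    against a fraction of the image's half-diagonal.
    """
    # Object center from the bounding box.
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    # Radial distance from the image center.
    dist = math.hypot(cx - img_w / 2.0, cy - img_h / 2.0)
    max_dist = math.hypot(img_w / 2.0, img_h / 2.0)
    # 0: low-distortion (central) region, 1: high-distortion (peripheral) region.
    return 0 if dist < threshold_frac * max_dist else 1
```

Finer discretizations (e.g., more radial bins) would follow the same pattern with additional thresholds.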
In a typical experiment, contrastive pre-training takes place on the data present in the files located at:
./csv_ford
- Set the Python path with: export PYTHONPATH=$PYTHONPATH:$PWD
- Train the backbone network with the supervised contrastive loss using the parameters specified in config/config_supcon.py
a) Specify number of regional labels to train with --num_methods parameter
b) Specify which regional labels to train with --method1, --method2, etc.
c) Specify which dataset to train on in the --dataset field
d) An example of a script would be:
python training_main/main_supcon.py --dataset 'Region_WoodScape' --num_methods 1 --method 'SupCon' --method1 'region_class'
- Take the pre-trained encoder network generated from this repo and transfer it to the flexible YOLO v5 codebase.
- Train with the backbone weights frozen while fine-tuning the YOLO v5 head on the base WoodScape dataset.
This work was done in collaboration with the Ford Motor Company.
This codebase was partly constructed with code from the Supervised Contrastive Learning GitHub.
