- COCO-20i: COCO2014
- Download the data lists (.txt files) and put them into the BAM/lists directory.
- Run util/get_mulway_base_data.py to generate base annotations for stage 1, or directly use the trained weights.
Our model builds on two models: BAM and SAM.
First, clone BAM and download its datasets and model weights.
Then, clone SAM and download its model weights.
cd BAM-main, then create the folders visual, visual/query, visual/output, visual/label, visual/label2, and output2pt:

mkdir -p visual/query visual/output visual/label visual/label2 output2pt
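Equivalently, the directory setup can be scripted. A small Python sketch (the folder names are taken from the step above; this is not part of the repo):

```python
import os

# Create the working directories used by the test scripts:
# visual/ subfolders for saved images, output2pt for predictions.
for d in ["visual/query", "visual/output", "visual/label", "visual/label2", "output2pt"]:
    os.makedirs(d, exist_ok=True)  # also creates visual/ itself
```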
Copy our test_EF.py, test_EF.sh, test_save.py, and test_save.sh to BAM-main, then run:

sh test_save.sh
Then, cd ../segment-anything-main/notebooks.
Copy our EFSAM.ipynb and EFSAM-multiple.ipynb to ../segment-anything-main/notebooks, and make a new folder output2:

mkdir output2
Run EFSAM.ipynb or EFSAM-multiple.ipynb to get results, which are saved in ../../BAM-main/output2pt/.
- Note: the mIoU results printed in EFSAM.ipynb and EFSAM-multiple.ipynb are not the final mIoU.
- Note: do not use EFSAM-old.ipynb and EFSAM-multiple-old.ipynb.
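The final mIoU is typically computed at the dataset level (per-class intersection and union accumulated over all test images, then averaged across classes), which is why per-notebook numbers need not match it. A minimal sketch of dataset-level mIoU, assuming integer label maps; this is not the repo's evaluation code:

```python
import numpy as np

def dataset_miou(preds, gts, num_classes):
    """Dataset-level mean IoU: accumulate intersection/union per class
    over all images, then average the per-class IoUs."""
    inter = np.zeros(num_classes)
    union = np.zeros(num_classes)
    for p, g in zip(preds, gts):
        for c in range(num_classes):
            pc, gc = (p == c), (g == c)
            inter[c] += np.logical_and(pc, gc).sum()
            union[c] += np.logical_or(pc, gc).sum()
    valid = union > 0  # ignore classes absent from both preds and gts
    return float(np.mean(inter[valid] / union[valid]))

# Toy example: one 2x2 prediction vs. ground truth, two classes
pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 1], [0, 1]])
print(dataset_miou([pred], [gt], 2))
```

Averaging IoU per image and then over images generally gives a different (usually higher-variance) number than accumulating over the whole split, which is the distinction the note above warns about.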
Finally, cd ../../BAM-main and run:

sh test_EF.sh

to get the final results of EF-SAM.
If you find our paper and repo helpful for your research, please consider citing:
@article{feng2025learning,
  title={Learning few-shot semantic segmentation with error-filtered segment anything model},
  author={Feng, Chen-Bin and Lai, Qi and Liu, Kangdao and Su, Houcheng and Chen, Hao and Luo, Kaixi and Vong, Chi-Man},
  journal={The Visual Computer},
  pages={1--15},
  year={2025},
  publisher={Springer}
}