This is the official implementation of *Understanding and Mitigating Miscalibration in Prompt Tuning for Vision-Language Models* (ICML 2025).
Confidence calibration is critical for the safe deployment of machine learning models in the real world. However, this issue in vision-language models such as CLIP, particularly after fine-tuning, has not been fully addressed. In this work, we demonstrate that existing prompt tuning methods usually lead to a calibration trade-off between base and new classes: the cross-entropy loss used in standard prompt tuning (e.g., CoOp) causes overconfidence on new classes by increasing textual label divergence, whereas regularization-based tuning (e.g., KgCoOp) maintains the confidence level but results in underconfidence on base classes as accuracy improves. Inspired by these observations, we introduce Dynamic Outlier Regularization (DOR) to ensure confidence calibration on both base and new classes after fine-tuning. In particular, DOR minimizes the feature deviation of novel textual labels (instead of base-class labels) sampled from a large vocabulary. In effect, DOR prevents the increase in textual divergence for new labels while easing restrictions on base classes. Extensive experiments demonstrate that DOR notably enhances the calibration performance of current fine-tuning methods.
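For intuition, below is a minimal PyTorch sketch of the DOR idea described above. It is not the repo's actual implementation: the function name `dor_loss`, the cosine-based deviation measure, the weight `lam`, and the toy vocabulary are all illustrative assumptions; only the overall structure (cross-entropy on base classes plus a regularizer that keeps tuned text features of sampled outlier labels close to the frozen zero-shot CLIP features) follows the paper's description.

```python
# Hypothetical sketch of the DOR objective -- names and the deviation metric
# are assumptions for illustration, not the repo's API.
import random
import torch
import torch.nn.functional as F

def dor_loss(logits, targets, tuned_text_feats, frozen_text_feats, lam=1.0):
    """Cross-entropy on base classes plus outlier-feature regularization.

    logits/targets: standard prompt-tuning classification on base classes.
    tuned_text_feats: text features of sampled outlier words under the learned prompt.
    frozen_text_feats: features of the same words from the frozen zero-shot encoder.
    """
    ce = F.cross_entropy(logits, targets)
    # Feature deviation of outlier labels: 1 - cosine similarity
    # (one plausible choice of deviation measure).
    deviation = 1.0 - F.cosine_similarity(tuned_text_feats, frozen_text_feats, dim=-1)
    return ce + lam * deviation.mean()

# Each step, a fresh batch of outlier words would be drawn from a large
# vocabulary that excludes the base-class names; encoding them with the
# tuned and frozen text encoders yields the two feature sets above.
vocab = ["abacus", "lighthouse", "violin"]  # stand-in for a large word list
outlier_words = random.sample(vocab, k=3)

# Toy usage with random tensors (shapes for illustration only).
logits = torch.randn(8, 10)                         # 8 images, 10 base classes
targets = torch.randint(0, 10, (8,))
tuned = F.normalize(torch.randn(3, 512), dim=-1)    # outlier feats, learned prompt
frozen = F.normalize(torch.randn(3, 512), dim=-1)   # outlier feats, frozen CLIP
print(dor_loss(logits, targets, tuned, frozen))
```

Because the regularizer acts on sampled outlier labels rather than the base classes, the base-class prompts remain free to fit the few-shot data while textual divergence on unseen labels is held in check.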
1. Installation
For installation and other package requirements, please follow the instructions detailed in INSTALL.md.
2. Data preparation
Please follow the instructions at DATASETS.md to prepare all datasets.
Please refer to ./scripts for more details about our scripts. You should replace ${DATA_DIR} with your own dataset path.
3. Baseline
```bash
TRAINER=CoOp  # options: CoOp CoCoOp KgCoOp MaPLe DEPT CoPrompt
GPU_ID=1      # replace it with your GPU ID
bash run/zeroshot.sh ${GPU_ID}                    # zero-shot CLIP
bash run/fewshot.sh ${TRAINER} vanilla ${GPU_ID}  # fine-tuned CLIP
```
4. DOR (ours)
```bash
TRAINER=CoOp  # options: CoOp CoCoOp KgCoOp MaPLe DEPT CoPrompt
GPU_ID=1      # replace it with your GPU ID
bash run/fewshot.sh ${TRAINER} dor ${GPU_ID}  # fine-tuned CLIP with DOR
```
The final results will be logged in `output/base2new/logs_base2new.csv`.
If you find this useful in your research, please consider citing:
```bibtex
@inproceedings{wang2025understanding,
  title     = {Understanding and Mitigating Miscalibration in Prompt Tuning for Vision-Language Models},
  author    = {Wang, Shuoyuan and Li, Yixuan and Wei, Hongxin},
  booktitle = {International Conference on Machine Learning (ICML)},
  year      = {2025}
}
```