An Empirical Study Into What Matters for Calibrating Vision–Language Models

Weijie Tu*, Weijian Deng, Dylan Campbell, Stephen Gould, Tom Gedeon

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Vision–Language Models (VLMs) have emerged as the dominant approach for zero-shot recognition, adept at handling diverse scenarios and significant distribution changes. However, their deployment in risk-sensitive areas requires a deep understanding of their uncertainty estimation capabilities, a relatively uncharted area. In this study, we explore the calibration properties of VLMs across different architectures, datasets, and training strategies. In particular, we analyze the uncertainty estimation performance of VLMs when calibrated in one domain, label set, or hierarchy level, and tested in a different one. Our findings reveal that while VLMs are not inherently calibrated for uncertainty, temperature scaling significantly and consistently improves calibration, even across shifts in distribution and changes in label set. Moreover, VLMs can be calibrated with a very small set of examples. Through detailed experimentation, we highlight the potential applications and importance of our insights, aiming for more reliable and effective use of VLMs in critical, real-world scenarios.
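The abstract refers to temperature scaling as the calibration method. For readers unfamiliar with it, the following is a minimal sketch of standard temperature scaling (Guo et al., 2017) applied to zero-shot classification logits; it is illustrative only and does not reproduce the authors' code, and the function name and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(logits, labels):
    """Fit a single temperature T > 0 minimising negative log-likelihood.

    logits: (N, C) array of pre-softmax scores (e.g. image-text similarities
            from a VLM) on a small held-out calibration set.
    labels: (N,) array of integer class indices.
    Returns the fitted temperature; divide test-time logits by it before softmax.
    """
    def nll(T):
        z = logits / T
        z = z - z.max(axis=1, keepdims=True)  # numerical stability
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    # Search range is a hypothetical choice, not taken from the paper.
    res = minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded")
    return res.x
```

Because only one scalar is fitted, the accuracy of the underlying model is unchanged; only the confidence of its predictions is rescaled, which is consistent with the paper's observation that calibration can be achieved with very few examples.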

Original language: English
Pages (from-to): 48791-48808
Number of pages: 18
Journal: Proceedings of Machine Learning Research
Volume: 235
Publication status: Published - 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024
