Project Details
Description
This project aims to address the overconfidence of current highly accurate large deep neural networks, i.e., incorrect predictions frequently carry high confidence. The project expects to develop new theoretical models of vicinal model calibration that can be implemented as efficient fine-tuning, ensuring that confidence decreases away from the ground-truth data, approaching a uniform distribution for far-away images. Expected outcomes are new model-calibration theory and techniques for classification and dense prediction, improving out-of-distribution detection while ensuring adversarial robustness. This should provide significant benefits in reducing risk in vision systems, including safety-critical applications, e.g. bushfire detection.
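The core idea, confidence decaying from the one-hot label toward a uniform distribution as inputs move away from the training data, can be sketched as a soft-target construction. This is a minimal illustration, not the project's actual method; the function name `vicinal_targets`, the exponential decay, and the temperature `tau` are all assumptions for the example.

```python
import numpy as np

def vicinal_targets(one_hot, distance, num_classes, tau=1.0):
    """Soft targets that interpolate from the one-hot label toward
    a uniform distribution as `distance` from the training point grows.

    `tau` (hypothetical) sets how quickly confidence decays; the
    exponential schedule is an illustrative choice, not the project's.
    """
    w = np.exp(-distance / tau)               # weight kept on the true label
    uniform = np.full(num_classes, 1.0 / num_classes)
    return w * one_hot + (1.0 - w) * uniform  # still sums to 1
```

At distance zero the target is the original one-hot label; far from the data it approaches the uniform distribution, so a model fine-tuned on such targets is discouraged from making confident predictions on out-of-distribution inputs.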
| Status | Not started |
|---|---|
| Effective start/end date | 30/06/25 → 29/06/28 |