Estimating Expected Calibration Errors

Uncertainty in probabilistic classifiers' predictions is a key concern when models are used to support human decision making, embedded in broader probabilistic pipelines, or relied on for sensitive automatic decisions.
Studies have shown that most models are not intrinsically well calibrated, meaning that their decision scores are not consistent with posterior probabilities.
Hence being able to calibrate these models, or enforce calibration while learning them, has regained interest in recent literature.
In this context, properly assessing calibration is paramount to quantify new contributions tackling calibration.
However, commonly used metrics leave room for improvement, and the evaluation of calibration could benefit from deeper analysis.
Thus, this paper focuses on the empirical evaluation of calibration metrics in the context of classification.
More specifically, it evaluates different estimators of the Expected Calibration Error ($ECE$), including legacy estimators and novel ones proposed in this paper.
We build an empirical procedure to quantify the quality of these $ECE$ estimators, and use it to decide which estimator should be used in practice for different settings.
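For context, here is a minimal sketch of the widely used equal-width binned $ECE$ estimator, which the paper compares against alternative estimators. This is not the paper's novel estimator; the function name, bin count and NumPy-based implementation are illustrative assumptions.

```python
import numpy as np

def binned_ece(confidences, correct, n_bins=15):
    """Equal-width binned ECE estimator (a common legacy baseline).

    confidences : array of predicted top-class probabilities in [0, 1]
    correct     : array of 0/1 (or bool) flags, 1 where the prediction was correct
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n = len(confidences)

    # Equal-width bins over [0, 1]; assign each prediction to a confidence bin.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.digitize(confidences, edges[1:-1], right=True)

    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if not mask.any():
            continue
        avg_conf = confidences[mask].mean()  # mean confidence in the bin
        avg_acc = correct[mask].mean()       # empirical accuracy in the bin
        # Weight the confidence/accuracy gap by the fraction of samples in the bin.
        ece += (mask.sum() / n) * abs(avg_acc - avg_conf)
    return ece
```

The estimator's value depends on binning choices such as the number of bins, which is one of the reasons the paper builds an empirical procedure to compare estimators rather than relying on a single default.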

Nicolas Posocco, Antoine Bonnefoy, Estimating Expected Calibration Errors, In Proc. of the 30th International Conference on Artificial Neural Networks (ICANN), 2021.

Watch the presentation on YouTube.

Click here to access the paper.

Related Posts

Investigating a Feature Unlearning Bias Mitigation Technique for Cancer-type Bias in AutoPet Dataset

We proposed a feature unlearning technique to reduce cancer-type bias, which improved segmentation accuracy while promoting fairness across sub-groups, even with limited data.
Read More

Muppet: A Modular and Constructive Decomposition for Perturbation-based Explanation Methods

The topic of explainable AI has recently received attention, driven by a growing awareness of the need for transparent and accountable AI. In this paper, we propose a novel methodology to decompose any state-of-the-art perturbation-based explainability approach into four blocks. In addition, we provide Muppet: an open-source Python library for explainable AI.
Read More