
Towards a Continuous Evaluation of Calibration

For safety-critical systems involving AI components (such as in planes, cars, or healthcare), safety assurance and the associated certification tasks are among the main challenges, and they can become costly and difficult to address.

One key aspect is ensuring that the decisions a machine-learning classifier makes are properly calibrated. This Thursday, our engineer Nicolas presented at the MLSC workshop part of the research work on classifier calibration carried out with our senior data scientist Antoine Bonnefoy.

The Machine Learning in Certified Systems workshop brought together machine-learning researchers, international authorities, and industry experts to present the main open questions and methods for the verification and certification of critical software. The objective was also to define the future research agenda towards the medium-term goal of certifying critical systems involving AI components. The workshop included invited talks, a poster session, and panel discussions.
Nicolas talked about improving the calibration of classifiers, and its evaluation, through the introduction of continuous estimators of the associated calibration errors.
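The poster's exact estimators are not reproduced here, but as a rough illustration of the idea, the sketch below contrasts the usual binned Expected Calibration Error (ECE) with a binning-free, kernel-smoothed estimate of the same quantity. The Gaussian kernel, the bandwidth, and the toy over-confident model are illustrative assumptions, not the estimators from the talk.

```python
import numpy as np

def binned_ece(confidences, correct, n_bins=10):
    """Classic ECE: bucket predictions by confidence and compare
    each bucket's mean confidence to its empirical accuracy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

def kernel_ece(confidences, correct, bandwidth=0.05):
    """A continuous (binning-free) alternative: estimate the
    accuracy-given-confidence curve by Nadaraya-Watson kernel
    regression, then average |accuracy(c) - c| over the samples."""
    diffs = confidences[:, None] - confidences[None, :]
    weights = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    smoothed_acc = (weights @ correct) / weights.sum(axis=1)
    return np.mean(np.abs(smoothed_acc - confidences))

# Toy data: an over-confident model (accuracy below stated confidence).
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=2000)
correct = (rng.uniform(size=2000) < conf ** 1.5).astype(float)
print(f"binned ECE: {binned_ece(conf, correct):.3f}")
print(f"kernel ECE: {kernel_ece(conf, correct):.3f}")
```

A continuous estimator of this kind avoids the bin-count hyperparameter and the discontinuities of histogram-based ECE, which is one motivation for studying such estimators in a certification context.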

Watch him present his poster on YouTube.

Click here to access the poster.

Related Posts

The Building Blocks of a Responsible AI Practice: An Outlook on the Current Landscape

Responsible AI comes with the challenge of implementation. This survey aims to bridge the gap between principles and practice through a study of different approaches taken in the literature and the proposition of a foundational framework.
Read More

TS-Relax: Interpreting Learned Representations for Time Series

Representation learning models are increasingly used, but explainable and trustworthy AI models are needed. This work presents the adaptation to time series of a representation interpretation method originally designed for images.
Read More