
Towards a Continuous Evaluation of Calibration

For safety-critical systems involving AI components (in planes, cars, or healthcare, for example), safety assurance and the associated certification tasks are among the main challenges, and they can become costly and difficult to address.

One key aspect is ensuring that the decisions a machine-learning classifier makes are properly calibrated. This Thursday, our engineer Nicolas presented at the MLSC workshop part of the research work on classifier calibration carried out with our senior data scientist Antoine Bonnefoy.

The Machine Learning in Certified Systems workshop brought together machine learning researchers with international authorities and industry experts to present the main open questions and methods for verification and certification of critical software. The objective was also to define the future research agenda towards the medium-term goal of certifying critical systems involving AI components. The workshop included invited talks, a poster session and panel discussions.

Nicolas talked about improving the calibration of classifiers and its evaluation through the introduction of continuous estimators of the associated calibration errors.
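To give a flavour of what "continuous estimators of calibration errors" means, here is a minimal, illustrative sketch (not the estimator from the poster): the widely used Expected Calibration Error (ECE) is computed with hard confidence bins, which makes the estimate jump discontinuously as scores move across bin edges; a kernel-smoothed variant replaces the bins with a Gaussian kernel so the estimate varies continuously with the predicted scores. Function names, the bandwidth, and the number of bins below are our own illustrative choices.

```python
import numpy as np

def binned_ece(confidences, correct, n_bins=10):
    """Standard binned Expected Calibration Error (ECE) estimator.
    Partitions predictions into equal-width confidence bins and averages
    |accuracy - mean confidence| per bin, weighted by bin population."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

def kernel_ece(confidences, correct, bandwidth=0.05):
    """Illustrative continuous (kernel-smoothed) calibration error:
    replaces hard bins with a Gaussian kernel, so the accuracy estimate
    around each confidence level changes smoothly with the scores."""
    diffs = confidences[None, :] - confidences[:, None]
    w = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    # Kernel-weighted accuracy in the neighbourhood of each confidence value
    smoothed_acc = (w * correct[None, :]).sum(axis=1) / w.sum(axis=1)
    return float(np.mean(np.abs(smoothed_acc - confidences)))

# Synthetic, well-calibrated predictions: P(correct) equals the confidence
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 2000)
correct = (rng.uniform(size=2000) < conf).astype(float)
print(binned_ece(conf, correct), kernel_ece(conf, correct))
```

On such well-calibrated synthetic data, both estimators return values close to zero; the practical difference is that the kernel version is a smooth function of the scores, which avoids the binning artefacts that make the standard ECE awkward to analyse and optimise.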

Watch his poster presentation on YouTube.

Click here to access the poster.

Related Posts

IEEE Big Data 2023 – A Summary

Our CTO, Sabri Skhiri, recently travelled to Sorrento for IEEE Big Data 2023. In this article, Sabri explores for you the various keynotes and talks that took place during the…
Read More

Robust ML Approach for Screening MET Drug Candidates in Combination with Immune Checkpoint Inhibitors

This study highlights the significance of dataset size in ICI microbiota models and presents a methodology to enhance the performance of a multi-cohort-based ML approach.
Read More