
A Fair Classifier Embracing Triplet Collapse

In this paper, we study the behaviour of the triplet loss and show that it can be exploited to limit the biases created and perpetuated by machine learning models. Our fair classifier exploits the collapse of the triplet loss that occurs when its margin is greater than the maximum distance between any two points in the latent space, in the case of stochastic triplet selection.
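To see why such a collapse can occur, here is a minimal sketch, assuming PyTorch and L2-normalised embeddings (both assumptions of this illustration, not details taken from the abstract). On the unit sphere the distance between any two points is at most 2, so a margin larger than 2 keeps the triplet hinge active for every possible triplet, and a fully collapsed embedding, where every point maps to the same vector, yields a loss equal to the margin for every triplet.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
margin = 4.0                                           # larger than 2, the diameter of the unit sphere
triplet = torch.nn.TripletMarginLoss(margin=margin, reduction="none")

# Random unit-norm embeddings standing in for anchor, positive and negative points.
a = F.normalize(torch.randn(1000, 8), dim=1)
p = F.normalize(torch.randn(1000, 8), dim=1)
n = F.normalize(torch.randn(1000, 8), dim=1)

# d(a, p) - d(a, n) >= -2, so every loss term is at least margin - 2 > 0:
# the hinge max(0, .) never clips to zero, whatever triplet is drawn.
losses = triplet(a, p, n)
print((losses > 0).all().item())                       # True

# At the collapsed solution all embeddings coincide, both distances vanish,
# and every triplet contributes exactly the margin to the loss.
c = torch.full((1000, 8), 1.0 / 8 ** 0.5)
print(triplet(c, c, c).mean().item())                  # 4.0
```

Under stochastic triplet selection this regime makes the collapsed representation a stable point of training, which is the behaviour the paper studies and turns to its advantage; the paper itself should be consulted for how this is used to build the fair classifier.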

Alice Martzloff, Nicolas Posocco, Quentin Ferré, "A Fair Classifier Embracing Triplet Collapse", in Proc. of the Conférence sur l'Apprentissage Automatique, July 2023.

Click here to access the paper.
