Investigating a Feature Unlearning Bias Mitigation Technique for Cancer-type Bias in AutoPet Dataset

Integrating datasets from different cancer types can improve diagnostic accuracy, as deep learning models tend to generalise better with more data. However, this benefit is often limited by performance variance caused by biases, such as the under- or over-representation of certain diseases. In this work, we propose a cancer-type-invariant model capable of segmenting tumours from both lymphoma and lung cancer, irrespective of their frequency or representation bias. We frame the problem as a transfer learning task: we introduce a discriminator dedicated to learning bias-group-specific features, together with a confusion loss that preserves generic features while unlearning the domain-specific ones.
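The confusion loss mentioned above is, in feature-unlearning setups of this kind, commonly formulated as the cross-entropy between the discriminator's predicted bias-group distribution and a uniform distribution: when the encoder's features no longer reveal the cancer type, the discriminator cannot do better than guess uniformly. The sketch below is a minimal, framework-free illustration of that idea, not the authors' exact implementation; the function name `confusion_loss` and the NumPy formulation are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confusion_loss(domain_logits):
    """Cross-entropy between the discriminator's predicted bias-group
    distribution and the uniform distribution over groups.

    Minimising this term w.r.t. the encoder pushes the shared features
    to carry no cancer-type information: the loss reaches its minimum
    log(D) exactly when the discriminator outputs 1/D for every group.
    """
    p = softmax(domain_logits)               # (batch, n_groups)
    # mean over batch and groups = (1/N) sum_i (1/D) sum_d -log p_{i,d}
    return -np.mean(np.log(p + 1e-12))

# Example: with D = 2 bias groups (e.g. lymphoma vs. lung cancer),
# uniform predictions give the minimal value log(2) ~ 0.693, while
# confident cancer-type predictions are penalised more heavily.
uniform = confusion_loss(np.zeros((4, 2)))
confident = confusion_loss(np.array([[8.0, -8.0], [-8.0, 8.0]]))
```

In the full training loop this term would be combined with the segmentation loss and played adversarially against the discriminator's own classification objective, so that domain-specific cues are unlearned while tumour-segmentation features are preserved.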

Duc Thang Hoang, Quentin Ferre, Elsa Schalck, Olivier Humbert, Rosana El Jurdi, Investigating a Feature Unlearning Bias Mitigation Technique for Cancer-type Bias in AutoPet Dataset, In Proc. of the 30th Colloque Francophone de Traitement du Signal et des Images, August 2025.

Click here to access the poster.

Related Posts

Muppet: A Modular and Constructive Decomposition for Perturbation-based Explanation Methods

The topic of explainable AI has recently received attention driven by a growing awareness of the need for transparent and accountable AI. In this paper, we propose a novel methodology to decompose any state-of-the-art perturbation-based explainability approach into four blocks. In addition, we provide Muppet: an open-source Python library for explainable AI.
Read More

Insights from GTC Paris 2025

Among the NVIDIA GTC Paris crowd was our CTO Sabri Skhiri, and from quantum computing breakthroughs to the full-stack AI advancements powering industrial digital twins and robotics, there is a lot to share! Explore with Sabri GTC 2025 trends, keynotes, and what it means for businesses looking to innovate.
Read More