We propose a framework that uses contrastive learning as a pre-training task to perform image classification in the presence of noisy labels. Recent strategies, such as pseudo-labelling, sample selection with Gaussian mixture models, and weighted supervised contrastive learning, are combined into a fine-tuning phase that follows the pre-training. In this paper, we provide an extensive empirical study showing that a preliminary contrastive learning step brings a significant gain in performance across different loss functions: non-robust, robust, and early-learning regularized. Our experiments on standard benchmarks and real-world datasets demonstrate that (i) contrastive pre-training increases the robustness of any loss function to noisy labels and (ii) the additional fine-tuning phase can further improve accuracy, at the cost of additional complexity.
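As a rough illustration of the two-stage idea, here is a minimal sketch (not the authors' code; see the GitHub repository below for the actual implementation). Stage 1 pre-trains the encoder with a SimCLR-style NT-Xent contrastive loss on two augmented views, never touching the (possibly noisy) labels; stage 2 fine-tunes a classifier on top with any loss function. The `augment`, `encoder`, and `head` names and the temperature value are illustrative assumptions.

```python
# Hypothetical sketch of contrastive pre-training followed by fine-tuning;
# names and hyperparameters are assumptions, not the paper's exact setup.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss over two batches of projected views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / temperature                        # pairwise similarities
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))           # exclude self-pairs
    # the positive of view i is its other augmented view at (i + n) mod 2N
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)
    return F.cross_entropy(sim, targets)

# Stage 1: self-supervised pre-training (noisy labels are never used here).
# for x, _ in loader:
#     v1, v2 = augment(x), augment(x)
#     loss = nt_xent_loss(head(encoder(v1)), head(encoder(v2)))
#
# Stage 2: replace the projection head with a classifier and fine-tune with
# any loss: standard cross-entropy, a robust loss, or an early-learning
# regularized loss, optionally with the strategies listed in the abstract.
```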
Madalina Ciortan, Romain Dupuis, and Thomas Peel, "A Framework Using Contrastive Learning for Classification with Noisy Labels," Data, 2021, 6, 61.
DOI: https://doi.org/10.3390/data6060061
Watch the presentation on YouTube.
Public implementation: https://github.com/ciortanmadalina/constrastive-noisy-label