Anomaly detection is a widely explored domain in machine learning. Many models have been proposed in the literature and compared on various datasets using different metrics. The most popular metrics used to compare performance are the F1-score, AUC and AVPR.
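For reference, the three metrics mentioned above can be computed from predictions and anomaly scores. A minimal pure-Python sketch (in practice one would use a library such as scikit-learn; the toy inputs below are illustrative only):

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall on binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn)

def auc_score(y_true, scores):
    """ROC AUC: probability that a random positive is scored above a
    random negative (ties count as half a win)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(y_true, scores):
    """AVPR: precision averaged at the rank of each true positive."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / len(precisions)
```

On a toy example where all anomalies are ranked first (`y_true = [1, 1, 0, 0]`, `scores = [0.9, 0.8, 0.4, 0.2]`), both AUC and AVPR reach their maximum of 1.0.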
DAEMA: Denoising Autoencoder with Mask Attention
Missing data is a recurrent and challenging problem, especially when using machine learning algorithms for real-world applications. For this reason, missing data imputation has become an active research area, in which recent deep learning approaches have achieved state-of-the-art results. We propose DAEMA: Denoising Autoencoder with Mask Attention.
Estimating Expected Calibration Errors
Uncertainty in probabilistic classifiers' predictions is a key concern when models are used to support human decision-making, as part of broader probabilistic pipelines, or when sensitive automatic decisions have to be taken.
Studies have shown that most models are not intrinsically well calibrated, meaning that their decision scores do not reflect the actual probability that their predictions are correct.
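A standard way to quantify this miscalibration is the Expected Calibration Error (ECE), which bins predictions by confidence and averages the gap between confidence and accuracy in each bin. A minimal sketch of the binned estimator (a simplified illustration, not the estimators studied in the paper):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average, over confidence bins, of the absolute
    gap between mean confidence and empirical accuracy in each bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # samples whose confidence falls in the half-open bin (lo, hi]
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        ece += len(idx) / n * abs(avg_conf - accuracy)
    return ece
```

For instance, a model that predicts with confidence 0.95 and is right every time has an ECE of 0.05: it is underconfident by exactly that margin.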
10 Papers in 2021
Beyond answering today's problems, our research centre is dedicated to anticipating the challenges that European businesses face. Find out about the impact of our latest published papers.
A Framework Using Contrastive Learning for Classification with Noisy Labels
We propose a framework using contrastive learning as a pre-training task to perform image classification in the presence of noisy labels. Recent strategies, such as pseudo-labelling, sample selection with Gaussian mixture models, and weighted supervised contrastive learning, are combined into a fine-tuning phase that follows the pre-training.
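The sample-selection step mentioned above is commonly implemented by fitting a two-component Gaussian mixture to the per-sample training losses and treating the low-loss component as the likely-clean set. A minimal numpy sketch of that idea, using plain EM rather than any particular library or the paper's exact implementation:

```python
import numpy as np

def gmm_clean_probs(losses, n_iter=50):
    """Posterior probability that each sample belongs to the low-loss
    (presumed clean) component of a 2-component 1-D Gaussian mixture,
    fitted with plain EM."""
    x = np.asarray(losses, dtype=float)
    mu = np.array([x.min(), x.max()])        # init means at the extremes
    var = np.array([x.var() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -(x[:, None] - mu) ** 2 / (2 * var)) + 1e-300
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return resp[:, np.argmin(mu)]  # low-mean component = likely clean
```

Samples with high clean probability can then keep their labels for the supervised term, while the rest are handled by pseudo-labelling or down-weighting.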
AMU-EURANOVA at CASE 2021 Task 1: Assessing the stability of multilingual BERT
This paper describes our participation in Task 1 of the CASE 2021 shared task, which addresses multilingual event extraction from news. We focused on Sub-task 4, event information extraction, which has a small training dataset, and we fine-tuned a multilingual BERT model to solve it.