
DAEMA: Denoising Autoencoder with Mask Attention

Missing data is a recurrent and challenging problem, especially when using machine learning algorithms for real-world applications. For this reason, missing data imputation has become an active research area, in which recent deep learning approaches have achieved state-of-the-art results. We propose DAEMA: Denoising Autoencoder with Mask Attention, an algorithm based on a denoising autoencoder architecture with an attention mechanism.
While most imputation algorithms use incomplete inputs as they would use complete data – up to basic preprocessing (e.g. mean imputation) – DAEMA leverages a mask-based attention mechanism to focus on the observed values of its inputs.
We evaluate DAEMA both in terms of reconstruction capabilities and downstream prediction and show that it achieves superior performance to state-of-the-art algorithms on several publicly available real-world datasets under various missingness settings.
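
The core idea can be illustrated with a short sketch. Below is a minimal PyTorch illustration of a denoising autoencoder whose attention weights are computed from the missingness mask, so the network focuses on the observed values of its input. The class name, layer sizes, and loss are illustrative assumptions, not the paper's exact architecture; see the preprint for the full model.

```python
# Minimal sketch (assumption: illustrative only, not the paper's exact model).
# A denoising autoencoder whose attention weights are derived from the
# missingness mask, so the latent representation focuses on observed values.
import torch
import torch.nn as nn

class MaskAttentionDAE(nn.Module):  # hypothetical name
    def __init__(self, n_features, latent_dim=32):
        super().__init__()
        # The encoder sees the (mean-imputed) input concatenated with its mask.
        self.encoder = nn.Sequential(
            nn.Linear(2 * n_features, latent_dim), nn.Tanh()
        )
        # Attention weights are computed from the mask alone.
        self.attention = nn.Sequential(
            nn.Linear(n_features, latent_dim), nn.Softmax(dim=-1)
        )
        self.decoder = nn.Linear(latent_dim, n_features)

    def forward(self, x, mask):
        # mask: 1.0 where a value is observed, 0.0 where it is missing.
        z = self.encoder(torch.cat([x, mask], dim=-1))
        a = self.attention(mask)       # mask-based attention weights
        return self.decoder(z * a)     # reweighted latent -> reconstruction

def loss_fn(x_hat, x, mask):
    # Penalise reconstruction error on observed entries only.
    return ((x_hat - x) ** 2 * mask).sum() / mask.sum()
```

In this sketch the attention vector depends only on the mask, which is the distinguishing ingredient: the same imputed value is weighted differently depending on which features were actually observed.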

The paper won the third-best paper award at ICANN 2021! It is freely accessible as a preprint: https://arxiv.org/abs/2106.16057.

Simon Tihon*, Muhammad Usama Javaid*, Damien Fourure, Nicolas Posocco, Thomas Peel, DAEMA: Denoising Autoencoder with Mask Attention, In Proc. of the 30th International Conference on Artificial Neural Networks (ICANN), 2021.

* equal contributions

Watch the presentation on YouTube.

Related Posts

Investigating a Feature Unlearning Bias Mitigation Technique for Cancer-type Bias in AutoPet Dataset

We proposed a feature unlearning technique to reduce cancer-type bias, which improved segmentation accuracy while promoting fairness across sub-groups, even with limited data.

Muppet: A Modular and Constructive Decomposition for Perturbation-based Explanation Methods

The topic of explainable AI has recently received attention, driven by a growing awareness of the need for transparent and accountable AI. In this paper, we propose a novel methodology to decompose any state-of-the-art perturbation-based explainability approach into four blocks. In addition, we provide Muppet: an open-source Python library for explainable AI.