
Multimodal Classifier For Space Target Recognition

In this paper, we propose a multimodal framework to tackle the SPARK Challenge by classifying satellites using RGB and depth images. Our framework is mainly based on autoencoders (AEs) that embed the two modalities in a common latent space in order to exploit the redundant and complementary information between the two types of data.

Ichraf Lahouli, Mahmoud Jarraya, and Gianmarco Aversano, "Multimodal Classifier For Space Target Recognition," in Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), September 2021.

Click here to access the paper.
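
To give a rough, illustrative picture of the idea described in the abstract, the sketch below shows a minimal two-branch autoencoder classifier in PyTorch: one encoder per modality maps RGB and depth inputs into a shared latent space, lightweight decoders add a reconstruction objective, and a classifier head operates on the fused embedding. All layer sizes, the fusion strategy, and the class count are placeholder assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: a two-branch autoencoder that maps RGB and depth
# images into a shared latent space and classifies from the fused embedding.
# Layer sizes, fusion strategy, and class count are assumptions, not the
# architecture reported in the paper.
import torch
import torch.nn as nn


def conv_encoder(in_channels: int, latent_dim: int) -> nn.Module:
    """Small convolutional encoder producing a latent vector."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, latent_dim),
    )


class MultimodalAEClassifier(nn.Module):
    def __init__(self, latent_dim: int = 128, num_classes: int = 10):
        super().__init__()
        # One encoder per modality, both mapping into the same latent space.
        self.rgb_encoder = conv_encoder(3, latent_dim)
        self.depth_encoder = conv_encoder(1, latent_dim)
        # Lightweight decoders used only to add a reconstruction objective.
        self.rgb_decoder = nn.Linear(latent_dim, 3 * 32 * 32)
        self.depth_decoder = nn.Linear(latent_dim, 1 * 32 * 32)
        # Classifier operates on the concatenated (fused) latent codes.
        self.classifier = nn.Linear(2 * latent_dim, num_classes)

    def forward(self, rgb, depth):
        z_rgb = self.rgb_encoder(rgb)
        z_depth = self.depth_encoder(depth)
        logits = self.classifier(torch.cat([z_rgb, z_depth], dim=1))
        rec_rgb = self.rgb_decoder(z_rgb)
        rec_depth = self.depth_decoder(z_depth)
        return logits, rec_rgb, rec_depth


if __name__ == "__main__":
    model = MultimodalAEClassifier()
    rgb = torch.randn(4, 3, 32, 32)
    depth = torch.randn(4, 1, 32, 32)
    logits, rec_rgb, rec_depth = model(rgb, depth)
    print(logits.shape)  # torch.Size([4, 10])
```

A typical training objective for this kind of model combines a cross-entropy loss on the logits with reconstruction losses on each modality, so that the shared latent space captures both discriminative and modality-specific information; the exact losses and architecture used in the paper may differ.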

Related Posts

Investigating a Feature Unlearning Bias Mitigation Technique for Cancer-type Bias in AutoPet Dataset

We proposed a feature unlearning technique to reduce cancer-type bias, which improved segmentation accuracy while promoting fairness across sub-groups, even with limited data.
Read More

Muppet: A Modular and Constructive Decomposition for Perturbation-based Explanation Methods

The topic of explainable AI has recently received attention, driven by a growing awareness of the need for transparent and accountable AI. In this paper, we propose a novel methodology to decompose any state-of-the-art perturbation-based explainability approach into four blocks. In addition, we provide Muppet, an open-source Python library for explainable AI.
Read More