
AMI-Class: Towards a Fully Automated Multi-view Image Classifier

In this paper, we propose an automated framework for multi-view image classification tasks. We combine a GAN-based multi-view embedding architecture with DeepHyper, a scalable AutoML library. The proposed framework simultaneously trains a model to find a common latent representation and perform data imputation, chooses the best classifier, and tunes all necessary hyper-parameters. Experiments on the MNIST dataset show the effectiveness of our solution in optimizing the end-to-end multi-view classification pipeline.
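To illustrate the AutoML stage described above, the sketch below uses DeepHyper to jointly select a classifier and tune its hyper-parameters over a fixed feature matrix. This is not the paper's code: the GAN-based multi-view embedding is replaced by the scikit-learn digits dataset as a stand-in for the learned latent representation, the `run_pipeline` objective and the search space are illustrative choices, and the snippet assumes a DeepHyper release (around 0.4) where the run function receives a plain configuration dictionary; the API differs across versions.

```python
# Illustrative sketch of the AutoML stage only (assumed DeepHyper ~0.4 API).
# The multi-view GAN embedding from the paper is replaced by a placeholder
# feature matrix; `run_pipeline` is a hypothetical objective, not the authors' code.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

from deephyper.problem import HpProblem
from deephyper.evaluator import Evaluator
from deephyper.search.hps import CBO

# Stand-in for the common latent representation produced by the embedding model.
X, y = load_digits(return_X_y=True)

# Joint search space: which classifier to use and its hyper-parameters.
problem = HpProblem()
problem.add_hyperparameter(["logreg", "random_forest"], "classifier")
problem.add_hyperparameter((1e-3, 1e2, "log-uniform"), "C")      # used by logreg
problem.add_hyperparameter((10, 200), "n_estimators")            # used by random_forest

def run_pipeline(config):
    """Objective to maximise: cross-validated accuracy of the selected classifier."""
    if config["classifier"] == "logreg":
        model = LogisticRegression(C=config["C"], max_iter=1000)
    else:
        model = RandomForestClassifier(n_estimators=config["n_estimators"])
    return cross_val_score(model, X, y, cv=3).mean()

if __name__ == "__main__":
    evaluator = Evaluator.create(run_pipeline, method="serial")
    search = CBO(problem, evaluator)
    results = search.search(max_evals=20)  # DataFrame of evaluated configurations
    print(results.sort_values("objective", ascending=False).head())
```

In the full framework, the objective would instead train the multi-view embedding and downstream classifier end to end, so that representation learning, imputation, classifier choice, and hyper-parameter tuning are optimized within a single search.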

Mahmoud Jarraya, Maher Marwani, Gianmarco Aversano, Ichraf Lahouli and Sabri Skhiri, "AMI-Class: Towards a Fully Automated Multi-view Image Classifier", in Proc. of the 19th International Conference on Computer Analysis of Images and Patterns (CAIP 2021), September 2021.

Click here to access the paper.
