
Coherence Regularization for Neural Topic Models

Neural topic models are trained to reconstruct the words of a document from the document itself. In such models, perplexity serves as the training criterion, while the final quality measure is topic coherence. In this work, we introduce a coherence regularization loss that penalizes incoherent topics during training. We evaluate our approach using coherence together with an additional metric, exclusivity, which captures the uniqueness of terms across topics. We argue that this combination of metrics is an adequate indicator of model quality. Our results demonstrate the effectiveness of the proposed loss and its potential for use in future neural topic models.
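For intuition, here is a minimal sketch of how such a coherence penalty could be attached to a neural topic model's training loss. This is not the paper's exact formulation (see the publication for that); the names `beta`, `npmi`, `top_k`, and `lam` are illustrative assumptions, with `npmi` standing for a pairwise NPMI matrix precomputed from a reference corpus.

```python
import torch

def coherence_regularizer(beta, npmi, top_k=10):
    """Illustrative penalty for topics whose top words rarely co-occur.

    beta : (num_topics, vocab_size) tensor of topic-word weights
    npmi : (vocab_size, vocab_size) pairwise NPMI scores, precomputed
           from a reference corpus (an assumption of this sketch)
    """
    vals, idx = beta.topk(top_k, dim=1)        # top-k words per topic
    reg = beta.new_zeros(())
    for w, words in zip(vals, idx):
        pair_npmi = npmi[words][:, words]      # (top_k, top_k) NPMI block
        weights = torch.outer(w, w)            # keeps the penalty differentiable in beta
        coherence = (weights * pair_npmi).sum() / weights.sum()
        reg = reg + (1.0 - coherence)          # low coherence -> large penalty
    return reg / beta.size(0)

# Hypothetical usage, added to a standard variational topic-model objective:
# loss = reconstruction_loss + kl_divergence + lam * coherence_regularizer(beta, npmi)
```

Weighting the NPMI block by the topic-word probabilities keeps the penalty trainable even though the top-k selection itself is non-differentiable; this is one plausible design choice, not necessarily the one taken in the paper.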

The paper will be presented at the 16th International Symposium on Neural Networks (ISNN 2019), taking place in Moscow. In the meantime, do not hesitate to contact our R&D department at research@euranova.eu to discuss how you can leverage neural topic models in your projects.

Katsiaryna Krasnashchok and Aymen Cherif, "Coherence Regularization for Neural Topic Models", in Proceedings of the 16th International Symposium on Neural Networks (ISNN 2019).

Click here to access the paper.

Related Posts

The Building Blocks of a Responsible AI Practice: An Outlook on the Current Landscape

Responsible AI comes with the challenge of implementation. This survey aims to bridge the gap between principles and practice through a study of the different approaches taken in the literature and the proposal of a foundational framework.
Read More

TS-Relax: Interpreting Learned Representations for Time Series

Representation learning models are increasingly used, but explainable and trustworthy AI models are needed. This work presents the adaptation to time series of a representation interpretation method originally designed for images.
Read More