Coherence Regularization for Neural Topic Models

Neural topic models aim to predict the words of a document given the document itself. In such models, perplexity is used as the training criterion, whereas the final quality measure is topic coherence. In this work, we introduce a coherence regularization loss that penalizes incoherent topics during the training of the model. We analyze our approach using coherence together with an additional metric, exclusivity, which measures the uniqueness of the terms across topics. We argue that this combination of metrics is an adequate indicator of model quality. Our results demonstrate the effectiveness of our loss and its potential for use in future neural topic models.
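The paper's exact regularizer is not reproduced here, but the idea of scoring topic coherence from word co-occurrence can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy implementation: it uses NPMI (normalized pointwise mutual information) over the top words of each topic, and the function names (`npmi`, `coherence_penalty`) and the document-word matrix setup are illustrative, not taken from the paper. A penalty of this form could in principle be added to a neural topic model's training loss, weighted by a hyperparameter.

```python
import numpy as np

def npmi(i, j, doc_word, eps=1e-12):
    """NPMI of words i and j, with probabilities estimated as
    the fraction of documents in which each word appears."""
    present_i = doc_word[:, i] > 0
    present_j = doc_word[:, j] > 0
    p_i = present_i.mean()
    p_j = present_j.mean()
    p_ij = (present_i & present_j).mean()
    pmi = np.log((p_ij + eps) / (p_i * p_j + eps))
    # Normalize to [-1, 1]: 1 = always co-occur, -1 = never co-occur.
    return pmi / -np.log(p_ij + eps)

def coherence_penalty(topic_word, doc_word, top_n=3):
    """Mean (1 - NPMI) over the top-word pairs of each topic.
    Lower values mean more coherent topics, so this can act as
    a regularization term to be minimized alongside the main loss."""
    penalties = []
    for topic in topic_word:
        top = np.argsort(topic)[::-1][:top_n]
        pairs = [(top[a], top[b])
                 for a in range(top_n) for b in range(a + 1, top_n)]
        penalties.append(np.mean([1.0 - npmi(i, j, doc_word)
                                  for i, j in pairs]))
    return float(np.mean(penalties))

# Toy corpus: words 0 and 1 always co-occur; words 2 and 3 form a
# second cluster that never mixes with the first.
doc_word = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1]])

coherent_topic = np.array([[0.9, 0.9, 0.05, 0.05]])    # top words co-occur
incoherent_topic = np.array([[0.9, 0.05, 0.9, 0.05]])  # top words never co-occur
```

On this toy corpus, the coherent topic receives a near-zero penalty while the incoherent one is penalized heavily, which is the gradient signal a coherence regularizer relies on. Note that a differentiable variant would be needed inside an actual training loop; the hard `argsort` top-word selection above is only for scoring.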

The paper will be presented at the 16th International Symposium on Neural Networks, taking place in Moscow. In the meantime, do not hesitate to contact our R&D department at research@euranova.eu to discuss how you can leverage neural topic models in your projects.

Katsiaryna Krasnashchok, Aymen Cherif. "Coherence Regularization for Neural Topic Models." In: 16th International Symposium on Neural Networks (ISNN 2019).

Click here to access the paper.

Related Posts

Insights From Flink Forward 2024

In October, our CTO Sabri Skhiri attended the Flink Forward conference, held in Berlin, which marked the 10-year anniversary of Apache Flink. This event brought together experts and enthusiasts in the field of stream processing to discuss the latest advancements, challenges, and future trends. In this article, Sabri will delve into some of the keynotes and talks that took place during the conference, highlighting the noteworthy insights and innovations shared by Ververica and industry leaders.
Read More

Internships 2025

This document presents internships supervised by our consulting department or by our research & development department. Each project is an opportunity to feel both empowered and responsible for your own professional development and for your contribution to the company.
Read More