
A Performance Prediction Model for Spark Applications

Apache Spark is a popular open-source distributed-processing framework that enables efficient processing of massive amounts of data. It exposes a large number of parameters that must be tuned to obtain the best performance, and tuning them manually is a complex and time-consuming task. A robust performance model that predicts application execution time could therefore greatly accelerate the deployment and optimization of big data applications built on Spark. In this paper, we ran extensive experiments on a selected set of Spark applications covering the most common workloads to generate a representative dataset of execution times. In addition, we extracted application and data features to build a machine-learning-based performance model that predicts the execution time of Spark applications. The experiments show that boosting algorithms achieved better results than the other algorithms evaluated.

Florian Demesmaeker, Amine Ghrab, Usama Javaid, Ahmed Amir Kanoun, A Performance Prediction Model for Spark Applications, in the proceedings of the Big Data Congress 2020.

Click here to access the paper in its preprint form.
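To illustrate the kind of model the paper describes, the sketch below trains a boosting regressor on a table of per-run application and data features to predict execution time. The feature names, the CSV file, and the choice of scikit-learn's GradientBoostingRegressor are illustrative assumptions, not the paper's actual dataset, feature set, or implementation.

```python
# Minimal sketch: predict Spark application execution time from
# application and data features using a boosting regressor.
# The dataset file and feature names below are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical dataset: one row per Spark run, with configuration
# and data characteristics as features and measured runtime as target.
df = pd.read_csv("spark_runs.csv")
features = ["input_size_gb", "num_executors", "executor_cores",
            "executor_memory_gb", "shuffle_partitions"]
X, y = df[features], df["execution_time_s"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# A boosting model, since this family performed best in the paper's experiments.
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=4, random_state=42)
model.fit(X_train, y_train)

print("MAE (seconds):", mean_absolute_error(y_test, model.predict(X_test)))
```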
