Distributed Frank-Wolfe under Pipelined Stale Synchronous Parallelism

We are witnessing a move towards data center operating systems (OS), where resources are unified and processing frameworks coexist with one another. In this context, it has been shown that an iteration model with relaxed consistency, such as the Stale Synchronous Parallel (SSP) model, can cope with the straggler problem for iterative convergent algorithms while still guaranteeing convergence. In this poster we present a model for integrating SSP into a pipelined processing framework. We then apply SSP to a distributed version of the Frank-Wolfe algorithm and empirically show its convergence under stress conditions similar to those encountered in a data center OS.
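To make the two ingredients concrete, the sketch below (not taken from the poster; the toy objective, the ℓ1-ball feasible set, the variable names, and the staleness bound are illustrative assumptions) pairs a textbook Frank-Wolfe iteration with the SSP bounded-staleness rule: a worker may advance to its next iteration only if the slowest worker is at most a fixed number of iterations behind.

```python
# Illustrative sketch only: a textbook Frank-Wolfe step over the l1 ball plus an
# SSP-style bounded-staleness check. Names and the toy objective are assumptions,
# not the implementation described in the poster.
import numpy as np

def frank_wolfe_step(x, grad, t, radius=1.0):
    """One Frank-Wolfe update over the l1 ball of the given radius."""
    # Linear minimization oracle for the l1 ball: the minimizing vertex is
    # -radius * sign(grad_i) * e_i at the coordinate with the largest |grad_i|.
    i = int(np.argmax(np.abs(grad)))
    vertex = np.zeros_like(x)
    vertex[i] = -radius * np.sign(grad[i])
    gamma = 2.0 / (t + 2.0)  # standard diminishing step size
    return (1.0 - gamma) * x + gamma * vertex

def ssp_can_proceed(my_clock, all_clocks, staleness):
    """SSP rule: a worker at iteration my_clock may proceed only if the
    slowest worker is no more than `staleness` iterations behind."""
    return my_clock - min(all_clocks) <= staleness

# Toy usage: minimize 0.5 * ||A x - b||^2 over the l1 ball.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
x = np.zeros(10)
for t in range(50):
    grad = A.T @ (A @ x - b)
    x = frank_wolfe_step(x, grad, t)

# SSP check example: a worker at iteration 7 with peers at [4, 7, 5] and staleness 3.
assert ssp_can_proceed(7, [4, 7, 5], staleness=3)
```

In a pipelined SSP setting, such a check would gate each worker's access to the shared iterate rather than a local loop; the poster's actual integration may differ.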

 

Thomas Peel and Nam-Luc Tran, Distributed Frank-Wolfe under Pipelined Stale Synchronous Parallelism, poster at the Greed is Great ICML'15 Workshop, Lille, France, July 2015.

Related Posts

Investigating a Feature Unlearning Bias Mitigation Technique for Cancer-type Bias in AutoPet Dataset

We propose a feature unlearning technique to reduce cancer-type bias, which improves segmentation accuracy while promoting fairness across subgroups, even with limited data.

Muppet: A Modular and Constructive Decomposition for Perturbation-based Explanation Methods

The topic of explainable AI has recently received attention driven by a growing awareness of the need for transparent and accountable AI. In this paper, we propose a novel methodology to decompose any state-of-the-art perturbation-based explainability approach into four blocks. In addition, we provide Muppet: an open-source Python library for explainable AI.