---
res:
  bibo_abstract:
    - The ability to leverage large-scale hardware parallelism has been one of the key enablers of the accelerated recent progress in machine learning. Consequently, there has been considerable effort invested into developing efficient parallel variants of classic machine learning algorithms. However, despite the wealth of knowledge on parallelization, some classic machine learning algorithms often prove hard to parallelize efficiently while maintaining convergence. In this paper, we focus on efficient parallel algorithms for the key machine learning task of inference on graphical models, in particular on the fundamental belief propagation algorithm. We address the challenge of efficiently parallelizing this classic paradigm by showing how to leverage scalable relaxed schedulers in this context. We present an extensive empirical study, showing that our approach outperforms previous parallel belief propagation implementations both in terms of scalability and in terms of wall-clock convergence time, on a range of practical applications.@eng
  bibo_authorlist:
    - foaf_Person:
        foaf_givenName: Vitaly
        foaf_name: Aksenov, Vitaly
        foaf_surname: Aksenov
    - foaf_Person:
        foaf_givenName: Dan-Adrian
        foaf_name: Alistarh, Dan-Adrian
        foaf_surname: Alistarh
        foaf_workInfoHomepage: http://www.librecat.org/personId=4A899BFC-F248-11E8-B48F-1D18A9856A87
        orcid: 0000-0003-3650-940X
    - foaf_Person:
        foaf_givenName: Janne
        foaf_name: Korhonen, Janne
        foaf_surname: Korhonen
        foaf_workInfoHomepage: http://www.librecat.org/personId=C5402D42-15BC-11E9-A202-CA2BE6697425
  bibo_volume: 33
  dct_date: 2020^xs_gYear
  dct_isPartOf:
    - http://id.crossref.org/issn/10495258
    - http://id.crossref.org/issn/9781713829546
  dct_language: eng
  dct_publisher: Curran Associates@
  dct_title: Scalable belief propagation via relaxed scheduling@
...