Relaxed schedulers can efficiently parallelize iterative algorithms

Alistarh D-A, Brown TA, Kopinsky J, Nadiradze G. 2018. Relaxed schedulers can efficiently parallelize iterative algorithms. Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing - PODC ’18. PODC: Principles of Distributed Computing, 377–386.


Conference Paper | Published | English

Scopus indexed
Author
Alistarh, Dan-Adrian (ISTA); Brown, Trevor A (ISTA); Kopinsky, Justin; Nadiradze, Giorgi
Department
Abstract
There has been significant progress in understanding the parallelism inherent to iterative sequential algorithms: for many classic algorithms, the depth of the dependence structure is now well understood, and scheduling techniques have been developed to exploit this shallow dependence structure for efficient parallel implementations. A related, applied research strand has studied methods by which certain iterative task-based algorithms can be efficiently parallelized via relaxed concurrent priority schedulers. These allow for high concurrency when inserting and removing tasks, at the cost of executing superfluous work due to the relaxed semantics of the scheduler. In this work, we take a step towards unifying these two research directions, by showing that there exists a family of relaxed priority schedulers that can efficiently and deterministically execute classic iterative algorithms such as greedy maximal independent set (MIS) and matching. Our primary result shows that, given a randomized scheduler with an expected relaxation factor of k in terms of the maximum allowed priority inversions on a task, and any graph on n vertices, the scheduler is able to execute greedy MIS with only an additive factor of poly(k) expected additional iterations compared to an exact (but not scalable) scheduler. This counter-intuitive result demonstrates that the overhead of relaxation when computing MIS is not dependent on the input size or structure of the input graph. Experimental results show that this overhead can be clearly offset by the gain in performance due to the highly scalable scheduler. In sum, we present an efficient method to deterministically parallelize iterative sequential algorithms, with provable runtime guarantees in terms of the number of executed tasks to completion.
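
To make the scheduling model concrete, the following is a minimal Python sketch, not the paper's implementation: it simulates a k-relaxed priority scheduler that returns a uniformly random vertex among the k highest-priority pending ones, runs sequential greedy MIS under that scheduler, and counts the "wasted" iterations caused by priority inversions. The function name greedy_mis_relaxed, the uniform-top-k relaxation model, and the graph representation are illustrative assumptions; the paper's schedulers (e.g. MultiQueue-style structures) and its analysis differ in detail.

import random

def greedy_mis_relaxed(adj, priority, k, rng):
    # adj: list of neighbour sets; priority[v]: distinct rank of v (lower = earlier);
    # k: relaxation factor of the simulated scheduler; rng: random.Random instance.
    n = len(adj)
    pending = sorted(range(n), key=lambda v: priority[v])  # undecided vertices, best first
    in_mis, decided = set(), set()
    wasted = 0
    while pending:
        # k-relaxed pop: the scheduler may return any of the k highest-priority
        # pending vertices; here it chooses among them uniformly at random.
        idx = rng.randrange(min(k, len(pending)))
        v = pending[idx]
        # v can only be decided once all of its higher-priority neighbours are decided.
        if any(priority[u] < priority[v] and u not in decided for u in adj[v]):
            wasted += 1  # priority inversion: this pop was superfluous, v is retried later
            continue
        pending.pop(idx)
        decided.add(v)
        if all(u not in in_mis for u in adj[v]):
            in_mis.add(v)
    return in_mis, wasted

if __name__ == "__main__":
    rng = random.Random(0)
    n = 500
    adj = [set() for _ in range(n)]
    for _ in range(2000):  # random sparse graph (illustrative input only)
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            adj[a].add(b); adj[b].add(a)
    priority = {v: i for i, v in enumerate(rng.sample(range(n), n))}  # random total order
    for k in (1, 8, 64):
        mis, wasted = greedy_mis_relaxed(adj, priority, k, random.Random(1))
        print(f"k={k:3d}: |MIS|={len(mis)}, wasted iterations={wasted}")

With k = 1 the simulated scheduler is exact and no iterations are wasted; increasing k trades extra iterations for the concurrency a relaxed priority queue would provide, which is the trade-off the abstract quantifies as a poly(k) additive overhead independent of n and of the graph structure.
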
Publishing Year
2018
Date Published
2018-07-23
Proceedings Title
Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing - PODC '18
Page
377-386
Conference
PODC: Principles of Distributed Computing
Conference Location
Egham, United Kingdom
Conference Date
2018-07-23 – 2018-07-27
IST-REx-ID

Cite this

Alistarh D-A, Brown TA, Kopinsky J, Nadiradze G. Relaxed schedulers can efficiently parallelize iterative algorithms. In: Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing - PODC ’18. ACM Press; 2018:377-386. doi:10.1145/3212734.3212756
Alistarh, D.-A., Brown, T. A., Kopinsky, J., & Nadiradze, G. (2018). Relaxed schedulers can efficiently parallelize iterative algorithms. In Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing - PODC ’18 (pp. 377–386). Egham, United Kingdom: ACM Press. https://doi.org/10.1145/3212734.3212756
Alistarh, Dan-Adrian, Trevor A Brown, Justin Kopinsky, and Giorgi Nadiradze. “Relaxed Schedulers Can Efficiently Parallelize Iterative Algorithms.” In Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing - PODC ’18, 377–86. ACM Press, 2018. https://doi.org/10.1145/3212734.3212756.
D.-A. Alistarh, T. A. Brown, J. Kopinsky, and G. Nadiradze, “Relaxed schedulers can efficiently parallelize iterative algorithms,” in Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing - PODC ’18, Egham, United Kingdom, 2018, pp. 377–386.
Alistarh D-A, Brown TA, Kopinsky J, Nadiradze G. 2018. Relaxed schedulers can efficiently parallelize iterative algorithms. Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing - PODC ’18. PODC: Principles of Distributed Computing, 377–386.
Alistarh, Dan-Adrian, et al. “Relaxed Schedulers Can Efficiently Parallelize Iterative Algorithms.” Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing - PODC ’18, ACM Press, 2018, pp. 377–86, doi:10.1145/3212734.3212756.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]

Link(s) to Main File(s)
Access Level
Open Access


Sources

arXiv 1808.04155
