Taming unbalanced training workloads in deep learning with partial collective operations

Li S, Ben-Nun T, Di Girolamo S, Alistarh D-A, Hoefler T. 2020. Taming unbalanced training workloads in deep learning with partial collective operations. Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. PPoPP: Symposium on Principles and Practice of Parallel Programming, 45–61.


Conference Paper | Published | English
Author
Li, Shigang; Ben-Nun, Tal; Di Girolamo, Salvatore; Alistarh, Dan-Adrian (ISTA); Hoefler, Torsten
Department
Abstract
Load imbalance is pervasive in distributed deep learning training systems, caused either by inherent imbalance in the learned tasks or by the system itself. Traditional synchronous Stochastic Gradient Descent (SGD) achieves good accuracy for a wide variety of tasks, but relies on global synchronization to accumulate the gradients at every training step. In this paper, we propose eager-SGD, which relaxes global synchronization in favor of decentralized accumulation. To implement eager-SGD, we propose two partial collectives: solo and majority. With solo allreduce, the faster processes contribute their gradients eagerly without waiting for the slower processes, whereas with majority allreduce, at least half of the participants must contribute gradients before continuing, all without using a central parameter server. We theoretically prove the convergence of the algorithms and describe the partial collectives in detail. Experimental results in load-imbalanced environments (CIFAR-10, ImageNet, and UCF101 datasets) show that eager-SGD achieves a 1.27x speedup over state-of-the-art synchronous SGD without losing accuracy.
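As a rough illustration of the quorum idea behind the solo and majority partial collectives described in the abstract, the Python sketch below (a conceptual simulation, not the authors' implementation; the names PartialAllreduce and contribute are hypothetical) lets a reduction round complete as soon as a configurable number of workers have contributed, so that gradients from stragglers fold into a later round. Setting quorum=1 loosely mimics solo allreduce, and quorum=ceil(P/2) loosely mimics majority allreduce.

    # Minimal conceptual sketch of a threshold-based "partial allreduce":
    # a round completes once `quorum` of the P workers have contributed;
    # late gradients are accumulated into the next round instead.
    import threading
    import numpy as np

    class PartialAllreduce:
        def __init__(self, num_workers, quorum, grad_size):
            self.P = num_workers
            self.quorum = quorum            # 1 ~ "solo", ceil(P/2) ~ "majority"
            self.cond = threading.Condition()
            self.round = 0                  # index of the round being filled
            self.acc = np.zeros(grad_size)  # gradients accumulated this round
            self.count = 0                  # contributions this round
            self.result = np.zeros(grad_size)

        def contribute(self, grad):
            """Add this worker's gradient; return the averaged result of the
            round that closes after (or because of) this contribution."""
            with self.cond:
                self.acc += grad
                self.count += 1
                my_round = self.round
                if self.count >= self.quorum:
                    # Quorum reached: publish the partial average, open a new round.
                    self.result = self.acc / self.count
                    self.acc = np.zeros_like(self.acc)
                    self.count = 0
                    self.round += 1
                    self.cond.notify_all()
                else:
                    # Wait until some worker closes this round; workers that
                    # arrive after it closed contribute to a later round.
                    while self.round == my_round:
                        self.cond.wait()
                return self.result

In a real deployment the accumulation would happen via decentralized communication (e.g., one-sided or non-blocking collectives) rather than a shared in-memory object; this sketch only shows the quorum semantics.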
Publishing Year
Date Published
2020-02-01
Proceedings Title
Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
Page
45-61
Conference
PPoPP: Symposium on Principles and Practice of Parallel Programming
Conference Location
San Diego, CA, United States
Conference Date
2020-02-22 – 2020-02-26
IST-REx-ID

Cite this

Li S, Ben-Nun T, Di Girolamo S, Alistarh D-A, Hoefler T. Taming unbalanced training workloads in deep learning with partial collective operations. In: Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. Association for Computing Machinery; 2020:45-61. doi:10.1145/3332466.3374528
Li, S., Ben-Nun, T., Di Girolamo, S., Alistarh, D.-A., & Hoefler, T. (2020). Taming unbalanced training workloads in deep learning with partial collective operations. In Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (pp. 45–61). San Diego, CA, United States: Association for Computing Machinery. https://doi.org/10.1145/3332466.3374528
Li, Shigang, Tal Ben-Nun, Salvatore Di Girolamo, Dan-Adrian Alistarh, and Torsten Hoefler. “Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations.” In Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 45–61. Association for Computing Machinery, 2020. https://doi.org/10.1145/3332466.3374528.
S. Li, T. Ben-Nun, S. Di Girolamo, D.-A. Alistarh, and T. Hoefler, “Taming unbalanced training workloads in deep learning with partial collective operations,” in Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, San Diego, CA, United States, 2020, pp. 45–61.
Li S, Ben-Nun T, Di Girolamo S, Alistarh D-A, Hoefler T. 2020. Taming unbalanced training workloads in deep learning with partial collective operations. Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. PPoPP: Symposium on Principles and Practice of Parallel Programming, 45–61.
Li, Shigang, et al. “Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations.” Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Association for Computing Machinery, 2020, pp. 45–61, doi:10.1145/3332466.3374528.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]

Link(s) to Main File(s)
Access Level
OA Open Access

Sources

arXiv 1908.04207
