{"author":[{"full_name":"Li, Shigang","last_name":"Li","first_name":"Shigang"},{"last_name":"Tal Ben-Nun","full_name":"Tal Ben-Nun, Tal Ben-Nun","first_name":"Tal Ben-Nun"},{"first_name":"Salvatore Di","full_name":"Girolamo, Salvatore Di","last_name":"Girolamo"},{"orcid":"0000-0003-3650-940X","last_name":"Alistarh","full_name":"Alistarh, Dan-Adrian","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87","first_name":"Dan-Adrian"},{"last_name":"Hoefler","full_name":"Hoefler, Torsten","first_name":"Torsten"}],"year":"2020","type":"conference","page":"45-61","date_updated":"2023-08-22T12:13:48Z","month":"02","date_created":"2020-11-05T15:25:30Z","department":[{"_id":"DaAl"}],"title":"Taming unbalanced training workloads in deep learning with partial collective operations","_id":"8722","publication_status":"published","oa":1,"oa_version":"Preprint","isi":1,"main_file_link":[{"open_access":"1","url":"https://arxiv.org/abs/1908.04207"}],"project":[{"grant_number":"805223","name":"Elastic Coordination for Scalable Machine Learning","call_identifier":"H2020","_id":"268A44D6-B435-11E9-9278-68D0E5697425"}],"date_published":"2020-02-01T00:00:00Z","status":"public","ec_funded":1,"day":"01","publication":"Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming","external_id":{"isi":["000564476500004"],"arxiv":["1908.04207"]},"quality_controlled":"1","user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","language":[{"iso":"eng"}],"article_processing_charge":"No","publisher":"Association for Computing Machinery","abstract":[{"lang":"eng","text":"Load imbalance pervasively exists in distributed deep learning training systems, either caused by the inherent imbalance in learned tasks or by the system itself. Traditional synchronous Stochastic Gradient Descent (SGD)\r\nachieves good accuracy for a wide variety of tasks, but relies on global synchronization to accumulate the gradients at every training step. In this paper, we propose eager-SGD, which relaxes the global synchronization for\r\ndecentralized accumulation. To implement eager-SGD, we propose to use two partial collectives: solo and majority. With solo allreduce, the faster processes contribute their gradients eagerly without waiting for the slower processes, whereas with majority allreduce, at least half of the participants must contribute gradients before continuing, all without using a central parameter server. We theoretically prove the convergence of the algorithms and describe the partial collectives in detail. Experimental results on load-imbalanced environments (CIFAR-10, ImageNet, and UCF101 datasets) show\r\nthat eager-SGD achieves 1.27x speedup over the state-of-the-art synchronous SGD, without losing accuracy."}],"citation":{"chicago":"Li, Shigang, Tal Ben-Nun Tal Ben-Nun, Salvatore Di Girolamo, Dan-Adrian Alistarh, and Torsten Hoefler. “Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations.” In Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 45–61. Association for Computing Machinery, 2020. https://doi.org/10.1145/3332466.3374528.","ama":"Li S, Tal Ben-Nun TB-N, Girolamo SD, Alistarh D-A, Hoefler T. Taming unbalanced training workloads in deep learning with partial collective operations. In: Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. Association for Computing Machinery; 2020:45-61. doi:10.1145/3332466.3374528","ieee":"S. Li, T. B.-N. Tal Ben-Nun, S. D. Girolamo, D.-A. 
Alistarh, and T. Hoefler, “Taming unbalanced training workloads in deep learning with partial collective operations,” in Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, San Diego, CA, United States, 2020, pp. 45–61.","apa":"Li, S., Tal Ben-Nun, T. B.-N., Girolamo, S. D., Alistarh, D.-A., & Hoefler, T. (2020). Taming unbalanced training workloads in deep learning with partial collective operations. In Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (pp. 45–61). San Diego, CA, United States: Association for Computing Machinery. https://doi.org/10.1145/3332466.3374528","short":"S. Li, T.B.-N. Tal Ben-Nun, S.D. Girolamo, D.-A. Alistarh, T. Hoefler, in:, Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Association for Computing Machinery, 2020, pp. 45–61.","ista":"Li S, Tal Ben-Nun TB-N, Girolamo SD, Alistarh D-A, Hoefler T. 2020. Taming unbalanced training workloads in deep learning with partial collective operations. Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. PPoPP: Sympopsium on Principles and Practice of Parallel Programming, 45–61.","mla":"Li, Shigang, et al. “Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations.” Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Association for Computing Machinery, 2020, pp. 45–61, doi:10.1145/3332466.3374528."},"conference":{"start_date":"2020-02-22","name":"PPoPP: Sympopsium on Principles and Practice of Parallel Programming","end_date":"2020-02-26","location":"San Diego, CA, United States"},"doi":"10.1145/3332466.3374528"}
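
The abstract above describes two partial collectives, solo and majority allreduce, whose only difference is the quorum required before the reduction proceeds. The listing below is a minimal, self-contained Python sketch of just that quorum semantics, under the assumption that gradients of processes that miss the collective are simply left out of the current step; it is an illustration, not the paper's MPI-based implementation, and the function name partial_allreduce, the arrived flags, and the averaging-by-n choice are hypothetical.

# Toy model of the quorum semantics of solo vs. majority allreduce.
# NOT the paper's implementation: process arrival is simulated with booleans,
# and how late/stale gradients are folded back in later is not modeled here.
import numpy as np

def partial_allreduce(grads, arrived, mode="majority"):
    """Sum only the gradients of processes that have arrived.

    grads   : list of per-process gradient vectors
    arrived : list of bools, True if that process reached the collective in time
    mode    : "solo"     -> proceed as soon as at least one process arrives
              "majority" -> proceed once at least half of the processes arrive
    """
    n = len(grads)
    quorum = 1 if mode == "solo" else (n + 1) // 2  # "at least half"
    contributors = [i for i, a in enumerate(arrived) if a]
    assert len(contributors) >= quorum, "collective would still be waiting"
    total = np.zeros_like(grads[0])
    for i in contributors:
        total += grads[i]
    # Every process receives the same partial sum and applies the same update,
    # which keeps the replicas consistent without a central parameter server.
    # Averaging over all n processes is an illustrative choice only.
    return total / n

# Example: 4 processes, process 3 is a straggler at this step.
grads = [np.full(3, float(i + 1)) for i in range(4)]
arrived = [True, True, True, False]
print(partial_allreduce(grads, arrived, mode="majority"))

With mode="solo" the reduction would have proceeded even if only one process had arrived, matching the eager behavior described in the abstract; mode="majority" trades some waiting for a larger contributed fraction of the gradients.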