{"external_id":{"arxiv":["1803.08917"],"isi":["000461823304061"]},"author":[{"first_name":"Dan-Adrian","orcid":"0000-0003-3650-940X","last_name":"Alistarh","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87","full_name":"Alistarh, Dan-Adrian"},{"last_name":"Allen-Zhu","first_name":"Zeyuan","full_name":"Allen-Zhu, Zeyuan"},{"last_name":"Li","first_name":"Jerry","full_name":"Li, Jerry"}],"date_updated":"2023-09-19T15:12:45Z","user_id":"c635000d-4b10-11ee-a964-aac5a93f6ac1","year":"2018","_id":"6558","quality_controlled":"1","isi":1,"publication_status":"published","volume":2018,"scopus_import":"1","main_file_link":[{"url":"https://arxiv.org/abs/1803.08917","open_access":"1"}],"publication":"Advances in Neural Information Processing Systems","month":"12","citation":{"short":"D.-A. Alistarh, Z. Allen-Zhu, J. Li, in:, Advances in Neural Information Processing Systems, Neural Information Processing Systems Foundation, 2018, pp. 4613–4623.","ieee":"D.-A. Alistarh, Z. Allen-Zhu, and J. Li, “Byzantine stochastic gradient descent,” in Advances in Neural Information Processing Systems, Montreal, Canada, 2018, vol. 2018, pp. 4613–4623.","apa":"Alistarh, D.-A., Allen-Zhu, Z., & Li, J. (2018). Byzantine stochastic gradient descent. In Advances in Neural Information Processing Systems (Vol. 2018, pp. 4613–4623). Montreal, Canada: Neural Information Processing Systems Foundation.","ama":"Alistarh D-A, Allen-Zhu Z, Li J. Byzantine stochastic gradient descent. In: Advances in Neural Information Processing Systems. Vol 2018. Neural Information Processing Systems Foundation; 2018:4613-4623.","chicago":"Alistarh, Dan-Adrian, Zeyuan Allen-Zhu, and Jerry Li. “Byzantine Stochastic Gradient Descent.” In Advances in Neural Information Processing Systems, 2018:4613–23. Neural Information Processing Systems Foundation, 2018.","mla":"Alistarh, Dan-Adrian, et al. “Byzantine Stochastic Gradient Descent.” Advances in Neural Information Processing Systems, vol. 2018, Neural Information Processing Systems Foundation, 2018, pp. 4613–23.","ista":"Alistarh D-A, Allen-Zhu Z, Li J. 2018. Byzantine stochastic gradient descent. Advances in Neural Information Processing Systems. NeurIPS: Conference on Neural Information Processing Systems vol. 2018, 4613–4623."},"status":"public","date_created":"2019-06-13T08:22:37Z","language":[{"iso":"eng"}],"article_processing_charge":"No","title":"Byzantine stochastic gradient descent","page":"4613-4623","day":"01","intvolume":" 2018","type":"conference","publisher":"Neural Information Processing Systems Foundation","department":[{"_id":"DaAl"}],"oa":1,"abstract":[{"lang":"eng","text":"This paper studies the problem of distributed stochastic optimization in an adversarial setting where, out of m machines which allegedly compute stochastic gradients every iteration, an α-fraction are Byzantine, and may behave adversarially. Our main result is a variant of stochastic gradient descent (SGD) which finds ε-approximate minimizers of convex functions in T=O~(1/ε²m+α²/ε²) iterations. In contrast, traditional mini-batch SGD needs T=O(1/ε²m) iterations, but cannot tolerate Byzantine failures. Further, we provide a lower bound showing that, up to logarithmic factors, our algorithm is information-theoretically optimal both in terms of sample complexity and time complexity."}],"conference":{"name":"NeurIPS: Conference on Neural Information Processing Systems","end_date":"2018-12-08","start_date":"2018-12-02","location":"Montreal, Canada"},"date_published":"2018-12-01T00:00:00Z","oa_version":"Published Version"}