ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning

H. Zhang, J. Li, K. Kara, D.-A. Alistarh, J. Liu, C. Zhang, in: Proceedings of Machine Learning Research, PMLR, 2017, pp. 4035–4043.

OA 2017_ICML_Zhang.pdf 849.35 KB
Conference Paper | Published | English

Scopus indexed
Author
Zhang, Hantian; Li, Jerry; Kara, Kaan; Alistarh, Dan-Adrian (IST Austria); Liu, Ji; Zhang, Ce
Series Title
PMLR Press
Abstract
Recently there has been significant interest in training machine-learning models at low precision: by reducing precision, one can reduce computation and communication by one order of magnitude. We examine training at reduced precision, both from a theoretical and practical perspective, and ask: is it possible to train models at end-to-end low precision with provable guarantees? Can this lead to consistent order-of-magnitude speedups? We mainly focus on linear models, for which the answer is yes. We develop a simple framework called ZipML based on one simple but novel strategy called double sampling. Our ZipML framework is able to execute training at low precision with no bias, guaranteeing convergence, whereas naive quantization would introduce significant bias. We validate our framework across a range of applications, and show that it enables an FPGA prototype that is up to 6.5× faster than an implementation using full 32-bit precision. We further develop a variance-optimal stochastic quantization strategy and show that it can make a significant difference in a variety of settings. When applied to linear models together with double sampling, we save up to another 1.7× in data movement compared with uniform quantization. When training deep networks with quantized models, we achieve higher accuracy than the state-of-the-art XNOR-Net.
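The double-sampling idea summarized in the abstract can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions (a fixed [-1, 1] quantization range and a squared loss), not the paper's implementation: unbiased stochastic rounding gives E[Q(v)] = v, and quantizing the data vector twice independently keeps the least-squares gradient estimate unbiased, where a single quantization used twice would not.

```python
import numpy as np

def stochastic_quantize(v, levels, rng):
    """Round each coordinate of v onto a uniform grid of `levels` points
    in [-1, 1], randomly up or down with probability proportional to the
    distance, so that E[Q(v)] = v (unbiased stochastic rounding)."""
    lo, hi = -1.0, 1.0                      # assumed data range
    step = (hi - lo) / (levels - 1)
    scaled = (np.clip(v, lo, hi) - lo) / step
    floor = np.floor(scaled)
    prob_up = scaled - floor                # P(round up) = fractional part
    up = rng.random(np.shape(v)) < prob_up
    return lo + (floor + up) * step

def double_sample_gradient(a, b, x, levels, rng):
    """Unbiased low-precision gradient of 0.5 * (a @ x - b)**2.
    Two *independent* quantizations of `a` ensure
    E[Q1(a) * (Q2(a) @ x - b)] = a * (a @ x - b),
    since E[Q1(a) Q2(a)^T] = a a^T by independence; reusing one
    quantization would introduce a bias term from E[Q(a) Q(a)^T]."""
    q1 = stochastic_quantize(a, levels, rng)
    q2 = stochastic_quantize(a, levels, rng)
    return q1 * (q2 @ x - b)
```

Averaged over many draws, the quantized gradient matches the full-precision gradient, which is the property that lets SGD on quantized data converge to the same solution.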
Publishing Year
2017
Date Published
2017-01-01
Proceedings Title
Proceedings of Machine Learning Research
Volume
70
Page
4035 - 4043
Conference
ICML: International Conference on Machine Learning
Conference Location
Sydney, Australia
Conference Date
2017-08-06 – 2017-08-11
IST-REx-ID
432

Cite this

Zhang H, Li J, Kara K, Alistarh D-A, Liu J, Zhang C. ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning. In: Proceedings of Machine Learning Research. Vol 70. PMLR; 2017:4035-4043.
Zhang, H., Li, J., Kara, K., Alistarh, D.-A., Liu, J., & Zhang, C. (2017). ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning. In Proceedings of Machine Learning Research (Vol. 70, pp. 4035–4043). Sydney, Australia: PMLR.
Zhang, Hantian, Jerry Li, Kaan Kara, Dan-Adrian Alistarh, Ji Liu, and Ce Zhang. “ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning.” In Proceedings of Machine Learning Research, 70:4035–43. PMLR, 2017.
H. Zhang, J. Li, K. Kara, D.-A. Alistarh, J. Liu, and C. Zhang, “ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning,” in Proceedings of Machine Learning Research, Sydney, Australia, 2017, vol. 70, pp. 4035–4043.
Zhang H, Li J, Kara K, Alistarh D-A, Liu J, Zhang C. 2017. ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning. Proceedings of Machine Learning Research. ICML: International Conference on Machine Learning, PMLR Press, vol. 70, 4035–4043.
Zhang, Hantian, et al. “ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning.” Proceedings of Machine Learning Research, vol. 70, PMLR, 2017, pp. 4035–43.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]
Main File(s)
File Name
2017_ICML_Zhang.pdf
Access Level
OA Open Access
Date Uploaded
2019-01-22
MD5 Checksum
86156ba7f4318e47cef3eb9092593c10

