Towards understanding knowledge distillation

P. Bui Thi Mai, C. Lampert, in: Proceedings of the 36th International Conference on Machine Learning, PMLR, 2019, pp. 5142–5151.

Conference Paper | Published | English
Abstract
Knowledge distillation, i.e. one classifier being trained on the outputs of another classifier, is an empirically very successful technique for knowledge transfer between classifiers. It has even been observed that classifiers learn much faster and more reliably if trained with the outputs of another classifier as soft labels, instead of from ground truth data. So far, however, there is no satisfactory theoretical explanation of this phenomenon. In this work, we provide the first insights into the working mechanisms of distillation by studying the special case of linear and deep linear classifiers. Specifically, we prove a generalization bound that establishes fast convergence of the expected risk of a distillation-trained linear classifier. From the bound and its proof we extract three key factors that determine the success of distillation: data geometry – geometric properties of the data distribution, in particular class separation, have an immediate influence on the convergence speed of the risk; optimization bias – gradient descent optimization finds a very favorable minimum of the distillation objective; and strong monotonicity – the expected risk of the student classifier always decreases when the size of the training set grows.
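The setup the abstract studies can be illustrated with a minimal sketch: a fixed linear "teacher" provides soft labels (probabilities rather than hard 0/1 targets), and a linear student is trained on them by gradient descent. All names, data dimensions, and hyperparameters below are illustrative assumptions, not the paper's actual experimental settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Synthetic data in the paper's linear setting.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_teacher = rng.normal(size=d)      # fixed, pre-trained "teacher" weights

# Teacher soft labels: class probabilities instead of hard 0/1 targets.
soft_labels = sigmoid(X @ w_teacher)

# Student: linear classifier trained by gradient descent on the
# cross-entropy between its outputs and the teacher's soft labels.
w_student = np.zeros(d)
lr = 0.5
for _ in range(500):
    p = sigmoid(X @ w_student)
    grad = X.T @ (p - soft_labels) / n  # gradient of soft-label cross-entropy
    w_student -= lr * grad

# The student's hard predictions should closely match the teacher's.
agreement = np.mean((X @ w_student > 0) == (X @ w_teacher > 0))
print(f"student-teacher agreement: {agreement:.2f}")
```

Because the soft labels are realizable by a linear model, the distillation objective here is minimized exactly at the teacher's weights, so gradient descent drives the student toward full agreement with the teacher.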
Date Published
2019-06-13
Proceedings Title
Proceedings of the 36th International Conference on Machine Learning
Volume
97
Page
5142-5151
Conference
ICML: International Conference on Machine Learning
Conference Location
Long Beach, CA, United States
Conference Date
2019-06-10 – 2019-06-15
IST-REx-ID

Cite this

Bui Thi Mai P, Lampert C. Towards understanding knowledge distillation. In: Proceedings of the 36th International Conference on Machine Learning. Vol 97. PMLR; 2019:5142-5151.
Bui Thi Mai, P., & Lampert, C. (2019). Towards understanding knowledge distillation. In Proceedings of the 36th International Conference on Machine Learning (Vol. 97, pp. 5142–5151). Long Beach, CA, United States: PMLR.
Bui Thi Mai, Phuong, and Christoph Lampert. “Towards Understanding Knowledge Distillation.” In Proceedings of the 36th International Conference on Machine Learning, 97:5142–51. PMLR, 2019.
P. Bui Thi Mai and C. Lampert, “Towards understanding knowledge distillation,” in Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, United States, 2019, vol. 97, pp. 5142–5151.
Bui Thi Mai P, Lampert C. 2019. Towards understanding knowledge distillation. Proceedings of the 36th International Conference on Machine Learning. ICML: International Conference on Machine Learning vol. 97. 5142–5151.
Bui Thi Mai, Phuong, and Christoph Lampert. “Towards Understanding Knowledge Distillation.” Proceedings of the 36th International Conference on Machine Learning, vol. 97, PMLR, 2019, pp. 5142–51.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]
Main File(s)
File Name
paper.pdf 686.43 KB
Access Level
OA Open Access
Date Uploaded
2019-06-20
MD5 Checksum
a66d00e2694d749250f8507f301320ca