TY  - JOUR
AU  - Morandell, Jasmin
AU  - Nicolas, Armel
AU  - Schwarz, Lena A
AU  - Novarino, Gaia
ID  - 7415
IS  - Supplement 6
JF  - European Neuropsychopharmacology
SN  - 0924-977X
TI  - S.16.05 Illuminating the role of the E3 ubiquitin ligase Cullin3 in brain development and autism
VL  - 29
ER  -
TY  - JOUR
AU  - Knaus, Lisa
AU  - Tarlungeanu, Dora-Clara
AU  - Novarino, Gaia
ID  - 7414
IS  - Supplement 6
JF  - European Neuropsychopharmacology
SN  - 0924-977X
TI  - S.16.03 A homozygous missense mutation in SLC7A5 leads to autism spectrum disorder and microcephaly
VL  - 29
ER  -
TY  - JOUR
AU  - Benková, Eva
AU  - Dagdas, Yasin
ID  - 7394
IS  - 12
JF  - Current Opinion in Plant Biology
SN  - 1369-5266
TI  - Editorial overview: Cell biology in the era of omics?
VL  - 52
ER  -
TY  - CONF
AB  - Multi-exit architectures, in which a stack of processing layers is interleaved with early output layers, allow the processing of a test example to stop early and thus save computation time and/or energy. In this work, we propose a new training procedure for multi-exit architectures based on the principle of knowledge distillation. The method encourages early exits to mimic later, more accurate exits by matching their output probabilities. Experiments on CIFAR100 and ImageNet show that distillation-based training significantly improves the accuracy of early exits while maintaining state-of-the-art accuracy for late ones. The method is particularly beneficial when training data is limited, and it allows a straightforward extension to semi-supervised learning, i.e., making use of unlabeled data at training time. Moreover, it takes only a few lines to implement and incurs almost no computational overhead at training time, and none at all at test time.
AU  - Bui Thi Mai, Phuong
AU  - Lampert, Christoph
ID  - 7479
SN  - 1550-5499
T2  - IEEE International Conference on Computer Vision
TI  - Distillation-based training for multi-exit architectures
VL  - 2019-October
ER  -
TY  - CONF
AB  - We present a novel class of convolutional neural networks (CNNs) for set functions, i.e., data indexed with the powerset of a finite set. The convolutions are derived as linear, shift-equivariant functions for various notions of shifts on set functions. The framework is fundamentally different from graph convolutions based on the Laplacian, as it provides not one but several basic shifts, one for each element in the ground set. Prototypical experiments with several set function classification tasks on synthetic datasets and on datasets derived from real-world hypergraphs demonstrate the potential of our new powerset CNNs.
AU  - Wendler, Chris
AU  - Alistarh, Dan-Adrian
AU  - Püschel, Markus
ID  - 7542
SN  - 1049-5258
TI  - Powerset convolutional neural networks
VL  - 32
ER  -