TY - CONF
AB - Recently there has been significant interest in learning disentangled representations, as they promise increased interpretability, generalization to unseen scenarios, and faster learning on downstream tasks. In this paper, we investigate the usefulness of different notions of disentanglement for improving the fairness of downstream prediction tasks based on representations. We consider the setting where the goal is to predict a target variable based on the learned representation of high-dimensional observations (such as images) that depend on both the target variable and an unobserved sensitive variable. We show that in this setting both the optimal and empirical predictions can be unfair, even if the target variable and the sensitive variable are independent. Analyzing the representations of more than 12,600 trained state-of-the-art disentangled models, we observe that several disentanglement scores are consistently correlated with increased fairness, suggesting that disentanglement may be a useful property for encouraging fairness when sensitive variables are not observed.
AU - Locatello, Francesco
AU - Abbati, Gabriele
AU - Rainforth, Tom
AU - Bauer, Stefan
AU - Schölkopf, Bernhard
AU - Bachem, Olivier
ID - 14197
SN - 9781713807933
T2 - Advances in Neural Information Processing Systems
TI - On the fairness of disentangled representations
VL - 32
ER -
TY - CONF
AB - A broad class of convex optimization problems can be formulated as a semidefinite program (SDP): the minimization of a convex function over the positive-semidefinite cone subject to affine constraints. The majority of classical SDP solvers are designed for the deterministic setting where problem data is readily available. In this setting, generalized conditional gradient methods (aka Frank-Wolfe-type methods) provide scalable solutions by leveraging the so-called linear minimization oracle instead of the projection onto the semidefinite cone. Most problems in machine learning and modern engineering applications, however, contain some degree of stochasticity. In this work, we propose the first conditional-gradient-type method for solving stochastic optimization problems under affine constraints. Our method guarantees an O(k^{-1/3}) convergence rate in expectation on the objective residual and O(k^{-5/12}) on the feasibility gap.
AU - Locatello, Francesco
AU - Yurtsever, Alp
AU - Fercoq, Olivier
AU - Cevher, Volkan
ID - 14191
SN - 9781713807933
T2 - Advances in Neural Information Processing Systems
TI - Stochastic Frank-Wolfe for composite convex minimization
VL - 32
ER -
TY - CONF
AB - A disentangled representation encodes information about the salient factors of variation in the data independently. Although it is often argued that this representational format is useful in learning to solve many real-world downstream tasks, there is little empirical evidence that supports this claim. In this paper, we conduct a large-scale study that investigates whether disentangled representations are more suitable for abstract reasoning tasks. Using two new tasks similar to Raven's Progressive Matrices, we evaluate the usefulness of the representations learned by 360 state-of-the-art unsupervised disentanglement models. Based on these representations, we train 3600 abstract reasoning models and observe that disentangled representations do in fact lead to better downstream performance. In particular, they enable quicker learning using fewer samples.
AU - van Steenkiste, Sjoerd
AU - Locatello, Francesco
AU - Schmidhuber, Jürgen
AU - Bachem, Olivier
ID - 14193
SN - 9781713807933
T2 - Advances in Neural Information Processing Systems
TI - Are disentangled representations helpful for abstract visual reasoning?
VL - 32
ER -
TY - CONF
AB - The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms. In this paper, we provide a sober look at recent progress in the field and challenge some common assumptions. We first theoretically show that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data. Then, we train more than 12,000 models covering the most prominent methods and evaluation metrics in a reproducible large-scale experimental study on seven different data sets. We observe that while the different methods successfully enforce properties "encouraged" by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision. Furthermore, increased disentanglement does not seem to lead to a decreased sample complexity of learning for downstream tasks. Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets.
AU - Locatello, Francesco
AU - Bauer, Stefan
AU - Lucic, Mario
AU - Rätsch, Gunnar
AU - Gelly, Sylvain
AU - Schölkopf, Bernhard
AU - Bachem, Olivier
ID - 14200
T2 - Proceedings of the 36th International Conference on Machine Learning
TI - Challenging common assumptions in the unsupervised learning of disentangled representations
VL - 97
ER -
TY - JOUR
AB - We theoretically study the shapes of lipid vesicles confined to a spherical cavity, elaborating a framework based on the so-called limiting shapes constructed from geometrically simple structural elements such as double-membrane walls and edges. Partly inspired by numerical results, the proposed non-compartmentalized and compartmentalized limiting shapes are arranged in the bilayer-couple phase diagram, which is then compared to its free-vesicle counterpart. We also compute the area-difference-elasticity phase diagram of the limiting shapes and use it to interpret shape transitions experimentally observed in vesicles confined within another vesicle. The limiting-shape framework may be generalized to theoretically investigate the structure of certain cell organelles such as the mitochondrion.
AU - Kavčič, Bor
AU - Sakashita, A.
AU - Noguchi, H.
AU - Ziherl, P.
ID - 5817
IS - 4
JF - Soft Matter
SN - 1744-683X
TI - Limiting shapes of confined lipid vesicles
VL - 15
ER -