TY - JOUR AB - Bending-active structures can efficiently produce complex curved shapes from flat panels. The desired deformation of the panels derives from the proper selection of their elastic properties. Optimized panels, called FlexMaps, are designed such that, once they are bent and assembled, the resulting static equilibrium configuration matches a desired input 3D shape. The FlexMaps elastic properties are controlled by locally varying spiraling geometric mesostructures, which are optimized in size and shape to match specific bending requirements, namely the global curvature of the target shape. The design pipeline starts from a quad mesh representing the input 3D shape, which defines the edge size and the total number of spirals: every quad will embed one spiral. Then, an optimization algorithm tunes the geometry of the spirals by using a simplified pre-computed rod model. This rod model is derived from a non-linear regression algorithm which approximates the non-linear behavior of solid FEM spiral models subject to hundreds of load combinations. This innovative pipeline has been applied to the design of a lightweight plywood pavilion named FlexMaps Pavilion, which is a single-layer piecewise twisted arch that fits a bounding box of 3.90 × 3.96 × 3.25 meters. This case study serves to test the applicability of this methodology at the architectural scale. The structure is validated via FE analyses and the fabrication of a full-scale prototype. AU - Laccone, Francesco AU - Malomo, Luigi AU - Perez Rodriguez, Jesus AU - Pietroni, Nico AU - Ponchio, Federico AU - Bickel, Bernd AU - Cignoni, Paolo ID - 9208 IS - 9 JF - SN Applied Sciences TI - A bending-active twisted-arch plywood structure: Computational design and fabrication of the FlexMaps Pavilion VL - 2 ER - TY - CONF AB - Recent works have shown that gradient descent can find a global minimum for over-parameterized neural networks where the widths of all the hidden layers scale polynomially with N (N being the number of training samples). In this paper, we prove that, for deep networks, a single layer of width N following the input layer suffices to ensure a similar guarantee. In particular, all the remaining layers are allowed to have constant widths and form a pyramidal topology. We show an application of our result to the widely used LeCun initialization and obtain an over-parameterization requirement for the single wide layer of order N². AU - Nguyen, Quynh AU - Mondelli, Marco ID - 9221 T2 - 34th Conference on Neural Information Processing Systems TI - Global convergence of deep networks with one wide layer followed by pyramidal topology VL - 33 ER - TY - CONF AB - Optimizing convolutional neural networks for fast inference has recently become an extremely active area of research. One of the go-to solutions in this context is weight pruning, which aims to reduce computational and memory footprint by removing large subsets of the connections in a neural network. Surprisingly, much less attention has been given to exploiting sparsity in the activation maps, which tend to be naturally sparse in many settings thanks to the structure of rectified linear (ReLU) activation functions. In this paper, we present an in-depth analysis of methods for maximizing the sparsity of the activations in a trained neural network, and show that, when coupled with an efficient sparse-input convolution algorithm, we can leverage this sparsity for significant performance gains.
To induce highly sparse activation maps without accuracy loss, we introduce a new regularization technique, coupled with a new threshold-based sparsification method based on a parameterized activation function called Forced-Activation-Threshold Rectified Linear Unit (FATReLU). We examine the impact of our methods on popular image classification models, showing that most architectures can adapt to significantly sparser activation maps without any accuracy loss. Our second contribution is showing that these compression gains can be translated into inference speedups: we provide a new algorithm to enable fast convolution operations over networks with sparse activations, and show that it yields significant speedups for end-to-end inference on a range of popular models on the large-scale ImageNet image classification task on modern Intel CPUs, with little or no retraining cost. AU - Kurtz, Mark AU - Kopinsky, Justin AU - Gelashvili, Rati AU - Matveev, Alexander AU - Carr, John AU - Goin, Michael AU - Leiserson, William AU - Moore, Sage AU - Nell, Bill AU - Shavit, Nir AU - Alistarh, Dan-Adrian ID - 9415 SN - 2640-3498 T2 - 37th International Conference on Machine Learning, ICML 2020 TI - Inducing and exploiting activation sparsity for fast neural network inference VL - 119 ER - TY - CONF AB - The family of feedback alignment (FA) algorithms aims to provide a more biologically motivated alternative to backpropagation (BP) by substituting the computations that are unrealistic to implement in physical brains. While FA algorithms have been shown to work well in practice, there is a lack of rigorous theory proving their learning capabilities. Here we introduce the first feedback alignment algorithm with provable learning guarantees. In contrast to existing work, we do not require any assumption about the size or depth of the network except that it has a single output neuron, as in binary classification tasks. We show that our FA algorithm can deliver on its theoretical promises in practice, surpassing the learning performance of existing FA methods and matching backpropagation in binary classification tasks. Finally, we demonstrate the limits of our FA variant when the number of output neurons grows beyond a certain quantity. AU - Lechner, Mathias ID - 10672 T2 - 8th International Conference on Learning Representations TI - Learning representations for binary-classification without backpropagation ER - TY - CONF AB - A natural approach to generative modeling of videos is to represent them as a composition of moving objects. Recent works model a set of 2D sprites over a slowly-varying background, but without considering the underlying 3D scene that gives rise to them. We instead propose to model a video as the view seen while moving through a scene with multiple 3D objects and a 3D background. Our model is trained from monocular videos without any supervision, yet learns to generate coherent 3D scenes containing several moving objects. We conduct detailed experiments on two datasets, going beyond the visual complexity supported by state-of-the-art generative approaches. We evaluate our method on depth prediction and 3D object detection---tasks which cannot be addressed by those earlier works---and show it outperforms them even on 2D instance segmentation and tracking.
AU - Henderson, Paul M AU - Lampert, Christoph ID - 8188 SN - 9781713829546 T2 - 34th Conference on Neural Information Processing Systems TI - Unsupervised object-centric video generation and decomposition in 3D VL - 33 ER - TY - BOOK AB - This booklet is a collection of abstracts presented at the AHPC conference. ED - Schlögl, Alois ED - Kiss, Janos ED - Elefante, Stefano ID - 7474 SN - 978-3-99078-004-6 TI - Austrian High-Performance-Computing meeting (AHPC2020) ER - TY - CONF AB - Quantization converts neural networks into low-bit fixed-point computations which can be carried out by efficient integer-only hardware, and is standard practice for the deployment of neural networks on real-time embedded devices. However, like their real-numbered counterparts, quantized networks are not immune to malicious misclassification caused by adversarial attacks. We investigate how quantization affects a network’s robustness to adversarial attacks, which is a formal verification question. We show that neither robustness nor non-robustness is monotonic in the number of bits used for the representation and, moreover, that neither is preserved by quantization from a real-numbered network. For this reason, we introduce a verification method for quantized neural networks which, using SMT solving over bit-vectors, accounts for their exact, bit-precise semantics. We built a tool and analyzed the effect of quantization on a classifier for the MNIST dataset. We demonstrate that, compared to our method, existing methods for the analysis of real-numbered networks often derive false conclusions about their quantizations, both when determining robustness and when detecting attacks, and that existing methods for quantized networks often miss attacks. Furthermore, we applied our method beyond robustness, showing how the number of bits in quantization enlarges the gender bias of a predictor for students’ grades. AU - Giacobbe, Mirco AU - Henzinger, Thomas A AU - Lechner, Mathias ID - 7808 SN - 03029743 T2 - International Conference on Tools and Algorithms for the Construction and Analysis of Systems TI - How many bits does it take to quantize your neural network? VL - 12079 ER - TY - CONF AB - Isomanifolds are the generalization of isosurfaces to arbitrary dimension and codimension, i.e. manifolds defined as the zero set of some multivariate vector-valued smooth function f: ℝ^d → ℝ^(d-n). A natural (and efficient) way to approximate an isomanifold is to consider its piecewise-linear (PL) approximation based on a triangulation 𝒯 of the ambient space ℝ^d. In this paper, we give conditions under which the PL-approximation of an isomanifold is topologically equivalent to the isomanifold. The conditions are easy to satisfy in the sense that they can always be met by taking a sufficiently fine triangulation 𝒯. This contrasts with previous results on the triangulation of manifolds where, in arbitrary dimensions, delicate perturbations are needed to guarantee topological correctness, which leads to strong limitations in practice. We further give a bound on the Fréchet distance between the original isomanifold and its PL-approximation. Finally, we show analogous results for the PL-approximation of an isomanifold with boundary.
AU - Boissonnat, Jean-Daniel AU - Wintraecken, Mathijs ID - 7952 SN - 1868-8969 T2 - 36th International Symposium on Computational Geometry TI - The topological correctness of PL-approximations of isomanifolds VL - 164 ER - TY - CONF AB - Given a finite point set P in general position in the plane, a full triangulation is a maximal straight-line embedded plane graph on P. A partial triangulation on P is a full triangulation of some subset P' of P containing all extreme points in P. A bistellar flip on a partial triangulation either flips an edge, removes a non-extreme point of degree 3, or adds a point in P ⧵ P' as a vertex of degree 3. The bistellar flip graph has all partial triangulations as vertices, and a pair of partial triangulations is adjacent if they can be obtained from one another by a bistellar flip. The goal of this paper is to investigate the structure of this graph, with emphasis on its connectivity. For sets P of n points in general position, we show that the bistellar flip graph is (n-3)-connected, thereby answering, for sets in general position, an open question raised in a book (by De Loera, Rambau, and Santos) and a survey (by Lee and Santos) on triangulations. This matches the situation for the subfamily of regular triangulations (i.e., partial triangulations obtained by lifting the points and projecting the lower convex hull), where (n-3)-connectivity has been known since the late 1980s through the secondary polytope (Gelfand, Kapranov, Zelevinsky) and Balinski’s Theorem. Our methods also yield the following results (see the full version [Wagner and Welzl, 2020]): (i) The bistellar flip graph can be covered by graphs of polytopes of dimension n-3 (products of secondary polytopes). (ii) A partial triangulation is regular if it has distance n-3 in the Hasse diagram of the partial order of partial subdivisions from the trivial subdivision. (iii) All partial triangulations are regular iff the trivial subdivision has height n-3 in the partial order of partial subdivisions. (iv) There are arbitrarily large sets P with non-regular partial triangulations, while every proper subset has only regular triangulations, i.e., there are no small certificates for the existence of non-regular partial triangulations (answering a question by F. Santos in the unexpected direction). AU - Wagner, Uli AU - Welzl, Emo ID - 7990 SN - 18688969 T2 - 36th International Symposium on Computational Geometry TI - Connectivity of triangulation flip graphs in the plane (Part II: Bistellar flips) VL - 164 ER - TY - CONF AB - In a straight-line embedded triangulation of a point set P in the plane, removing an inner edge and—provided the resulting quadrilateral is convex—adding the other diagonal is called an edge flip. The (edge) flip graph has all triangulations as vertices, and a pair of triangulations is adjacent if they can be obtained from each other by an edge flip. The goal of this paper is to contribute to a better understanding of the flip graph, with an emphasis on its connectivity. For sets in general position, it is known that every triangulation allows at least ⌈n/2⌉ - 2 edge flips (a tight bound), which gives the minimum degree of any flip graph for n points. We show that for every point set P in general position, the flip graph is at least (⌈n/2⌉ - 2)-vertex connected. Somewhat more strongly, we show that the vertex connectivity equals the minimum degree occurring in the flip graph, i.e. the minimum number of flippable edges in any triangulation of P, provided P is large enough.
Finally, we exhibit some of the geometry of the flip graph by showing that the flip graph can be covered by 1-skeletons of polytopes of dimension ⌈n/2⌉ - 2 (products of associahedra). A corresponding result ((n-3)-vertex connectedness) can be shown for the bistellar flip graph of partial triangulations, i.e. the set of all triangulations of subsets of P which contain all extreme points of P. This will be treated separately in a second part. AU - Wagner, Uli AU - Welzl, Emo ID - 7807 SN - 9781611975994 T2 - Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms TI - Connectivity of triangulation flip graphs in the plane (Part I: Edge flips) VL - 2020-January ER -