TY - JOUR
AU - Barton, Nicholas H
AU - Novak, Sebastian
AU - Paixao, Tiago
ID - 2169
IS - 29
JF - PNAS
TI - Diverse forms of selection in evolution and computer science
VL - 111
ER -
TY - JOUR
AB - Short-read sequencing technologies have in principle made it feasible to draw detailed inferences about the recent history of any organism. In practice, however, this remains challenging due to the difficulty of genome assembly in most organisms and the lack of statistical methods powerful enough to discriminate between recent, nonequilibrium histories. We address both the assembly and inference challenges. We develop a bioinformatic pipeline for generating outgroup-rooted alignments of orthologous sequence blocks from de novo low-coverage short-read data for a small number of genomes, and show how such sequence blocks can be used to fit explicit models of population divergence and admixture in a likelihood framework. To illustrate our approach, we reconstruct the Pleistocene history of an oak-feeding insect (the oak gallwasp Biorhiza pallida), which, in common with many other taxa, was restricted during Pleistocene ice ages to a longitudinal series of southern refugia spanning the Western Palaearctic. Our analysis of sequence blocks sampled from a single genome from each of three major glacial refugia reveals support for an unexpected history dominated by recent admixture. Despite the fact that 80% of the genome is affected by admixture during the last glacial cycle, we are able to infer the deeper divergence history of these populations. These inferences are robust to variation in block length, mutation model and the sampling location of individual genomes within refugia. This combination of de novo assembly and numerical likelihood calculation provides a powerful framework for estimating recent population history that can be applied to any organism without the need for prior genetic resources.
AU - Hearn, Jack
AU - Stone, Graham
AU - Bunnefeld, Lynsey
AU - Nicholls, James
AU - Barton, Nicholas H
AU - Lohse, Konrad
ID - 2170
IS - 1
JF - Molecular Ecology
TI - Likelihood-based inference of population history from low-coverage de novo genome assemblies
VL - 23
ER -
TY - CONF
AB - We present LS-CRF, a new method for training cyclic Conditional Random Fields (CRFs) from large datasets that is inspired by classical closed-form expressions for the maximum likelihood parameters of a generative graphical model with tree topology. Training a CRF with LS-CRF requires only solving a set of independent regression problems, each of which can be solved efficiently in closed form or by an iterative solver. This makes LS-CRF orders of magnitude faster than classical CRF training based on probabilistic inference, and at the same time more flexible and easier to implement than other approximate techniques, such as pseudolikelihood or piecewise training. We apply LS-CRF to the task of semantic image segmentation, showing that it achieves accuracy on par with other training techniques at higher speed, thereby allowing efficient CRF training from very large training sets. For example, training a linearly parameterized pairwise CRF on 150,000 images requires less than one hour on a modern workstation.
AU - Kolesnikov, Alexander
AU - Guillaumin, Matthieu
AU - Ferrari, Vittorio
AU - Lampert, Christoph
ED - Fleet, David
ED - Pajdla, Tomas
ED - Schiele, Bernt
ED - Tuytelaars, Tinne
ID - 2171
IS - PART 3
T2 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
TI - Closed-form approximate CRF training for scalable image segmentation
VL - 8691
ER -
TY - CONF
AB - Fisher Kernels and Deep Learning have been two developments with significant impact on large-scale object categorization in recent years. Both approaches were shown to achieve state-of-the-art results on large-scale object categorization datasets, such as ImageNet. Conceptually, however, they are perceived as very different, and it is not uncommon for heated debates to spring up when advocates of both paradigms meet at conferences or workshops. In this work, we emphasize the similarities between both architectures rather than their differences, and we argue that such a unified view allows us to transfer ideas from one domain to the other. As a concrete example, we introduce a method for learning a support vector machine classifier with Fisher kernel at the same time as a task-specific data representation. We reinterpret the setting as a multi-layer feed-forward network. Its final layer is the classifier, parameterized by a weight vector, and the two previous layers compute Fisher vectors, parameterized by the coefficients of a Gaussian mixture model. We introduce a gradient-descent-based learning algorithm that, in contrast to other feature learning techniques, is not just derived from intuition or biological analogy, but has a theoretical justification in the framework of statistical learning theory. Our experiments show that the new training procedure leads to significant improvements in classification accuracy while preserving the modularity and geometric interpretability of a support vector machine setup.
AU - Sydorov, Vladyslav
AU - Sakurada, Mayu
AU - Lampert, Christoph
ID - 2172
T2 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
TI - Deep Fisher Kernels – End to end learning of the Fisher Kernel GMM parameters
ER -
TY - CONF
AB - In this work we introduce a new approach to co-classification, i.e. the task of jointly classifying multiple, otherwise independent, data samples. The method we present, named CoConut, is based on the idea of adding a regularizer in the label space to encode certain priors on the resulting labelings. A regularizer that encourages labelings that are smooth across the test set, for instance, can be seen as a test-time variant of the cluster assumption, which has been proven useful at training time in semi-supervised learning. A regularizer that introduces a preference for certain class proportions can be regarded as a prior distribution on the class labels. CoConut can build on existing classifiers without making any assumptions on how they were obtained and without the need to re-train them. The use of a regularizer adds a new level of flexibility. It allows the integration of potentially new information at test time, even in other modalities than what the classifiers were trained on. We evaluate our framework on six datasets, reporting a clear performance gain in classification accuracy compared to the standard classification setup that predicts labels for each test sample separately.
AU - Khamis, Sameh
AU - Lampert, Christoph
ID - 2173
T2 - Proceedings of the British Machine Vision Conference 2014
TI - CoConut: Co-classification with output space regularization
ER -
TY - JOUR
AB - When polygenic traits are under stabilizing selection, many different combinations of alleles allow close adaptation to the optimum. If alleles have equal effects, all combinations that result in the same deviation from the optimum are equivalent. Furthermore, the genetic variance that is maintained by mutation-selection balance is 2μ/S per locus, where μ is the mutation rate and S the strength of stabilizing selection. In reality, alleles vary in their effects, making the fitness landscape asymmetric and complicating analysis of the equilibria. We show that the resulting genetic variance depends on the fraction of alleles near fixation, which contribute by 2μ/S, and on the total mutational effects of alleles that are at intermediate frequency. The interplay between stabilizing selection and mutation leads to a sharp transition: alleles with effects smaller than a threshold value of 2√(μ/S) remain polymorphic, whereas those with larger effects are fixed. The genetic load in equilibrium is less than for traits of equal effects, and the fitness equilibria are more similar. We find that when the optimum is displaced, alleles with effects close to the threshold value sweep first, and their rate of increase is bounded. Long-term response leads in general to well-adapted traits, unlike the case of equal effects, which often ends up at a suboptimal fitness peak. However, the particular peaks to which the populations converge are extremely sensitive to the initial states and to the speed of the shift of the optimum trait value.
AU - De Vladar, Harold
AU - Barton, Nicholas H
ID - 2174
IS - 2
JF - Genetics
TI - Stability and response of polygenic traits to stabilizing selection and mutation
VL - 197
ER -
TY - JOUR
AB - The cerebral cortex, the seat of our cognitive abilities, is composed of an intricate network of billions of excitatory projection and inhibitory interneurons. Postmitotic cortical neurons are generated by a diverse set of neural stem cell progenitors within dedicated zones and defined periods of neurogenesis during embryonic development. Disruptions in neurogenesis can lead to alterations in the neuronal cytoarchitecture, which is thought to represent a major underlying cause for several neurological disorders, including microcephaly, autism and epilepsy. Although a number of signaling pathways regulating neurogenesis have been described, the precise cellular and molecular mechanisms regulating the functional neural stem cell properties in cortical neurogenesis remain unclear. Here, we discuss the most up-to-date strategies to monitor the fundamental mechanistic parameters of neuronal progenitor proliferation, and recent advances deciphering the logic and dynamics of neurogenesis.
AU - Postiglione, Maria P
AU - Hippenmeyer, Simon
ID - 2175
IS - 3
JF - Future Neurology
TI - Monitoring neurogenesis in the cerebral cortex: an update
VL - 9
ER -
TY - JOUR
AB - Electron microscopy (EM) allows for the simultaneous visualization of all tissue components at high resolution. However, the extent to which conventional aldehyde fixation and ethanol dehydration of the tissue alter the fine structure of cells and organelles, thereby preventing detection of subtle structural changes induced by an experiment, has remained an issue. Attempts have been made to rapidly freeze tissue to preserve native ultrastructure. Shock-freezing of living tissue under high pressure (high-pressure freezing, HPF) followed by cryosubstitution of the tissue water avoids aldehyde fixation and dehydration in ethanol; the tissue water is immobilized in ≈50 ms, and a close-to-native fine structure of cells, organelles and molecules is preserved. Here we describe a protocol for HPF that is useful to monitor ultrastructural changes associated with functional changes at synapses in the brain but can be applied to many other tissues as well. The procedure requires a high-pressure freezer and takes a minimum of 7 d but can be paused at several points.
AU - Studer, Daniel
AU - Zhao, Shanting
AU - Chai, Xuejun
AU - Jonas, Peter M
AU - Graber, Werner
AU - Nestel, Sigrun
AU - Frotscher, Michael
ID - 2176
IS - 6
JF - Nature Protocols
TI - Capture of activity-induced ultrastructural changes at synapses by high-pressure freezing of brain tissue
VL - 9
ER -
TY - CONF
AB - We give evidence for the difficulty of computing Betti numbers of simplicial complexes over a finite field. We do this by reducing the rank computation for sparse matrices with m non-zero entries to computing Betti numbers of simplicial complexes consisting of at most a constant times m simplices. Together with the known reduction in the other direction, this implies that the two problems have the same computational complexity.
AU - Edelsbrunner, Herbert
AU - Parsa, Salman
ID - 2177
T2 - Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms
TI - On the computational complexity of Betti numbers: reductions from matrix rank
ER -
TY - JOUR
AB - We consider the three-state toric homogeneous Markov chain model (THMC) without loops and initial parameters. At time T, the size of the design matrix is 6 × 3 · 2T-1 and the convex hull of its columns is the model polytope. We study the behavior of this polytope for T ≥ 3 and we show that it is defined by 24 facets for all T ≥ 5. Moreover, we give a complete description of these facets. From this, we deduce that the toric ideal associated with the design matrix is generated by binomials of degree at most 6. Our proof is based on a result due to Sturmfels, who gave a bound on the degree of the generators of a toric ideal, provided the normality of the corresponding toric variety. In our setting, we establish the normality of the toric variety associated with the THMC model by studying the geometric properties of the model polytope.
AU - Haws, David
AU - Martin Del Campo Sanchez, Abraham
AU - Takemura, Akimichi
AU - Yoshida, Ruriko
ID - 2178
IS - 1
JF - Beitrage zur Algebra und Geometrie
TI - Markov degree of the three-state toric homogeneous Markov chain model
VL - 55
ER -
TY - JOUR
AB - We extend the proof of the local semicircle law for generalized Wigner matrices given in MR3068390 to the case when the matrix of variances has an eigenvalue -1. In particular, this result provides a short proof of the optimal local Marchenko-Pastur law at the hard edge (i.e. around zero) for sample covariance matrices X*X, where the variances of the entries of X may vary.
AU - Ajanki, Oskari H
AU - Erdös, László
AU - Krüger, Torben H
ID - 2179
JF - Electronic Communications in Probability
TI - Local semicircle law with imprimitive variance matrix
VL - 19
ER -
TY - JOUR
AB - Weighted majority votes allow one to combine the output of several classifiers or voters. MinCq is a recent algorithm for optimizing the weight of each voter based on the minimization of a theoretical bound over the risk of the vote with elegant PAC-Bayesian generalization guarantees. However, while it has demonstrated good performance when combining weak classifiers, MinCq cannot make use of the useful a priori knowledge that one may have when using a mixture of weak and strong voters. In this paper, we propose P-MinCq, an extension of MinCq that can incorporate such knowledge in the form of a constraint over the distribution of the weights, along with general proofs of convergence that stand in the sample compression setting for data-dependent voters. The approach is applied to a vote of k-NN classifiers with a specific modeling of the voters' performance. P-MinCq significantly outperforms the classic k-NN classifier, a symmetric NN and MinCq using the same voters. We show that it is also competitive with LMNN, a popular metric learning algorithm, and that combining both approaches further reduces the error.
AU - Bellet, Aurélien
AU - Habrard, Amaury
AU - Morvant, Emilie
AU - Sebban, Marc
ID - 2180
IS - 1-2
JF - Machine Learning
TI - Learning a priori constrained weighted majority votes
VL - 97
ER -
TY - JOUR
AB - Given topological spaces X, Y, a fundamental problem of algebraic topology is understanding the structure of all continuous maps X → Y. We consider a computational version, where X, Y are given as finite simplicial complexes, and the goal is to compute [X,Y], that is, all homotopy classes of such maps. We solve this problem in the stable range, where for some d ≥ 2, we have dim X ≤ 2d-2 and Y is (d-1)-connected; in particular, Y can be the d-dimensional sphere S^d. The algorithm combines classical tools and ideas from homotopy theory (obstruction theory, Postnikov systems, and simplicial sets) with algorithmic tools from effective algebraic topology (locally effective simplicial sets and objects with effective homology). In contrast, [X,Y] is known to be uncomputable for general X, Y, since for X = S^1 it includes a well-known undecidable problem: testing triviality of the fundamental group of Y. In follow-up papers, the algorithm is shown to run in polynomial time for fixed d, and extended to other problems, such as the extension problem, where we are given a subspace A ⊂ X and a map A → Y and ask whether it extends to a map X → Y, or computing the Z_2-index, all in the stable range. Outside the stable range, the extension problem is undecidable.
AU - Čadek, Martin
AU - Krcál, Marek
AU - Matoušek, Jiří
AU - Sergeraert, Francis
AU - Vokřínek, Lukáš
AU - Wagner, Uli
ID - 2184
IS - 3
JF - Journal of the ACM
TI - Computing all maps into a sphere
VL - 61
ER -
TY - CONF
AB - We revisit the classical problem of converting an imperfect source of randomness into a usable cryptographic key. Assume that we have some cryptographic application P that expects a uniformly random m-bit key R and ensures that the best attack (in some complexity class) against P(R) has success probability at most δ. Our goal is to design a key-derivation function (KDF) h that converts any random source X of min-entropy k into a sufficiently "good" key h(X), guaranteeing that P(h(X)) has comparable security δ′ which is 'close' to δ. Seeded randomness extractors provide a generic way to solve this problem for all applications P, with resulting security δ′ = O(δ), provided that we start with entropy k ≥ m + 2 log (1/δ) - O(1). By a result of Radhakrishnan and Ta-Shma, this bound on k (called the "RT-bound") is also known to be tight in general. Unfortunately, in many situations the loss of 2 log (1/δ) bits of entropy is unacceptable. This motivates the study of KDFs with less entropy waste by placing some restrictions on the source X or the application P. In this work we obtain the following new positive and negative results in this regard: - Efficient samplability of the source X does not help beat the RT-bound for general applications. This resolves the SRT (samplable RT) conjecture of Dachman-Soled et al. [DGKM12] in the affirmative, and also shows that the existence of computationally-secure extractors beating the RT-bound implies the existence of one-way functions. - We continue in the line of work initiated by Barak et al. [BDK+11] and construct new information-theoretic KDFs which beat the RT-bound for large but restricted classes of applications. Specifically, we design efficient KDFs that work for all unpredictability applications P (e.g., signatures, MACs, one-way functions, etc.) and can either: (1) extract all of the entropy k = m with a very modest security loss δ′ = O(δ·log (1/δ)), or alternatively, (2) achieve essentially optimal security δ′ = O(δ) with a very modest entropy loss k ≥ m + loglog (1/δ). In comparison, the best prior results from [BDK+11] for this class of applications would only guarantee δ′ = O(√δ) when k = m, and would need k ≥ m + log (1/δ) to get δ′ = O(δ). - The weaker bounds of [BDK+11] hold for a larger class of so-called "square-friendly" applications (which includes all unpredictability, but also some important indistinguishability, applications). Unfortunately, we show that these weaker bounds are tight for the larger class of applications. - We abstract out a clean, information-theoretic notion of (k,δ,δ′)-unpredictability extractors, which guarantee "induced" security δ′ for any δ-secure unpredictability application P, and characterize the parameters achievable for such unpredictability extractors. Of independent interest, we also relate this notion to the previously-known notion of (min-entropy) condensers, and improve the state-of-the-art parameters for such condensers.
AU - Dodis, Yevgeniy
AU - Pietrzak, Krzysztof Z
AU - Wichs, Daniel
ED - Nguyen, Phong
ED - Oswald, Elisabeth
ID - 2185
TI - Key derivation without entropy waste
VL - 8441
ER -
TY - JOUR
AB - We prove the existence of scattering states for the defocusing cubic Gross-Pitaevskii (GP) hierarchy in ℝ3. Moreover, we show that an exponential energy growth condition commonly used in the well-posedness theory of the GP hierarchy is, in a specific sense, necessary. In fact, we prove that without the latter, there exist initial data for the focusing cubic GP hierarchy for which instantaneous blowup occurs.
AU - Chen, Thomas
AU - Hainzl, Christian
AU - Pavlović, Nataša
AU - Seiringer, Robert
ID - 2186
IS - 7
JF - Letters in Mathematical Physics
TI - On the well-posedness and scattering for the Gross-Pitaevskii hierarchy via quantum de Finetti
VL - 104
ER -
TY - JOUR
AB - Systems should not only be correct but also robust in the sense that they behave reasonably in unexpected situations. This article addresses synthesis of robust reactive systems from temporal specifications. Existing methods allow arbitrary behavior if assumptions in the specification are violated. To overcome this, we define two robustness notions, combine them, and show how to enforce them in synthesis. The first notion applies to safety properties: If safety assumptions are violated temporarily, we require that the system recovers to normal operation with as few errors as possible. The second notion requires that, if liveness assumptions are violated, as many guarantees as possible should be fulfilled nevertheless. We present a synthesis procedure achieving this for the important class of GR(1) specifications, and establish complexity bounds. We also present an implementation of a special case of robustness, and show experimental results.
AU - Bloem, Roderick
AU - Chatterjee, Krishnendu
AU - Greimel, Karin
AU - Henzinger, Thomas A
AU - Hofferek, Georg
AU - Jobstmann, Barbara
AU - Könighofer, Bettina
AU - Könighofer, Robert
ID - 2187
IS - 3-4
JF - Acta Informatica
TI - Synthesizing robust systems
VL - 51
ER -
TY - JOUR
AB - Although plant and animal cells use a similar core mechanism to deliver proteins to the plasma membrane, their different lifestyle, body organization and specific cell structures resulted in the acquisition of regulatory mechanisms that vary in the two kingdoms. In particular, cell polarity regulators do not seem to be conserved, because genes encoding key components are absent in plant genomes. In plants, the broad knowledge on polarity derives from the study of auxin transporters, the PIN-FORMED proteins, in the model plant Arabidopsis thaliana. In animals, much information is provided from the study of polarity in epithelial cells that exhibit basolateral and luminal apical polarities, separated by tight junctions. In this review, we summarize the similarities and differences of the polarization mechanisms between plants and animals and survey the main genetic approaches that have been used to characterize new genes involved in polarity establishment in plants, including the frequently used forward and reverse genetics screens as well as a novel chemical genetics approach that is expected to overcome the limitation of classical genetics methods.
AU - Kania, Urszula
AU - Fendrych, Matyas
AU - Friml, Jiří
ID - 2188
IS - APRIL
JF - Open Biology
TI - Polar delivery in plants; commonalities and differences to animal epithelial cells
VL - 4
ER -
TY - CONF
AB - In machine learning, we speak of domain adaptation when the test (target) and training (source) data are generated according to different distributions. We must therefore develop classification algorithms capable of adapting to a new distribution for which no label information is available. We tackle this problem from the PAC-Bayesian perspective, which focuses on learning models defined as majority votes over a set of functions. In this context, we introduce PV-MinCq, an adaptive version of the (non-adaptive) MinCq algorithm. PV-MinCq follows this principle: we transfer the source labels to nearby target points and then apply MinCq to the "self-labeled" target sample (a step justified by a theoretical bound). More precisely, we define a non-iterative self-labeling procedure that focuses on the regions where the source and target marginal distributions are most similar. We then study the influence of our self-labeling in order to derive a hyperparameter validation procedure. Finally, our approach shows promising empirical results.
AU - Morvant, Emilie
ID - 2189
TI - Adaptation de domaine de vote de majorité par auto-étiquetage non itératif
VL - 1
ER -
TY - CONF
AB - We present a new algorithm to construct a (generalized) deterministic Rabin automaton for an LTL formula φ. The automaton is the product of a master automaton and an array of slave automata, one for each G-subformula of φ. The slave automaton for G ψ is in charge of recognizing whether FG ψ holds. As opposed to standard determinization procedures, the states of all our automata have a clear logical structure, which allows for various optimizations. Our construction subsumes former algorithms for fragments of LTL. Experimental results show improvement in the sizes of the resulting automata compared to existing methods.
AU - Esparza, Javier
AU - Kretinsky, Jan
ID - 2190
TI - From LTL to deterministic automata: A safraless compositional approach
VL - 8559
ER -
TY - JOUR
AB - Neuronal ectopia, such as granule cell dispersion (GCD) in temporal lobe epilepsy (TLE), has been assumed to result from a migration defect during development. Indeed, recent studies reported that aberrant migration of neonatal-generated dentate granule cells (GCs) increased the risk of developing epilepsy later in life. On the contrary, in the present study, we show that fully differentiated GCs become motile following the induction of epileptiform activity, resulting in GCD. Hippocampal slice cultures from transgenic mice expressing green fluorescent protein in differentiated, but not in newly generated GCs, were incubated with the glutamate receptor agonist kainate (KA), which induced GC burst activity and GCD. Using real-time microscopy, we observed that KA-exposed, differentiated GCs translocated their cell bodies and changed their dendritic organization. As found in human TLE, KA application was associated with decreased expression of the extracellular matrix protein Reelin, particularly in hilar interneurons. Together these findings suggest that KA-induced motility of differentiated GCs contributes to the development of GCD and establish slice cultures as a model to study neuronal changes induced by epileptiform activity.
AU - Chai, Xuejun
AU - Münzner, Gert
AU - Zhao, Shanting
AU - Tinnes, Stefanie
AU - Kowalski, Janina
AU - Häussler, Ute
AU - Young, Christina
AU - Haas, Carola
AU - Frotscher, Michael
ID - 2164
IS - 8
JF - Cerebral Cortex
TI - Epilepsy-induced motility of differentiated neurons
VL - 24
ER -