TY - JOUR
AB - The fixation probability of a single mutant invading a population of residents is among the most widely studied quantities in evolutionary dynamics. Amplifiers of natural selection are population structures that increase the fixation probability of advantageous mutants, compared to well-mixed populations. Extensive studies have shown that many amplifiers exist for the Birth-death Moran process, some of them substantially increasing the fixation probability or even guaranteeing fixation in the limit of large population size. On the other hand, no amplifiers are known for the death-Birth Moran process, and computer-assisted exhaustive searches have failed to discover amplification. In this work we resolve this disparity by showing that any amplification under death-Birth updating is necessarily bounded and transient. Our boundedness result states that even if a population structure does amplify selection, the resulting fixation probability remains close to that of the well-mixed population. Our transience result states that for any population structure there exists a threshold r⋆ such that the population structure ceases to amplify selection if the mutant fitness advantage r exceeds r⋆. Finally, we extend the above results to δ-death-Birth updating, a combination of Birth-death and death-Birth updating. On the positive side, we identify population structures that maintain amplification for a wide range of values of r and δ. These results demonstrate that amplification of natural selection depends on the specific mechanisms of the evolutionary process.
AU - Tkadlec, Josef
AU - Pavlogiannis, Andreas
AU - Chatterjee, Krishnendu
AU - Nowak, Martin A.
ID - 7212
JF - PLoS Computational Biology
TI - Limits on amplifiers of natural selection under death-Birth updating
VL - 16
ER -

TY - THES
AB - In this thesis we study certain mathematical aspects of evolution. The two primary forces that drive an evolutionary process are mutation and selection. Mutation generates new variants in a population; selection chooses among the variants depending on the reproductive rates of individuals. Evolutionary processes are intrinsically random: a new mutation that is initially present in the population at low frequency can go extinct, even if it confers a reproductive advantage. The overall rate of evolution is largely determined by two quantities: the probability that an invading advantageous mutation spreads through the population (called the fixation probability) and the time until it does so (called the fixation time). Both quantities crucially depend not only on the strength of the invading mutation but also on the population structure. In this thesis, we aim to understand how the underlying population structure affects the overall rate of evolution. Specifically, we study population structures that increase the fixation probability of advantageous mutants (called amplifiers of selection). Broadly speaking, our results are of three types: we present various strong amplifiers, we identify regimes under which only limited amplification is feasible, and we propose population structures that provide different tradeoffs between high fixation probability and short fixation time.
AU - Tkadlec, Josef
ID - 7196
TI - A role of graphs in evolutionary processes
ER -

TY - CONF
AB - The optimization of multilayer neural networks typically leads to a solution with zero training error, yet the landscape can exhibit spurious local minima, and the minima can be disconnected.
In this paper, we shed light on this phenomenon: we show that the combination of stochastic gradient descent (SGD) and over-parameterization makes the landscape of multilayer neural networks approximately connected and thus more favorable to optimization. More specifically, we prove that SGD solutions are connected via a piecewise linear path, and the increase in loss along this path vanishes as the number of neurons grows large. This result is a consequence of the fact that the parameters found by SGD are increasingly dropout stable as the network becomes wider. We show that, if we remove part of the neurons (and suitably rescale the remaining ones), the change in loss is independent of the total number of neurons, and it depends only on how many neurons are left. Our results exhibit a mild dependence on the input dimension: they are dimension-free for two-layer networks and depend linearly on the dimension for multilayer networks. We validate our theoretical findings with numerical experiments for different architectures and classification tasks.
AU - Shevchenko, Alexander
AU - Mondelli, Marco
ID - 9198
T2 - Proceedings of the 37th International Conference on Machine Learning
TI - Landscape connectivity and dropout stability of SGD solutions for over-parameterized neural networks
VL - 119
ER -

TY - JOUR
AB - Representing each atom by a solid sphere in 3-dimensional Euclidean space, we get the space-filling diagram of a molecule by taking the union of the spheres. Molecular dynamics simulates its motion subject to bonds and other forces, including the solvation free energy. The morphometric approach [12, 17] writes the latter as a linear combination of weighted versions of the volume, area, mean curvature, and Gaussian curvature of the space-filling diagram. We give a formula for the derivative of the weighted mean curvature. Together with the derivatives of the weighted volume in [7], the weighted area in [3], and the weighted Gaussian curvature in [1], this yields the derivative of the morphometric expression of the solvation free energy.
AU - Akopyan, Arseniy
AU - Edelsbrunner, Herbert
ID - 9157
IS - 1
JF - Computational and Mathematical Biophysics
SN - 2544-7297
TI - The weighted mean curvature derivative of a space-filling diagram
VL - 8
ER -

TY - JOUR
AB - The morphometric approach [11, 14] writes the solvation free energy as a linear combination of weighted versions of the volume, area, mean curvature, and Gaussian curvature of the space-filling diagram. We give a formula for the derivative of the weighted Gaussian curvature. Together with the derivatives of the weighted volume in [7], the weighted area in [4], and the weighted mean curvature in [1], this yields the derivative of the morphometric expression of the solvation free energy.
AU - Akopyan, Arseniy
AU - Edelsbrunner, Herbert
ID - 9156
IS - 1
JF - Computational and Mathematical Biophysics
SN - 2544-7297
TI - The weighted Gaussian curvature derivative of a space-filling diagram
VL - 8
ER -

TY - JOUR
AB - We consider the symmetric simple exclusion process in ℤᵈ with quenched bounded dynamic random conductances and prove its hydrodynamic limit in path space. The main tool is the connection, due to the self-duality of the process, between the invariance principle for single particles starting from all points and the macroscopic behavior of the density field.
While the hydrodynamic limit at fixed macroscopic times is obtained via a generalization to the time-inhomogeneous context of the strategy introduced in [41], in order to prove tightness for the sequence of empirical density fields we develop a new criterion based on the notion of uniform conditional stochastic continuity, following [50]. In conclusion, we show that uniformly elliptic dynamic conductances provide an example of environments in which the so-called arbitrary starting point invariance principle may be derived from the invariance principle of a single particle starting from the origin. Therefore, our hydrodynamics result applies to the examples of quenched environments considered in, e.g., [1], [3], and [6], in combination with the hypothesis of uniform ellipticity.
AU - Redig, Frank
AU - Saada, Ellen
AU - Sau, Federico
ID - 8973
JF - Electronic Journal of Probability
TI - Symmetric simple exclusion process in dynamic environment: Hydrodynamics
VL - 25
ER -

TY - JOUR
AB - An asymptotic formula is established for the number of rational points of bounded anticanonical height which lie on a certain Zariski dense subset of the biprojective hypersurface x₁y₁² + ⋯ + x₄y₄² = 0 in ℙ³×ℙ³. This confirms the modified Manin conjecture for this variety, in which the removal of a thin set of rational points is allowed.
AU - Browning, Timothy D
AU - Heath-Brown, Roger
ID - 179
IS - 16
JF - Duke Mathematical Journal
SN - 0012-7094
TI - Density of rational points on a quadric bundle in ℙ³×ℙ³
VL - 169
ER -

TY - GEN
AB - Data and Mathematica notebooks for plotting the figures from "Language acquisition with communication between learners".
AU - Ibsen-Jensen, Rasmus
AU - Tkadlec, Josef
AU - Chatterjee, Krishnendu
AU - Nowak, Martin
ID - 9814
TI - Data and Mathematica notebooks for plotting figures from "Language acquisition with communication between learners"
ER -

TY - JOUR
AB - We demonstrate the utility of optical-cavity-generated spin-squeezed states for free-space atomic fountain clocks in ensembles of 390 000 ⁸⁷Rb atoms. Fluorescence imaging, correlated to an initial quantum nondemolition measurement, is used for population spectroscopy after the atoms are released from a confining lattice. For a free-fall time of 4 milliseconds, we resolve a single-shot phase sensitivity of 814(61) microradians, which is 5.8(0.6) decibels (dB) below the quantum projection limit. We observe that this squeezing is preserved as the cloud expands to a roughly 200 μm radius and falls roughly 300 μm in free space. Ramsey spectroscopy with 240 000 atoms at a 3.6 ms Ramsey time results in a single-shot fractional frequency stability of 8.4(0.2)×10⁻¹², which is 3.8(0.2) dB below the quantum projection limit. The sensitivity and stability are limited by technical noise in the fluorescence detection protocol and in the microwave system, respectively.
AU - Malia, Benjamin K.
AU - Martínez-Rincón, Julián
AU - Wu, Yunfan
AU - Hosten, Onur
AU - Kasevich, Mark A.
ID - 8285
IS - 4
JF - Physical Review Letters
SN - 0031-9007
TI - Free space Ramsey spectroscopy in rubidium with noise below the quantum projection limit
VL - 125
ER -

TY - CONF
AB - The search for biologically faithful synaptic plasticity rules has resulted in a large body of models. They are usually inspired by – and fitted to – experimental data, but they rarely produce neural dynamics that serve complex functions. These failures suggest that current plasticity models are still under-constrained by existing data.
Here, we present an alternative approach that uses meta-learning to discover plausible synaptic plasticity rules. Instead of experimental data, the rules are constrained by the functions they implement and the structure they are meant to produce. Briefly, we parameterize synaptic plasticity rules by a Volterra expansion and then use supervised learning methods (gradient descent or evolutionary strategies) to minimize a problem-dependent loss function that quantifies how effectively a candidate plasticity rule transforms an initially random network into one with the desired function. We first validate our approach by re-discovering previously described plasticity rules, starting at the single-neuron level with “Oja’s rule”, a simple Hebbian plasticity rule that captures the direction of greatest variability of the inputs to a neuron (i.e., the first principal component). We then expand the problem to the network level and ask the framework to find Oja’s rule together with an anti-Hebbian rule such that an initially random two-layer firing-rate network recovers several principal components of the input space after learning. Next, we move to networks of integrate-and-fire neurons with plastic inhibitory afferents, and we train for rules that achieve a target firing rate by countering tuned excitation. Our algorithm discovers a specific subset of the manifold of rules that can solve this task. Our work is a proof of principle of an automated and unbiased approach to unveiling synaptic plasticity rules that obey biological constraints and can solve complex functions.
AU - Confavreux, Basile J
AU - Zenke, Friedemann
AU - Agnes, Everton J.
AU - Lillicrap, Timothy
AU - Vogels, Tim P
ID - 9633
SN - 1049-5258
T2 - Advances in Neural Information Processing Systems
TI - A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network
VL - 33
ER -