TY  - JOUR
AB  - Selection accumulates information in the genome—it guides stochastically evolving populations toward states (genotype frequencies) that would be unlikely under neutrality. This can be quantified as the Kullback–Leibler (KL) divergence between the actual distribution of genotype frequencies and the corresponding neutral distribution. First, we show that this population-level information sets an upper bound on the information at the level of genotype and phenotype, limiting how precisely they can be specified by selection. Next, we study how the accumulation and maintenance of information is limited by the cost of selection, measured as the genetic load or the relative fitness variance, both of which we connect to the control-theoretic KL cost of control. The information accumulation rate is upper bounded by the population size times the cost of selection. This bound is very general, and applies across models (Wright–Fisher, Moran, diffusion) and to arbitrary forms of selection, mutation, and recombination. Finally, the cost of maintaining information depends on how it is encoded: Specifying a single allele out of two is expensive, but one bit encoded among many weakly specified loci (as in a polygenic trait) is cheap.
AU  - Hledik, Michal
AU  - Barton, Nicholas H
AU  - Tkačik, Gašper
ID  - 12081
IS  - 36
JF  - Proceedings of the National Academy of Sciences
SN  - 0027-8424
TI  - Accumulation and maintenance of information in evolution
VL  - 119
ER  - 

TY  - JOUR
AB  - Realistic models of biological processes typically involve interacting components on multiple scales, driven by a changing environment and inherent stochasticity. Such models are often analytically and numerically intractable. We revisit a dynamic maximum entropy method that combines a static maximum entropy with a quasi-stationary approximation. This allows us to reduce stochastic non-equilibrium dynamics expressed by the Fokker-Planck equation to a simpler low-dimensional deterministic dynamics, without the need to track microscopic details. Although the method has been previously applied to a few (rather complicated) applications in population genetics, our main goal here is to explain and to better understand how the method works. We demonstrate the usefulness of the method for two widely studied stochastic problems, highlighting its accuracy in capturing important macroscopic quantities even in rapidly changing non-stationary conditions. For the Ornstein-Uhlenbeck process, the method recovers the exact dynamics, whilst for a stochastic island model with migration from other habitats, the approximation retains high macroscopic accuracy under a wide range of scenarios in a dynamic environment.
AU  - Bod'ová, Katarína
AU  - Szep, Eniko
AU  - Barton, Nicholas H
ID  - 10535
IS  - 12
JF  - PLoS Computational Biology
SN  - 1553-734X
TI  - Dynamic maximum entropy provides accurate approximation of structured population dynamics
VL  - 17
ER  - 

TY  - GEN
AB  - Brain dynamics display collective phenomena as diverse as neuronal oscillations and avalanches. Oscillations are rhythmic, with fluctuations occurring at a characteristic scale, whereas avalanches are scale-free cascades of neural activity. Here we show that such antithetic features can coexist in a very generic class of adaptive neural networks. In the simplest yet fully microscopic model from this class we make direct contact with human brain resting-state activity recordings via tractable inference of the model's two essential parameters. The inferred model quantitatively captures the dynamics over a broad range of scales, from single-sensor fluctuations, through collective behaviors of nearly synchronous extreme events on multiple sensors, to neuronal avalanches unfolding over multiple sensors across multiple time-bins. Importantly, the inferred parameters correlate with model-independent signatures of "closeness to criticality", suggesting that the coexistence of scale-specific (neural oscillations) and scale-free (neuronal avalanches) dynamics in brain activity occurs close to a non-equilibrium critical point at the onset of self-sustained oscillations.
AU  - Lombardi, Fabrizio
AU  - Pepic, Selver
AU  - Shriki, Oren
AU  - Tkačik, Gašper
AU  - De Martino, Daniele
ID  - 10912
TI  - Quantifying the coexistence of neuronal oscillations and avalanches
ER  - 

TY  - GEN
AB  - We consider a totally asymmetric simple exclusion process (TASEP) consisting of particles on a lattice that require binding by a "token" to move. Using a combination of theory and simulations, we address the following questions: (i) How token binding kinetics affects the current-density relation; (ii) How the current-density relation depends on the scarcity of tokens; (iii) How tokens propagate the effects of locally-imposed disorder (such as a slow site) over the entire lattice; (iv) How a shared pool of tokens couples concurrent TASEPs running on multiple lattices; (v) How our results translate to TASEPs with open boundaries that exchange particles with the reservoir. Since real particle motion (including in systems that inspired the standard TASEP model, e.g., protein synthesis or movement of molecular motors) is often catalyzed, regulated, actuated, or otherwise mediated, the token-driven TASEP dynamics analyzed in this paper should allow for a better understanding of real systems and enable a closer match between TASEP theory and experimental observations.
AU  - Kavcic, Bor
AU  - Tkačik, Gašper
ID  - 10579
T2  - arXiv
TI  - Token-driven totally asymmetric simple exclusion process
ER  - 

TY  - JOUR
AB  - Resting-state brain activity is characterized by the presence of neuronal avalanches showing an absence of characteristic size. Such evidence has been interpreted in the context of criticality and associated with the normal functioning of the brain. A distinctive attribute of systems at criticality is the presence of long-range correlations. Thus, to verify the hypothesis that the brain operates close to a critical point, and consequently assess deviations from criticality for diagnostic purposes, it is of primary importance to robustly and reliably characterize correlations in resting-state brain activity. Recent works have focused on the analysis of narrow-band electroencephalography (EEG) and magnetoencephalography (MEG) signal amplitude envelopes, showing evidence of long-range temporal correlations (LRTC) in neural oscillations. However, brain activity is a broadband phenomenon, and a significant piece of information, useful to precisely discriminate between normal (critical) and pathological (non-critical) behavior, may be encoded in the broadband spatio-temporal cortical dynamics. Here we propose to characterize the temporal correlations in the broadband brain activity through the lens of neuronal avalanches. To this end, we consider resting-state EEG and long-term MEG recordings, extract the corresponding neuronal avalanche sequences, and study their temporal correlations. We demonstrate that the broadband resting-state brain activity consistently exhibits long-range power-law correlations in both EEG and MEG recordings, with similar values of the scaling exponents. Importantly, although we observe that the avalanche size distribution depends on scale parameters, the scaling exponents characterizing long-range correlations are quite robust. In particular, they are independent of the temporal binning (scale of analysis), indicating that our analysis captures intrinsic characteristics of the underlying dynamics. Because neuronal avalanches constitute a fundamental feature of neural systems with universal characteristics, the proposed approach may serve as a general, systems- and experiment-independent procedure to infer the existence of underlying long-range correlations in extended neural systems, and to identify pathological behaviors in the complex spatio-temporal interplay of cortical rhythms.
AU  - Lombardi, Fabrizio
AU  - Shriki, Oren
AU  - Herrmann, Hans J
AU  - de Arcangelis, Lucilla
ID  - 7463
JF  - Neurocomputing
SN  - 0925-2312
TI  - Long-range temporal correlations in the broadband resting state activity of the human brain revealed by neuronal avalanches
VL  - 461
ER  - 

TY  - JOUR
AB  - Half a century after Lewis Wolpert's seminal conceptual advance on how cellular fates distribute in space, we provide a brief historical perspective on how the concept of positional information emerged and influenced the field of developmental biology and beyond. We focus on a modern interpretation of this concept in terms of information theory, largely centered on its application to cell specification in the early Drosophila embryo. We argue that a true physical variable (position) is encoded in local concentrations of patterning molecules, that this mapping is stochastic, and that the processes by which positions and corresponding cell fates are determined based on these concentrations need to take such stochasticity into account. With this approach, we shift the focus from biological mechanisms, molecules, genes and pathways to quantitative systems-level questions: where positional information resides, how it is transformed and accessed during development, and what fundamental limits it is subject to.
AU  - Tkačik, Gašper
AU  - Gregor, Thomas
ID  - 9226
IS  - 2
JF  - Development
TI  - The many bits of positional information
VL  - 148
ER  - 

TY  - JOUR
AB  - The ability to adapt to changes in stimulus statistics is a hallmark of sensory systems. Here, we developed a theoretical framework that can account for the dynamics of adaptation from an information processing perspective. We use this framework to optimize and analyze adaptive sensory codes, and we show that codes optimized for stationary environments can suffer from prolonged periods of poor performance when the environment changes. To mitigate the adverse effects of these environmental changes, sensory systems must navigate tradeoffs between the ability to accurately encode incoming stimuli and the ability to rapidly detect and adapt to changes in the distribution of these stimuli. We derive families of codes that balance these objectives, and we demonstrate their close match to experimentally observed neural dynamics during mean and variance adaptation. Our results provide a unifying perspective on adaptation across a range of sensory systems, environments, and sensory tasks.
AU  - Mlynarski, Wiktor F
AU  - Hermundstad, Ann M.
ID  - 9439
JF  - Nature Neuroscience
SN  - 1097-6256
TI  - Efficient and adaptive sensory codes
VL  - 24
ER  - 

TY  - JOUR
AB  - Attachment of adhesive molecules to cell culture surfaces to restrict cell adhesion to defined areas and shapes has been vital for the progress of in vitro research. In currently existing patterning methods, a combination of pattern properties such as stability, precision, specificity, high-throughput outcome, and spatiotemporal control is highly desirable but challenging to achieve. Here, we introduce a versatile and high-throughput covalent photoimmobilization technique, comprising a light-dose-dependent patterning step and a subsequent functionalization of the pattern via click chemistry. This two-step process is feasible on arbitrary surfaces and allows for generation of sustainable patterns and gradients. The method is validated in different biological systems by patterning adhesive ligands on cell-repellent surfaces, thereby constraining the growth and migration of cells to the designated areas. We then implement a sequential photopatterning approach by adding a second switchable patterning step, allowing for spatiotemporal control over two distinct surface patterns. As a proof of concept, we reconstruct the dynamics of the tip/stalk cell switch during angiogenesis. Our results show that the spatiotemporal control provided by our “sequential photopatterning” system is essential for mimicking dynamic biological processes and that our approach has great potential for further applications in cell science.
AU  - Zisis, Themistoklis
AU  - Schwarz, Jan
AU  - Balles, Miriam
AU  - Kretschmer, Maibritt
AU  - Nemethova, Maria
AU  - Chait, Remy P
AU  - Hauschild, Robert
AU  - Lange, Janina
AU  - Guet, Calin C
AU  - Sixt, Michael K
AU  - Zahler, Stefan
ID  - 9822
IS  - 30
JF  - ACS Applied Materials and Interfaces
SN  - 1944-8244
TI  - Sequential and switchable patterning for studying cellular processes under spatiotemporal control
VL  - 13
ER  - 

TY  - JOUR
AB  - Amplitude demodulation is a classical operation used in signal processing. For a long time, its effective applications in practice have been limited to narrowband signals. In this work, we generalize amplitude demodulation to wideband signals. We pose demodulation as a recovery problem of an oversampled corrupted signal and introduce special iterative schemes belonging to the family of alternating projection algorithms to solve it. Sensibly chosen structural assumptions on the demodulation outputs allow us to reveal the high inferential accuracy of the method over a rich set of relevant signals. This new approach surpasses current state-of-the-art demodulation techniques apt to wideband signals in computational efficiency by up to many orders of magnitude, with no sacrifice in quality. Such performance opens the door for applications of the amplitude demodulation procedure in new contexts. In particular, the new method makes online and large-scale offline data processing feasible, including the calculation of modulator-carrier pairs in higher dimensions and under poor sampling conditions, independent of the signal bandwidth. We illustrate the utility and specifics of applying the new method in practice using natural speech and synthetic signals.
AU  - Gabrielaitis, Mantas
ID  - 9828
JF  - IEEE Transactions on Signal Processing
SN  - 1053-587X
TI  - Fast and accurate amplitude demodulation of wideband signals
VL  - 69
ER  - 

TY  - JOUR
AB  - A central goal in systems neuroscience is to understand the functions performed by neural circuits. Previous top-down models addressed this question by comparing the behaviour of an ideal model circuit, optimised to perform a given function, with neural recordings. However, this requires guessing in advance what function is being performed, which may not be possible for many neural systems. To address this, we propose an inverse reinforcement learning (RL) framework for inferring the function performed by a neural network from data. We assume that the responses of each neuron in a network are optimised so as to drive the network towards ‘rewarded’ states that are desirable for performing a given function. We then show how one can use inverse RL to infer the reward function optimised by the network from observing its responses. This inferred reward function can be used to predict how the neural network should adapt its dynamics to perform the same function when the external environment or network structure changes. This could lead to theoretical predictions about how neural network dynamics adapt to deal with cell death and/or varying sensory stimulus statistics.
AU  - Chalk, Matthew J
AU  - Tkačik, Gašper
AU  - Marre, Olivier
ID  - 9362
IS  - 4
JF  - PLoS ONE
TI  - Inferring the function performed by a recurrent neural network
VL  - 16
ER  - 