TY - JOUR
AB - The ability of an organism to distinguish between various stimuli is limited by the structure and noise in the population code of its sensory neurons. Here we infer a distance measure on the stimulus space directly from the recorded activity of 100 neurons in the salamander retina. In contrast to previously used measures of stimulus similarity, this "neural metric" tells us how distinguishable a pair of stimulus clips is to the retina, based on the similarity between the induced distributions of population responses. We show that the retinal distance strongly deviates from Euclidean, or any static metric, yet has a simple structure: we identify the stimulus features that the neural population is jointly sensitive to, and show the support-vector-machine-like kernel function relating the stimulus and neural response spaces. We show that the non-Euclidean nature of the retinal distance has important consequences for neural decoding.
AU - Tkačik, Gašper
AU - Granot-Atedgi, Einat
AU - Segev, Ronen
AU - Schneidman, Elad
ID - 2913
IS - 5
JF - Physical Review Letters
TI - Retinal metric: a stimulus distance measure derived from population neural responses
VL - 110
ER -
TY - JOUR
AB - The scale invariance of natural images suggests an analogy to the statistical mechanics of physical systems at a critical point. Here we examine the distribution of pixels in small image patches and show how to construct the corresponding thermodynamics. We find evidence for criticality in a diverging specific heat, which corresponds to large fluctuations in how "surprising" we find individual images, and in the quantitative form of the entropy vs energy. We identify special image configurations as local energy minima and show that average patches within each basin are interpretable as lines and edges in all orientations.
AU - Stephens, Greg
AU - Mora, Thierry
AU - Tkačik, Gašper
AU - Bialek, William
ID - 2914
IS - 1
JF - Physical Review Letters
TI - Statistical thermodynamics of natural images
VL - 110
ER -
TY - JOUR
AB - Recent work emphasizes that the maximum entropy principle provides a bridge between statistical mechanics models for collective behavior in neural networks and experiments on networks of real neurons. Most of this work has focused on capturing the measured correlations among pairs of neurons. Here we suggest an alternative, constructing models that are consistent with the distribution of global network activity, i.e. the probability that K out of N cells in the network generate action potentials in the same small time bin. The inverse problem that we need to solve in constructing the model is analytically tractable, and provides a natural 'thermodynamics' for the network in the limit of large N. We analyze the responses of neurons in a small patch of the retina to naturalistic stimuli, and find that the implied thermodynamics is very close to an unusual critical point, in which the entropy (in proper units) is exactly equal to the energy. © 2013 IOP Publishing Ltd and SISSA Medialab srl.
AU - Tkačik, Gašper
AU - Marre, Olivier
AU - Mora, Thierry
AU - Amodei, Dario
AU - Berry, Michael
AU - Bialek, William
ID - 2850
IS - 3
JF - Journal of Statistical Mechanics: Theory and Experiment
TI - The simplest maximum entropy model for collective behavior in a neural network
VL - 2013
ER -
TY - JOUR
AB - The number of possible activity patterns in a population of neurons grows exponentially with the size of the population. Typical experiments explore only a tiny fraction of the large space of possible activity patterns in the case of populations with more than 10 or 20 neurons. It is thus impossible, in this undersampled regime, to estimate the probabilities with which most of the activity patterns occur. As a result, the corresponding entropy - which is a measure of the computational power of the neural population - cannot be estimated directly. We propose a simple scheme for estimating the entropy in the undersampled regime, which bounds its value from both below and above. The lower bound is the usual 'naive' entropy of the experimental frequencies. The upper bound results from a hybrid approximation of the entropy which makes use of the naive estimate, a maximum entropy fit, and a coverage adjustment. We apply our simple scheme to artificial data, in order to check its accuracy; we also compare its performance to that of several previously defined entropy estimators. We then apply it to actual measurements of neural activity in populations with up to 100 cells. Finally, we discuss the similarities and differences between the proposed simple estimation scheme and various earlier methods.
AU - Berry, Michael
AU - Tkačik, Gašper
AU - Dubuis, Julien
AU - Marre, Olivier
AU - Da Silveira, Rava
ID - 2851
IS - 3
JF - Journal of Statistical Mechanics: Theory and Experiment
TI - A simple method for estimating the entropy of neural activity
VL - 2013
ER -
TY - JOUR
AB - Cells in a developing embryo have no direct way of "measuring" their physical position. Through a variety of processes, however, the expression levels of multiple genes come to be correlated with position, and these expression levels thus form a code for "positional information." We show how to measure this information, in bits, using the gap genes in the Drosophila embryo as an example. Individual genes carry nearly two bits of information, twice as much as expected if the expression patterns consisted only of on/off domains separated by sharp boundaries. Taken together, four gap genes carry enough information to define a cell's location with an error bar of ~1% along the anterior-posterior axis of the embryo. This precision is nearly enough for each cell to have a unique identity, which is the maximum information the system can use, and is nearly constant along the length of the embryo. We argue that this constancy is a signature of optimality in the transmission of information from primary morphogen inputs to the output of the gap gene network.
AU - Dubuis, Julien
AU - Tkacik, Gasper
AU - Wieschaus, Eric
AU - Gregor, Thomas
AU - Bialek, William
ID - 3261
IS - 41
JF - Proceedings of the National Academy of Sciences
TI - Positional information, in bits
VL - 110
ER -