{"volume":117,"month":"10","page":"25066-25073","date_updated":"2023-08-22T12:11:23Z","date_created":"2020-10-25T23:01:16Z","department":[{"_id":"GaTk"}],"_id":"8698","title":"Learning probabilistic neural representations with randomly connected circuits","ddc":["570"],"author":[{"first_name":"Ori","full_name":"Maoz, Ori","last_name":"Maoz"},{"first_name":"Gašper","id":"3D494DCA-F248-11E8-B48F-1D18A9856A87","full_name":"Tkačik, Gašper","last_name":"Tkačik","orcid":"0000-0002-6699-1455"},{"full_name":"Esteki, Mohamad Saleh","last_name":"Esteki","first_name":"Mohamad Saleh"},{"last_name":"Kiani","full_name":"Kiani, Roozbeh","first_name":"Roozbeh"},{"full_name":"Schneidman, Elad","last_name":"Schneidman","first_name":"Elad"}],"year":"2020","type":"journal_article","publication_status":"published","pmid":1,"oa":1,"oa_version":"Published Version","tmp":{"image":"/images/cc_by_nc_nd.png","legal_code_url":"https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode","name":"Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)","short":"CC BY-NC-ND (4.0)"},"isi":1,"acknowledgement":"We thank Udi Karpas, Roy Harpaz, Tal Tamir, Adam Haber, and Amir Bar for discussions and suggestions; and especially Oren Forkosh and Walter Senn for invaluable discussions of the learning rule. This work was supported by European Research Council Grant 311238 (to E.S.) and Israel Science Foundation Grant 1629/12 (to E.S.); as well as research support from Martin Kushner Schnur and Mr. and Mrs. Lawrence Feis (E.S.); National Institute of Mental Health Grant R01MH109180 (to R.K.); a Pew Scholarship in Biomedical Sciences (to R.K.); Simons Collaboration on the Global Brain Grant 542997 (to R.K. and E.S.); and a CRCNS (Collaborative Research in Computational Neuroscience) grant (to R.K. and E.S.).","article_type":"original","intvolume":" 117","file":[{"access_level":"open_access","file_size":1755359,"checksum":"c6a24fdecf3f28faf447078e7a274a88","content_type":"application/pdf","date_created":"2020-10-27T14:57:50Z","success":1,"file_id":"8713","relation":"main_file","file_name":"2020_PNAS_Maoz.pdf","creator":"cziletti","date_updated":"2020-10-27T14:57:50Z"}],"has_accepted_license":"1","publication":"Proceedings of the National Academy of Sciences of the United States of America","external_id":{"isi":["000579045200012"],"pmid":["32948691"]},"quality_controlled":"1","publication_identifier":{"issn":["00278424"],"eissn":["10916490"]},"date_published":"2020-10-06T00:00:00Z","status":"public","day":"06","citation":{"ieee":"O. Maoz, G. Tkačik, M. S. Esteki, R. Kiani, and E. Schneidman, “Learning probabilistic neural representations with randomly connected circuits,” Proceedings of the National Academy of Sciences of the United States of America, vol. 117, no. 40. National Academy of Sciences, pp. 25066–25073, 2020.","ama":"Maoz O, Tkačik G, Esteki MS, Kiani R, Schneidman E. Learning probabilistic neural representations with randomly connected circuits. Proceedings of the National Academy of Sciences of the United States of America. 2020;117(40):25066-25073. doi:10.1073/pnas.1912804117","chicago":"Maoz, Ori, Gašper Tkačik, Mohamad Saleh Esteki, Roozbeh Kiani, and Elad Schneidman. “Learning Probabilistic Neural Representations with Randomly Connected Circuits.” Proceedings of the National Academy of Sciences of the United States of America. National Academy of Sciences, 2020. https://doi.org/10.1073/pnas.1912804117.","short":"O. Maoz, G. Tkačik, M.S. Esteki, R. Kiani, E. Schneidman, Proceedings of the National Academy of Sciences of the United States of America 117 (2020) 25066–25073.","apa":"Maoz, O., Tkačik, G., Esteki, M. S., Kiani, R., & Schneidman, E. (2020). Learning probabilistic neural representations with randomly connected circuits. Proceedings of the National Academy of Sciences of the United States of America. National Academy of Sciences. https://doi.org/10.1073/pnas.1912804117","mla":"Maoz, Ori, et al. “Learning Probabilistic Neural Representations with Randomly Connected Circuits.” Proceedings of the National Academy of Sciences of the United States of America, vol. 117, no. 40, National Academy of Sciences, 2020, pp. 25066–73, doi:10.1073/pnas.1912804117.","ista":"Maoz O, Tkačik G, Esteki MS, Kiani R, Schneidman E. 2020. Learning probabilistic neural representations with randomly connected circuits. Proceedings of the National Academy of Sciences of the United States of America. 117(40), 25066–25073."},"scopus_import":"1","doi":"10.1073/pnas.1912804117","file_date_updated":"2020-10-27T14:57:50Z","user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","issue":"40","language":[{"iso":"eng"}],"article_processing_charge":"No","publisher":"National Academy of Sciences","abstract":[{"lang":"eng","text":"The brain represents and reasons probabilistically about complex stimuli and motor actions using a noisy, spike-based neural code. A key building block for such neural computations, as well as the basis for supervised and unsupervised learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional neural activity patterns. Despite progress in statistical modeling of neural responses and deep learning, current approaches either do not scale to large neural populations or cannot be implemented using biologically realistic mechanisms. Inspired by the sparse and random connectivity of real neuronal circuits, we present a model for neural codes that accurately estimates the likelihood of individual spiking patterns and has a straightforward, scalable, efficient, learnable, and realistic neural implementation. This model’s performance on simultaneously recorded spiking activity of >100 neurons in the monkey visual and prefrontal cortices is comparable with or better than that of state-of-the-art models. Importantly, the model can be learned using a small number of samples and using a local learning rule that utilizes noise intrinsic to neural circuits. Slower, structural changes in random connectivity, consistent with rewiring and pruning processes, further improve the efficiency and sparseness of the resulting neural representations. Our results merge insights from neuroanatomy, machine learning, and theoretical neuroscience to suggest random sparse connectivity as a key design principle for neuronal computation."}]}