--- _id: '7406' abstract: - lang: eng text: "Background\r\nSynaptic vesicles (SVs) are an integral part of the neurotransmission machinery, and isolation of SVs from their host neuron is necessary to reveal their most fundamental biochemical and functional properties in in vitro assays. Isolated SVs from neurons that have been genetically engineered, e.g. to introduce genetically encoded indicators, are not readily available but would permit new insights into SV structure and function. Furthermore, it is unclear if cultured neurons can provide sufficient starting material for SV isolation procedures.\r\n\r\nNew method\r\nHere, we demonstrate an efficient ex vivo procedure to obtain functional SVs from cultured rat cortical neurons after genetic engineering with a lentivirus.\r\n\r\nResults\r\nWe show that ∼10⁸ plated cortical neurons allow isolation of suitable SV amounts for functional analysis and imaging. We found that SVs isolated from cultured neurons have neurotransmitter uptake comparable to that of SVs isolated from intact cortex. Using total internal reflection fluorescence (TIRF) microscopy, we visualized an exogenous SV-targeted marker protein and demonstrated the high efficiency of SV modification.\r\n\r\nComparison with existing methods\r\nObtaining SVs from genetically engineered neurons currently generally requires the availability of transgenic animals, which is constrained by technical (e.g. cost and time) and biological (e.g. developmental defects and lethality) limitations.\r\n\r\nConclusions\r\nThese results demonstrate the modification and isolation of functional SVs using cultured neurons and viral transduction. The ability to readily obtain SVs from genetically engineered neurons will permit linking in situ studies to in vitro experiments in a variety of genetic contexts." 
acknowledged_ssus: - _id: Bio - _id: EM-Fac article_processing_charge: No article_type: original author: - first_name: Catherine full_name: Mckenzie, Catherine id: 3EEDE19A-F248-11E8-B48F-1D18A9856A87 last_name: Mckenzie - first_name: Miroslava full_name: Spanova, Miroslava id: 44A924DC-F248-11E8-B48F-1D18A9856A87 last_name: Spanova - first_name: Alexander J full_name: Johnson, Alexander J id: 46A62C3A-F248-11E8-B48F-1D18A9856A87 last_name: Johnson orcid: 0000-0002-2739-8843 - first_name: Stephanie full_name: Kainrath, Stephanie id: 32CFBA64-F248-11E8-B48F-1D18A9856A87 last_name: Kainrath - first_name: Vanessa full_name: Zheden, Vanessa id: 39C5A68A-F248-11E8-B48F-1D18A9856A87 last_name: Zheden orcid: 0000-0002-9438-4783 - first_name: Harald H. full_name: Sitte, Harald H. last_name: Sitte - first_name: Harald L full_name: Janovjak, Harald L id: 33BA6C30-F248-11E8-B48F-1D18A9856A87 last_name: Janovjak orcid: 0000-0002-8023-9315 citation: ama: Mckenzie C, Spanova M, Johnson AJ, et al. Isolation of synaptic vesicles from genetically engineered cultured neurons. Journal of Neuroscience Methods. 2019;312:114-121. doi:10.1016/j.jneumeth.2018.11.018 apa: Mckenzie, C., Spanova, M., Johnson, A. J., Kainrath, S., Zheden, V., Sitte, H. H., & Janovjak, H. L. (2019). Isolation of synaptic vesicles from genetically engineered cultured neurons. Journal of Neuroscience Methods. Elsevier. https://doi.org/10.1016/j.jneumeth.2018.11.018 chicago: Mckenzie, Catherine, Miroslava Spanova, Alexander J Johnson, Stephanie Kainrath, Vanessa Zheden, Harald H. Sitte, and Harald L Janovjak. “Isolation of Synaptic Vesicles from Genetically Engineered Cultured Neurons.” Journal of Neuroscience Methods. Elsevier, 2019. https://doi.org/10.1016/j.jneumeth.2018.11.018. ieee: C. Mckenzie et al., “Isolation of synaptic vesicles from genetically engineered cultured neurons,” Journal of Neuroscience Methods, vol. 312. Elsevier, pp. 114–121, 2019. 
ista: Mckenzie C, Spanova M, Johnson AJ, Kainrath S, Zheden V, Sitte HH, Janovjak HL. 2019. Isolation of synaptic vesicles from genetically engineered cultured neurons. Journal of Neuroscience Methods. 312, 114–121. mla: Mckenzie, Catherine, et al. “Isolation of Synaptic Vesicles from Genetically Engineered Cultured Neurons.” Journal of Neuroscience Methods, vol. 312, Elsevier, 2019, pp. 114–21, doi:10.1016/j.jneumeth.2018.11.018. short: C. Mckenzie, M. Spanova, A.J. Johnson, S. Kainrath, V. Zheden, H.H. Sitte, H.L. Janovjak, Journal of Neuroscience Methods 312 (2019) 114–121. date_created: 2020-01-30T09:12:19Z date_published: 2019-01-15T00:00:00Z date_updated: 2023-09-06T15:27:29Z day: '15' department: - _id: HaJa - _id: Bio doi: 10.1016/j.jneumeth.2018.11.018 ec_funded: 1 external_id: isi: - '000456220900013' pmid: - '30496761' intvolume: ' 312' isi: 1 language: - iso: eng month: '01' oa_version: None page: 114-121 pmid: 1 project: - _id: 25548C20-B435-11E9-9278-68D0E5697425 call_identifier: FP7 grant_number: '303564' name: Microbial Ion Channels for Synthetic Neurobiology - _id: 26538374-B435-11E9-9278-68D0E5697425 call_identifier: FWF grant_number: I03630 name: Molecular mechanisms of endocytic cargo recognition in plants - _id: 2548AE96-B435-11E9-9278-68D0E5697425 call_identifier: FWF grant_number: W1232-B24 name: Molecular Drug Targets publication: Journal of Neuroscience Methods publication_identifier: issn: - 0165-0270 publication_status: published publisher: Elsevier quality_controlled: '1' scopus_import: '1' status: public title: Isolation of synaptic vesicles from genetically engineered cultured neurons type: journal_article user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1 volume: 312 year: '2019' ... --- _id: '7437' abstract: - lang: eng text: 'Most of today''s distributed machine learning systems assume reliable networks: whenever two machines exchange information (e.g., gradients or models), the network should guarantee the delivery of the message. 
At the same time, recent work exhibits the impressive tolerance of machine learning algorithms to errors or noise arising from relaxed communication or synchronization. In this paper, we connect these two trends, and consider the following question: Can we design machine learning systems that are tolerant to network unreliability during training? With this motivation, we focus on a theoretical problem of independent interest: given a standard distributed parameter server architecture, if every communication between the worker and the server has a non-zero probability p of being dropped, does there exist an algorithm that still converges, and at what speed? The technical contribution of this paper is a novel theoretical analysis proving that distributed learning over an unreliable network can achieve a convergence rate comparable to that of centralized or distributed learning over reliable networks. Further, we prove that the influence of the packet drop rate diminishes with the growth of the number of parameter servers. We map this theoretical result onto a real-world scenario, training deep neural networks over an unreliable network layer, and conduct network simulations to validate the system improvement achieved by allowing the networks to be unreliable.' article_processing_charge: No author: - first_name: Chen full_name: Yu, Chen last_name: Yu - first_name: Hanlin full_name: Tang, Hanlin last_name: Tang - first_name: Cedric full_name: Renggli, Cedric last_name: Renggli - first_name: Simon full_name: Kassing, Simon last_name: Kassing - first_name: Ankit full_name: Singla, Ankit last_name: Singla - first_name: Dan-Adrian full_name: Alistarh, Dan-Adrian id: 4A899BFC-F248-11E8-B48F-1D18A9856A87 last_name: Alistarh orcid: 0000-0003-3650-940X - first_name: Ce full_name: Zhang, Ce last_name: Zhang - first_name: Ji full_name: Liu, Ji last_name: Liu citation: ama: 'Yu C, Tang H, Renggli C, et al. Distributed learning over unreliable networks. 
In: 36th International Conference on Machine Learning, ICML 2019. Vol 2019-June. IMLS; 2019:12481-12512.' apa: 'Yu, C., Tang, H., Renggli, C., Kassing, S., Singla, A., Alistarh, D.-A., … Liu, J. (2019). Distributed learning over unreliable networks. In 36th International Conference on Machine Learning, ICML 2019 (Vol. 2019–June, pp. 12481–12512). Long Beach, CA, United States: IMLS.' chicago: Yu, Chen, Hanlin Tang, Cedric Renggli, Simon Kassing, Ankit Singla, Dan-Adrian Alistarh, Ce Zhang, and Ji Liu. “Distributed Learning over Unreliable Networks.” In 36th International Conference on Machine Learning, ICML 2019, 2019–June:12481–512. IMLS, 2019. ieee: C. Yu et al., “Distributed learning over unreliable networks,” in 36th International Conference on Machine Learning, ICML 2019, Long Beach, CA, United States, 2019, vol. 2019–June, pp. 12481–12512. ista: 'Yu C, Tang H, Renggli C, Kassing S, Singla A, Alistarh D-A, Zhang C, Liu J. 2019. Distributed learning over unreliable networks. 36th International Conference on Machine Learning, ICML 2019. ICML: International Conference on Machine Learning vol. 2019–June, 12481–12512.' mla: Yu, Chen, et al. “Distributed Learning over Unreliable Networks.” 36th International Conference on Machine Learning, ICML 2019, vol. 2019–June, IMLS, 2019, pp. 12481–512. short: C. Yu, H. Tang, C. Renggli, S. Kassing, A. Singla, D.-A. Alistarh, C. Zhang, J. Liu, in:, 36th International Conference on Machine Learning, ICML 2019, IMLS, 2019, pp. 12481–12512. 
conference: end_date: 2019-06-15 location: Long Beach, CA, United States name: 'ICML: International Conference on Machine Learning' start_date: 2019-06-10 date_created: 2020-02-02T23:01:06Z date_published: 2019-06-01T00:00:00Z date_updated: 2023-09-06T15:21:48Z day: '01' department: - _id: DaAl external_id: arxiv: - '1810.07766' isi: - '000684034307036' isi: 1 language: - iso: eng main_file_link: - open_access: '1' url: https://arxiv.org/abs/1810.07766 month: '06' oa: 1 oa_version: Preprint page: 12481-12512 publication: 36th International Conference on Machine Learning, ICML 2019 publication_identifier: isbn: - '9781510886988' publication_status: published publisher: IMLS quality_controlled: '1' scopus_import: '1' status: public title: Distributed learning over unreliable networks type: conference user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1 volume: 2019-June year: '2019' ... --- _id: '7412' abstract: - lang: eng text: We develop a framework for the rigorous analysis of focused stochastic local search algorithms. These algorithms search a state space by repeatedly selecting some constraint that is violated in the current state and moving to a random nearby state that addresses the violation, while (we hope) not introducing many new violations. An important class of focused local search algorithms with provable performance guarantees has recently arisen from algorithmizations of the Lovász local lemma (LLL), a nonconstructive tool for proving the existence of satisfying states by introducing a background measure on the state space. While powerful, the state transitions of algorithms in this class must be, in a precise sense, perfectly compatible with the background measure. In many applications this is a very restrictive requirement, and one needs to step outside the class. 
Here we introduce the notion of measure distortion and develop a framework for analyzing arbitrary focused stochastic local search algorithms, recovering LLL algorithmizations as the special case of no distortion. Our framework takes as input an arbitrary algorithm of this type and an arbitrary probability measure and shows how to use the measure as a yardstick of algorithmic progress, even for algorithms designed independently of the measure. article_processing_charge: No article_type: original author: - first_name: Dimitris full_name: Achlioptas, Dimitris last_name: Achlioptas - first_name: Fotis full_name: Iliopoulos, Fotis last_name: Iliopoulos - first_name: Vladimir full_name: Kolmogorov, Vladimir id: 3D50B0BA-F248-11E8-B48F-1D18A9856A87 last_name: Kolmogorov citation: ama: Achlioptas D, Iliopoulos F, Kolmogorov V. A local lemma for focused stochastic algorithms. SIAM Journal on Computing. 2019;48(5):1583-1602. doi:10.1137/16m109332x apa: Achlioptas, D., Iliopoulos, F., & Kolmogorov, V. (2019). A local lemma for focused stochastic algorithms. SIAM Journal on Computing. SIAM. https://doi.org/10.1137/16m109332x chicago: Achlioptas, Dimitris, Fotis Iliopoulos, and Vladimir Kolmogorov. “A Local Lemma for Focused Stochastic Algorithms.” SIAM Journal on Computing. SIAM, 2019. https://doi.org/10.1137/16m109332x. ieee: D. Achlioptas, F. Iliopoulos, and V. Kolmogorov, “A local lemma for focused stochastic algorithms,” SIAM Journal on Computing, vol. 48, no. 5. SIAM, pp. 1583–1602, 2019. ista: Achlioptas D, Iliopoulos F, Kolmogorov V. 2019. A local lemma for focused stochastic algorithms. SIAM Journal on Computing. 48(5), 1583–1602. mla: Achlioptas, Dimitris, et al. “A Local Lemma for Focused Stochastic Algorithms.” SIAM Journal on Computing, vol. 48, no. 5, SIAM, 2019, pp. 1583–602, doi:10.1137/16m109332x. short: D. Achlioptas, F. Iliopoulos, V. Kolmogorov, SIAM Journal on Computing 48 (2019) 1583–1602. 
date_created: 2020-01-30T09:27:32Z date_published: 2019-10-31T00:00:00Z date_updated: 2023-09-06T15:25:29Z day: '31' department: - _id: VlKo doi: 10.1137/16m109332x ec_funded: 1 external_id: arxiv: - '1809.01537' isi: - '000493900200005' intvolume: ' 48' isi: 1 issue: '5' language: - iso: eng main_file_link: - open_access: '1' url: https://arxiv.org/abs/1809.01537 month: '10' oa: 1 oa_version: Preprint page: 1583-1602 project: - _id: 25FBA906-B435-11E9-9278-68D0E5697425 call_identifier: FP7 grant_number: '616160' name: 'Discrete Optimization in Computer Vision: Theory and Practice' publication: SIAM Journal on Computing publication_identifier: eissn: - 1095-7111 issn: - 0097-5397 publication_status: published publisher: SIAM quality_controlled: '1' scopus_import: '1' status: public title: A local lemma for focused stochastic algorithms type: journal_article user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1 volume: 48 year: '2019' ... --- _id: '7418' abstract: - lang: eng text: Multiple importance sampling (MIS) has become an indispensable tool in Monte Carlo rendering, widely accepted as a near-optimal solution for combining different sampling techniques. But an MIS combination, using the common balance or power heuristics, often results in an overly defensive estimator, leading to high variance. We show that by generalizing the MIS framework, variance can be substantially reduced. Specifically, we optimize one of the combined sampling techniques so as to decrease the overall variance of the resulting MIS estimator. We apply the approach to the computation of direct illumination due to an HDR environment map and to the computation of global illumination using a path guiding algorithm. The implementation can be as simple as subtracting a constant value from the tabulated sampling density, done entirely in a preprocessing step. This produces a consistent noise reduction in all our tests with no negative influence on run time, no artifacts or bias, and no failure cases. 
article_number: '151' article_processing_charge: No article_type: original author: - first_name: Ondřej full_name: Karlík, Ondřej last_name: Karlík - first_name: Martin full_name: Šik, Martin last_name: Šik - first_name: Petr full_name: Vévoda, Petr last_name: Vévoda - first_name: Tomas full_name: Skrivan, Tomas id: 486A5A46-F248-11E8-B48F-1D18A9856A87 last_name: Skrivan - first_name: Jaroslav full_name: Křivánek, Jaroslav last_name: Křivánek citation: ama: 'Karlík O, Šik M, Vévoda P, Skrivan T, Křivánek J. MIS compensation: Optimizing sampling techniques in multiple importance sampling. ACM Transactions on Graphics. 2019;38(6). doi:10.1145/3355089.3356565' apa: 'Karlík, O., Šik, M., Vévoda, P., Skrivan, T., & Křivánek, J. (2019). MIS compensation: Optimizing sampling techniques in multiple importance sampling. ACM Transactions on Graphics. ACM. https://doi.org/10.1145/3355089.3356565' chicago: 'Karlík, Ondřej, Martin Šik, Petr Vévoda, Tomas Skrivan, and Jaroslav Křivánek. “MIS Compensation: Optimizing Sampling Techniques in Multiple Importance Sampling.” ACM Transactions on Graphics. ACM, 2019. https://doi.org/10.1145/3355089.3356565.' ieee: 'O. Karlík, M. Šik, P. Vévoda, T. Skrivan, and J. Křivánek, “MIS compensation: Optimizing sampling techniques in multiple importance sampling,” ACM Transactions on Graphics, vol. 38, no. 6. ACM, 2019.' ista: 'Karlík O, Šik M, Vévoda P, Skrivan T, Křivánek J. 2019. MIS compensation: Optimizing sampling techniques in multiple importance sampling. ACM Transactions on Graphics. 38(6), 151.' mla: 'Karlík, Ondřej, et al. “MIS Compensation: Optimizing Sampling Techniques in Multiple Importance Sampling.” ACM Transactions on Graphics, vol. 38, no. 6, 151, ACM, 2019, doi:10.1145/3355089.3356565.' short: O. Karlík, M. Šik, P. Vévoda, T. Skrivan, J. Křivánek, ACM Transactions on Graphics 38 (2019). 
date_created: 2020-01-30T10:19:43Z date_published: 2019-11-01T00:00:00Z date_updated: 2023-09-06T15:22:23Z day: '01' department: - _id: ChWo doi: 10.1145/3355089.3356565 external_id: isi: - '000498397300001' intvolume: ' 38' isi: 1 issue: '6' language: - iso: eng month: '11' oa_version: None publication: ACM Transactions on Graphics publication_identifier: eissn: - 1557-7368 issn: - 0730-0301 publication_status: published publisher: ACM quality_controlled: '1' scopus_import: '1' status: public title: 'MIS compensation: Optimizing sampling techniques in multiple importance sampling' type: journal_article user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1 volume: 38 year: '2019' ... --- _id: '7413' abstract: - lang: eng text: We consider Bose gases consisting of N particles trapped in a box with volume one and interacting through a repulsive potential with scattering length of order N−1 (Gross–Pitaevskii regime). We determine the ground state energy and the low-energy excitation spectrum, up to errors vanishing as N→∞. Our results confirm Bogoliubov’s predictions. article_processing_charge: No article_type: original author: - first_name: Chiara full_name: Boccato, Chiara id: 342E7E22-F248-11E8-B48F-1D18A9856A87 last_name: Boccato - first_name: Christian full_name: Brennecke, Christian last_name: Brennecke - first_name: Serena full_name: Cenatiempo, Serena last_name: Cenatiempo - first_name: Benjamin full_name: Schlein, Benjamin last_name: Schlein citation: ama: Boccato C, Brennecke C, Cenatiempo S, Schlein B. Bogoliubov theory in the Gross–Pitaevskii limit. Acta Mathematica. 2019;222(2):219-335. doi:10.4310/acta.2019.v222.n2.a1 apa: Boccato, C., Brennecke, C., Cenatiempo, S., & Schlein, B. (2019). Bogoliubov theory in the Gross–Pitaevskii limit. Acta Mathematica. International Press of Boston. https://doi.org/10.4310/acta.2019.v222.n2.a1 chicago: Boccato, Chiara, Christian Brennecke, Serena Cenatiempo, and Benjamin Schlein. 
“Bogoliubov Theory in the Gross–Pitaevskii Limit.” Acta Mathematica. International Press of Boston, 2019. https://doi.org/10.4310/acta.2019.v222.n2.a1. ieee: C. Boccato, C. Brennecke, S. Cenatiempo, and B. Schlein, “Bogoliubov theory in the Gross–Pitaevskii limit,” Acta Mathematica, vol. 222, no. 2. International Press of Boston, pp. 219–335, 2019. ista: Boccato C, Brennecke C, Cenatiempo S, Schlein B. 2019. Bogoliubov theory in the Gross–Pitaevskii limit. Acta Mathematica. 222(2), 219–335. mla: Boccato, Chiara, et al. “Bogoliubov Theory in the Gross–Pitaevskii Limit.” Acta Mathematica, vol. 222, no. 2, International Press of Boston, 2019, pp. 219–335, doi:10.4310/acta.2019.v222.n2.a1. short: C. Boccato, C. Brennecke, S. Cenatiempo, B. Schlein, Acta Mathematica 222 (2019) 219–335. date_created: 2020-01-30T09:30:41Z date_published: 2019-06-07T00:00:00Z date_updated: 2023-09-06T15:24:31Z day: '07' department: - _id: RoSe doi: 10.4310/acta.2019.v222.n2.a1 external_id: arxiv: - '1801.01389' isi: - '000495865300001' intvolume: ' 222' isi: 1 issue: '2' language: - iso: eng main_file_link: - open_access: '1' url: https://arxiv.org/abs/1801.01389 month: '06' oa: 1 oa_version: Preprint page: 219-335 publication: Acta Mathematica publication_identifier: eissn: - 1871-2509 issn: - 0001-5962 publication_status: published publisher: International Press of Boston quality_controlled: '1' scopus_import: '1' status: public title: Bogoliubov theory in the Gross–Pitaevskii limit type: journal_article user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1 volume: 222 year: '2019' ...