---
_id: '8679'
abstract:
- lang: eng
text: A central goal of artificial intelligence in high-stakes decision-making applications
is to design a single algorithm that simultaneously expresses generalizability
by learning coherent representations of its world and interpretable explanations
of its dynamics. Here, we combine brain-inspired neural computation principles
and scalable deep learning architectures to design compact neural controllers
for task-specific compartments of a full-stack autonomous vehicle control system.
We discover that a single algorithm with 19 control neurons, connecting 32 encapsulated
input features to outputs by 253 synapses, learns to map high-dimensional inputs
into steering commands. This system shows superior generalizability, interpretability
and robustness compared with orders-of-magnitude larger black-box learning systems.
The obtained neural agents enable high-fidelity autonomy for task-specific parts
of a complex autonomous system.
article_processing_charge: No
article_type: original
author:
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Ramin
full_name: Hasani, Ramin
last_name: Hasani
- first_name: Alexander
full_name: Amini, Alexander
last_name: Amini
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
- first_name: Daniela
full_name: Rus, Daniela
last_name: Rus
- first_name: Radu
full_name: Grosu, Radu
last_name: Grosu
citation:
ama: Lechner M, Hasani R, Amini A, Henzinger TA, Rus D, Grosu R. Neural circuit
policies enabling auditable autonomy. Nature Machine Intelligence. 2020;2:642-652.
doi:10.1038/s42256-020-00237-3
apa: Lechner, M., Hasani, R., Amini, A., Henzinger, T. A., & Grosu,
R. (2020). Neural circuit policies enabling auditable autonomy. Nature Machine
Intelligence. Springer Nature. https://doi.org/10.1038/s42256-020-00237-3
chicago: Lechner, Mathias, Ramin Hasani, Alexander Amini, Thomas A Henzinger, Daniela
Rus, and Radu Grosu. “Neural Circuit Policies Enabling Auditable Autonomy.” Nature
Machine Intelligence. Springer Nature, 2020. https://doi.org/10.1038/s42256-020-00237-3.
ieee: M. Lechner, R. Hasani, A. Amini, T. A. Henzinger, D. Rus, and R. Grosu, “Neural
circuit policies enabling auditable autonomy,” Nature Machine Intelligence,
vol. 2. Springer Nature, pp. 642–652, 2020.
ista: Lechner M, Hasani R, Amini A, Henzinger TA, Rus D, Grosu R. 2020. Neural circuit
policies enabling auditable autonomy. Nature Machine Intelligence. 2, 642–652.
mla: Lechner, Mathias, et al. “Neural Circuit Policies Enabling Auditable Autonomy.”
Nature Machine Intelligence, vol. 2, Springer Nature, 2020, pp. 642–52,
doi:10.1038/s42256-020-00237-3.
short: M. Lechner, R. Hasani, A. Amini, T.A. Henzinger, D. Rus, R. Grosu, Nature
Machine Intelligence 2 (2020) 642–652.
date_created: 2020-10-19T13:46:06Z
date_published: 2020-10-01T00:00:00Z
date_updated: 2023-08-22T10:36:06Z
day: '01'
department:
- _id: ToHe
doi: 10.1038/s42256-020-00237-3
external_id:
isi:
- '000583337200011'
intvolume: ' 2'
isi: 1
language:
- iso: eng
month: '10'
oa_version: None
page: 642-652
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
publication: Nature Machine Intelligence
publication_identifier:
eissn:
- 2522-5839
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
link:
- description: News on IST Homepage
relation: press_release
url: https://ist.ac.at/en/news/new-deep-learning-models/
scopus_import: '1'
status: public
title: Neural circuit policies enabling auditable autonomy
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 2
year: '2020'
...
---
_id: '8670'
abstract:
- lang: eng
text: The α–z Rényi relative entropies are a two-parameter family of Rényi relative
entropies that are quantum generalizations of the classical α-Rényi relative entropies.
In the work [Adv. Math. 365, 107053 (2020)], we decided the full range of (α,
z) for which the data processing inequality (DPI) is valid. In this paper, we
give algebraic conditions for the equality in DPI. For the full range of parameters
(α, z), we give necessary conditions and sufficient conditions. For most parameters,
we give equivalent conditions. This generalizes and strengthens the results of
Leditzky et al. [Lett. Math. Phys. 107, 61–80 (2017)].
acknowledgement: This research was supported by the European Union’s Horizon 2020
research and innovation program under the Marie Skłodowska-Curie Grant Agreement
No. 754411. The author would like to thank Anna Vershynina and Sarah Chehade for
their helpful comments.
article_number: '102201'
article_processing_charge: No
article_type: original
author:
- first_name: Haonan
full_name: Zhang, Haonan
id: D8F41E38-9E66-11E9-A9E2-65C2E5697425
last_name: Zhang
citation:
ama: Zhang H. Equality conditions of data processing inequality for α-z Rényi relative
entropies. Journal of Mathematical Physics. 2020;61(10). doi:10.1063/5.0022787
apa: Zhang, H. (2020). Equality conditions of data processing inequality for α-z
Rényi relative entropies. Journal of Mathematical Physics. AIP Publishing.
https://doi.org/10.1063/5.0022787
chicago: Zhang, Haonan. “Equality Conditions of Data Processing Inequality for α-z
Rényi Relative Entropies.” Journal of Mathematical Physics. AIP Publishing,
2020. https://doi.org/10.1063/5.0022787.
ieee: H. Zhang, “Equality conditions of data processing inequality for α-z Rényi
relative entropies,” Journal of Mathematical Physics, vol. 61, no. 10.
AIP Publishing, 2020.
ista: Zhang H. 2020. Equality conditions of data processing inequality for α-z Rényi
relative entropies. Journal of Mathematical Physics. 61(10), 102201.
mla: Zhang, Haonan. “Equality Conditions of Data Processing Inequality for α-z Rényi
Relative Entropies.” Journal of Mathematical Physics, vol. 61, no. 10,
102201, AIP Publishing, 2020, doi:10.1063/5.0022787.
short: H. Zhang, Journal of Mathematical Physics 61 (2020).
date_created: 2020-10-18T22:01:36Z
date_published: 2020-10-01T00:00:00Z
date_updated: 2023-08-22T10:32:29Z
day: '01'
department:
- _id: JaMa
doi: 10.1063/5.0022787
ec_funded: 1
external_id:
arxiv:
- '2007.06644'
isi:
- '000578529200001'
intvolume: ' 61'
isi: 1
issue: '10'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2007.06644
month: '10'
oa: 1
oa_version: Preprint
project:
- _id: 260C2330-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '754411'
name: ISTplus - Postdoctoral Fellowships
publication: Journal of Mathematical Physics
publication_identifier:
issn:
- '00222488'
publication_status: published
publisher: AIP Publishing
quality_controlled: '1'
scopus_import: '1'
status: public
title: Equality conditions of data processing inequality for α-z Rényi relative entropies
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 61
year: '2020'
...
---
_id: '8698'
abstract:
- lang: eng
text: The brain represents and reasons probabilistically about complex stimuli and
motor actions using a noisy, spike-based neural code. A key building block for
such neural computations, as well as the basis for supervised and unsupervised
learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional
neural activity patterns. Despite progress in statistical modeling of neural responses
and deep learning, current approaches either do not scale to large neural populations
or cannot be implemented using biologically realistic mechanisms. Inspired by
the sparse and random connectivity of real neuronal circuits, we present a model
for neural codes that accurately estimates the likelihood of individual spiking
patterns and has a straightforward, scalable, efficient, learnable, and realistic
neural implementation. This model’s performance on simultaneously recorded spiking
activity of >100 neurons in the monkey visual and prefrontal cortices is comparable
with or better than that of state-of-the-art models. Importantly, the model can
be learned using a small number of samples and using a local learning rule that
utilizes noise intrinsic to neural circuits. Slower, structural changes in random
connectivity, consistent with rewiring and pruning processes, further improve
the efficiency and sparseness of the resulting neural representations. Our results
merge insights from neuroanatomy, machine learning, and theoretical neuroscience
to suggest random sparse connectivity as a key design principle for neuronal computation.
acknowledgement: We thank Udi Karpas, Roy Harpaz, Tal Tamir, Adam Haber, and Amir
Bar for discussions and suggestions; and especially Oren Forkosh and Walter Senn
for invaluable discussions of the learning rule. This work was supported by European
Research Council Grant 311238 (to E.S.) and Israel Science Foundation Grant 1629/12
(to E.S.); as well as research support from Martin Kushner Schnur and Mr. and Mrs.
Lawrence Feis (E.S.); National Institute of Mental Health Grant R01MH109180 (to
R.K.); a Pew Scholarship in Biomedical Sciences (to R.K.); Simons Collaboration
on the Global Brain Grant 542997 (to R.K. and E.S.); and a CRCNS (Collaborative
Research in Computational Neuroscience) grant (to R.K. and E.S.).
article_processing_charge: No
article_type: original
author:
- first_name: Ori
full_name: Maoz, Ori
last_name: Maoz
- first_name: Gašper
full_name: Tkačik, Gašper
id: 3D494DCA-F248-11E8-B48F-1D18A9856A87
last_name: Tkačik
orcid: 0000-0002-6699-1455
- first_name: Mohamad Saleh
full_name: Esteki, Mohamad Saleh
last_name: Esteki
- first_name: Roozbeh
full_name: Kiani, Roozbeh
last_name: Kiani
- first_name: Elad
full_name: Schneidman, Elad
last_name: Schneidman
citation:
ama: Maoz O, Tkačik G, Esteki MS, Kiani R, Schneidman E. Learning probabilistic
neural representations with randomly connected circuits. Proceedings of the
National Academy of Sciences of the United States of America. 2020;117(40):25066-25073.
doi:10.1073/pnas.1912804117
apa: Maoz, O., Tkačik, G., Esteki, M. S., Kiani, R., & Schneidman, E. (2020).
Learning probabilistic neural representations with randomly connected circuits.
Proceedings of the National Academy of Sciences of the United States of America.
National Academy of Sciences. https://doi.org/10.1073/pnas.1912804117
chicago: Maoz, Ori, Gašper Tkačik, Mohamad Saleh Esteki, Roozbeh Kiani, and Elad
Schneidman. “Learning Probabilistic Neural Representations with Randomly Connected
Circuits.” Proceedings of the National Academy of Sciences of the United States
of America. National Academy of Sciences, 2020. https://doi.org/10.1073/pnas.1912804117.
ieee: O. Maoz, G. Tkačik, M. S. Esteki, R. Kiani, and E. Schneidman, “Learning probabilistic
neural representations with randomly connected circuits,” Proceedings of the
National Academy of Sciences of the United States of America, vol. 117, no.
40. National Academy of Sciences, pp. 25066–25073, 2020.
ista: Maoz O, Tkačik G, Esteki MS, Kiani R, Schneidman E. 2020. Learning probabilistic
neural representations with randomly connected circuits. Proceedings of the National
Academy of Sciences of the United States of America. 117(40), 25066–25073.
mla: Maoz, Ori, et al. “Learning Probabilistic Neural Representations with Randomly
Connected Circuits.” Proceedings of the National Academy of Sciences of the
United States of America, vol. 117, no. 40, National Academy of Sciences,
2020, pp. 25066–73, doi:10.1073/pnas.1912804117.
short: O. Maoz, G. Tkačik, M.S. Esteki, R. Kiani, E. Schneidman, Proceedings of
the National Academy of Sciences of the United States of America 117 (2020) 25066–25073.
date_created: 2020-10-25T23:01:16Z
date_published: 2020-10-06T00:00:00Z
date_updated: 2023-08-22T12:11:23Z
day: '06'
ddc:
- '570'
department:
- _id: GaTk
doi: 10.1073/pnas.1912804117
external_id:
isi:
- '000579045200012'
pmid:
- '32948691'
file:
- access_level: open_access
checksum: c6a24fdecf3f28faf447078e7a274a88
content_type: application/pdf
creator: cziletti
date_created: 2020-10-27T14:57:50Z
date_updated: 2020-10-27T14:57:50Z
file_id: '8713'
file_name: 2020_PNAS_Maoz.pdf
file_size: 1755359
relation: main_file
success: 1
file_date_updated: 2020-10-27T14:57:50Z
has_accepted_license: '1'
intvolume: ' 117'
isi: 1
issue: '40'
language:
- iso: eng
license: https://creativecommons.org/licenses/by-nc-nd/4.0/
month: '10'
oa: 1
oa_version: Published Version
page: 25066-25073
pmid: 1
publication: Proceedings of the National Academy of Sciences of the United States
of America
publication_identifier:
eissn:
- '10916490'
issn:
- '00278424'
publication_status: published
publisher: National Academy of Sciences
quality_controlled: '1'
scopus_import: '1'
status: public
title: Learning probabilistic neural representations with randomly connected circuits
tmp:
image: /images/cc_by_nc_nd.png
legal_code_url: https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode
name: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International
(CC BY-NC-ND 4.0)
short: CC BY-NC-ND (4.0)
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 117
year: '2020'
...
---
_id: '8704'
abstract:
- lang: eng
text: Traditional robotic control suites require profound task-specific knowledge
for designing, building and testing control software. The rise of Deep Learning
has enabled end-to-end solutions to be learned entirely from data, requiring minimal
knowledge about the application area. We design a learning scheme to train end-to-end
linear dynamical systems (LDSs) by gradient descent in imitation learning robotic
domains. We introduce a new regularization loss component together with a learning
algorithm that improves the stability of the learned autonomous system, by forcing
the eigenvalues of the internal state updates of an LDS to be negative reals.
We evaluate our approach on a series of real-life and simulated robotic experiments,
in comparison to linear and nonlinear Recurrent Neural Network (RNN) architectures.
Our results show that our stabilizing method significantly improves test performance
of LDSs, enabling such linear models to match the performance of contemporary nonlinear
RNN architectures. A video of the obstacle avoidance performance of our method
on a mobile robot, in unseen environments, compared to other methods can be viewed
at https://youtu.be/mhEsCoNao5E.
acknowledgement: M.L. is supported in parts by the Austrian Science Fund (FWF) under
grant Z211-N23 (Wittgenstein Award). R.H. and R.G. are partially supported by the
Horizon-2020 ECSEL Project grant No. 783163 (iDev40), and the Austrian Research Promotion
Agency (FFG), Project No. 860424. R.H. and D.R. are partially supported by the Boeing
Company.
alternative_title:
- ICRA
article_processing_charge: No
author:
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Ramin
full_name: Hasani, Ramin
last_name: Hasani
- first_name: Daniela
full_name: Rus, Daniela
last_name: Rus
- first_name: Radu
full_name: Grosu, Radu
last_name: Grosu
citation:
ama: 'Lechner M, Hasani R, Rus D, Grosu R. Gershgorin loss stabilizes the recurrent
neural network compartment of an end-to-end robot learning scheme. In: Proceedings
- IEEE International Conference on Robotics and Automation. IEEE; 2020:5446-5452.
doi:10.1109/ICRA40945.2020.9196608'
apa: 'Lechner, M., Hasani, R., Rus, D., & Grosu, R. (2020). Gershgorin loss
stabilizes the recurrent neural network compartment of an end-to-end robot learning
scheme. In Proceedings - IEEE International Conference on Robotics and Automation
(pp. 5446–5452). Paris, France: IEEE. https://doi.org/10.1109/ICRA40945.2020.9196608'
chicago: Lechner, Mathias, Ramin Hasani, Daniela Rus, and Radu Grosu. “Gershgorin
Loss Stabilizes the Recurrent Neural Network Compartment of an End-to-End Robot
Learning Scheme.” In Proceedings - IEEE International Conference on Robotics
and Automation, 5446–52. IEEE, 2020. https://doi.org/10.1109/ICRA40945.2020.9196608.
ieee: M. Lechner, R. Hasani, D. Rus, and R. Grosu, “Gershgorin loss stabilizes the
recurrent neural network compartment of an end-to-end robot learning scheme,”
in Proceedings - IEEE International Conference on Robotics and Automation,
Paris, France, 2020, pp. 5446–5452.
ista: 'Lechner M, Hasani R, Rus D, Grosu R. 2020. Gershgorin loss stabilizes the
recurrent neural network compartment of an end-to-end robot learning scheme. Proceedings
- IEEE International Conference on Robotics and Automation. ICRA: International
Conference on Robotics and Automation, ICRA, 5446–5452.'
mla: Lechner, Mathias, et al. “Gershgorin Loss Stabilizes the Recurrent Neural Network
Compartment of an End-to-End Robot Learning Scheme.” Proceedings - IEEE International
Conference on Robotics and Automation, IEEE, 2020, pp. 5446–52, doi:10.1109/ICRA40945.2020.9196608.
short: M. Lechner, R. Hasani, D. Rus, R. Grosu, in:, Proceedings - IEEE International
Conference on Robotics and Automation, IEEE, 2020, pp. 5446–5452.
conference:
end_date: 2020-08-31
location: Paris, France
name: 'ICRA: International Conference on Robotics and Automation'
start_date: 2020-05-31
date_created: 2020-10-25T23:01:19Z
date_published: 2020-05-01T00:00:00Z
date_updated: 2023-08-22T10:40:15Z
day: '01'
ddc:
- '000'
department:
- _id: ToHe
doi: 10.1109/ICRA40945.2020.9196608
external_id:
isi:
- '000712319503110'
file:
- access_level: open_access
checksum: fccf7b986ac78046918a298cc6849a50
content_type: application/pdf
creator: dernst
date_created: 2020-11-06T10:58:49Z
date_updated: 2020-11-06T10:58:49Z
file_id: '8733'
file_name: 2020_ICRA_Lechner.pdf
file_size: 1070010
relation: main_file
success: 1
file_date_updated: 2020-11-06T10:58:49Z
has_accepted_license: '1'
isi: 1
language:
- iso: eng
month: '05'
oa: 1
oa_version: Submitted Version
page: 5446-5452
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
publication: Proceedings - IEEE International Conference on Robotics and Automation
publication_identifier:
isbn:
- '9781728173955'
issn:
- '10504729'
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: Gershgorin loss stabilizes the recurrent neural network compartment of an end-to-end
robot learning scheme
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
year: '2020'
...
---
_id: '8700'
abstract:
- lang: eng
text: Translation termination is the final step of protein biosynthesis. A significant
role in this process belongs not only to the protein factors of translation termination
but also to the immediate nucleotide environment of stop codons. Numerous cases of
stop codon readthrough caused by specific downstream nucleotide sequences have been
described. However, the available data are fragmentary and do not explain the mechanism
by which the nucleotide context influences translation termination. It is well known
that the stop codon UAA is used preferentially in A/T-rich genes, and UAG and UGA
in G/C-rich genes, which is related to the expression level of these genes. We investigated
the connection between the frequency of nucleotide occurrence in the 3' region of
stop codons in the human genome and its influence on translation termination efficiency.
We found that a 3' context motif cognate to the stop codon sequence stimulates translation
termination, whereas a 3' sequence whose nucleotide composition differs from that
of the stop codon decreases translation termination efficiency.
acknowledgement: We would like to thank the staff of CCU Genome for sequencing, Tat’yana
Pestova, Christopher Hellen, and Lyudmila Yur’evna Frolova for the plasmids provided,
as well as the laboratory staff for productive discussion of the results. We also
thank former laboratory employees Yuliya Vladimirovna Bocharova and Polina Nikolaevna
Kryuchkova for the exceptional contribution to the present work.
article_processing_charge: No
article_type: original
author:
- first_name: E. E.
full_name: Sokolova, E. E.
last_name: Sokolova
- first_name: Petr
full_name: Vlasov, Petr
id: 38BB9AC4-F248-11E8-B48F-1D18A9856A87
last_name: Vlasov
- first_name: T. V.
full_name: Egorova, T. V.
last_name: Egorova
- first_name: A. V.
full_name: Shuvalov, A. V.
last_name: Shuvalov
- first_name: E. Z.
full_name: Alkalaeva, E. Z.
last_name: Alkalaeva
citation:
ama: Sokolova EE, Vlasov P, Egorova TV, Shuvalov AV, Alkalaeva EZ. The influence
of A/G composition of 3’ stop codon contexts on translation termination efficiency
in eukaryotes. Molecular Biology. 2020;54(5):739-748. doi:10.1134/S0026893320050088
apa: Sokolova, E. E., Vlasov, P., Egorova, T. V., Shuvalov, A. V., & Alkalaeva,
E. Z. (2020). The influence of A/G composition of 3’ stop codon contexts on translation
termination efficiency in eukaryotes. Molecular Biology. Springer Nature.
https://doi.org/10.1134/S0026893320050088
chicago: Sokolova, E. E., Petr Vlasov, T. V. Egorova, A. V. Shuvalov, and E. Z.
Alkalaeva. “The Influence of A/G Composition of 3’ Stop Codon Contexts on Translation
Termination Efficiency in Eukaryotes.” Molecular Biology. Springer Nature,
2020. https://doi.org/10.1134/S0026893320050088.
ieee: E. E. Sokolova, P. Vlasov, T. V. Egorova, A. V. Shuvalov, and E. Z. Alkalaeva,
“The influence of A/G composition of 3’ stop codon contexts on translation termination
efficiency in eukaryotes,” Molecular Biology, vol. 54, no. 5. Springer
Nature, pp. 739–748, 2020.
ista: Sokolova EE, Vlasov P, Egorova TV, Shuvalov AV, Alkalaeva EZ. 2020. The influence
of A/G composition of 3’ stop codon contexts on translation termination efficiency
in eukaryotes. Molecular Biology. 54(5), 739–748.
mla: Sokolova, E. E., et al. “The Influence of A/G Composition of 3’ Stop Codon
Contexts on Translation Termination Efficiency in Eukaryotes.” Molecular Biology,
vol. 54, no. 5, Springer Nature, 2020, pp. 739–48, doi:10.1134/S0026893320050088.
short: E.E. Sokolova, P. Vlasov, T.V. Egorova, A.V. Shuvalov, E.Z. Alkalaeva, Molecular
Biology 54 (2020) 739–748.
date_created: 2020-10-25T23:01:17Z
date_published: 2020-09-01T00:00:00Z
date_updated: 2023-08-22T10:39:38Z
day: '01'
department:
- _id: FyKo
doi: 10.1134/S0026893320050088
external_id:
isi:
- '000579441200009'
intvolume: ' 54'
isi: 1
issue: '5'
language:
- iso: eng
month: '09'
oa_version: None
page: 739-748
publication: Molecular Biology
publication_identifier:
eissn:
- '16083245'
issn:
- '00268933'
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
record:
- id: '8701'
relation: original
status: public
scopus_import: '1'
status: public
title: The influence of A/G composition of 3' stop codon contexts on translation termination
efficiency in eukaryotes
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 54
year: '2020'
...