---
_id: '11366'
abstract:
- lang: eng
text: "Adversarial training (i.e., training on adversarially perturbed input data)
is a well-studied method for making neural networks robust to potential adversarial
attacks during inference. However, the improved robustness does not\r\ncome for
free but rather is accompanied by a decrease in overall model accuracy and performance.
Recent work has shown that, in practical robot learning applications, the effects
of adversarial training do not pose a fair trade-off\r\nbut inflict a net loss
when measured in holistic robot performance. This work revisits the robustness-accuracy
trade-off in robot learning by systematically analyzing if recent advances in
robust training methods and theory in\r\nconjunction with adversarial robot learning
can make adversarial training suitable for real-world robot applications. We evaluate
a wide variety of robot learning tasks ranging from autonomous driving in a high-fidelity
environment\r\namenable to sim-to-real deployment, to mobile robot gesture recognition.
Our results demonstrate that, while these techniques make incremental improvements
on the trade-off on a relative scale, the negative side-effects caused by\r\nadversarial
training still outweigh the improvements by an order of magnitude. We conclude
that more substantial advances in robust learning methods are necessary before
they can benefit robot learning tasks in practice."
acknowledgement: "This work was supported in parts by the ERC-2020-AdG 101020093,
National Science Foundation (NSF), and JP\r\nMorgan Graduate Fellowships. We thank
Christoph Lampert for inspiring this work.\r\n"
article_number: '2204.07373'
article_processing_charge: No
author:
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Alexander
full_name: Amini, Alexander
last_name: Amini
- first_name: Daniela
full_name: Rus, Daniela
last_name: Rus
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
citation:
ama: Lechner M, Amini A, Rus D, Henzinger TA. Revisiting the adversarial robustness-accuracy
tradeoff in robot learning. arXiv. doi:10.48550/arXiv.2204.07373
apa: Lechner, M., Amini, A., Rus, D., & Henzinger, T. A. (n.d.). Revisiting
the adversarial robustness-accuracy tradeoff in robot learning. arXiv.
https://doi.org/10.48550/arXiv.2204.07373
chicago: Lechner, Mathias, Alexander Amini, Daniela Rus, and Thomas A Henzinger.
“Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning.” ArXiv,
n.d. https://doi.org/10.48550/arXiv.2204.07373.
ieee: M. Lechner, A. Amini, D. Rus, and T. A. Henzinger, “Revisiting the adversarial
  robustness-accuracy tradeoff in robot learning,” arXiv.
ista: Lechner M, Amini A, Rus D, Henzinger TA. Revisiting the adversarial robustness-accuracy
tradeoff in robot learning. arXiv, 2204.07373.
mla: Lechner, Mathias, et al. “Revisiting the Adversarial Robustness-Accuracy Tradeoff
in Robot Learning.” ArXiv, 2204.07373, doi:10.48550/arXiv.2204.07373.
short: M. Lechner, A. Amini, D. Rus, T.A. Henzinger, ArXiv (n.d.).
date_created: 2022-05-12T13:20:17Z
date_published: 2022-04-15T00:00:00Z
date_updated: 2023-08-01T13:36:50Z
day: '15'
department:
- _id: ToHe
doi: 10.48550/arXiv.2204.07373
ec_funded: 1
external_id:
arxiv:
- '2204.07373'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2204.07373
month: '04'
oa: 1
oa_version: Preprint
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
call_identifier: H2020
grant_number: '101020093'
name: Vigilant Algorithmic Monitoring of Software
publication: arXiv
publication_status: submitted
related_material:
record:
- id: '11362'
relation: dissertation_contains
status: public
- id: '12704'
relation: later_version
status: public
status: public
title: Revisiting the adversarial robustness-accuracy tradeoff in robot learning
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '10891'
abstract:
- lang: eng
text: We present a formal framework for the online black-box monitoring of software
using monitors with quantitative verdict functions. Quantitative verdict functions
have several advantages. First, quantitative monitors can be approximate, i.e.,
the value of the verdict function does not need to correspond exactly to the value
of the property under observation. Second, quantitative monitors can be quantified
universally, i.e., for every possible observed behavior, the monitor tries to
make the best effort to estimate the value of the property under observation.
Third, quantitative monitors can watch boolean as well as quantitative properties,
such as average response time. Fourth, quantitative monitors can use non-finite-state
resources, such as counters. As a consequence, quantitative monitors can be compared
according to how many resources they use (e.g., the number of counters) and how
precisely they approximate the property under observation. This allows for a rich
spectrum of cost-precision trade-offs in monitoring software.
acknowledgement: The formal framework for quantitative monitoring which is presented
in this invited talk was defined jointly with N. Ege Saraç at LICS 2021. This work
was supported in part by the Wittgenstein Award Z211-N23 of the Austrian Science
Fund.
article_processing_charge: No
author:
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
citation:
ama: 'Henzinger TA. Quantitative monitoring of software. In: Software Verification.
Vol 13124. LNCS. Springer Nature; 2022:3-6. doi:10.1007/978-3-030-95561-8_1'
apa: 'Henzinger, T. A. (2022). Quantitative monitoring of software. In Software
Verification (Vol. 13124, pp. 3–6). New Haven, CT, United States: Springer
Nature. https://doi.org/10.1007/978-3-030-95561-8_1'
chicago: Henzinger, Thomas A. “Quantitative Monitoring of Software.” In Software
Verification, 13124:3–6. LNCS. Springer Nature, 2022. https://doi.org/10.1007/978-3-030-95561-8_1.
ieee: T. A. Henzinger, “Quantitative monitoring of software,” in Software Verification,
New Haven, CT, United States, 2022, vol. 13124, pp. 3–6.
ista: 'Henzinger TA. 2022. Quantitative monitoring of software. Software Verification.
  NSV: Numerical Software Verification, LNCS, vol. 13124, 3–6.'
mla: Henzinger, Thomas A. “Quantitative Monitoring of Software.” Software Verification,
vol. 13124, Springer Nature, 2022, pp. 3–6, doi:10.1007/978-3-030-95561-8_1.
short: T.A. Henzinger, in:, Software Verification, Springer Nature, 2022, pp. 3–6.
conference:
end_date: 2021-10-19
location: New Haven, CT, United States
name: 'NSV: Numerical Software Verification'
start_date: 2021-10-18
date_created: 2022-03-20T23:01:40Z
date_published: 2022-02-22T00:00:00Z
date_updated: 2023-08-03T06:11:55Z
day: '22'
department:
- _id: ToHe
doi: 10.1007/978-3-030-95561-8_1
external_id:
isi:
- '000771713200001'
intvolume: ' 13124'
isi: 1
language:
- iso: eng
month: '02'
oa_version: None
page: 3-6
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
publication: Software Verification
publication_identifier:
eissn:
- 1611-3349
isbn:
- '9783030955601'
issn:
- 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
series_title: LNCS
status: public
title: Quantitative monitoring of software
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 13124
year: '2022'
...
---
_id: '11355'
abstract:
- lang: eng
text: "Contract-based design is a promising methodology for taming the complexity
of developing sophisticated systems. A formal contract distinguishes between assumptions,
which are constraints that the designer of a component puts on the environments
in which the component can be used safely, and guarantees, which are promises
that the designer asks from the team that implements the component. A theory of
formal contracts can be formalized as an interface theory, which supports the
composition and refinement of both assumptions and guarantees.\r\nAlthough there
is a rich landscape of contract-based design methods that address functional and
extra-functional properties, we present the first interface theory that is designed
for ensuring system-wide security properties. Our framework provides a refinement
relation and a composition operation that support both incremental design and
independent implementability. We develop our theory for both stateless and stateful
interfaces. We illustrate the applicability of our framework with an example inspired
from the automotive domain."
acknowledgement: This project has received funding from the European Union’s Horizon
2020 research and innovation programme under grant agreement No 956123 and was funded
in part by the FWF project W1255-N23 and by the ERC-2020-AdG 101020093.
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Ezio
full_name: Bartocci, Ezio
last_name: Bartocci
- first_name: Thomas
full_name: Ferrere, Thomas
id: 40960E6E-F248-11E8-B48F-1D18A9856A87
last_name: Ferrere
orcid: 0000-0001-5199-3143
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
- first_name: Dejan
full_name: Nickovic, Dejan
id: 41BCEE5C-F248-11E8-B48F-1D18A9856A87
last_name: Nickovic
- first_name: Ana Oliveira
full_name: Da Costa, Ana Oliveira
last_name: Da Costa
citation:
ama: 'Bartocci E, Ferrere T, Henzinger TA, Nickovic D, Da Costa AO. Information-flow
interfaces. In: Fundamental Approaches to Software Engineering. Vol 13241.
Springer Nature; 2022:3-22. doi:10.1007/978-3-030-99429-7_1'
apa: 'Bartocci, E., Ferrere, T., Henzinger, T. A., Nickovic, D., & Da Costa,
A. O. (2022). Information-flow interfaces. In Fundamental Approaches to Software
Engineering (Vol. 13241, pp. 3–22). Munich, Germany: Springer Nature. https://doi.org/10.1007/978-3-030-99429-7_1'
chicago: Bartocci, Ezio, Thomas Ferrere, Thomas A Henzinger, Dejan Nickovic, and
Ana Oliveira Da Costa. “Information-Flow Interfaces.” In Fundamental Approaches
to Software Engineering, 13241:3–22. Springer Nature, 2022. https://doi.org/10.1007/978-3-030-99429-7_1.
ieee: E. Bartocci, T. Ferrere, T. A. Henzinger, D. Nickovic, and A. O. Da Costa,
“Information-flow interfaces,” in Fundamental Approaches to Software Engineering,
Munich, Germany, 2022, vol. 13241, pp. 3–22.
ista: 'Bartocci E, Ferrere T, Henzinger TA, Nickovic D, Da Costa AO. 2022. Information-flow
interfaces. Fundamental Approaches to Software Engineering. FASE: Fundamental
Approaches to Software Engineering, LNCS, vol. 13241, 3–22.'
mla: Bartocci, Ezio, et al. “Information-Flow Interfaces.” Fundamental Approaches
to Software Engineering, vol. 13241, Springer Nature, 2022, pp. 3–22, doi:10.1007/978-3-030-99429-7_1.
short: E. Bartocci, T. Ferrere, T.A. Henzinger, D. Nickovic, A.O. Da Costa, in:,
Fundamental Approaches to Software Engineering, Springer Nature, 2022, pp. 3–22.
conference:
end_date: 2022-04-07
location: Munich, Germany
name: 'FASE: Fundamental Approaches to Software Engineering'
start_date: 2022-04-02
date_created: 2022-05-08T22:01:44Z
date_published: 2022-03-29T00:00:00Z
date_updated: 2023-08-03T07:03:40Z
day: '29'
ddc:
- '000'
department:
- _id: ToHe
doi: 10.1007/978-3-030-99429-7_1
ec_funded: 1
external_id:
isi:
- '000782393600001'
file:
- access_level: open_access
checksum: 7f6f860b20b8de2a249e9c1b4eee15cf
content_type: application/pdf
creator: dernst
date_created: 2022-05-09T06:52:44Z
date_updated: 2022-05-09T06:52:44Z
file_id: '11357'
file_name: 2022_LNCS_Bartocci.pdf
file_size: 479146
relation: main_file
success: 1
file_date_updated: 2022-05-09T06:52:44Z
has_accepted_license: '1'
intvolume: ' 13241'
isi: 1
language:
- iso: eng
license: https://creativecommons.org/licenses/by/4.0/
month: '03'
oa: 1
oa_version: Published Version
page: 3-22
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
call_identifier: H2020
grant_number: '101020093'
name: Vigilant Algorithmic Monitoring of Software
publication: Fundamental Approaches to Software Engineering
publication_identifier:
eissn:
- 1611-3349
isbn:
- '9783030994280'
issn:
- 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Information-flow interfaces
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 13241
year: '2022'
...
---
_id: '11775'
abstract:
- lang: eng
text: 'Quantitative monitoring can be universal and approximate: For every finite
sequence of observations, the specification provides a value and the monitor outputs
a best-effort approximation of it. The quality of the approximation may depend
on the resources that are available to the monitor. By taking to the limit the
sequences of specification values and monitor outputs, we obtain precision-resource
trade-offs also for limit monitoring. This paper provides a formal framework for
studying such trade-offs using an abstract interpretation for monitors: For each
natural number n, the aggregate semantics of a monitor at time n is an equivalence
relation over all sequences of at most n observations so that two equivalent sequences
are indistinguishable to the monitor and thus mapped to the same output. This
abstract interpretation of quantitative monitors allows us to measure the number
of equivalence classes (or “resource use”) that is necessary for a certain precision
up to a certain time, or at any time. Our framework offers several insights. For
example, we identify a family of specifications for which any resource-optimal
exact limit monitor is independent of any error permitted over finite traces.
Moreover, we present a specification for which any resource-optimal approximate
limit monitor does not minimize its resource use at any time.'
acknowledgement: We thank the anonymous reviewers for their helpful comments. This
work was supported in part by the ERC-2020-AdG 101020093.
alternative_title:
- LNCS
article_processing_charge: Yes
author:
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
- first_name: Nicolas Adrien
full_name: Mazzocchi, Nicolas Adrien
id: b26baa86-3308-11ec-87b0-8990f34baa85
last_name: Mazzocchi
- first_name: Naci E
full_name: Sarac, Naci E
id: 8C6B42F8-C8E6-11E9-A03A-F2DCE5697425
last_name: Sarac
citation:
ama: 'Henzinger TA, Mazzocchi NA, Sarac NE. Abstract monitors for quantitative specifications.
In: 22nd International Conference on Runtime Verification. Vol 13498. Springer
Nature; 2022:200-220. doi:10.1007/978-3-031-17196-3_11'
apa: 'Henzinger, T. A., Mazzocchi, N. A., & Sarac, N. E. (2022). Abstract monitors
for quantitative specifications. In 22nd International Conference on Runtime
Verification (Vol. 13498, pp. 200–220). Tbilisi, Georgia: Springer Nature.
https://doi.org/10.1007/978-3-031-17196-3_11'
chicago: Henzinger, Thomas A, Nicolas Adrien Mazzocchi, and Naci E Sarac. “Abstract
Monitors for Quantitative Specifications.” In 22nd International Conference
on Runtime Verification, 13498:200–220. Springer Nature, 2022. https://doi.org/10.1007/978-3-031-17196-3_11.
ieee: T. A. Henzinger, N. A. Mazzocchi, and N. E. Sarac, “Abstract monitors for
quantitative specifications,” in 22nd International Conference on Runtime Verification,
Tbilisi, Georgia, 2022, vol. 13498, pp. 200–220.
ista: 'Henzinger TA, Mazzocchi NA, Sarac NE. 2022. Abstract monitors for quantitative
specifications. 22nd International Conference on Runtime Verification. RV: Runtime
Verification, LNCS, vol. 13498, 200–220.'
mla: Henzinger, Thomas A., et al. “Abstract Monitors for Quantitative Specifications.”
22nd International Conference on Runtime Verification, vol. 13498, Springer
Nature, 2022, pp. 200–20, doi:10.1007/978-3-031-17196-3_11.
short: T.A. Henzinger, N.A. Mazzocchi, N.E. Sarac, in:, 22nd International Conference
on Runtime Verification, Springer Nature, 2022, pp. 200–220.
conference:
end_date: 2022-09-30
location: Tbilisi, Georgia
name: 'RV: Runtime Verification'
start_date: 2022-09-28
date_created: 2022-08-08T17:09:09Z
date_published: 2022-09-23T00:00:00Z
date_updated: 2023-08-03T13:38:46Z
day: '23'
ddc:
- '000'
department:
- _id: GradSch
- _id: ToHe
doi: 10.1007/978-3-031-17196-3_11
ec_funded: 1
external_id:
isi:
- '000866539700011'
file:
- access_level: open_access
checksum: 05c7dcfbb9053a98f46441fb2eccb213
content_type: application/pdf
creator: dernst
date_created: 2023-01-20T07:34:50Z
date_updated: 2023-01-20T07:34:50Z
file_id: '12317'
file_name: 2022_LNCS_RV_Henzinger.pdf
file_size: 477110
relation: main_file
success: 1
file_date_updated: 2023-01-20T07:34:50Z
has_accepted_license: '1'
intvolume: ' 13498'
isi: 1
language:
- iso: eng
month: '09'
oa: 1
oa_version: Published Version
page: 200-220
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
call_identifier: H2020
grant_number: '101020093'
name: Vigilant Algorithmic Monitoring of Software
publication: 22nd International Conference on Runtime Verification
publication_identifier:
issn:
- 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Abstract monitors for quantitative specifications
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 13498
year: '2022'
...
---
_id: '12147'
abstract:
- lang: eng
text: Continuous-time neural networks are a class of machine learning systems that
can tackle representation learning on spatiotemporal decision-making tasks. These
models are typically represented by continuous differential equations. However,
their expressive power when they are deployed on computers is bottlenecked by
numerical differential equation solvers. This limitation has notably slowed down
the scaling and understanding of numerous natural physical phenomena such as the
dynamics of nervous systems. Ideally, we would circumvent this bottleneck by solving
the given dynamical system in closed form. This is known to be intractable in
general. Here, we show that it is possible to closely approximate the interaction
between neurons and synapses—the building blocks of natural and artificial neural
networks—constructed by liquid time-constant networks efficiently in closed form.
To this end, we compute a tightly bounded approximation of the solution of an
integral appearing in liquid time-constant dynamics that has had no known closed-form
solution so far. This closed-form solution impacts the design of continuous-time
and continuous-depth neural models. For instance, since time appears explicitly
in closed form, the formulation relaxes the need for complex numerical solvers.
Consequently, we obtain models that are between one and five orders of magnitude
faster in training and inference compared with differential equation-based counterparts.
More importantly, in contrast to ordinary differential equation-based continuous
networks, closed-form networks can scale remarkably well compared with other deep
learning instances. Lastly, as these models are derived from liquid networks,
they show good performance in time-series modelling compared with advanced recurrent
neural network models.
acknowledgement: This research was supported in part by the AI2050 program at Schmidt
Futures (grant G-22-63172), the Boeing Company, and the United States Air Force
Research Laboratory and the United States Air Force Artificial Intelligence Accelerator
and was accomplished under cooperative agreement number FA8750-19-2-1000. The views
and conclusions contained in this document are those of the authors and should not
be interpreted as representing the official policies, either expressed or implied,
of the United States Air Force or the U.S. Government. The U.S. Government is authorized
to reproduce and distribute reprints for Government purposes, notwithstanding any
copyright notation herein. This work was further supported by The Boeing Company
and Office of Naval Research grant N00014-18-1-2830. M.T. is supported by the Poul
Due Jensen Foundation, grant 883901. M.L. was supported in part by the Austrian
Science Fund under grant Z211-N23 (Wittgenstein Award). A.A. was supported by the
National Science Foundation Graduate Research Fellowship Program. We thank T.-H.
Wang, P. Kao, M. Chahine, W. Xiao, X. Li, L. Yin and Y. Ben for useful suggestions
and for testing of CfC models to confirm the results across other domains.
article_processing_charge: No
article_type: original
author:
- first_name: Ramin
full_name: Hasani, Ramin
last_name: Hasani
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Alexander
full_name: Amini, Alexander
last_name: Amini
- first_name: Lucas
full_name: Liebenwein, Lucas
last_name: Liebenwein
- first_name: Aaron
full_name: Ray, Aaron
last_name: Ray
- first_name: Max
full_name: Tschaikowski, Max
last_name: Tschaikowski
- first_name: Gerald
full_name: Teschl, Gerald
last_name: Teschl
- first_name: Daniela
full_name: Rus, Daniela
last_name: Rus
citation:
ama: Hasani R, Lechner M, Amini A, et al. Closed-form continuous-time neural networks.
Nature Machine Intelligence. 2022;4(11):992-1003. doi:10.1038/s42256-022-00556-7
apa: Hasani, R., Lechner, M., Amini, A., Liebenwein, L., Ray, A., Tschaikowski,
M., … Rus, D. (2022). Closed-form continuous-time neural networks. Nature Machine
Intelligence. Springer Nature. https://doi.org/10.1038/s42256-022-00556-7
chicago: Hasani, Ramin, Mathias Lechner, Alexander Amini, Lucas Liebenwein, Aaron
Ray, Max Tschaikowski, Gerald Teschl, and Daniela Rus. “Closed-Form Continuous-Time
Neural Networks.” Nature Machine Intelligence. Springer Nature, 2022. https://doi.org/10.1038/s42256-022-00556-7.
ieee: R. Hasani et al., “Closed-form continuous-time neural networks,” Nature
Machine Intelligence, vol. 4, no. 11. Springer Nature, pp. 992–1003, 2022.
ista: Hasani R, Lechner M, Amini A, Liebenwein L, Ray A, Tschaikowski M, Teschl
G, Rus D. 2022. Closed-form continuous-time neural networks. Nature Machine Intelligence.
4(11), 992–1003.
mla: Hasani, Ramin, et al. “Closed-Form Continuous-Time Neural Networks.” Nature
Machine Intelligence, vol. 4, no. 11, Springer Nature, 2022, pp. 992–1003,
doi:10.1038/s42256-022-00556-7.
short: R. Hasani, M. Lechner, A. Amini, L. Liebenwein, A. Ray, M. Tschaikowski,
G. Teschl, D. Rus, Nature Machine Intelligence 4 (2022) 992–1003.
date_created: 2023-01-12T12:07:21Z
date_published: 2022-11-15T00:00:00Z
date_updated: 2023-08-04T09:00:10Z
day: '15'
ddc:
- '000'
department:
- _id: ToHe
doi: 10.1038/s42256-022-00556-7
external_id:
arxiv:
- '2106.13898'
isi:
- '000884215600003'
file:
- access_level: open_access
checksum: b4789122ce04bfb4ac042390f59aaa8b
content_type: application/pdf
creator: dernst
date_created: 2023-01-24T09:49:44Z
date_updated: 2023-01-24T09:49:44Z
file_id: '12355'
file_name: 2022_NatureMachineIntelligence_Hasani.pdf
file_size: 3259553
relation: main_file
success: 1
file_date_updated: 2023-01-24T09:49:44Z
has_accepted_license: '1'
intvolume: ' 4'
isi: 1
issue: '11'
keyword:
- Artificial Intelligence
- Computer Networks and Communications
- Computer Vision and Pattern Recognition
- Human-Computer Interaction
- Software
language:
- iso: eng
month: '11'
oa: 1
oa_version: Published Version
page: 992-1003
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
publication: Nature Machine Intelligence
publication_identifier:
issn:
- 2522-5839
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
link:
- relation: erratum
url: https://doi.org/10.1038/s42256-022-00597-y
scopus_import: '1'
status: public
title: Closed-form continuous-time neural networks
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 4
year: '2022'
...
---
_id: '11362'
abstract:
- lang: eng
text: "Deep learning has enabled breakthroughs in challenging computing problems
and has emerged as the standard problem-solving tool for computer vision and natural
language processing tasks.\r\nOne exception to this trend is safety-critical tasks
where robustness and resilience requirements contradict the black-box nature of
neural networks. \r\nTo deploy deep learning methods for these tasks, it is vital
to provide guarantees on neural network agents' safety and robustness criteria.
\r\nThis can be achieved by developing formal verification methods to verify the
safety and robustness properties of neural networks.\r\n\r\nOur goal is to design,
develop and assess safety verification methods for neural networks to improve
their reliability and trustworthiness in real-world applications.\r\nThis thesis
establishes techniques for the verification of compressed and adversarially trained
models as well as the design of novel neural networks for verifiably safe decision-making.\r\n\r\nFirst,
we establish the problem of verifying quantized neural networks. Quantization
is a technique that trades numerical precision for the computational efficiency
of running a neural network and is widely adopted in industry.\r\nWe show that
neglecting the reduced precision when verifying a neural network can lead to wrong
conclusions about the robustness and safety of the network, highlighting that
novel techniques for quantized network verification are necessary. We introduce
several bit-exact verification methods explicitly designed for quantized neural
networks and experimentally confirm on realistic networks that the network's robustness
and other formal properties are affected by the quantization.\r\n\r\nFurthermore,
we perform a case study providing evidence that adversarial training, a standard
technique for making neural networks more robust, has detrimental effects on the
network's performance. This robustness-accuracy tradeoff has been studied before
regarding the accuracy obtained on classification datasets where each data point
is independent of all other data points. On the other hand, we investigate the
tradeoff empirically in robot learning settings where both high accuracy
and high robustness are desirable.\r\nOur results suggest that the negative
side-effects of adversarial training outweigh its robustness benefits in practice.\r\n\r\nFinally,
we consider the problem of verifying safety when running a Bayesian neural network
policy in a feedback loop with systems over the infinite time horizon. Bayesian
neural networks are probabilistic models for learning uncertainties in the data
and are therefore often used in robotic and healthcare applications where data
is inherently stochastic.\r\nWe introduce a method for recalibrating Bayesian
neural networks so that they yield probability distributions over safe decisions
only.\r\nOur method learns a safety certificate that guarantees safety over the
infinite time horizon to determine which decisions are safe in every possible
state of the system.\r\nWe demonstrate the effectiveness of our approach on a
series of reinforcement learning benchmarks."
alternative_title:
- ISTA Thesis
article_processing_charge: No
author:
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
citation:
ama: Lechner M. Learning verifiable representations. 2022. doi:10.15479/at:ista:11362
apa: Lechner, M. (2022). Learning verifiable representations. Institute of
Science and Technology Austria. https://doi.org/10.15479/at:ista:11362
chicago: Lechner, Mathias. “Learning Verifiable Representations.” Institute of Science
and Technology Austria, 2022. https://doi.org/10.15479/at:ista:11362.
ieee: M. Lechner, “Learning verifiable representations,” Institute of Science and
Technology Austria, 2022.
ista: Lechner M. 2022. Learning verifiable representations. Institute of Science
and Technology Austria.
mla: Lechner, Mathias. Learning Verifiable Representations. Institute of
Science and Technology Austria, 2022, doi:10.15479/at:ista:11362.
short: M. Lechner, Learning Verifiable Representations, Institute of Science and
Technology Austria, 2022.
date_created: 2022-05-12T07:14:01Z
date_published: 2022-05-12T00:00:00Z
date_updated: 2023-08-17T06:58:38Z
day: '12'
ddc:
- '004'
degree_awarded: PhD
department:
- _id: GradSch
- _id: ToHe
doi: 10.15479/at:ista:11362
ec_funded: 1
file:
- access_level: closed
checksum: 8eefa9c7c10ca7e1a2ccdd731962a645
content_type: application/zip
creator: mlechner
date_created: 2022-05-13T12:33:26Z
date_updated: 2022-05-13T12:49:00Z
file_id: '11378'
file_name: src.zip
file_size: 13210143
relation: source_file
- access_level: open_access
checksum: 1b9e1e5a9a83ed9d89dad2f5133dc026
content_type: application/pdf
creator: mlechner
date_created: 2022-05-16T08:02:28Z
date_updated: 2022-05-17T15:19:39Z
file_id: '11382'
file_name: thesis_main-a2.pdf
file_size: 2732536
relation: main_file
file_date_updated: 2022-05-17T15:19:39Z
has_accepted_license: '1'
keyword:
- neural networks
- verification
- machine learning
language:
- iso: eng
license: https://creativecommons.org/licenses/by-nd/4.0/
month: '05'
oa: 1
oa_version: Published Version
page: '124'
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
call_identifier: H2020
grant_number: '101020093'
name: Vigilant Algorithmic Monitoring of Software
publication_identifier:
isbn:
- 978-3-99078-017-6
publication_status: published
publisher: Institute of Science and Technology Austria
related_material:
record:
- id: '10665'
relation: part_of_dissertation
status: public
- id: '10667'
relation: part_of_dissertation
status: public
- id: '11366'
relation: part_of_dissertation
status: public
- id: '7808'
relation: part_of_dissertation
status: public
- id: '10666'
relation: part_of_dissertation
status: public
status: public
supervisor:
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
title: Learning verifiable representations
tmp:
image: /image/cc_by_nd.png
legal_code_url: https://creativecommons.org/licenses/by-nd/4.0/legalcode
name: Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0)
short: CC BY-ND (4.0)
type: dissertation
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
year: '2022'
...
---
_id: '12302'
abstract:
- lang: eng
text: 'We propose a novel algorithm to decide the language inclusion between (nondeterministic)
Büchi automata, a PSPACE-complete problem. Our approach, like others before, leverages
a notion of quasiorder to prune the search for a counterexample by discarding
candidates which are subsumed by others for the quasiorder. Discarded candidates
are guaranteed to not compromise the completeness of the algorithm. The novelty
of our work lies in the quasiorder used to discard candidates. We introduce FORQs
(family of right quasiorders) that we obtain by adapting the notion of family
of right congruences put forward by Maler and Staiger in 1993. We define a FORQ-based
inclusion algorithm which we prove correct and instantiate it for a specific FORQ,
called the structural FORQ, induced by the Büchi automaton to the right of the
inclusion sign. The resulting implementation, called FORKLIFT, scales up better
than the state-of-the-art on a variety of benchmarks including benchmarks from
program verification and theorem proving for word combinatorics. Artifact: https://doi.org/10.5281/zenodo.6552870'
acknowledgement: This work was partially funded by the ESF Investing in your future,
the Madrid regional project S2018/TCS-4339 BLOQUES, the Spanish project PGC2018-102210-B-I00
BOSCO, the Ramón y Cajal fellowship RYC-2016-20281, and the ERC grant PR1001ERC02.
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Kyveli
full_name: Doveri, Kyveli
last_name: Doveri
- first_name: Pierre
full_name: Ganty, Pierre
last_name: Ganty
- first_name: Nicolas Adrien
full_name: Mazzocchi, Nicolas Adrien
id: b26baa86-3308-11ec-87b0-8990f34baa85
last_name: Mazzocchi
citation:
ama: 'Doveri K, Ganty P, Mazzocchi NA. FORQ-based language inclusion formal testing.
In: Computer Aided Verification. Vol 13372. Springer Nature; 2022:109-129.
doi:10.1007/978-3-031-13188-2_6'
apa: 'Doveri, K., Ganty, P., & Mazzocchi, N. A. (2022). FORQ-based language
inclusion formal testing. In Computer Aided Verification (Vol. 13372, pp.
109–129). Haifa, Israel: Springer Nature. https://doi.org/10.1007/978-3-031-13188-2_6'
chicago: Doveri, Kyveli, Pierre Ganty, and Nicolas Adrien Mazzocchi. “FORQ-Based
Language Inclusion Formal Testing.” In Computer Aided Verification, 13372:109–29.
Springer Nature, 2022. https://doi.org/10.1007/978-3-031-13188-2_6.
ieee: K. Doveri, P. Ganty, and N. A. Mazzocchi, “FORQ-based language inclusion formal
testing,” in Computer Aided Verification, Haifa, Israel, 2022, vol. 13372,
pp. 109–129.
ista: 'Doveri K, Ganty P, Mazzocchi NA. 2022. FORQ-based language inclusion formal
testing. Computer Aided Verification. CAV: Computer Aided Verification, LNCS,
vol. 13372, 109–129.'
mla: Doveri, Kyveli, et al. “FORQ-Based Language Inclusion Formal Testing.” Computer
Aided Verification, vol. 13372, Springer Nature, 2022, pp. 109–29, doi:10.1007/978-3-031-13188-2_6.
short: K. Doveri, P. Ganty, N.A. Mazzocchi, in:, Computer Aided Verification, Springer
Nature, 2022, pp. 109–129.
conference:
end_date: 2022-08-10
location: Haifa, Israel
name: 'CAV: Computer Aided Verification'
start_date: 2022-08-07
date_created: 2023-01-16T10:06:31Z
date_published: 2022-08-06T00:00:00Z
date_updated: 2023-09-05T15:13:36Z
day: '06'
ddc:
- '000'
department:
- _id: ToHe
doi: 10.1007/978-3-031-13188-2_6
ec_funded: 1
external_id:
arxiv:
- '2207.13549'
isi:
- '000870310500006'
file:
- access_level: open_access
checksum: edc363b1be5447a09063e115c247918a
content_type: application/pdf
creator: dernst
date_created: 2023-01-30T12:51:02Z
date_updated: 2023-01-30T12:51:02Z
file_id: '12465'
file_name: 2022_LNCS_Doveri.pdf
file_size: 497682
relation: main_file
success: 1
file_date_updated: 2023-01-30T12:51:02Z
has_accepted_license: '1'
intvolume: ' 13372'
isi: 1
language:
- iso: eng
month: '08'
oa: 1
oa_version: Published Version
page: 109-129
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
call_identifier: H2020
grant_number: '101020093'
name: Vigilant Algorithmic Monitoring of Software
publication: Computer Aided Verification
publication_identifier:
eisbn:
- '9783031131882'
eissn:
- 1611-3349
isbn:
- '9783031131875'
issn:
- 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: FORQ-based language inclusion formal testing
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 13372
year: '2022'
...
---
_id: '12175'
abstract:
- lang: eng
text: An automaton is history-deterministic (HD) if one can safely resolve its non-deterministic
choices on the fly. In a recent paper, Henzinger, Lehtinen and Totzke studied
this in the context of Timed Automata [9], where it was conjectured that the class
of timed ω-languages recognised by HD-timed automata strictly extends that of
deterministic ones. We provide a proof for this fact.
acknowledgement: This work was supported in part by the ERC-2020-AdG 101020093, the
EPSRC project EP/V025848/1, and the EPSRC project EP/X017796/1.
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Sougata
full_name: Bose, Sougata
last_name: Bose
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
- first_name: Karoliina
full_name: Lehtinen, Karoliina
last_name: Lehtinen
- first_name: Sven
full_name: Schewe, Sven
last_name: Schewe
- first_name: Patrick
full_name: Totzke, Patrick
last_name: Totzke
citation:
ama: 'Bose S, Henzinger TA, Lehtinen K, Schewe S, Totzke P. History-deterministic
timed automata are not determinizable. In: 16th International Conference on
Reachability Problems. Vol 13608. Springer Nature; 2022:67-76. doi:10.1007/978-3-031-19135-0_5'
apa: 'Bose, S., Henzinger, T. A., Lehtinen, K., Schewe, S., & Totzke, P. (2022).
History-deterministic timed automata are not determinizable. In 16th International
Conference on Reachability Problems (Vol. 13608, pp. 67–76). Kaiserslautern,
Germany: Springer Nature. https://doi.org/10.1007/978-3-031-19135-0_5'
chicago: Bose, Sougata, Thomas A Henzinger, Karoliina Lehtinen, Sven Schewe, and
Patrick Totzke. “History-Deterministic Timed Automata Are Not Determinizable.”
In 16th International Conference on Reachability Problems, 13608:67–76.
Springer Nature, 2022. https://doi.org/10.1007/978-3-031-19135-0_5.
ieee: S. Bose, T. A. Henzinger, K. Lehtinen, S. Schewe, and P. Totzke, “History-deterministic
timed automata are not determinizable,” in 16th International Conference on
Reachability Problems, Kaiserslautern, Germany, 2022, vol. 13608, pp. 67–76.
ista: 'Bose S, Henzinger TA, Lehtinen K, Schewe S, Totzke P. 2022. History-deterministic
timed automata are not determinizable. 16th International Conference on Reachability
Problems. RC: Reachability Problems, LNCS, vol. 13608, 67–76.'
mla: Bose, Sougata, et al. “History-Deterministic Timed Automata Are Not Determinizable.”
16th International Conference on Reachability Problems, vol. 13608, Springer
Nature, 2022, pp. 67–76, doi:10.1007/978-3-031-19135-0_5.
short: S. Bose, T.A. Henzinger, K. Lehtinen, S. Schewe, P. Totzke, in:, 16th International
Conference on Reachability Problems, Springer Nature, 2022, pp. 67–76.
conference:
end_date: 2022-10-21
location: Kaiserslautern, Germany
name: 'RC: Reachability Problems'
start_date: 2022-10-17
date_created: 2023-01-12T12:11:57Z
date_published: 2022-10-12T00:00:00Z
date_updated: 2023-09-05T15:12:08Z
day: '12'
department:
- _id: ToHe
doi: 10.1007/978-3-031-19135-0_5
ec_funded: 1
intvolume: ' 13608'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://hal.science/hal-03849398/
month: '10'
oa: 1
oa_version: Preprint
page: 67-76
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
call_identifier: H2020
grant_number: '101020093'
name: Vigilant Algorithmic Monitoring of Software
publication: 16th International Conference on Reachability Problems
publication_identifier:
eisbn:
- '9783031191350'
eissn:
- 1611-3349
isbn:
- '9783031191343'
issn:
- 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: History-deterministic timed automata are not determinizable
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 13608
year: '2022'
...
---
_id: '12510'
abstract:
- lang: eng
text: "We introduce a new statistical verification algorithm that formally quantifies
the behavioral robustness of any time-continuous process formulated as a continuous-depth
model. Our algorithm solves a set of global optimization (Go) problems over a
given time horizon to construct a tight enclosure (Tube) of the set of all process
executions starting from a ball of initial states. We call our algorithm GoTube.
Through its construction, GoTube ensures that the bounding tube is conservative
up to a desired probability and up to a desired tightness.\r\n GoTube is implemented
in JAX and optimized to scale to complex continuous-depth neural network models.
Compared to advanced reachability analysis tools for time-continuous neural networks,
GoTube does not accumulate overapproximation errors between time steps and avoids
the infamous wrapping effect inherent in symbolic techniques. We show that GoTube
substantially outperforms state-of-the-art verification tools in terms of the
size of the initial ball, speed, time-horizon, task completion, and scalability
on a large set of experiments.\r\n GoTube is stable and sets the state-of-the-art
in terms of its ability to scale to time horizons well beyond what has been previously
possible."
acknowledgement: SG is funded by the Austrian Science Fund (FWF) project number W1255-N23.
ML and TH are supported in part by FWF under grant Z211-N23 (Wittgenstein Award)
and the ERC-2020-AdG 101020093. SS is supported by NSF awards DCL-2040599, CCF-1918225,
and CPS-1446832. RH and DR are partially supported by Boeing. RG is partially supported
by Horizon-2020 ECSEL Project grant No. 783163 (iDev40).
article_processing_charge: No
article_type: original
author:
- first_name: Sophie A.
full_name: Gruenbacher, Sophie A.
last_name: Gruenbacher
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Ramin
full_name: Hasani, Ramin
last_name: Hasani
- first_name: Daniela
full_name: Rus, Daniela
last_name: Rus
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
- first_name: Scott A.
full_name: Smolka, Scott A.
last_name: Smolka
- first_name: Radu
full_name: Grosu, Radu
last_name: Grosu
citation:
ama: 'Gruenbacher SA, Lechner M, Hasani R, et al. GoTube: Scalable statistical verification
of continuous-depth models. Proceedings of the AAAI Conference on Artificial
Intelligence. 2022;36(6):6755-6764. doi:10.1609/aaai.v36i6.20631'
apa: 'Gruenbacher, S. A., Lechner, M., Hasani, R., Rus, D., Henzinger, T. A., Smolka,
S. A., & Grosu, R. (2022). GoTube: Scalable statistical verification of continuous-depth
models. Proceedings of the AAAI Conference on Artificial Intelligence.
Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i6.20631'
chicago: 'Gruenbacher, Sophie A., Mathias Lechner, Ramin Hasani, Daniela Rus, Thomas
A Henzinger, Scott A. Smolka, and Radu Grosu. “GoTube: Scalable Statistical Verification
of Continuous-Depth Models.” Proceedings of the AAAI Conference on Artificial
Intelligence. Association for the Advancement of Artificial Intelligence,
2022. https://doi.org/10.1609/aaai.v36i6.20631.'
ieee: 'S. A. Gruenbacher et al., “GoTube: Scalable statistical verification
of continuous-depth models,” Proceedings of the AAAI Conference on Artificial
Intelligence, vol. 36, no. 6. Association for the Advancement of Artificial
Intelligence, pp. 6755–6764, 2022.'
ista: 'Gruenbacher SA, Lechner M, Hasani R, Rus D, Henzinger TA, Smolka SA, Grosu
R. 2022. GoTube: Scalable statistical verification of continuous-depth models.
Proceedings of the AAAI Conference on Artificial Intelligence. 36(6), 6755–6764.'
mla: 'Gruenbacher, Sophie A., et al. “GoTube: Scalable Statistical Verification
of Continuous-Depth Models.” Proceedings of the AAAI Conference on Artificial
Intelligence, vol. 36, no. 6, Association for the Advancement of Artificial
Intelligence, 2022, pp. 6755–64, doi:10.1609/aaai.v36i6.20631.'
short: S.A. Gruenbacher, M. Lechner, R. Hasani, D. Rus, T.A. Henzinger, S.A. Smolka,
R. Grosu, Proceedings of the AAAI Conference on Artificial Intelligence 36 (2022)
6755–6764.
date_created: 2023-02-05T17:27:42Z
date_published: 2022-06-28T00:00:00Z
date_updated: 2023-09-26T10:46:59Z
day: '28'
department:
- _id: ToHe
doi: 10.1609/aaai.v36i6.20631
ec_funded: 1
external_id:
arxiv:
- '2107.08467'
intvolume: ' 36'
issue: '6'
keyword:
- General Medicine
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2107.08467
month: '06'
oa: 1
oa_version: Preprint
page: 6755-6764
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
call_identifier: H2020
grant_number: '101020093'
name: Vigilant Algorithmic Monitoring of Software
publication: Proceedings of the AAAI Conference on Artificial Intelligence
publication_identifier:
eissn:
- 2374-3468
isbn:
- '9781577358350'
issn:
- 2159-5399
publication_status: published
publisher: Association for the Advancement of Artificial Intelligence
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'GoTube: Scalable statistical verification of continuous-depth models'
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 36
year: '2022'
...
---
_id: '12511'
abstract:
- lang: eng
text: "We consider the problem of formally verifying almost-sure (a.s.) asymptotic
stability in discrete-time nonlinear stochastic control systems. While verifying
stability in deterministic control systems is extensively studied in the literature,
verifying stability in stochastic control systems is an open problem. The few
existing works on this topic either consider only specialized forms of stochasticity
or make restrictive assumptions on the system, rendering them inapplicable to
learning algorithms with neural network policies. \r\n In this work, we present
an approach for general nonlinear stochastic control problems with two novel aspects:
(a) instead of classical stochastic extensions of Lyapunov functions, we use ranking
supermartingales (RSMs) to certify a.s. asymptotic stability, and (b) we present
a method for learning neural network RSMs. \r\n We prove that our approach guarantees
a.s. asymptotic stability of the system and\r\n provides the first method to obtain
bounds on the stabilization time, which stochastic Lyapunov functions do not.\r\n
Finally, we validate our approach experimentally on a set of nonlinear stochastic
reinforcement learning environments with neural network policies."
acknowledgement: "This work was supported in part by the ERC-2020-AdG 101020093, ERC
CoG 863818 (FoRM-SMArt) and the European Union’s Horizon 2020 research and innovation
programme\r\nunder the Marie Skłodowska-Curie Grant Agreement No. 665385."
article_processing_charge: No
article_type: original
author:
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Dorde
full_name: Zikelic, Dorde
id: 294AA7A6-F248-11E8-B48F-1D18A9856A87
last_name: Zikelic
orcid: 0000-0002-4681-1699
- first_name: Krishnendu
full_name: Chatterjee, Krishnendu
id: 2E5DCA20-F248-11E8-B48F-1D18A9856A87
last_name: Chatterjee
orcid: 0000-0002-4561-241X
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
citation:
ama: Lechner M, Zikelic D, Chatterjee K, Henzinger TA. Stability verification in
stochastic control systems via neural network supermartingales. Proceedings
of the AAAI Conference on Artificial Intelligence. 2022;36(7):7326-7336. doi:10.1609/aaai.v36i7.20695
apa: Lechner, M., Zikelic, D., Chatterjee, K., & Henzinger, T. A. (2022). Stability
verification in stochastic control systems via neural network supermartingales.
Proceedings of the AAAI Conference on Artificial Intelligence. Association
for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i7.20695
chicago: Lechner, Mathias, Dorde Zikelic, Krishnendu Chatterjee, and Thomas A Henzinger.
“Stability Verification in Stochastic Control Systems via Neural Network Supermartingales.”
Proceedings of the AAAI Conference on Artificial Intelligence. Association
for the Advancement of Artificial Intelligence, 2022. https://doi.org/10.1609/aaai.v36i7.20695.
ieee: M. Lechner, D. Zikelic, K. Chatterjee, and T. A. Henzinger, “Stability verification
in stochastic control systems via neural network supermartingales,” Proceedings
of the AAAI Conference on Artificial Intelligence, vol. 36, no. 7. Association
for the Advancement of Artificial Intelligence, pp. 7326–7336, 2022.
ista: Lechner M, Zikelic D, Chatterjee K, Henzinger TA. 2022. Stability verification
in stochastic control systems via neural network supermartingales. Proceedings
of the AAAI Conference on Artificial Intelligence. 36(7), 7326–7336.
mla: Lechner, Mathias, et al. “Stability Verification in Stochastic Control Systems
via Neural Network Supermartingales.” Proceedings of the AAAI Conference on
Artificial Intelligence, vol. 36, no. 7, Association for the Advancement of
Artificial Intelligence, 2022, pp. 7326–36, doi:10.1609/aaai.v36i7.20695.
short: M. Lechner, D. Zikelic, K. Chatterjee, T.A. Henzinger, Proceedings of the
AAAI Conference on Artificial Intelligence 36 (2022) 7326–7336.
date_created: 2023-02-05T17:29:50Z
date_published: 2022-06-28T00:00:00Z
date_updated: 2023-11-30T10:55:37Z
day: '28'
department:
- _id: ToHe
- _id: KrCh
doi: 10.1609/aaai.v36i7.20695
ec_funded: 1
external_id:
arxiv:
- '2112.09495'
intvolume: ' 36'
issue: '7'
keyword:
- General Medicine
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2112.09495
month: '06'
oa: 1
oa_version: Preprint
page: 7326-7336
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
call_identifier: H2020
grant_number: '101020093'
name: Vigilant Algorithmic Monitoring of Software
- _id: 0599E47C-7A3F-11EA-A408-12923DDC885E
call_identifier: H2020
grant_number: '863818'
name: 'Formal Methods for Stochastic Models: Algorithms and Applications'
- _id: 2564DBCA-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '665385'
name: International IST Doctoral Program
publication: Proceedings of the AAAI Conference on Artificial Intelligence
publication_identifier:
eissn:
- 2374-3468
isbn:
- '9781577358350'
issn:
- 2159-5399
publication_status: published
publisher: Association for the Advancement of Artificial Intelligence
quality_controlled: '1'
related_material:
record:
- id: '14539'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: Stability verification in stochastic control systems via neural network supermartingales
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 36
year: '2022'
...
---
_id: '14601'
abstract:
- lang: eng
text: "In this work, we address the problem of learning provably stable neural\r\nnetwork
policies for stochastic control systems. While recent work has\r\ndemonstrated
the feasibility of certifying given policies using martingale\r\ntheory, the problem
of how to learn such policies is little explored. Here, we\r\nstudy the effectiveness
of jointly learning a policy together with a martingale\r\ncertificate that proves
its stability using a single learning algorithm. We\r\nobserve that the joint
optimization problem becomes easily stuck in local\r\nminima when starting from
a randomly initialized policy. Our results suggest\r\nthat some form of pre-training
of the policy is required for the joint\r\noptimization to repair and verify the
policy successfully."
article_processing_charge: No
author:
- first_name: Dorde
full_name: Zikelic, Dorde
id: 294AA7A6-F248-11E8-B48F-1D18A9856A87
last_name: Zikelic
orcid: 0000-0002-4681-1699
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Krishnendu
full_name: Chatterjee, Krishnendu
id: 2E5DCA20-F248-11E8-B48F-1D18A9856A87
last_name: Chatterjee
orcid: 0000-0002-4561-241X
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
citation:
ama: Zikelic D, Lechner M, Chatterjee K, Henzinger TA. Learning stabilizing policies
in stochastic control systems. arXiv. doi:10.48550/arXiv.2205.11991
apa: Zikelic, D., Lechner, M., Chatterjee, K., & Henzinger, T. A. (n.d.). Learning
stabilizing policies in stochastic control systems. arXiv. https://doi.org/10.48550/arXiv.2205.11991
chicago: Zikelic, Dorde, Mathias Lechner, Krishnendu Chatterjee, and Thomas A Henzinger.
“Learning Stabilizing Policies in Stochastic Control Systems.” ArXiv, n.d.
https://doi.org/10.48550/arXiv.2205.11991.
ieee: D. Zikelic, M. Lechner, K. Chatterjee, and T. A. Henzinger, “Learning stabilizing
policies in stochastic control systems,” arXiv. .
ista: Zikelic D, Lechner M, Chatterjee K, Henzinger TA. Learning stabilizing policies
in stochastic control systems. arXiv, 10.48550/arXiv.2205.11991.
mla: Zikelic, Dorde, et al. “Learning Stabilizing Policies in Stochastic Control
Systems.” ArXiv, doi:10.48550/arXiv.2205.11991.
short: D. Zikelic, M. Lechner, K. Chatterjee, T.A. Henzinger, ArXiv (n.d.).
date_created: 2023-11-24T13:22:30Z
date_published: 2022-05-24T00:00:00Z
date_updated: 2023-11-30T10:55:37Z
day: '24'
department:
- _id: KrCh
- _id: ToHe
doi: 10.48550/arXiv.2205.11991
ec_funded: 1
external_id:
arxiv:
- '2205.11991'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2205.11991
month: '05'
oa: 1
oa_version: Preprint
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
call_identifier: H2020
grant_number: '101020093'
name: Vigilant Algorithmic Monitoring of Software
- _id: 0599E47C-7A3F-11EA-A408-12923DDC885E
call_identifier: H2020
grant_number: '863818'
name: 'Formal Methods for Stochastic Models: Algorithms and Applications'
- _id: 2564DBCA-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '665385'
name: International IST Doctoral Program
publication: arXiv
publication_status: submitted
related_material:
record:
- id: '14539'
relation: dissertation_contains
status: public
status: public
title: Learning stabilizing policies in stochastic control systems
type: preprint
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
year: '2022'
...
---
_id: '14600'
abstract:
- lang: eng
text: We study the problem of learning controllers for discrete-time non-linear
stochastic dynamical systems with formal reach-avoid guarantees. This work presents
the first method for providing formal reach-avoid guarantees, which combine and
generalize stability and safety guarantees, with a tolerable probability threshold
$p\in[0,1]$ over the infinite time horizon. Our method leverages advances in the machine
learning literature and represents formal certificates as neural networks.
In particular, we learn a certificate in the form of a reach-avoid supermartingale
(RASM), a novel notion that we introduce in this work. Our RASMs provide reachability
and avoidance guarantees by imposing constraints on what can be viewed as a stochastic
extension of level sets of Lyapunov functions for deterministic systems. Our approach
solves several important problems -- it can be used to learn a control policy
from scratch, to verify a reach-avoid specification for a fixed control policy,
or to fine-tune a pre-trained policy if it does not satisfy the reach-avoid specification.
We validate our approach on $3$ stochastic non-linear reinforcement learning tasks.
article_processing_charge: No
author:
- first_name: Dorde
full_name: Zikelic, Dorde
id: 294AA7A6-F248-11E8-B48F-1D18A9856A87
last_name: Zikelic
orcid: 0000-0002-4681-1699
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
- first_name: Krishnendu
full_name: Chatterjee, Krishnendu
id: 2E5DCA20-F248-11E8-B48F-1D18A9856A87
last_name: Chatterjee
orcid: 0000-0002-4561-241X
citation:
ama: Zikelic D, Lechner M, Henzinger TA, Chatterjee K. Learning control policies
for stochastic systems with reach-avoid guarantees. arXiv. doi:10.48550/ARXIV.2210.05308
apa: Zikelic, D., Lechner, M., Henzinger, T. A., & Chatterjee, K. (n.d.). Learning
control policies for stochastic systems with reach-avoid guarantees. arXiv.
https://doi.org/10.48550/ARXIV.2210.05308
chicago: Zikelic, Dorde, Mathias Lechner, Thomas A Henzinger, and Krishnendu Chatterjee.
“Learning Control Policies for Stochastic Systems with Reach-Avoid Guarantees.”
ArXiv, n.d. https://doi.org/10.48550/ARXIV.2210.05308.
ieee: D. Zikelic, M. Lechner, T. A. Henzinger, and K. Chatterjee, “Learning control
policies for stochastic systems with reach-avoid guarantees,” arXiv. .
ista: Zikelic D, Lechner M, Henzinger TA, Chatterjee K. Learning control policies
for stochastic systems with reach-avoid guarantees. arXiv, 10.48550/ARXIV.2210.05308.
mla: Zikelic, Dorde, et al. “Learning Control Policies for Stochastic Systems with
Reach-Avoid Guarantees.” ArXiv, doi:10.48550/ARXIV.2210.05308.
short: D. Zikelic, M. Lechner, T.A. Henzinger, K. Chatterjee, ArXiv (n.d.).
date_created: 2023-11-24T13:10:09Z
date_published: 2022-11-29T00:00:00Z
date_updated: 2024-01-22T14:08:29Z
day: '29'
department:
- _id: KrCh
- _id: ToHe
doi: 10.48550/ARXIV.2210.05308
ec_funded: 1
external_id:
arxiv:
- '2210.05308'
language:
- iso: eng
license: https://creativecommons.org/licenses/by-sa/4.0/
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2210.05308
month: '11'
oa: 1
oa_version: Preprint
project:
- _id: 0599E47C-7A3F-11EA-A408-12923DDC885E
call_identifier: H2020
grant_number: '863818'
name: 'Formal Methods for Stochastic Models: Algorithms and Applications'
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
call_identifier: H2020
grant_number: '101020093'
name: Vigilant Algorithmic Monitoring of Software
- _id: 2564DBCA-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '665385'
name: International IST Doctoral Program
publication: arXiv
publication_status: submitted
related_material:
record:
- id: '14539'
relation: dissertation_contains
status: public
- id: '14830'
relation: later_version
status: public
status: public
title: Learning control policies for stochastic systems with reach-avoid guarantees
tmp:
image: /images/cc_by_sa.png
legal_code_url: https://creativecommons.org/licenses/by-sa/4.0/legalcode
name: Creative Commons Attribution-ShareAlike 4.0 International Public License (CC
BY-SA 4.0)
short: CC BY-SA (4.0)
type: preprint
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
year: '2022'
...
---
_id: '10153'
abstract:
- lang: eng
text: "Gradual typing is a principled means for mixing typed and untyped code. But
typed and untyped code often exhibit different programming patterns. There is
already substantial research investigating gradually giving types to code exhibiting
typical untyped patterns, and some research investigating gradually removing types
from code exhibiting typical typed patterns. This paper investigates how to extend
these established gradual-typing concepts to give formal guarantees not only about
how to change types as code evolves but also about how to change such programming
patterns as well.\r\n\r\nIn particular, we explore mixing untyped \"structural\"
code with typed \"nominal\" code in an object-oriented language. But whereas previous
work only allowed \"nominal\" objects to be treated as \"structural\" objects,
we also allow \"structural\" objects to dynamically acquire certain nominal types,
namely interfaces. We present a calculus that supports such \"cross-paradigm\"
code migration and interoperation in a manner satisfying both the static and dynamic
gradual guarantees, and demonstrate that the calculus can be implemented efficiently."
acknowledgement: "We thank the reviewers for their valuable suggestions towards improving
the paper. We also \r\nthank Mae Milano and Adrian Sampson, as well as the members
of the Programming Languages Discussion Group at Cornell University and of the Programming
Research Laboratory at Northeastern University, for their helpful feedback on preliminary
findings of this work.\r\n\r\nThis material is based upon work supported in part
by the National Science Foundation (NSF) through grant CCF-1350182 and the Austrian
Science Fund (FWF) through grant Z211-N23 (Wittgenstein~Award).\r\nAny opinions,
findings, and conclusions or recommendations expressed in this material are those
of the authors and do not necessarily reflect the views of the NSF or the FWF."
article_number: '127'
article_processing_charge: No
article_type: original
author:
- first_name: Fabian
full_name: Mühlböck, Fabian
id: 6395C5F6-89DF-11E9-9C97-6BDFE5697425
last_name: Mühlböck
orcid: 0000-0003-1548-0177
- first_name: Ross
full_name: Tate, Ross
last_name: Tate
citation:
ama: Mühlböck F, Tate R. Transitioning from structural to nominal code with efficient
gradual typing. Proceedings of the ACM on Programming Languages. 2021;5.
doi:10.1145/3485504
apa: 'Mühlböck, F., & Tate, R. (2021). Transitioning from structural to nominal
code with efficient gradual typing. Proceedings of the ACM on Programming Languages.
Chicago, IL, United States: Association for Computing Machinery. https://doi.org/10.1145/3485504'
chicago: Mühlböck, Fabian, and Ross Tate. “Transitioning from Structural to Nominal
Code with Efficient Gradual Typing.” Proceedings of the ACM on Programming
Languages. Association for Computing Machinery, 2021. https://doi.org/10.1145/3485504.
ieee: F. Mühlböck and R. Tate, “Transitioning from structural to nominal code with
efficient gradual typing,” Proceedings of the ACM on Programming Languages,
vol. 5. Association for Computing Machinery, 2021.
ista: Mühlböck F, Tate R. 2021. Transitioning from structural to nominal code with
efficient gradual typing. Proceedings of the ACM on Programming Languages. 5,
127.
mla: Mühlböck, Fabian, and Ross Tate. “Transitioning from Structural to Nominal
Code with Efficient Gradual Typing.” Proceedings of the ACM on Programming
Languages, vol. 5, 127, Association for Computing Machinery, 2021, doi:10.1145/3485504.
short: F. Mühlböck, R. Tate, Proceedings of the ACM on Programming Languages 5 (2021).
conference:
end_date: 2021-10-23
location: Chicago, IL, United States
name: 'OOPSLA: Object-Oriented Programming, Systems, Languages, and Applications'
start_date: 2021-10-17
date_created: 2021-10-19T12:48:44Z
date_published: 2021-10-15T00:00:00Z
date_updated: 2021-11-12T11:30:07Z
day: '15'
ddc:
- '005'
department:
- _id: ToHe
doi: 10.1145/3485504
file:
- access_level: open_access
checksum: 71011efd2da771cafdec7f0d9693f8c1
content_type: application/pdf
creator: fmuehlbo
date_created: 2021-10-19T12:52:23Z
date_updated: 2021-10-19T12:52:23Z
file_id: '10154'
file_name: monnom-oopsla21.pdf
file_size: 770269
relation: main_file
success: 1
file_date_updated: 2021-10-19T12:52:23Z
has_accepted_license: '1'
intvolume: ' 5'
keyword:
- gradual typing
- gradual guarantee
- nominal
- structural
- call tags
language:
- iso: eng
month: '10'
oa: 1
oa_version: Published Version
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
publication: Proceedings of the ACM on Programming Languages
publication_identifier:
eissn:
- 2475-1421
publication_status: published
publisher: Association for Computing Machinery
quality_controlled: '1'
status: public
title: Transitioning from structural to nominal code with efficient gradual typing
tmp:
image: /image/cc_by_nd.png
legal_code_url: https://creativecommons.org/licenses/by-nd/4.0/legalcode
name: Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0)
short: CC BY-ND (4.0)
type: journal_article
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
volume: 5
year: '2021'
...
---
_id: '10669'
abstract:
- lang: eng
text: "We show that Neural ODEs, an emerging class of time-continuous neural networks,
can be verified by solving a set of global-optimization problems. For this purpose,
we introduce Stochastic Lagrangian Reachability (SLR), an\r\nabstraction-based
technique for constructing a tight Reachtube (an over-approximation of the set
of reachable states\r\nover a given time-horizon), and provide stochastic guarantees
in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids
the infamous wrapping effect (accumulation of over-approximation errors) by performing
local optimization steps to expand safe regions instead of repeatedly forward-propagating
them as is done by deterministic reachability methods. To enable fast local optimizations,
we introduce a novel forward-mode adjoint sensitivity method to compute gradients
without the need for backpropagation. Finally, we establish asymptotic and non-asymptotic
convergence rates for SLR."
acknowledgement: "The authors would like to thank the reviewers for their insightful
comments. RH and RG were partially supported by\r\nHorizon-2020 ECSEL Project grant
No. 783163 (iDev40). RH was partially supported by Boeing. ML was supported\r\nin
part by the Austrian Science Fund (FWF) under grant Z211-N23 (Wittgenstein Award).
SG was funded by FWF\r\nproject W1255-N23. JC was partially supported by NAWA Polish
Returns grant PPN/PPO/2018/1/00029. SS was supported by NSF awards DCL-2040599,
CCF-1918225, and CPS-1446832.\r\n"
alternative_title:
- Technical Tracks
article_processing_charge: No
author:
- first_name: Sophie
full_name: Grunbacher, Sophie
last_name: Grunbacher
- first_name: Ramin
full_name: Hasani, Ramin
last_name: Hasani
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Jacek
full_name: Cyranka, Jacek
last_name: Cyranka
- first_name: Scott A
full_name: Smolka, Scott A
last_name: Smolka
- first_name: Radu
full_name: Grosu, Radu
last_name: Grosu
citation:
ama: 'Grunbacher S, Hasani R, Lechner M, Cyranka J, Smolka SA, Grosu R. On the verification
of neural ODEs with stochastic guarantees. In: Proceedings of the AAAI Conference
on Artificial Intelligence. Vol 35. AAAI Press; 2021:11525-11535.'
apa: 'Grunbacher, S., Hasani, R., Lechner, M., Cyranka, J., Smolka, S. A., &
Grosu, R. (2021). On the verification of neural ODEs with stochastic guarantees.
In Proceedings of the AAAI Conference on Artificial Intelligence (Vol.
35, pp. 11525–11535). Virtual: AAAI Press.'
chicago: Grunbacher, Sophie, Ramin Hasani, Mathias Lechner, Jacek Cyranka, Scott
A Smolka, and Radu Grosu. “On the Verification of Neural ODEs with Stochastic
Guarantees.” In Proceedings of the AAAI Conference on Artificial Intelligence,
35:11525–35. AAAI Press, 2021.
ieee: S. Grunbacher, R. Hasani, M. Lechner, J. Cyranka, S. A. Smolka, and R. Grosu,
“On the verification of neural ODEs with stochastic guarantees,” in Proceedings
of the AAAI Conference on Artificial Intelligence, Virtual, 2021, vol. 35,
no. 13, pp. 11525–11535.
ista: 'Grunbacher S, Hasani R, Lechner M, Cyranka J, Smolka SA, Grosu R. 2021. On
the verification of neural ODEs with stochastic guarantees. Proceedings of the
AAAI Conference on Artificial Intelligence. AAAI: Association for the Advancement
of Artificial Intelligence, Technical Tracks, vol. 35, 11525–11535.'
mla: Grunbacher, Sophie, et al. “On the Verification of Neural ODEs with Stochastic
Guarantees.” Proceedings of the AAAI Conference on Artificial Intelligence,
vol. 35, no. 13, AAAI Press, 2021, pp. 11525–35.
short: S. Grunbacher, R. Hasani, M. Lechner, J. Cyranka, S.A. Smolka, R. Grosu,
in:, Proceedings of the AAAI Conference on Artificial Intelligence, AAAI Press,
2021, pp. 11525–11535.
conference:
end_date: 2021-02-09
location: Virtual
name: 'AAAI: Association for the Advancement of Artificial Intelligence'
start_date: 2021-02-02
date_created: 2022-01-25T15:47:20Z
date_published: 2021-05-28T00:00:00Z
date_updated: 2022-05-24T06:33:14Z
day: '28'
ddc:
- '000'
department:
- _id: GradSch
- _id: ToHe
external_id:
arxiv:
- '2012.08863'
file:
- access_level: open_access
checksum: 468d07041e282a1d46ffdae92f709630
content_type: application/pdf
creator: mlechner
date_created: 2022-01-26T07:38:08Z
date_updated: 2022-01-26T07:38:08Z
file_id: '10680'
file_name: 17372-Article Text-20866-1-2-20210518.pdf
file_size: 286906
relation: main_file
success: 1
file_date_updated: 2022-01-26T07:38:08Z
has_accepted_license: '1'
intvolume: ' 35'
issue: '13'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://ojs.aaai.org/index.php/AAAI/article/view/17372
month: '05'
oa: 1
oa_version: Published Version
page: 11525-11535
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
publication: Proceedings of the AAAI Conference on Artificial Intelligence
publication_identifier:
eissn:
- 2374-3468
isbn:
- 978-1-57735-866-4
issn:
- 2159-5399
publication_status: published
publisher: AAAI Press
quality_controlled: '1'
status: public
title: On the verification of neural ODEs with stochastic guarantees
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2021'
...
---
_id: '10671'
abstract:
- lang: eng
text: We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system’s dynamics by implicit nonlinearities,
we construct networks of linear first-order dynamical systems modulated via nonlinear
interlinked gates. The resulting models represent dynamical systems with varying
(i.e., liquid) time-constants coupled to their hidden state, with outputs being
computed by numerical differential equation solvers. These neural networks exhibit
stable and bounded behavior, yield superior expressivity within the family of
neural ordinary differential equations, and give rise to improved performance
on time-series prediction tasks. To demonstrate these properties, we first take
a theoretical approach to find bounds over their dynamics, and compute their expressive
power by the trajectory length measure in a latent trajectory space. We then conduct
a series of time-series prediction experiments to manifest the approximation capability
of Liquid Time-Constant Networks (LTCs) compared to classical and modern RNNs.
acknowledgement: "R.H. and D.R. are partially supported by Boeing. R.H. and R.G. were
partially supported by the Horizon-2020 ECSEL\r\nProject grant No. 783163 (iDev40).
M.L. was supported in part by the Austrian Science Fund (FWF) under grant Z211-N23
(Wittgenstein Award). A.A. is supported by the National Science Foundation (NSF)
Graduate Research Fellowship Program. This research work is partially drawn from
the PhD dissertation of R.H."
alternative_title:
- Technical Tracks
article_processing_charge: No
author:
- first_name: Ramin
full_name: Hasani, Ramin
last_name: Hasani
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Alexander
full_name: Amini, Alexander
last_name: Amini
- first_name: Daniela
full_name: Rus, Daniela
last_name: Rus
- first_name: Radu
full_name: Grosu, Radu
last_name: Grosu
citation:
ama: 'Hasani R, Lechner M, Amini A, Rus D, Grosu R. Liquid time-constant networks.
In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol
35. AAAI Press; 2021:7657-7666.'
apa: 'Hasani, R., Lechner, M., Amini, A., Rus, D., & Grosu, R. (2021). Liquid
time-constant networks. In Proceedings of the AAAI Conference on Artificial
Intelligence (Vol. 35, pp. 7657–7666). Virtual: AAAI Press.'
chicago: Hasani, Ramin, Mathias Lechner, Alexander Amini, Daniela Rus, and Radu
Grosu. “Liquid Time-Constant Networks.” In Proceedings of the AAAI Conference
on Artificial Intelligence, 35:7657–66. AAAI Press, 2021.
ieee: R. Hasani, M. Lechner, A. Amini, D. Rus, and R. Grosu, “Liquid time-constant
networks,” in Proceedings of the AAAI Conference on Artificial Intelligence,
Virtual, 2021, vol. 35, no. 9, pp. 7657–7666.
ista: 'Hasani R, Lechner M, Amini A, Rus D, Grosu R. 2021. Liquid time-constant
networks. Proceedings of the AAAI Conference on Artificial Intelligence. AAAI:
Association for the Advancement of Artificial Intelligence, Technical Tracks,
vol. 35, 7657–7666.'
mla: Hasani, Ramin, et al. “Liquid Time-Constant Networks.” Proceedings of the
AAAI Conference on Artificial Intelligence, vol. 35, no. 9, AAAI Press, 2021,
pp. 7657–66.
short: R. Hasani, M. Lechner, A. Amini, D. Rus, R. Grosu, in:, Proceedings of the
AAAI Conference on Artificial Intelligence, AAAI Press, 2021, pp. 7657–7666.
conference:
end_date: 2021-02-09
location: Virtual
name: 'AAAI: Association for the Advancement of Artificial Intelligence'
start_date: 2021-02-02
date_created: 2022-01-25T15:48:36Z
date_published: 2021-05-28T00:00:00Z
date_updated: 2022-05-24T06:36:54Z
day: '28'
ddc:
- '000'
department:
- _id: GradSch
- _id: ToHe
external_id:
arxiv:
- '2006.04439'
file:
- access_level: open_access
checksum: 0f06995fba06dbcfa7ed965fc66027ff
content_type: application/pdf
creator: mlechner
date_created: 2022-01-26T07:36:03Z
date_updated: 2022-01-26T07:36:03Z
file_id: '10678'
file_name: 16936-Article Text-20430-1-2-20210518 (1).pdf
file_size: 4302669
relation: main_file
success: 1
file_date_updated: 2022-01-26T07:36:03Z
has_accepted_license: '1'
intvolume: ' 35'
issue: '9'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://ojs.aaai.org/index.php/AAAI/article/view/16936
month: '05'
oa: 1
oa_version: Published Version
page: 7657-7666
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
publication: Proceedings of the AAAI Conference on Artificial Intelligence
publication_identifier:
eissn:
- 2374-3468
isbn:
- 978-1-57735-866-4
issn:
- 2159-5399
publication_status: published
publisher: AAAI Press
quality_controlled: '1'
status: public
title: Liquid time-constant networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2021'
...
---
_id: '10668'
abstract:
- lang: eng
text: 'Robustness to variations in lighting conditions is a key objective for any
deep vision system. To this end, our paper extends the receptive field of convolutional
neural networks with two residual components, ubiquitous in the visual processing
system of vertebrates: On-center and off-center pathways, with an excitatory center
and inhibitory surround; OOCS for short. The On-center pathway is excited by the
presence of a light stimulus in its center, but not in its surround, whereas the
Off-center pathway is excited by the absence of a light stimulus in its center,
but not in its surround. We design OOCS pathways via a difference of Gaussians,
with their variance computed analytically from the size of the receptive fields.
OOCS pathways complement each other in their response to light stimuli, ensuring
this way a strong edge-detection capability, and as a result an accurate and robust
inference under challenging lighting conditions. We provide extensive empirical
evidence showing that networks supplied with OOCS pathways gain accuracy and illumination-robustness
from the novel edge representation, compared to other baselines.'
acknowledgement: Z.B. is supported by the Doctoral College Resilient Embedded Systems,
which is run jointly by the TU Wien’s Faculty of Informatics and the UAS Technikum
Wien. R.G. is partially supported by the Horizon 2020 Era-Permed project Persorad,
and ECSEL Project grant no. 783163 (iDev40). R.H and D.R were partially supported
by Boeing and MIT. M.L. is supported in part by the Austrian Science Fund (FWF)
under grant Z211-N23 (Wittgenstein Award).
alternative_title:
- PMLR
article_processing_charge: No
author:
- first_name: Zahra
full_name: Babaiee, Zahra
last_name: Babaiee
- first_name: Ramin
full_name: Hasani, Ramin
last_name: Hasani
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Daniela
full_name: Rus, Daniela
last_name: Rus
- first_name: Radu
full_name: Grosu, Radu
last_name: Grosu
citation:
ama: 'Babaiee Z, Hasani R, Lechner M, Rus D, Grosu R. On-off center-surround receptive
fields for accurate and robust image classification. In: Proceedings of the
38th International Conference on Machine Learning. Vol 139. ML Research Press;
2021:478-489.'
apa: 'Babaiee, Z., Hasani, R., Lechner, M., Rus, D., & Grosu, R. (2021). On-off
center-surround receptive fields for accurate and robust image classification.
In Proceedings of the 38th International Conference on Machine Learning
(Vol. 139, pp. 478–489). Virtual: ML Research Press.'
chicago: Babaiee, Zahra, Ramin Hasani, Mathias Lechner, Daniela Rus, and Radu Grosu.
“On-off Center-Surround Receptive Fields for Accurate and Robust Image Classification.”
In Proceedings of the 38th International Conference on Machine Learning,
139:478–89. ML Research Press, 2021.
ieee: Z. Babaiee, R. Hasani, M. Lechner, D. Rus, and R. Grosu, “On-off center-surround
receptive fields for accurate and robust image classification,” in Proceedings
of the 38th International Conference on Machine Learning, Virtual, 2021, vol.
139, pp. 478–489.
ista: 'Babaiee Z, Hasani R, Lechner M, Rus D, Grosu R. 2021. On-off center-surround
receptive fields for accurate and robust image classification. Proceedings of
the 38th International Conference on Machine Learning. ML: Machine Learning, PMLR,
vol. 139, 478–489.'
mla: Babaiee, Zahra, et al. “On-off Center-Surround Receptive Fields for Accurate
and Robust Image Classification.” Proceedings of the 38th International Conference
on Machine Learning, vol. 139, ML Research Press, 2021, pp. 478–89.
short: Z. Babaiee, R. Hasani, M. Lechner, D. Rus, R. Grosu, in:, Proceedings of
the 38th International Conference on Machine Learning, ML Research Press, 2021,
pp. 478–489.
conference:
end_date: 2021-07-24
location: Virtual
name: 'ML: Machine Learning'
start_date: 2021-07-18
date_created: 2022-01-25T15:46:33Z
date_published: 2021-07-01T00:00:00Z
date_updated: 2022-05-04T15:02:27Z
day: '01'
ddc:
- '000'
department:
- _id: GradSch
- _id: ToHe
file:
- access_level: open_access
checksum: d30eae62561bb517d9f978437d7677db
content_type: application/pdf
creator: mlechner
date_created: 2022-01-26T07:38:32Z
date_updated: 2022-01-26T07:38:32Z
file_id: '10681'
file_name: babaiee21a.pdf
file_size: 4246561
relation: main_file
success: 1
file_date_updated: 2022-01-26T07:38:32Z
has_accepted_license: '1'
intvolume: ' 139'
language:
- iso: eng
license: https://creativecommons.org/licenses/by-nc-nd/3.0/
main_file_link:
- open_access: '1'
url: https://proceedings.mlr.press/v139/babaiee21a
month: '07'
oa: 1
oa_version: Published Version
page: 478-489
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
publication: Proceedings of the 38th International Conference on Machine Learning
publication_identifier:
issn:
- 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
status: public
title: On-off center-surround receptive fields for accurate and robust image classification
tmp:
image: /images/cc_by_nc_nd.png
legal_code_url: https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
name: Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND
3.0)
short: CC BY-NC-ND (3.0)
type: conference
user_id: 2EBD1598-F248-11E8-B48F-1D18A9856A87
volume: 139
year: '2021'
...
---
_id: '10670'
abstract:
- lang: eng
text: "Imitation learning enables high-fidelity, vision-based learning of policies
within rich, photorealistic environments. However, such techniques often rely
on traditional discrete-time neural models and face difficulties in generalizing
to domain shifts by failing to account for the causal relationships between the
agent and the environment. In this paper, we propose a theoretical and experimental
framework for learning causal representations using continuous-time neural networks,
specifically over their discrete-time counterparts. We evaluate our method in
the context of visual-control learning of drones over a series of complex tasks,
ranging from short- and long-term navigation, to chasing static and dynamic objects
through photorealistic environments. Our results demonstrate that causal continuous-time\r\ndeep
models can perform robust navigation tasks, where advanced recurrent models fail.
These models learn complex causal control representations directly from raw visual
inputs and scale to solve a variety of tasks using imitation learning."
acknowledgement: "C.V., R.H. A.A. and D.R. are partially supported by Boeing and MIT.
A.A. is supported by the National Science Foundation (NSF) Graduate Research Fellowship
Program. M.L. is supported in part by the Austrian Science Fund (FWF) under grant
Z211-N23 (Wittgenstein Award). Research was sponsored by the United States Air Force
Research Laboratory and the United States Air Force Artificial Intelligence Accelerator
and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views
and conclusions contained in this document are those of the authors\r\nand should
not be interpreted as representing the official policies, either expressed or implied,
of the United States Air Force or the U.S. Government. The U.S. Government is authorized
to reproduce and distribute reprints for Government purposes notwithstanding any
copyright notation herein.\r\n"
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
author:
- first_name: Charles J
full_name: Vorbach, Charles J
last_name: Vorbach
- first_name: Ramin
full_name: Hasani, Ramin
last_name: Hasani
- first_name: Alexander
full_name: Amini, Alexander
last_name: Amini
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Daniela
full_name: Rus, Daniela
last_name: Rus
citation:
ama: 'Vorbach CJ, Hasani R, Amini A, Lechner M, Rus D. Causal navigation by continuous-time
neural networks. In: 35th Conference on Neural Information Processing Systems.
; 2021.'
apa: Vorbach, C. J., Hasani, R., Amini, A., Lechner, M., & Rus, D. (2021). Causal
navigation by continuous-time neural networks. In 35th Conference on Neural
Information Processing Systems. Virtual.
chicago: Vorbach, Charles J, Ramin Hasani, Alexander Amini, Mathias Lechner, and
Daniela Rus. “Causal Navigation by Continuous-Time Neural Networks.” In 35th
Conference on Neural Information Processing Systems, 2021.
ieee: C. J. Vorbach, R. Hasani, A. Amini, M. Lechner, and D. Rus, “Causal navigation
by continuous-time neural networks,” in 35th Conference on Neural Information
Processing Systems, Virtual, 2021.
ista: 'Vorbach CJ, Hasani R, Amini A, Lechner M, Rus D. 2021. Causal navigation
by continuous-time neural networks. 35th Conference on Neural Information Processing
Systems. NeurIPS: Neural Information Processing Systems, Advances in Neural Information
Processing Systems.
mla: Vorbach, Charles J., et al. “Causal Navigation by Continuous-Time Neural Networks.”
35th Conference on Neural Information Processing Systems, 2021.
short: C.J. Vorbach, R. Hasani, A. Amini, M. Lechner, D. Rus, in:, 35th Conference
on Neural Information Processing Systems, 2021.
conference:
end_date: 2021-12-10
location: Virtual
name: 'NeurIPS: Neural Information Processing Systems'
start_date: 2021-12-06
date_created: 2022-01-25T15:47:50Z
date_published: 2021-12-01T00:00:00Z
date_updated: 2022-01-26T14:33:31Z
day: '01'
ddc:
- '000'
department:
- _id: GradSch
- _id: ToHe
external_id:
arxiv:
- '2106.08314'
file:
- access_level: open_access
checksum: be81f0ade174a8c9b2d4fe09590b2021
content_type: application/pdf
creator: mlechner
date_created: 2022-01-26T07:37:24Z
date_updated: 2022-01-26T07:37:24Z
file_id: '10679'
file_name: NeurIPS-2021-causal-navigation-by-continuous-time-neural-networks-Paper.pdf
file_size: 6841228
relation: main_file
success: 1
file_date_updated: 2022-01-26T07:37:24Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://proceedings.neurips.cc/paper/2021/hash/67ba02d73c54f0b83c05507b7fb7267f-Abstract.html
month: '12'
oa: 1
oa_version: Published Version
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
publication: 35th Conference on Neural Information Processing Systems
publication_status: published
quality_controlled: '1'
status: public
title: Causal navigation by continuous-time neural networks
tmp:
image: /images/cc_by_nc_nd.png
legal_code_url: https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode
name: Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND
3.0)
short: CC BY-NC-ND (3.0)
type: conference
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
year: '2021'
...
---
_id: '10688'
abstract:
- lang: eng
text: "Civl is a static verifier for concurrent programs designed around the conceptual
framework of layered refinement,\r\nwhich views the task of verifying a program
as a sequence of program simplification steps each justified by its own invariant.
Civl verifies a layered concurrent program that compactly expresses all the programs
in this sequence and the supporting invariants. This paper presents the design
and implementation of the Civl verifier."
acknowledgement: This research was performed while Bernhard Kragl was at IST Austria,
supported in part by the Austrian Science Fund (FWF) under grant Z211-N23 (Wittgenstein
Award).
alternative_title:
- Conference Series
article_processing_charge: No
author:
- first_name: Bernhard
full_name: Kragl, Bernhard
id: 320FC952-F248-11E8-B48F-1D18A9856A87
last_name: Kragl
orcid: 0000-0001-7745-9117
- first_name: Shaz
full_name: Qadeer, Shaz
last_name: Qadeer
citation:
ama: 'Kragl B, Qadeer S. The Civl verifier. In: Piskac R, Whalen MW, eds. Proceedings
of the 21st Conference on Formal Methods in Computer-Aided Design. Vol 2.
TU Wien Academic Press; 2021:143–152. doi:10.34727/2021/isbn.978-3-85448-046-4_23'
apa: 'Kragl, B., & Qadeer, S. (2021). The Civl verifier. In R. Piskac &
M. W. Whalen (Eds.), Proceedings of the 21st Conference on Formal Methods in
Computer-Aided Design (Vol. 2, pp. 143–152). Virtual: TU Wien Academic Press.
https://doi.org/10.34727/2021/isbn.978-3-85448-046-4_23'
chicago: Kragl, Bernhard, and Shaz Qadeer. “The Civl Verifier.” In Proceedings
of the 21st Conference on Formal Methods in Computer-Aided Design, edited
by Ruzica Piskac and Michael W. Whalen, 2:143–152. TU Wien Academic Press, 2021.
https://doi.org/10.34727/2021/isbn.978-3-85448-046-4_23.
ieee: B. Kragl and S. Qadeer, “The Civl verifier,” in Proceedings of the 21st
Conference on Formal Methods in Computer-Aided Design, Virtual, 2021, vol.
2, pp. 143–152.
ista: 'Kragl B, Qadeer S. 2021. The Civl verifier. Proceedings of the 21st Conference
on Formal Methods in Computer-Aided Design. FMCAD: Formal Methods in Computer-Aided
Design, Conference Series, vol. 2, 143–152.'
mla: Kragl, Bernhard, and Shaz Qadeer. “The Civl Verifier.” Proceedings of the
21st Conference on Formal Methods in Computer-Aided Design, edited by Ruzica
Piskac and Michael W. Whalen, vol. 2, TU Wien Academic Press, 2021, pp. 143–152,
doi:10.34727/2021/isbn.978-3-85448-046-4_23.
short: B. Kragl, S. Qadeer, in:, R. Piskac, M.W. Whalen (Eds.), Proceedings of the
21st Conference on Formal Methods in Computer-Aided Design, TU Wien Academic Press,
2021, pp. 143–152.
conference:
end_date: 2021-10-22
location: Virtual
name: 'FMCAD: Formal Methods in Computer-Aided Design'
start_date: 2021-10-20
date_created: 2022-01-26T08:01:30Z
date_published: 2021-10-01T00:00:00Z
date_updated: 2022-01-26T08:20:41Z
day: '01'
ddc:
- '000'
department:
- _id: ToHe
doi: 10.34727/2021/isbn.978-3-85448-046-4_23
editor:
- first_name: Ruzica
full_name: Piskac, Ruzica
last_name: Piskac
- first_name: Michael W.
full_name: Whalen, Michael W.
last_name: Whalen
file:
- access_level: open_access
checksum: 35438ac9f9750340b7f8ae4ae3220d9f
content_type: application/pdf
creator: cchlebak
date_created: 2022-01-26T08:04:29Z
date_updated: 2022-01-26T08:04:29Z
file_id: '10689'
file_name: 2021_FCAD2021_Kragl.pdf
file_size: 390555
relation: main_file
success: 1
file_date_updated: 2022-01-26T08:04:29Z
has_accepted_license: '1'
intvolume: ' 2'
language:
- iso: eng
month: '10'
oa: 1
oa_version: Published Version
page: 143–152
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
publication: Proceedings of the 21st Conference on Formal Methods in Computer-Aided
Design
publication_identifier:
isbn:
- 978-3-85448-046-4
publication_status: published
publisher: TU Wien Academic Press
quality_controlled: '1'
status: public
title: The Civl verifier
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: conference
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
volume: 2
year: '2021'
...
---
_id: '9281'
abstract:
- lang: eng
text: We comment on two formal proofs of Fermat's sum of two squares theorem, written
using the Mathematical Components libraries of the Coq proof assistant. The first
one follows Zagier's celebrated one-sentence proof; the second follows David Christopher's
recent new proof relying on partition-theoretic arguments. Both formal proofs
rely on a general property of involutions of finite sets, of independent interest.
The proof technique consists for the most part of automating recurrent tasks (such
as case distinctions and computations on natural numbers) via ad hoc tactics.
article_number: '2103.11389'
article_processing_charge: No
author:
- first_name: Guillaume
full_name: Dubach, Guillaume
id: D5C6A458-10C4-11EA-ABF4-A4B43DDC885E
last_name: Dubach
orcid: 0000-0001-6892-8137
- first_name: Fabian
full_name: Mühlböck, Fabian
id: 6395C5F6-89DF-11E9-9C97-6BDFE5697425
last_name: Mühlböck
orcid: 0000-0003-1548-0177
citation:
ama: Dubach G, Mühlböck F. Formal verification of Zagier’s one-sentence proof. arXiv.
doi:10.48550/arXiv.2103.11389
apa: Dubach, G., & Mühlböck, F. (n.d.). Formal verification of Zagier’s one-sentence
proof. arXiv. https://doi.org/10.48550/arXiv.2103.11389
chicago: Dubach, Guillaume, and Fabian Mühlböck. “Formal Verification of Zagier’s
One-Sentence Proof.” ArXiv, n.d. https://doi.org/10.48550/arXiv.2103.11389.
ieee: G. Dubach and F. Mühlböck, “Formal verification of Zagier’s one-sentence proof,”
arXiv.
ista: Dubach G, Mühlböck F. Formal verification of Zagier’s one-sentence proof.
arXiv, 2103.11389.
mla: Dubach, Guillaume, and Fabian Mühlböck. “Formal Verification of Zagier’s One-Sentence
Proof.” ArXiv, 2103.11389, doi:10.48550/arXiv.2103.11389.
short: G. Dubach, F. Mühlböck, ArXiv (n.d.).
date_created: 2021-03-23T05:38:48Z
date_published: 2021-03-21T00:00:00Z
date_updated: 2023-05-03T10:26:45Z
day: '21'
department:
- _id: LaEr
- _id: ToHe
doi: 10.48550/arXiv.2103.11389
ec_funded: 1
external_id:
arxiv:
- '2103.11389'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2103.11389
month: '03'
oa: 1
oa_version: Preprint
project:
- _id: 260C2330-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '754411'
name: ISTplus - Postdoctoral Fellowships
publication: arXiv
publication_status: submitted
related_material:
record:
- id: '9946'
relation: other
status: public
status: public
title: Formal verification of Zagier's one-sentence proof
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2021'
...
---
_id: '10665'
abstract:
- lang: eng
text: "Formal verification of neural networks is an active topic of research, and
recent advances have significantly increased the size of the networks that verification
tools can handle. However, most methods are designed for verification of an idealized
model of the actual network which works over real arithmetic and ignores rounding
imprecisions. This idealization is in stark contrast to network quantization,
which is a technique that trades numerical precision for computational efficiency
and is, therefore, often applied in practice. Neglecting rounding errors of such
low-bit quantized neural networks has been shown to lead to wrong conclusions
about the network’s correctness. Thus, the desired approach for verifying quantized
neural networks would be one that takes these rounding errors\r\ninto account.
In this paper, we show that verifying the bit-exact implementation of quantized
neural networks with bit-vector specifications is PSPACE-hard, even though verifying
idealized real-valued networks and satisfiability of bit-vector specifications
alone are each in NP. Furthermore, we explore several practical heuristics toward
closing the complexity gap between idealized and bit-exact verification. In particular,
we propose three techniques for making SMT-based verification of quantized neural
networks more scalable. Our experiments demonstrate that our proposed methods
allow a speedup of up to three orders of magnitude over existing approaches."
acknowledgement: "This research was supported in part by the Austrian Science Fund
(FWF) under grant Z211-N23 (Wittgenstein\r\nAward), ERC CoG 863818 (FoRM-SMArt),
and the European Union’s Horizon 2020 research and innovation programme under the
Marie Skłodowska-Curie Grant Agreement No. 665385.\r\n"
alternative_title:
- Technical Tracks
article_processing_charge: No
author:
- first_name: Thomas A
full_name: Henzinger, Thomas A
id: 40876CD8-F248-11E8-B48F-1D18A9856A87
last_name: Henzinger
orcid: 0000-0002-2985-7724
- first_name: Mathias
full_name: Lechner, Mathias
id: 3DC22916-F248-11E8-B48F-1D18A9856A87
last_name: Lechner
- first_name: Dorde
full_name: Zikelic, Dorde
id: 294AA7A6-F248-11E8-B48F-1D18A9856A87
last_name: Zikelic
citation:
ama: 'Henzinger TA, Lechner M, Zikelic D. Scalable verification of quantized neural
networks. In: Proceedings of the AAAI Conference on Artificial Intelligence.
Vol 35. AAAI Press; 2021:3787-3795.'
apa: 'Henzinger, T. A., Lechner, M., & Zikelic, D. (2021). Scalable verification
of quantized neural networks. In Proceedings of the AAAI Conference on Artificial
Intelligence (Vol. 35, pp. 3787–3795). Virtual: AAAI Press.'
chicago: Henzinger, Thomas A, Mathias Lechner, and Dorde Zikelic. “Scalable Verification
of Quantized Neural Networks.” In Proceedings of the AAAI Conference on Artificial
Intelligence, 35:3787–95. AAAI Press, 2021.
ieee: T. A. Henzinger, M. Lechner, and D. Zikelic, “Scalable verification of quantized
neural networks,” in Proceedings of the AAAI Conference on Artificial Intelligence,
Virtual, 2021, vol. 35, no. 5A, pp. 3787–3795.
ista: 'Henzinger TA, Lechner M, Zikelic D. 2021. Scalable verification of quantized
neural networks. Proceedings of the AAAI Conference on Artificial Intelligence.
AAAI: Association for the Advancement of Artificial Intelligence, Technical Tracks,
vol. 35, 3787–3795.'
mla: Henzinger, Thomas A., et al. “Scalable Verification of Quantized Neural Networks.”
Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35,
no. 5A, AAAI Press, 2021, pp. 3787–95.
short: T.A. Henzinger, M. Lechner, D. Zikelic, in:, Proceedings of the AAAI Conference
on Artificial Intelligence, AAAI Press, 2021, pp. 3787–3795.
conference:
end_date: 2021-02-09
location: Virtual
name: 'AAAI: Association for the Advancement of Artificial Intelligence'
start_date: 2021-02-02
date_created: 2022-01-25T15:15:02Z
date_published: 2021-05-28T00:00:00Z
date_updated: 2023-06-23T07:01:11Z
day: '28'
ddc:
- '000'
department:
- _id: GradSch
- _id: ToHe
ec_funded: 1
external_id:
arxiv:
- '2012.08185'
file:
- access_level: open_access
checksum: 2bc8155b2526a70fba5b7301bc89dbd1
content_type: application/pdf
creator: mlechner
date_created: 2022-01-26T07:41:16Z
date_updated: 2022-01-26T07:41:16Z
file_id: '10684'
file_name: 16496-Article Text-19990-1-2-20210518 (1).pdf
file_size: 137235
relation: main_file
success: 1
file_date_updated: 2022-01-26T07:41:16Z
has_accepted_license: '1'
intvolume: ' 35'
issue: 5A
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://ojs.aaai.org/index.php/AAAI/article/view/16496
month: '05'
oa: 1
oa_version: Published Version
page: 3787-3795
project:
- _id: 2564DBCA-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '665385'
name: International IST Doctoral Program
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: Z211
name: The Wittgenstein Prize
- _id: 0599E47C-7A3F-11EA-A408-12923DDC885E
call_identifier: H2020
grant_number: '863818'
name: 'Formal Methods for Stochastic Models: Algorithms and Applications'
publication: Proceedings of the AAAI Conference on Artificial Intelligence
publication_identifier:
eissn:
- 2374-3468
isbn:
- 978-1-57735-866-4
issn:
- 2159-5399
publication_status: published
publisher: AAAI Press
quality_controlled: '1'
related_material:
record:
- id: '11362'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: Scalable verification of quantized neural networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2021'
...