TY - CONF
AB - Quantization converts neural networks into low-bit fixed-point computations that can be carried out by efficient integer-only hardware, and is standard practice for deploying neural networks on real-time embedded devices. However, like their real-numbered counterparts, quantized networks are not immune to malicious misclassification caused by adversarial attacks. We investigate how quantization affects a network’s robustness to adversarial attacks, which is a formal verification question. We show that neither robustness nor non-robustness is monotonic in the number of bits used for the representation, and that neither is preserved by quantization from a real-numbered network. For this reason, we introduce a verification method for quantized neural networks which, using SMT solving over bit-vectors, accounts for their exact, bit-precise semantics. We built a tool and analyzed the effect of quantization on a classifier for the MNIST dataset. We demonstrate that, compared to our method, existing methods for the analysis of real-numbered networks often derive false conclusions about their quantizations, both when determining robustness and when detecting attacks, and that existing methods for quantized networks often miss attacks. Furthermore, we applied our method beyond robustness, showing how the number of bits in quantization enlarges the gender bias of a predictor for students’ grades.
AU - Giacobbe, Mirco
AU - Henzinger, Thomas A.
AU - Lechner, Mathias
ID - 7808
SN - 0302-9743
T2 - International Conference on Tools and Algorithms for the Construction and Analysis of Systems
TI - How many bits does it take to quantize your neural network?
VL - 12079
ER -
TY - CONF
AB - Fixed-point arithmetic is a popular alternative to floating-point arithmetic on embedded systems. Existing work on the verification of fixed-point programs relies on custom formalizations of fixed-point arithmetic, which makes it hard to compare the described techniques or reuse the implementations. In this paper, we address this issue by proposing and formalizing an SMT theory of fixed-point arithmetic. We present an intuitive yet comprehensive syntax of the fixed-point theory, and provide formal semantics for it based on rational arithmetic. We also describe two decision procedures for this theory: one based on the theory of bit-vectors and the other on the theory of reals. We implement the two decision procedures, and evaluate our implementations using existing mature SMT solvers on a benchmark suite we created. Finally, we perform a case study of using the theory we propose to verify properties of quantized neural networks.
AU - Baranowski, Marek
AU - He, Shaobo
AU - Lechner, Mathias
AU - Nguyen, Thanh Son
AU - Rakamarić, Zvonimir
ID - 8194
SN - 0302-9743
T2 - Automated Reasoning
TI - An SMT theory of fixed-point arithmetic
VL - 12166
ER -
TY - JOUR
AB - A central goal of artificial intelligence in high-stakes decision-making applications is to design a single algorithm that simultaneously expresses generalizability by learning coherent representations of its world and interpretable explanations of its dynamics. Here, we combine brain-inspired neural computation principles and scalable deep learning architectures to design compact neural controllers for task-specific compartments of a full-stack autonomous vehicle control system. We discover that a single algorithm with 19 control neurons, connecting 32 encapsulated input features to outputs by 253 synapses, learns to map high-dimensional inputs into steering commands. This system shows superior generalizability, interpretability and robustness compared with orders-of-magnitude larger black-box learning systems. The obtained neural agents enable high-fidelity autonomy for task-specific parts of a complex autonomous system.
AU - Lechner, Mathias
AU - Hasani, Ramin
AU - Amini, Alexander
AU - Henzinger, Thomas A.
AU - Rus, Daniela
AU - Grosu, Radu
ID - 8679
JF - Nature Machine Intelligence
TI - Neural circuit policies enabling auditable autonomy
VL - 2
ER -
TY - CONF
AB - In this paper, we introduce a novel method to interpret recurrent neural networks (RNNs), particularly long short-term memory networks (LSTMs), at the cellular level. We propose a systematic pipeline for interpreting individual hidden state dynamics within the network using response characterization methods. The ranked contribution of individual cells to the network's output is computed by analyzing a set of interpretable metrics of their decoupled step and sinusoidal responses. As a result, our method is able to uniquely identify neurons with insightful dynamics, quantify relationships between dynamical properties and test accuracy through ablation analysis, and interpret the impact of network capacity on a network's dynamical distribution. Finally, we demonstrate the generalizability and scalability of our method by evaluating it on a series of benchmark sequential datasets.
AU - Hasani, Ramin
AU - Amini, Alexander
AU - Lechner, Mathias
AU - Naser, Felix
AU - Grosu, Radu
AU - Rus, Daniela
ID - 6985
SN - 978-1-7281-1985-4
T2 - Proceedings of the International Joint Conference on Neural Networks
TI - Response characterization for auditing cell dynamics in long short-term memory networks
ER -