TY - GEN
AB - We argue that the time is ripe to investigate differential monitoring, in which the specification of a program's behavior is implicitly given by a second program implementing the same informal specification. Similar ideas have been proposed before, and are currently implemented in restricted form for testing and specialized run-time analyses, aspects of which we combine. We discuss the challenges of implementing differential monitoring as a general-purpose, black-box run-time monitoring framework, and present promising results of a preliminary implementation, showing low monitoring overheads for diverse programs.
AU - Mühlböck, Fabian
AU - Henzinger, Thomas A
ID - 9946
KW - Run-time verification
KW - Software engineering
KW - Implicit specification
SN - 2664-1690
TI - Differential monitoring
ER -
TY - GEN
AB - With lithium-ion technology approaching its intrinsic limit with graphite-based anodes, lithium metal has recently been receiving renewed interest from the battery community as a potential high-capacity anode for next-generation rechargeable batteries. In this focus paper, we review the main advances in this field since the first attempts in the mid-1970s. Strategies for enabling reversible cycling and avoiding dendrite growth are thoroughly discussed, including specific applications in all-solid-state (polymeric and inorganic), lithium-sulphur, and Li-O2 (air) batteries. Particular attention is paid to recent developments in prototype manufacturing and the current state-of-the-art of these battery technologies with respect to the 2030 targets of the EU Integrated Strategic Energy Technology Plan (SET-Plan) Action 7.
AU - Varzi, Alberto
AU - Thanner, Katharina
AU - Scipioni, Roberto
AU - Di Lecce, Daniele
AU - Hassoun, Jusef
AU - Dörfler, Susanne
AU - Althues, Holger
AU - Kaskel, Stefan
AU - Prehal, Christian
AU - Freunberger, Stefan Alexander
ID - 8067
KW - Battery
KW - Lithium metal
KW - Lithium-sulphur
KW - Lithium-air
KW - All-solid-state
SN - 2664-1690
TI - Current status and future perspectives of Lithium metal batteries
ER -
TY - GEN
AB - We consider the problem of expected cost analysis over nondeterministic probabilistic programs, which aims at automated methods for analyzing the resource-usage of such programs. Previous approaches for this problem could only handle nonnegative bounded costs. However, in many scenarios, such as queuing networks or analysis of cryptocurrency protocols, both positive and negative costs are necessary and the costs are unbounded as well.
In this work, we present a sound and efficient approach to obtain polynomial bounds on the expected accumulated cost of nondeterministic probabilistic programs. Our approach can handle (a) general positive and negative costs with bounded updates in variables; and (b) nonnegative costs with general updates to variables. We show that several natural examples which could not be handled by previous approaches are captured in our framework.
Moreover, our approach leads to an efficient polynomial-time algorithm, while no previous approach for cost analysis of probabilistic programs could guarantee polynomial runtime. Finally, we show the effectiveness of our approach by presenting experimental results on a variety of programs, motivated by real-world applications, for which we efficiently synthesize tight resource-usage bounds.
AU - Anonymous, 1
AU - Anonymous, 2
AU - Anonymous, 3
AU - Anonymous, 4
AU - Anonymous, 5
AU - Anonymous, 6
ID - 5457
SN - 2664-1690
TI - Cost analysis of nondeterministic probabilistic programs
ER -
TY - GEN
AB - We present a new dynamic partial-order reduction method for stateless model checking of concurrent programs. A common approach for exploring program behaviors relies on enumerating the traces of the program, without storing the visited states (aka stateless exploration). As the number of distinct traces grows exponentially, dynamic partial-order reduction (DPOR) techniques have been successfully used to partition the space of traces into equivalence classes (Mazurkiewicz partitioning), with the goal of exploring only a few representative traces from each class.
We introduce a new equivalence on traces under sequential consistency semantics, which we call the observation equivalence. Two traces are observationally equivalent if every read event observes the same write event in both traces. While the traditional Mazurkiewicz equivalence is control-centric, our new definition is data-centric. We show that our observation equivalence is coarser than the Mazurkiewicz equivalence, and in many cases even exponentially coarser. We devise a DPOR exploration of the trace space, called data-centric DPOR, based on the observation equivalence.
1. For acyclic architectures, our algorithm is guaranteed to explore exactly one representative trace from each observation class, while spending polynomial time per class. Hence, our algorithm is optimal wrt the observation equivalence, and in several cases explores exponentially fewer traces than any enumerative method based on the Mazurkiewicz equivalence.
2. For cyclic architectures, we consider an equivalence between traces which is finer than the observation equivalence but coarser than the Mazurkiewicz equivalence, and in some cases exponentially coarser. Our data-centric DPOR algorithm remains optimal under this trace equivalence.
Finally, we perform a basic experimental comparison between the existing Mazurkiewicz-based DPOR and our data-centric DPOR on a set of academic benchmarks. Our results show a significant reduction in both running time and the number of explored equivalence classes.
AU - Chalupa, Marek
AU - Chatterjee, Krishnendu
AU - Pavlogiannis, Andreas
AU - Sinha, Nishant
AU - Vaidya, Kapil
ID - 5456
SN - 2664-1690
TI - Data-centric dynamic partial order reduction
ER -
TY - GEN
AB - Synchronous programs are easy to specify because the side effects of an operation are finished by the time the invocation of the operation returns to the caller. Asynchronous programs, on the other hand, are difficult to specify because there are side effects due to pending computation scheduled as a result of the invocation of an operation. They are also difficult to verify because of the large number of possible interleavings of concurrent asynchronous computation threads. We show that specifications and correctness proofs for asynchronous programs can be structured by introducing the fiction, for proof purposes, that intermediate, non-quiescent states of asynchronous operations can be ignored. Then, the task of specification becomes relatively simple and the task of verification can be naturally decomposed into smaller sub-tasks. The sub-tasks iteratively summarize, guided by the structure of an asynchronous program, the atomic effect of non-atomic operations and the synchronous effect of asynchronous operations. This structuring of specifications and proofs corresponds to the introduction of multiple layers of stepwise refinement for asynchronous programs. We present the first proof rule, called synchronization, to reduce asynchronous invocations on a lower layer to synchronous invocations on a higher layer. We implemented our proof method in CIVL and evaluated it on a collection of benchmark programs.
AU - Henzinger, Thomas A
AU - Kragl, Bernhard
AU - Qadeer, Shaz
ID - 6426
SN - 2664-1690
TI - Synchronizing the asynchronous
ER -
TY - GEN
AB - A fundamental algorithmic problem at the heart of static analysis is Dyck reachability. The input is a graph where the edges are labeled with different types of opening and closing parentheses, and the reachability information is computed via paths whose parentheses are properly matched. We present new results for Dyck reachability problems with applications to alias analysis and data-dependence analysis. Our main contributions, which include improved upper bounds as well as lower bounds that establish optimality guarantees, are as follows:
First, we consider Dyck reachability on bidirected graphs, which is the standard way of performing field-sensitive points-to analysis. Given a bidirected graph with n nodes and m edges, we present: (i) an algorithm with worst-case running time O(m + n·α(n)), where α(n) is the inverse Ackermann function, improving the previously known O(n^2) time bound; (ii) a matching lower bound that shows that our algorithm is optimal wrt worst-case complexity; and (iii) an optimal average-case upper bound of O(m) time, improving the previously known O(m·log n) bound.
Second, we consider the problem of context-sensitive data-dependence analysis, where the task is to obtain analysis summaries of library code in the presence of callbacks. Our algorithm preprocesses libraries in almost linear time, after which the contribution of the library to the complexity of the client analysis is only linear, and only wrt the number of call sites.
Third, we prove that combinatorial algorithms for Dyck reachability on general graphs with truly sub-cubic bounds cannot be obtained without obtaining sub-cubic combinatorial algorithms for Boolean Matrix Multiplication, which is a long-standing open problem. Thus we establish that the existing combinatorial algorithms for Dyck reachability are (conditionally) optimal for general graphs. We also show that the same hardness holds for graphs of constant treewidth.
Finally, we provide a prototype implementation of our algorithms for both alias analysis and data-dependence analysis. Our experimental evaluation demonstrates that the new algorithms significantly outperform all existing methods on the two problems, over real-world benchmarks.
AU - Chatterjee, Krishnendu
AU - Choudhary, Bhavya
AU - Pavlogiannis, Andreas
ID - 5455
SN - 2664-1690
TI - Optimal Dyck reachability for data-dependence and alias analysis
ER -
TY - GEN
AB - We consider the quantitative analysis problem for interprocedural control-flow graphs (ICFGs). The input consists of an ICFG, a positive weight function that assigns every transition a positive integer, and a labelling of the transitions (events) as good, bad, and neutral events. The weight function assigns to each transition a numerical value that represents a measure of how good or bad an event is. The quantitative analysis problem asks whether there is a run of the ICFG where the ratio of the sum of the numerical weights of good events versus the sum of weights of bad events in the long run is at least a given threshold (or equivalently, to compute the maximal ratio among all valid paths in the ICFG). The quantitative analysis problem for ICFGs can be solved in polynomial time, and we present an efficient and practical algorithm for the problem. We show that several problems relevant for static program analysis, such as estimating the worst-case execution time of a program or the average energy consumption of a mobile application, can be modeled in our framework. We have implemented our algorithm as a tool in the Java Soot framework. We demonstrate the effectiveness of our approach with two case studies. First, we show that our framework provides a sound approach (no false positives) for the analysis of inefficiently-used containers. Second, we show that our approach can also be used for static profiling of programs, reasoning about methods that are frequently invoked. Our experimental results show that our tool scales to relatively large benchmarks, and discovers relevant and useful information that can be used to optimize the performance of the programs.
AU - Chatterjee, Krishnendu
AU - Pavlogiannis, Andreas
AU - Velner, Yaron
ID - 5445
SN - 2664-1690
TI - Quantitative interprocedural analysis
ER -
TY - GEN
AB - We study the problem of developing efficient approaches for proving termination of recursive programs with one-dimensional arrays. Ranking functions serve as a sound and complete approach for proving termination of non-recursive programs without array operations. First, we generalize ranking functions to the notion of measure functions, and prove that measure functions (i) provide a sound method to prove termination of recursive programs (with one-dimensional arrays), and (ii) are both sound and complete over recursive programs without array operations. Our second contribution is the synthesis of measure functions of specific forms in polynomial time. More precisely, we prove that (i) polynomial measure functions over recursive programs can be synthesized in polynomial time through Farkas' Lemma and Handelman's Theorem, and (ii) measure functions involving logarithm and exponentiation can be synthesized in polynomial time through abstraction of logarithmic or exponential terms and Handelman's Theorem. A key application of our method is the worst-case analysis of recursive programs. While previous methods obtain worst-case polynomial bounds of the form O(n^k), where k is an integer, our polynomial-time methods can synthesize bounds of the form O(n log n), as well as O(n^x), where x is not an integer. We show the applicability of our automated technique to obtain the worst-case complexity of classical recursive algorithms such as (i) Merge-Sort and the divide-and-conquer algorithm for the Closest-Pair problem, where we obtain an O(n log n) worst-case bound, and (ii) Karatsuba's algorithm for polynomial multiplication and Strassen's algorithm for matrix multiplication, where we obtain an O(n^x) bound, where x is not an integer and close to the best-known bounds for the respective algorithms. Finally, we present experimental results to demonstrate the effectiveness of our approach.
AU - Anonymous, 1
AU - Anonymous, 2
AU - Anonymous, 3
ID - 5446
SN - 2664-1690
TI - Termination and worst-case analysis of recursive programs
ER -
TY - GEN
AB - We consider the problem of developing automated techniques to aid the average-case complexity analysis of programs. Several classical textbook algorithms have quite efficient average-case complexity, whereas the corresponding worst-case bounds are either inefficient (e.g., QUICK-SORT), or completely ineffective (e.g., COUPON-COLLECTOR). Since the main focus of average-case analysis is to obtain efficient bounds, we consider bounds that are either logarithmic, linear, or almost-linear (O(log n), O(n), O(n · log n), respectively, where n represents the size of the input). Our main contribution is a sound approach for deriving such average-case bounds for randomized recursive programs. Our approach is efficient (a simple linear-time algorithm), and it is based on (a) the analysis of recurrence relations induced by randomized algorithms, and (b) a guess-and-check technique. Our approach can infer the asymptotically optimal average-case bounds for classical randomized algorithms, including RANDOMIZED-SEARCH, QUICK-SORT, QUICK-SELECT, and COUPON-COLLECTOR, where the worst-case bounds are either inefficient (such as linear compared to the logarithmic average-case, or quadratic compared to the linear or almost-linear average-case), or ineffective. We have implemented our approach, and the experimental results show that we obtain the bounds efficiently for various classical algorithms.
AU - Anonymous, 1
AU - Anonymous, 2
AU - Anonymous, 3
ID - 5447
SN - 2664-1690
TI - Average-case analysis of programs: Automated recurrence analysis for almost-linear bounds
ER -
TY - GEN
AB - We present a new dynamic partial-order reduction method for stateless model checking of concurrent programs. A common approach for exploring program behaviors relies on enumerating the traces of the program, without storing the visited states (aka stateless exploration). As the number of distinct traces grows exponentially, dynamic partial-order reduction (DPOR) techniques have been successfully used to partition the space of traces into equivalence classes (Mazurkiewicz partitioning), with the goal of exploring only a few representative traces from each class.
We introduce a new equivalence on traces under sequential consistency semantics, which we call the observation equivalence. Two traces are observationally equivalent if every read event observes the same write event in both traces. While the traditional Mazurkiewicz equivalence is control-centric, our new definition is data-centric. We show that our observation equivalence is coarser than the Mazurkiewicz equivalence, and in many cases even exponentially coarser. We devise a DPOR exploration of the trace space, called data-centric DPOR, based on the observation equivalence.
1. For acyclic architectures, our algorithm is guaranteed to explore exactly one representative trace from each observation class, while spending polynomial time per class. Hence, our algorithm is optimal wrt the observation equivalence, and in several cases explores exponentially fewer traces than any enumerative method based on the Mazurkiewicz equivalence.
2. For cyclic architectures, we consider an equivalence between traces which is finer than the observation equivalence but coarser than the Mazurkiewicz equivalence, and in some cases exponentially coarser. Our data-centric DPOR algorithm remains optimal under this trace equivalence.
Finally, we perform a basic experimental comparison between the existing Mazurkiewicz-based DPOR and our data-centric DPOR on a set of academic benchmarks. Our results show a significant reduction in both running time and the number of explored equivalence classes.
AU - Anonymous, 1
AU - Anonymous, 2
AU - Anonymous, 3
AU - Anonymous, 4
ID - 5448
SN - 2664-1690
TI - Data-centric dynamic partial order reduction
ER -