TY - CONF
AB - A classical problem for Markov chains is determining their stationary (or steady-state) distribution. This problem has an equally classical solution based on eigenvectors and linear equation systems. However, this approach does not scale to large instances, and iterative solutions are desirable. It turns out that a naive approach, as used by current model checkers, may yield completely wrong results. We present a new approach, which utilizes recent advances in partial exploration and mean payoff computation to obtain a correct, converging approximation.
AU - Meggendorfer, Tobias
ID - 13139
SN - 0302-9743
T2 - TACAS 2023: Tools and Algorithms for the Construction and Analysis of Systems
TI - Correct approximation of stationary distributions
VL - 13993
ER -
TY - GEN
AB - The software artefact for evaluating the implementation of the approximation of stationary distributions.
AU - Meggendorfer, Tobias
ID - 14990
TI - Artefact for: Correct Approximation of Stationary Distributions
ER -
TY - CONF
AB - Reinforcement learning has shown promising results in learning neural network policies for complicated control tasks. However, the lack of formal guarantees about the behavior of such policies remains an impediment to their deployment. We propose a novel method for learning a composition of neural network policies in stochastic environments, along with a formal certificate which guarantees that a specification over the policy's behavior is satisfied with the desired probability. Unlike prior work on verifiable RL, our approach leverages the compositional nature of logical specifications provided in SpectRL to learn over graphs of probabilistic reach-avoid specifications. The formal guarantees are provided by learning neural network policies together with reach-avoid supermartingales (RASM) for the graph's sub-tasks and then composing them into a global policy. We also derive a tighter lower bound, compared to previous work, on the probability of reach-avoidance implied by a RASM, which is required to find a compositional policy with an acceptable probabilistic threshold for complex tasks with multiple edge policies. We implement a prototype of our approach and evaluate it on a Stochastic Nine Rooms environment.
AU - Zikelic, Dorde
AU - Lechner, Mathias
AU - Verma, Abhinav
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
ID - 15023
T2 - 37th Conference on Neural Information Processing Systems
TI - Compositional policy learning in stochastic control systems with formal guarantees
ER -
TY - CONF
AB - Given a Markov chain M = (V, v_0, δ), with state space V and a starting state v_0, and a probability threshold ε, an ε-core is a subset C of states that is left with probability at most ε. More formally, C ⊆ V is an ε-core iff ℙ[reach(V\C)] ≤ ε. Cores have been applied in a wide variety of verification problems over Markov chains, Markov decision processes, and probabilistic programs, as a means of discarding uninteresting and low-probability parts of a probabilistic system and instead being able to focus on the states that are likely to be encountered in a real-world run. In this work, we focus on the problem of computing a minimal ε-core in a Markov chain. Our contributions include both negative and positive results: (i) We show that the decision problem on the existence of an ε-core of a given size is NP-complete. This solves an open problem posed in [Jan Kretínský and Tobias Meggendorfer, 2020].
We additionally show that the problem remains NP-complete even when limited to acyclic Markov chains with bounded maximal vertex degree; (ii) We provide a polynomial-time algorithm for computing a minimal ε-core on Markov chains over control-flow graphs of structured programs. A straightforward combination of our algorithm with standard branch prediction techniques allows one to apply the idea of cores to find a subset of program lines that are left with low probability and then focus any desired static analysis on this core subset.
AU - Ahmadi, Ali
AU - Chatterjee, Krishnendu
AU - Goharshady, Amir Kafshdar
AU - Meggendorfer, Tobias
AU - Safavi Hemami, Roodabeh
AU - Zikelic, Dorde
ID - 12102
SN - 1868-8969
T2 - 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science
TI - Algorithms and hardness results for computing cores of Markov chains
VL - 250
ER -
TY - CONF
AB - Spatial games form a widely studied class of games from biology and physics modeling the evolution of social behavior. Formally, such a game is defined by a square (d by d) payoff matrix M and an undirected graph G. Each vertex of G represents an individual that initially follows some strategy i ∈ {1, 2, …, d}. In each round of the game, every individual plays the matrix game with each of its neighbors: an individual following strategy i meeting a neighbor following strategy j receives a payoff equal to the entry (i,j) of M. Then, each individual updates its strategy to its neighbors' strategy with the highest sum of payoffs, and the next round starts. The basic computational problems consist of reachability between configurations and the average frequency of a strategy. For general spatial games and graphs, these problems are in PSPACE. In this paper, we examine a restricted setting: the game is a prisoner's dilemma, and G is a subgraph of a grid. We prove that the basic computational problems for spatial games with the prisoner's dilemma on a subgraph of a grid are PSPACE-hard.
AU - Chatterjee, Krishnendu
AU - Ibsen-Jensen, Rasmus
AU - Jecker, Ismael R
AU - Svoboda, Jakub
ID - 12101
SN - 1868-8969
T2 - 42nd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science
TI - Complexity of spatial games
VL - 250
ER -
TY - CONF
AB - We treat the problem of risk-aware control for the stochastic shortest path (SSP) problem on Markov decision processes (MDP). Typically, the expectation is considered for SSP, which, however, is oblivious to the incurred risk. We present an alternative view, instead optimizing conditional value-at-risk (CVaR), an established risk measure. We treat both Markov chains and MDP and introduce, through novel insights, two algorithms, based on linear programming and value iteration, respectively. Both algorithms offer precise and provably correct solutions. Evaluation of our prototype implementation shows that risk-aware control is feasible on several moderately sized models.
AU - Meggendorfer, Tobias
ID - 12568
IS - 9
SN - 1577358767
T2 - Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022
TI - Risk-aware stochastic shortest path
VL - 36
ER -
TY - JOUR
AB - A matching is compatible to two or more labeled point sets of size n with labels {1, …, n} if its straight-line drawing on each of these point sets is crossing-free. We study the maximum number of edges in a matching compatible to two or more labeled point sets in general position in the plane.
We show that for any two labeled sets of n points in convex position there exists a compatible matching with ⌊√(2n+1) − 1⌋ edges. More generally, for any ℓ labeled point sets we construct compatible matchings of size Ω(n^(1/ℓ)). As a corresponding upper bound, we use probabilistic arguments to show that for any ℓ given sets of n points there exists a labeling of each set such that the largest compatible matching has O(n^(2/(ℓ+1))) edges. Finally, we show that Θ(log n) copies of any set of n points are necessary and sufficient for the existence of labelings of these point sets such that any compatible matching consists only of a single edge.
AU - Aichholzer, Oswin
AU - Arroyo Guevara, Alan M
AU - Masárová, Zuzana
AU - Parada, Irene
AU - Perz, Daniel
AU - Pilz, Alexander
AU - Tkadlec, Josef
AU - Vogtenhuber, Birgit
ID - 11938
IS - 2
JF - Journal of Graph Algorithms and Applications
SN - 1526-1719
TI - On compatible matchings
VL - 26
ER -
TY - GEN
AB - In the modern sample-driven Prophet Inequality, an adversary chooses a sequence of n items with values v_1, v_2, …, v_n to be presented to a decision maker (DM). The process proceeds in two phases. In the first phase (sampling phase), some items, possibly selected at random, are revealed to the DM, but she can never accept them. In the second phase, the DM is presented with the other items in a random order and in an online fashion. For each item, she must make an irrevocable decision to either accept the item and stop the process or reject the item forever and proceed to the next item. The goal of the DM is to maximize the expected value as compared to a Prophet (or offline algorithm) that has access to all information. In this setting, the sampling phase has no cost and is not part of the optimization process. However, in many scenarios, the samples are obtained as part of the decision-making process. We model this aspect as a two-phase Prophet Inequality where an adversary chooses a sequence of 2n items with values v_1, v_2, …, v_{2n} and the items are randomly ordered. Finally, there are two phases of the Prophet Inequality problem with the first n items and the rest of the items, respectively. We show that some basic algorithms achieve a ratio of at most 0.450. We present an algorithm that achieves a ratio of at least 0.495. Finally, we show that for every algorithm the ratio it can achieve is at most 0.502. Hence our algorithm is near-optimal.
AU - Chatterjee, Krishnendu
AU - Mohammadi, Mona
AU - Saona Urmeneta, Raimundo J
ID - 12677
T2 - arXiv
TI - Repeated prophet inequality with near-optimal bounds
ER -
TY - JOUR
AB - Transforming ω-automata into parity automata is traditionally done using appearance records. We present an efficient variant of this idea, tailored to Rabin automata, and several optimizations applicable to all appearance records. We compare the methods experimentally and show that our method produces significantly smaller automata than previous approaches.
AU - Kretinsky, Jan
AU - Meggendorfer, Tobias
AU - Waldmann, Clara
AU - Weininger, Maximilian
ID - 10602
JF - Acta Informatica
KW - computer networks and communications
KW - information systems
KW - software
SN - 0001-5903
TI - Index appearance record with preorders
VL - 59
ER -
TY - JOUR
AB - Motivated by COVID-19, we develop and analyze a simple stochastic model for the spread of disease in a human population. We track how the number of infected and critically ill people develops over time in order to estimate the demand that is imposed on the hospital system.
To keep this demand under control, we consider a class of simple policies for slowing down and reopening society, and we compare their efficiency in mitigating the spread of the virus from several different points of view. We find that in order to avoid overwhelming the hospital system, a policy must impose a harsh lockdown or it must react swiftly (or both). While reacting swiftly is universally beneficial, being harsh pays off only when the country is patient about reopening and when the neighboring countries coordinate their mitigation efforts. Our work highlights the importance of acting decisively when closing down and the importance of patience and coordination between neighboring countries when reopening.
AU - Svoboda, Jakub
AU - Tkadlec, Josef
AU - Pavlogiannis, Andreas
AU - Chatterjee, Krishnendu
AU - Nowak, Martin A.
ID - 10731
IS - 1
JF - Scientific Reports
TI - Infection dynamics of COVID-19 virus under lockdown and reopening
VL - 12
ER -
TY - CONF
AB - We present a novel approach to differential cost analysis that, given a program revision, attempts to statically bound the difference in resource usage, or cost, between the two program versions. Differential cost analysis is particularly interesting because of its many compelling applications, such as detecting resource-use regressions at code-review time or proving the absence of certain side-channel vulnerabilities. One prior approach to differential cost analysis is to apply relational reasoning that conceptually constructs a product program on which one can over-approximate the difference in costs between the two program versions. However, a significant challenge in any relational approach is effectively aligning the program versions to get precise results. In this paper, our key insight is that we can avoid the need for and the limitations of program alignment if, instead, we bound the difference of two cost-bound summaries rather than directly bounding the concrete cost difference. In particular, our method computes a threshold value for the maximal difference in cost between two program versions simultaneously using two kinds of cost-bound summaries: a potential function that evaluates to an upper bound for the cost incurred in the first program, and an anti-potential function that evaluates to a lower bound for the cost incurred in the second. Our method has a number of desirable properties: it can be fully automated, it allows optimizing the threshold value on relative cost, it is suitable for programs that are not syntactically similar, and it supports non-determinism. We have evaluated an implementation of our approach on a number of program pairs collected from the literature, and we find that our method computes tight threshold values on relative cost in most examples.
AU - Zikelic, Dorde
AU - Chang, Bor-Yuh Evan
AU - Bolignano, Pauline
AU - Raimondi, Franco
ID - 11459
SN - 9781450392655
T2 - Proceedings of the 43rd ACM SIGPLAN International Conference on Programming Language Design and Implementation
TI - Differential cost analysis with simultaneous potentials and anti-potentials
ER -
TY - JOUR
AB - Structural balance theory is an established framework for studying social relationships of friendship and enmity. These relationships are modeled by a signed network whose energy potential measures the level of imbalance, while stochastic dynamics drives the network toward a state of minimum energy that captures social balance.
It is known that this energy landscape has local minima that can trap socially aware dynamics, preventing it from reaching balance. Here we first study the robustness and attractor properties of these local minima. We show that a stochastic process can reach them from an abundance of initial states and that some local minima cannot be escaped by mild perturbations of the network. Motivated by these anomalies, we introduce best-edge dynamics (BED), a new plausible stochastic process. We prove that BED always reaches balance and that it does so fast in various interesting settings.
AU - Chatterjee, Krishnendu
AU - Svoboda, Jakub
AU - Zikelic, Dorde
AU - Pavlogiannis, Andreas
AU - Tkadlec, Josef
ID - 12257
IS - 3
JF - Physical Review E
SN - 2470-0045
TI - Social balance on networks: Local minima and best-edge dynamics
VL - 106
ER -
TY - JOUR
AB - In repeated interactions, players can use strategies that respond to the outcome of previous rounds. Much of the existing literature on direct reciprocity assumes that all competing individuals use the same strategy space. Here, we study both learning and evolutionary dynamics of players that differ in the strategy space they explore. We focus on the infinitely repeated donation game and compare three natural strategy spaces: memory-1 strategies, which consider the last moves of both players; reactive strategies, which respond to the last move of the co-player; and unconditional strategies. These three strategy spaces differ in the memory capacity that is needed. We compute the long-term average payoff that is achieved in a pairwise learning process. We find that smaller strategy spaces can dominate larger ones. For weak selection, unconditional players dominate both reactive and memory-1 players. For intermediate selection, reactive players dominate memory-1 players. Only for strong selection and a low cost-to-benefit ratio do memory-1 players dominate the others. We observe that the supergame between strategy spaces can be a social dilemma: maximum payoff is achieved if both players explore a larger strategy space, but smaller strategy spaces dominate.
AU - Schmid, Laura
AU - Hilbe, Christian
AU - Chatterjee, Krishnendu
AU - Nowak, Martin
ID - 12280
IS - 6
JF - PLOS Computational Biology
KW - Computational Theory and Mathematics
KW - Cellular and Molecular Neuroscience
KW - Genetics
KW - Molecular Biology
KW - Ecology
KW - Modeling and Simulation
KW - Evolution
KW - Behavior and Systematics
TI - Direct reciprocity between individuals that use different strategy spaces
VL - 18
ER -
TY - JOUR
AB - Partially observable Markov decision processes (POMDPs) are standard models for dynamic systems with probabilistic and nondeterministic behaviour in uncertain environments. We prove that in POMDPs with a long-run average objective, the decision maker has approximately optimal strategies with finite memory. This implies notably that approximating the long-run value is recursively enumerable, as well as a weak continuity property of the value with respect to the transition function.
AU - Chatterjee, Krishnendu
AU - Saona Urmeneta, Raimundo J
AU - Ziliotto, Bruno
ID - 9311
IS - 1
JF - Mathematics of Operations Research
KW - Management Science and Operations Research
KW - General Mathematics
KW - Computer Science Applications
SN - 0364-765X
TI - Finite-memory strategies in POMDPs with long-run average objectives
VL - 47
ER -
TY - CONF
AB - We present PET, a specialized and highly optimized framework for partial exploration on probabilistic systems.
Over the last decade, several significant advances in the analysis of Markov decision processes have employed partial exploration. In a nutshell, this idea allows computation to be focused on specific parts of the system, guided by heuristics, while maintaining correctness. In particular, only relevant parts of the system are constructed on demand, which in turn makes it possible to omit constructing large parts of the system. Depending on the model, this leads to dramatic speed-ups, in extreme cases even up to an arbitrary factor. PET unifies several previous implementations and provides a flexible framework to easily implement partial exploration for many further problems. Our experimental evaluation shows significant improvements compared to the previous implementations while vastly reducing the overhead required to add support for additional properties.
AU - Meggendorfer, Tobias
ID - 12170
SN - 0302-9743
T2 - 20th International Symposium on Automated Technology for Verification and Analysis
TI - PET – A partial exploration tool for probabilistic verification
VL - 13505
ER -
TY - JOUR
AB - Fixed-horizon planning considers a weighted graph and asks to construct a path that maximizes the sum of weights for a given time horizon T. However, in many scenarios, the time horizon is not fixed; rather, the stopping time is chosen according to some distribution such that the expected stopping time is T. If the stopping-time distribution is not known, then to ensure robustness, the distribution is chosen by an adversary as the worst-case scenario. A stationary plan always chooses the same outgoing edge at every vertex. For a fixed horizon or a fixed stopping-time distribution, stationary plans are not sufficient for optimality. Quite surprisingly, we show that when an adversary chooses the stopping-time distribution with expected stopping time T, then stationary plans are sufficient. While computing optimal stationary plans for a fixed horizon is NP-complete, we show that computing optimal stationary plans under an adversarial stopping-time distribution can be achieved in polynomial time.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
ID - 11402
JF - Journal of Computer and System Sciences
SN - 0022-0000
TI - Graph planning with expected finite horizon
VL - 129
ER -
TY - CONF
AB - We consider the problem of approximating the reachability probabilities in Markov decision processes (MDP) with uncountable (continuous) state and action spaces. While there are algorithms that, for special classes of such MDP, provide a sequence of approximations converging to the true value in the limit, our aim is to obtain an algorithm with guarantees on the precision of the approximation. As this problem is undecidable in general, assumptions on the MDP are necessary. Our main contribution is to identify sufficient assumptions that are as weak as possible, thus approaching the "boundary" of which systems can be correctly and reliably analyzed. To this end, we also argue why each of our assumptions is necessary for algorithms based on processing finitely many observations. We present two solution variants. The first one provides converging lower bounds under weaker assumptions than the typical ones from previous works concerned with guarantees. The second one then utilizes stronger assumptions to additionally provide converging upper bounds. Altogether, we obtain an anytime algorithm, i.e. one yielding a sequence of approximants with known and iteratively improving precision, converging to the true value in the limit.
Moreover, due to the generality of our assumptions, our algorithms are very general templates, readily allowing for various heuristics from the literature, in contrast to, e.g., a specific discretization algorithm. Our theoretical contribution thus paves the way for future practical improvements without sacrificing correctness guarantees.
AU - Grover, Kush
AU - Kretinsky, Jan
AU - Meggendorfer, Tobias
AU - Weininger, Maximilian
ID - 12775
SN - 1868-8969
T2 - 33rd International Conference on Concurrency Theory
TI - Anytime guarantees for reachability in uncountable Markov decision processes
VL - 243
ER -
TY - CONF
AB - We consider the quantitative problem of obtaining lower bounds on the probability of termination of a given non-deterministic probabilistic program. Specifically, given a non-termination threshold p ∈ [0,1], we aim for certificates proving that the program terminates with probability at least 1 − p. The basic idea of our approach is to find a terminating stochastic invariant, i.e. a subset SI of program states such that (i) the probability of the program ever leaving SI is no more than p, and (ii) almost-surely, the program either leaves SI or terminates. While stochastic invariants are already well known, we provide the first proof that the idea above is not only sound but also complete for quantitative termination analysis. We then introduce a novel sound and complete characterization of stochastic invariants that enables template-based approaches for easy synthesis of quantitative termination certificates, especially in affine or polynomial forms. Finally, by combining this idea with the existing martingale-based methods that are relatively complete for qualitative termination analysis, we obtain the first automated, sound, and relatively complete algorithm for quantitative termination analysis. Notably, our completeness guarantees for quantitative termination analysis are as strong as those of the best-known methods for the qualitative variant. Our prototype implementation demonstrates the effectiveness of our approach on various probabilistic programs. We also demonstrate that our algorithm certifies lower bounds on termination probability for probabilistic programs that are beyond the reach of previous methods.
AU - Chatterjee, Krishnendu
AU - Goharshady, Amir Kafshdar
AU - Meggendorfer, Tobias
AU - Zikelic, Dorde
ID - 12000
SN - 0302-9743
T2 - Proceedings of the 34th International Conference on Computer Aided Verification
TI - Sound and complete certificates for quantitative termination analysis of probabilistic programs
VL - 13371
ER -
TY - JOUR
AB - We consider the problem of formally verifying almost-sure (a.s.) asymptotic stability in discrete-time nonlinear stochastic control systems. While verifying stability in deterministic control systems is extensively studied in the literature, verifying stability in stochastic control systems is an open problem. The few existing works on this topic either consider only specialized forms of stochasticity or make restrictive assumptions on the system, rendering them inapplicable to learning algorithms with neural network policies. In this work, we present an approach for general nonlinear stochastic control problems with two novel aspects: (a) instead of classical stochastic extensions of Lyapunov functions, we use ranking supermartingales (RSMs) to certify a.s. asymptotic stability, and (b) we present a method for learning neural network RSMs. We prove that our approach guarantees a.s.
asymptotic stability of the system and provides the first method to obtain bounds on the stabilization time, which stochastic Lyapunov functions do not. Finally, we validate our approach experimentally on a set of nonlinear stochastic reinforcement learning environments with neural network policies.
AU - Lechner, Mathias
AU - Zikelic, Dorde
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
ID - 12511
IS - 7
JF - Proceedings of the AAAI Conference on Artificial Intelligence
KW - General Medicine
SN - 2159-5399
TI - Stability verification in stochastic control systems via neural network supermartingales
VL - 36
ER -
TY - GEN
AB - In this work, we address the problem of learning provably stable neural network policies for stochastic control systems. While recent work has demonstrated the feasibility of certifying given policies using martingale theory, the problem of how to learn such policies is little explored. Here, we study the effectiveness of jointly learning a policy together with a martingale certificate that proves its stability, using a single learning algorithm. We observe that the joint optimization problem easily becomes stuck in local minima when starting from a randomly initialized policy. Our results suggest that some form of pre-training of the policy is required for the joint optimization to repair and verify the policy successfully.
AU - Zikelic, Dorde
AU - Lechner, Mathias
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
ID - 14601
T2 - arXiv
TI - Learning stabilizing policies in stochastic control systems
ER -