TY - JOUR AB - Systems should not only be correct but also robust in the sense that they behave reasonably in unexpected situations. This article addresses synthesis of robust reactive systems from temporal specifications. Existing methods allow arbitrary behavior if assumptions in the specification are violated. To overcome this, we define two robustness notions, combine them, and show how to enforce them in synthesis. The first notion applies to safety properties: If safety assumptions are violated temporarily, we require that the system recovers to normal operation with as few errors as possible. The second notion requires that, if liveness assumptions are violated, as many guarantees as possible should be fulfilled nevertheless. We present a synthesis procedure achieving this for the important class of GR(1) specifications, and establish complexity bounds. We also present an implementation of a special case of robustness, and show experimental results. AU - Bloem, Roderick AU - Chatterjee, Krishnendu AU - Greimel, Karin AU - Henzinger, Thomas A AU - Hofferek, Georg AU - Jobstmann, Barbara AU - Könighofer, Bettina AU - Könighofer, Robert ID - 2187 IS - 3-4 JF - Acta Informatica TI - Synthesizing robust systems VL - 51 ER - TY - CONF AB - We present a new algorithm to construct a (generalized) deterministic Rabin automaton for an LTL formula φ. The automaton is the product of a master automaton and an array of slave automata, one for each G-subformula of φ. The slave automaton for G ψ is in charge of recognizing whether FG ψ holds. As opposed to standard determinization procedures, the states of all our automata have a clear logical structure, which allows for various optimizations. Our construction subsumes former algorithms for fragments of LTL. Experimental results show improvement in the sizes of the resulting automata compared to existing methods. AU - Esparza, Javier AU - Kretinsky, Jan ID - 2190 TI - From LTL to deterministic automata: A Safraless compositional approach VL - 8559 ER - TY - JOUR AB - We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with κ limit-average functions, in the expectation objective the goal is to maximize the expected limit-average value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the case of one limit-average function, both randomization and memory are necessary for strategies even for ε-approximation, and that finite-memory randomized strategies are sufficient for achieving Pareto optimal values. Under the satisfaction objective, in contrast to the case of one limit-average function, infinite memory is necessary for strategies achieving a specific value (i.e., randomized finite-memory strategies are not sufficient), whereas memoryless randomized strategies are sufficient for ε-approximation, for all ε > 0. We further prove that the decision problems for both expectation and satisfaction objectives can be solved in polynomial time and the trade-off curve (Pareto curve) can be ε-approximated in time polynomial in the size of the MDP and 1/ε, and exponential in the number of limit-average functions, for all ε > 0.
Our analysis also reveals flaws in previous work for MDPs with multiple mean-payoff functions under the expectation objective, corrects the flaws, and allows us to obtain improved results. AU - Brázdil, Tomáš AU - Brožek, Václav AU - Chatterjee, Krishnendu AU - Forejt, Vojtěch AU - Kučera, Antonín ID - 2234 IS - 1 JF - Logical Methods in Computer Science SN - 18605974 TI - Markov decision processes with multiple long-run average objectives VL - 10 ER - TY - JOUR AB - Muller games are played by two players moving a token along a graph; the winner is determined by the set of vertices that occur infinitely often. The central algorithmic problem is to compute the winning regions for the players. Different classes and representations of Muller games lead to problems of varying computational complexity. One such class is that of parity games; these are of particular significance in computational complexity, as they remain one of the few combinatorial problems known to be in NP ∩ co-NP but not known to be in P. We show that winning regions for a Muller game can be determined from the alternating structure of its traps. To every Muller game we then associate a natural number that we call its trap depth; this parameter measures how complicated the trap structure is. We present algorithms for parity games that run in polynomial time for graphs of bounded trap depth, and in general run in time exponential in the trap depth. AU - Grinshpun, Andrey AU - Phalitnonkiat, Pakawat AU - Rubin, Sasha AU - Tarfulea, Andrei ID - 2246 JF - Theoretical Computer Science SN - 03043975 TI - Alternating traps in Muller and parity games VL - 521 ER - TY - JOUR AB - Multi-dimensional mean-payoff and energy games provide the mathematical foundation for the quantitative study of reactive systems, and play a central role in the emerging quantitative theory of verification and synthesis. In this work, we study the strategy synthesis problem for games with such multi-dimensional objectives along with a parity condition, a canonical way to express ω-regular conditions. While in general the winning strategies in such games may require infinite memory, for synthesis the most relevant problem is the construction of a finite-memory winning strategy (if one exists). Our main contributions are as follows. First, we show a tight exponential bound (matching upper and lower bounds) on the memory required for finite-memory winning strategies in both multi-dimensional mean-payoff and energy games along with parity objectives. This significantly improves the triple exponential upper bound for multi-energy games (without parity) that could be derived from results in the literature for games on vector addition systems with states. Second, we present an optimal symbolic and incremental algorithm to compute a finite-memory winning strategy (if one exists) in such games. Finally, we give a complete characterization of when finite memory of strategies can be traded off for randomness. In particular, we show that for one-dimensional mean-payoff parity games, randomized memoryless strategies are as powerful as their pure finite-memory counterparts. AU - Chatterjee, Krishnendu AU - Randour, Mickael AU - Raskin, Jean ID - 2716 IS - 3-4 JF - Acta Informatica TI - Strategy synthesis for multi-dimensional quantitative objectives VL - 51 ER - TY - JOUR AB - The classical (boolean) notion of refinement for behavioral interfaces of system components is the alternating refinement preorder.
In this paper, we define a distance for interfaces, called interface simulation distance. It makes the alternating refinement preorder quantitative by, intuitively, tolerating errors (while counting them) in the alternating simulation game. We show that the interface simulation distance satisfies the triangle inequality, that the distance between two interfaces does not increase under parallel composition with a third interface, that the distance between two interfaces can be bounded from above and below by distances between abstractions of the two interfaces, and how to synthesize an interface from incompatible requirements. We illustrate the framework, and the properties of the distances under composition of interfaces, with two case studies. AU - Cerny, Pavol AU - Chmelik, Martin AU - Henzinger, Thomas A AU - Radhakrishna, Arjun ID - 1733 IS - 3 JF - Theoretical Computer Science TI - Interface simulation distances VL - 560 ER - TY - JOUR AB - The computation of the winning set for Büchi objectives in alternating games on graphs is a central problem in computer-aided verification with a large number of applications. The long-standing best known upper bound for solving the problem is Õ(n ⋅ m), where n is the number of vertices and m is the number of edges in the graph. We are the first to break the Õ(n ⋅ m) boundary by presenting a new technique that reduces the running time to O(n^2). This bound also leads to O(n^2)-time algorithms for computing the set of almost-sure winning vertices for Büchi objectives (1) in alternating games with probabilistic transitions (improving an earlier bound of Õ(n ⋅ m)), (2) in concurrent graph games with constant actions (improving an earlier bound of O(n^3)), and (3) in Markov decision processes (improving for m > n^{4/3} an earlier bound of O(m ⋅ √m)). We then show how to maintain the winning set for Büchi objectives in alternating games under a sequence of edge insertions or a sequence of edge deletions in O(n) amortized time per operation. Our algorithms are the first dynamic algorithms for this problem. We then consider another core graph theoretic problem in verification of probabilistic systems, namely computing the maximal end-component decomposition of a graph. We present two improved static algorithms for the maximal end-component decomposition problem. Our first algorithm is an O(m ⋅ √m)-time algorithm, and our second algorithm is an O(n^2)-time algorithm which is obtained using the same technique as for alternating Büchi games. Thus, we obtain an O(min{m ⋅ √m, n^2})-time algorithm improving the long-standing O(n ⋅ m) time bound. Finally, we show how to maintain the maximal end-component decomposition of a graph under a sequence of edge insertions or a sequence of edge deletions in O(n) amortized time per edge deletion, and O(m) worst-case time per edge insertion. Again, our algorithms are the first dynamic algorithms for this problem. AU - Chatterjee, Krishnendu AU - Henzinger, Monika H ID - 2141 IS - 3 JF - Journal of the ACM TI - Efficient and dynamic algorithms for alternating Büchi games and maximal end-component decomposition VL - 61 ER - TY - CONF AB - We study two-player concurrent games on finite-state graphs played for an infinite number of rounds, where in each round, the two players (player 1 and player 2) choose their moves independently and simultaneously; the current state and the two moves determine the successor state. The objectives are ω-regular winning conditions specified as parity objectives.
We consider the qualitative analysis problems: the computation of the almost-sure and limit-sure winning set of states, where player 1 can ensure to win with probability 1 and with probability arbitrarily close to 1, respectively. In general, the almost-sure and limit-sure winning strategies require both infinite memory and infinite precision (to describe probabilities). While the qualitative analysis problem for concurrent parity games with infinite-memory, infinite-precision randomized strategies was studied before, we study the bounded-rationality problem for qualitative analysis of concurrent parity games, where the strategy set for player 1 is restricted to bounded-resource strategies. In terms of precision, strategies can be deterministic, uniform, finite-precision, or infinite-precision; and in terms of memory, strategies can be memoryless, finite-memory, or infinite-memory. We present a precise and complete characterization of the qualitative winning sets for all combinations of classes of strategies. In particular, we show that uniform memoryless strategies are as powerful as finite-precision infinite-memory strategies, and infinite-precision memoryless strategies are as powerful as infinite-precision finite-memory strategies. We show that the winning sets can be computed in O(n^{2d+3}) time, where n is the size of the game structure and 2d is the number of priorities (or colors), and our algorithms are symbolic. The membership problem of whether a state belongs to a winning set can be decided in NP ∩ coNP. Our symbolic algorithms are based on a characterization of the winning sets as μ-calculus formulas; however, our μ-calculus formulas are crucially different from the ones for concurrent parity games (without bounded rationality), and our memoryless witness strategy constructions are significantly different from the infinite-memory witness strategy constructions for concurrent parity games. AU - Chatterjee, Krishnendu ED - Baldan, Paolo ED - Gorla, Daniele ID - 2054 T2 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) TI - Qualitative concurrent parity games: Bounded rationality VL - 8704 ER - TY - CONF AB - First cycle games (FCGs) are played on a finite graph by two players who push a token along the edges until a vertex is repeated, and a simple cycle is formed. The winner is determined by some fixed property Y of the sequence of labels of the edges (or nodes) forming this cycle. These games are traditionally of interest because of their connection with infinite-duration games such as parity and mean-payoff games. We study the memory requirements for winning strategies of FCGs and certain associated infinite-duration games. We exhibit a simple FCG that is not memoryless determined (this corrects a mistake in "Memoryless determinacy of parity and mean payoff games: a simple proof" by Björklund, Sandberg, and Vorobyov (2004), which claims that FCGs for which Y is closed under cyclic permutations are memoryless determined). We show that Θ(n)! memory (where n is the number of nodes in the graph), which is always sufficient, may be necessary to win some FCGs. On the other hand, we identify easy-to-check conditions on Y (i.e., Y is closed under cyclic permutations, and both Y and its complement are closed under concatenation) that are sufficient to ensure that the corresponding FCGs and their associated infinite-duration games are memoryless determined.
We demonstrate that many games considered in the literature, such as mean-payoff, parity, energy, etc., satisfy these conditions. On the complexity side, we show (for efficiently computable Y) that while solving FCGs is in PSPACE, solving some families of FCGs is PSPACE-hard. AU - Aminof, Benjamin AU - Rubin, Sasha ID - 475 T2 - Electronic Proceedings in Theoretical Computer Science, EPTCS TI - First cycle games VL - 146 ER - TY - CONF AB - We consider two-player zero-sum partial-observation stochastic games on graphs. Based on the information available to the players, these games can be classified as follows: (a) general partial-observation (both players have partial view of the game); (b) one-sided partial-observation (one player has partial-observation and the other player has complete-observation); and (c) perfect-observation (both players have complete view of the game). The one-sided partial-observation games subsume the important special case of one-player partial-observation stochastic games (or partial-observation Markov decision processes (POMDPs)). Based on the randomization available for the strategies, (a) the players may not be allowed to use randomization (pure strategies), or (b) they may choose a probability distribution over actions but the actual random choice is external and not visible to the player (actions invisible), or (c) they may use full randomization. We consider all these classes of games with reachability objectives, and with parity objectives that can express all ω-regular objectives. The analysis problems are classified into the qualitative analysis that asks for the existence of a strategy that ensures the objective with probability 1, and the quantitative analysis that asks for the existence of a strategy that ensures the objective with probability at least λ ∈ (0,1). In this talk we will cover a wide range of results: for perfect-observation games; for POMDPs; for one-sided partial-observation games; and for general partial-observation games. AU - Chatterjee, Krishnendu ID - 1903 IS - PART 1 TI - Partial-observation stochastic reachability and parity games VL - 8634 ER - TY - JOUR AB - In two-player finite-state stochastic games of partial observation on graphs, in every state of the graph, the players simultaneously choose an action, and their joint actions determine a probability distribution over the successor states. The game is played for infinitely many rounds and thus the players construct an infinite path in the graph. We consider reachability objectives where the first player tries to ensure that a target state is visited almost-surely (i.e., with probability 1) or positively (i.e., with positive probability), no matter the strategy of the second player. We classify such games according to the information and to the power of randomization available to the players. On the basis of information, the game can be one-sided with either (a) player 1, or (b) player 2 having partial observation (and the other player having perfect observation), or two-sided with (c) both players having partial observation. On the basis of randomization, (a) the players may not be allowed to use randomization (pure strategies), or (b) they may choose a probability distribution over actions but the actual random choice is external and not visible to the player (actions invisible), or (c) they may use full randomization.
Our main results for pure strategies are as follows: (1) For one-sided games with player 2 having perfect observation, we show that (in contrast to fully randomized strategies) belief-based (subset-construction based) strategies are not sufficient, and we present an exponential upper bound on memory for both almost-sure and positive winning strategies; we show that the problem of deciding the existence of almost-sure and positive winning strategies for player 1 is EXPTIME-complete and present symbolic algorithms that avoid the explicit exponential construction. (2) For one-sided games with player 1 having perfect observation, we show that nonelementary memory is both necessary and sufficient for both almost-sure and positive winning strategies. (3) We show that for the general (two-sided) case, finite-memory strategies are sufficient for both positive and almost-sure winning, and at least nonelementary memory is required. We establish the equivalence of the almost-sure winning problems for pure strategies and for randomized strategies with actions invisible. Our equivalence result exhibits serious flaws in previous results in the literature: we show a nonelementary memory lower bound for almost-sure winning whereas an exponential upper bound was previously claimed. AU - Chatterjee, Krishnendu AU - Doyen, Laurent ID - 2211 IS - 2 JF - ACM Transactions on Computational Logic (TOCL) TI - Partial-observation stochastic games: How to win when belief fails VL - 15 ER - TY - JOUR AB - Recently, there has been an effort to add quantitative objectives to formal verification and synthesis. We introduce and investigate the extension of temporal logics with quantitative atomic assertions. At the heart of quantitative objectives lies the accumulation of values along a computation. It is often the accumulated sum, as with energy objectives, or the accumulated average, as with mean-payoff objectives. We investigate the extension of temporal logics with the prefix-accumulation assertions Sum(v) ≥ c and Avg(v) ≥ c, where v is a numeric (or Boolean) variable of the system, c is a constant rational number, and Sum(v) and Avg(v) denote the accumulated sum and average of the values of v from the beginning of the computation up to the current point in time. We also allow the path-accumulation assertions LimInfAvg(v) ≥ c and LimSupAvg(v) ≥ c, referring to the average value along an entire infinite computation. We study the border of decidability for such quantitative extensions of various temporal logics. In particular, we show that extending the fragment of CTL that has only the EX, EF, AX, and AG temporal modalities with both prefix-accumulation assertions, or extending LTL with both path-accumulation assertions, results in temporal logics whose model-checking problem is decidable. Moreover, the prefix-accumulation assertions may be generalized with "controlled accumulation," allowing one, for example, to specify constraints on the average waiting time between a request and a grant. On the negative side, we show that this branching-time logic is, in a sense, the maximal logic with one or both of the prefix-accumulation assertions that permits a decidable model-checking procedure. Extending a temporal logic that has the EG or EU modalities, such as CTL or LTL, makes the problem undecidable.
AU - Boker, Udi AU - Chatterjee, Krishnendu AU - Henzinger, Thomas A AU - Kupferman, Orna ID - 2038 IS - 4 JF - ACM Transactions on Computational Logic (TOCL) TI - Temporal specifications with accumulative values VL - 15 ER - TY - CONF AB - We study two-player (zero-sum) concurrent mean-payoff games played on a finite-state graph. We focus on the important sub-class of ergodic games where all states are visited infinitely often with probability 1. The algorithmic study of ergodic games was initiated in a seminal work of Hoffman and Karp in 1966, but all basic complexity questions have remained unresolved. Our main results for ergodic games are as follows: We establish (1) an optimal exponential bound on the patience of stationary strategies (where the patience of a distribution is the inverse of the smallest positive probability and represents a complexity measure of a stationary strategy); (2) that the approximation problem lies in FNP; (3) that the approximation problem is at least as hard as the decision problem for simple stochastic games (for which NP ∩ coNP is the long-standing best known bound). We present a variant of the strategy-iteration algorithm by Hoffman and Karp; show that both our algorithm and the classical value-iteration algorithm can approximate the value in exponential time; and identify a subclass where the value-iteration algorithm is an FPTAS. We also show that the exact value can be expressed in the existential theory of the reals, and establish square-root sum hardness for a related class of games. AU - Chatterjee, Krishnendu AU - Ibsen-Jensen, Rasmus ID - 2162 IS - Part 2 TI - The complexity of ergodic mean payoff games VL - 8573 ER - TY - CONF AB - We consider two-player partial-observation stochastic games on finite-state graphs where player 1 has partial observation and player 2 has perfect observation. The winning conditions we study are ω-regular conditions specified as parity objectives. The qualitative-analysis problem, given a partial-observation stochastic game and a parity objective, asks whether there is a strategy to ensure that the objective is satisfied with probability 1 (resp. positive probability). These qualitative-analysis problems are known to be undecidable. However, in many applications the relevant question is the existence of finite-memory strategies, and the qualitative-analysis problems under finite-memory strategies were recently shown to be decidable in 2EXPTIME. We improve the complexity and show that the qualitative-analysis problems for partial-observation stochastic parity games under finite-memory strategies are EXPTIME-complete; and also establish optimal (exponential) memory bounds for finite-memory strategies required for qualitative analysis. AU - Chatterjee, Krishnendu AU - Doyen, Laurent AU - Nain, Sumit AU - Vardi, Moshe ID - 2213 TI - The complexity of partial-observation stochastic parity games with finite-memory strategies VL - 8412 ER - TY - CONF AB - The theory of graph games is the foundation for modeling and synthesizing reactive processes. In the synthesis of stochastic processes, we use 2 1/2-player games where some transitions of the game graph are controlled by two adversarial players, the System and the Environment, and the other transitions are determined probabilistically. We consider 2 1/2-player games where the objective of the System is the conjunction of a qualitative objective (specified as a parity condition) and a quantitative objective (specified as a mean-payoff condition).
We establish that the problem of deciding whether the System can ensure that the probability of satisfying the mean-payoff parity objective is at least a given threshold is in NP ∩ coNP, matching the best known bound in the special case of 2-player games (where all transitions are deterministic). We present an algorithm running in time O(d·n^{2d}·MeanGame) to compute the set of almost-sure winning states from which the objective can be ensured with probability 1, where n is the number of states of the game, d is the number of priorities of the parity objective, and MeanGame is the complexity of computing the set of almost-sure winning states in 2 1/2-player mean-payoff games. Our results are useful in the synthesis of stochastic reactive systems with both a functional requirement (given as a qualitative objective) and a performance requirement (given as a quantitative objective). AU - Chatterjee, Krishnendu AU - Doyen, Laurent AU - Gimbert, Hugo AU - Oualhadj, Youssouf ID - 2212 TI - Perfect-information stochastic mean-payoff parity games VL - 8412 ER - TY - CONF AB - The edit distance between two (untimed) traces is the minimum cost of a sequence of edit operations (insertion, deletion, or substitution) needed to transform one trace into the other. Edit distances have been extensively studied in the untimed setting, and form the basis for approximate matching of sequences in different domains such as coding theory, parsing, and speech recognition. In this paper, we lift the study of edit distances from untimed languages to the timed setting. We define an edit distance between timed words which incorporates both the edit distance between the untimed words and the absolute difference in time stamps. Our edit distance between two timed words is computable in polynomial time. Further, we show that computing the edit distance between a timed word and a timed language generated by a timed automaton, defined as the edit distance between the word and the closest word in the language, is PSPACE-complete. While computing the edit distance between two timed automata is undecidable, we show that the approximate version, where we decide if the edit distance between two timed automata is either less than a given parameter or more than δ away from the parameter, for δ > 0, can be solved in exponential space and is EXPSPACE-hard. Our definitions and techniques can be generalized to the setting of hybrid systems, and analogous decidability results hold for rectangular automata. AU - Chatterjee, Krishnendu AU - Ibsen-Jensen, Rasmus AU - Majumdar, Ritankar ID - 2216 TI - Edit distance for timed automata ER - TY - GEN AB - We consider Markov decision processes (MDPs) which are a standard model for probabilistic systems. We focus on qualitative properties for MDPs that can express that desired behaviors of the system arise almost-surely (with probability 1) or with positive probability. We introduce a new simulation relation to capture the refinement relation of MDPs with respect to qualitative properties, and present discrete graph theoretic algorithms with quadratic complexity to compute the simulation relation. We present an automated technique for assume-guarantee style reasoning for compositional analysis of MDPs with qualitative properties by giving a counter-example guided abstraction-refinement approach to compute our new simulation relation. We have implemented our algorithms and show that the compositional analysis leads to significant improvements.
AU - Chatterjee, Krishnendu AU - Daca, Przemyslaw AU - Chmelik, Martin ID - 5413 SN - 2664-1690 TI - CEGAR for qualitative analysis of probabilistic systems ER - TY - GEN AB - We consider Markov decision processes (MDPs) which are a standard model for probabilistic systems. We focus on qualitative properties for MDPs that can express that desired behaviors of the system arise almost-surely (with probability 1) or with positive probability. We introduce a new simulation relation to capture the refinement relation of MDPs with respect to qualitative properties, and present discrete graph theoretic algorithms with quadratic complexity to compute the simulation relation. We present an automated technique for assume-guarantee style reasoning for compositional analysis of MDPs with qualitative properties by giving a counter-example guided abstraction-refinement approach to compute our new simulation relation. We have implemented our algorithms and show that the compositional analysis leads to significant improvements. AU - Chatterjee, Krishnendu AU - Daca, Przemyslaw AU - Chmelik, Martin ID - 5414 SN - 2664-1690 TI - CEGAR for qualitative analysis of probabilistic systems ER - TY - GEN AB - We consider Markov decision processes (MDPs) which are a standard model for probabilistic systems. We focus on qualitative properties for MDPs that can express that desired behaviors of the system arise almost-surely (with probability 1) or with positive probability. We introduce a new simulation relation to capture the refinement relation of MDPs with respect to qualitative properties, and present discrete graph theoretic algorithms with quadratic complexity to compute the simulation relation. We present an automated technique for assume-guarantee style reasoning for compositional analysis of MDPs with qualitative properties by giving a counter-example guided abstraction-refinement approach to compute our new simulation relation. We have implemented our algorithms and show that the compositional analysis leads to significant improvements. AU - Chatterjee, Krishnendu AU - Daca, Przemyslaw AU - Chmelik, Martin ID - 5412 SN - 2664-1690 TI - CEGAR for qualitative analysis of probabilistic systems ER - TY - CONF AB - We consider multi-player graph games with partial-observation and parity objective. While the decision problem for three-player games with a coalition of the first and second players against the third player is undecidable in general, we present a decidability result for partial-observation games where the first and third players are in a coalition against the second player; thus the second player is adversarial but weaker due to partial observation. We establish tight complexity bounds in the case where player 1 is less informed than player 2, namely 2-EXPTIME-completeness for parity objectives. The symmetric case where player 1 is more informed than player 2 is much more complicated, and we show that already in the case where player 1 has perfect observation, memory of non-elementary size is necessary in general for reachability objectives, and the problem is decidable for safety and reachability objectives. From our results we derive new complexity results for partial-observation stochastic games. AU - Chatterjee, Krishnendu AU - Doyen, Laurent ID - 2163 IS - Part 2 T2 - Lecture Notes in Computer Science TI - Games with a weak adversary VL - 8573 ER -