TY - CONF
AB - Techniques such as verification condition generation, predicate abstraction, and expressive type systems reduce software verification to proving formulas in expressive logics. Programs and their specifications often make use of data structures such as sets, multisets, algebraic data types, or graphs. Consequently, formulas generated from verification also involve such data structures. To automate the proofs of such formulas we propose a logic (a “calculus”) of such data structures. We build the calculus by starting from decidable logics of individual data structures, and connecting them through functions and sets, in ways that go beyond frameworks such as Nelson-Oppen. The results are new decidable logics that can simultaneously specify properties of different kinds of data structures and overcome the limitations of the individual logics. Several of our decidable logics include abstraction functions that map a data structure into its more abstract view (a tree into a multiset, a multiset into a set), into a numerical quantity (the size or the height), or into the truth value of a candidate data structure invariant (sortedness, or the heap property). For algebraic data types, we identify an asymptotic many-to-one condition on the abstraction function that guarantees the existence of a decision procedure. In addition to the combination based on abstraction functions, we can combine multiple data structure theories if they all reduce to the same data structure logic. As an instance of this approach, we describe a decidable logic whose formulas are propositional combinations of formulas in: weak monadic second-order logic of two successors, two-variable logic with counting, multiset algebra with Presburger arithmetic, the Bernays-Schönfinkel-Ramsey class of first-order logic, and the logic of algebraic data types with the set content function. The subformulas in this combination can share common variables that refer to sets of objects along with the common set algebra operations. Such a sound and complete combination is possible because the relations on sets definable in the component logics are all expressible in Boolean Algebra with Presburger Arithmetic. Presburger arithmetic and its new extensions play an important role in our decidability results. In several cases, when we combine logics that belong to NP, we can prove that satisfiability for the combined logic is still in NP.
AU - Kuncak, Viktor
AU - Piskac, Ruzica
AU - Suter, Philippe
AU - Wies, Thomas
ED - Barthe, Gilles
ED - Hermenegildo, Manuel
ID - 4378
TI - Building a calculus of data structures
VL - 5944
ER -
TY - CONF
AB - Shape analysis is a promising technique to prove program properties about recursive data structures. The challenge is to automatically determine the data-structure type, and to supply the shape analysis with the necessary information about the data structure. We present a stepwise approach to the selection of instrumentation predicates for a TVLA-based shape analysis, which takes us a step closer towards the fully automatic verification of data structures. The approach uses two techniques to guide the refinement of shape abstractions: (1) during program exploration, an explicit heap analysis collects sample instances of the heap structures, which are used to identify the data structures that are manipulated by the program; and (2) during abstraction refinement along an infeasible error path, we consider different possible heap abstractions and choose the coarsest one that eliminates the infeasible path. We have implemented this combined approach for automatic shape refinement as an extension of the software model checker BLAST. Example programs from a data-structure library that manipulate doubly-linked lists and trees were successfully verified by our tool.
AU - Beyer, Dirk
AU - Henzinger, Thomas A
AU - Théoduloz, Grégory
AU - Zufferey, Damien
ED - Rosenblum, David
ED - Taentzer, Gabriele
ID - 4396
TI - Shape refinement through explicit heap analysis
VL - 6013
ER -
TY - CONF
AB - Cloud computing is an emerging paradigm that aims to offer users pay-per-use computing resources, while leaving the burden of managing the computing infrastructure to the cloud provider. We present a new programming and pricing model that gives the cloud user the flexibility of trading execution speed and price on a per-job basis. We discuss the scheduling and resource management challenges for the cloud provider that arise in the implementation of this model. We argue that techniques from real-time and embedded software can be useful in this context.
AU - Henzinger, Thomas A
AU - Tomar, Anmol
AU - Singh, Vasu
AU - Wies, Thomas
AU - Zufferey, Damien
ID - 4380
TI - A marketplace for cloud resources
ER -
TY - CONF
AB - Cloud computing aims to give users virtually unlimited pay-per-use computing resources without the burden of managing the underlying infrastructure. We claim that, in order to realize the full potential of cloud computing, the user must be presented with a pricing model that offers flexibility at the requirements level, such as a choice between different degrees of execution speed, and the cloud provider must be presented with a programming model that offers flexibility at the execution level, such as a choice between different scheduling policies. In such a flexible framework, with each job, the user purchases a virtual computer with the desired speed and cost characteristics, and the cloud provider can optimize the utilization of resources across a stream of jobs from different users. To test our hypothesis, we designed a flexible framework called FlexPRICE (Flexible Provisioning of Resources in a Cloud Environment), which works as follows. A user presents a job to the cloud. The cloud finds different schedules to execute the job and presents a set of quotes to the user in terms of price and duration for the execution. The user then chooses a particular quote and the cloud is obliged to execute the job according to the chosen quote. FlexPRICE thus hides the complexity of the actual scheduling decisions from the user, but still provides enough flexibility to meet the user's actual demands. We implemented FlexPRICE in a simulator called PRICES that allows us to experiment with our framework. We observe that FlexPRICE provides a wide range of execution options, from fast and expensive to slow and cheap, for the whole spectrum of data-intensive and computation-intensive jobs. We also observe that the set of quotes computed by FlexPRICE does not vary as the number of simultaneous jobs increases.
AU - Henzinger, Thomas A
AU - Tomar, Anmol
AU - Singh, Vasu
AU - Wies, Thomas
AU - Zufferey, Damien
ID - 4381
TI - FlexPRICE: Flexible provisioning of resources in a cloud environment
ER -
TY - CONF
AB - Transactional memory (TM) has shown potential to simplify the task of writing concurrent programs. Inspired by classical work on databases, formal definitions of the semantics of TM executions have been proposed. Many of these definitions assumed that accesses to shared data are solely performed through transactions. In practice, due to legacy code and concurrency libraries, transactions in a TM have to share data with non-transactional operations. The semantics of such interaction, while widely discussed by practitioners, lacks a clear formal specification. Those interactions can vary, sometimes in subtle ways, between TM implementations and underlying memory models. We propose a correctness condition for TMs, parametrized opacity, to formally capture the now folklore notion of strong atomicity by stipulating the following two intuitive requirements: first, every transaction appears as if it is executed instantaneously with respect to other transactions and non-transactional operations, and second, non-transactional operations conform to the given underlying memory model. We investigate the inherent cost of implementing parametrized opacity. We first prove that parametrized opacity requires either instrumenting non-transactional operations (for most memory models) or writing to memory by transactions using potentially expensive read-modify-write instructions (such as compare-and-swap). Then, we show that for a class of practical relaxed memory models, parametrized opacity can indeed be implemented with constant-time instrumentation of non-transactional writes and no instrumentation of non-transactional reads. We show that, in practice, parametrizing the notion of correctness allows developing more efficient TM implementations.
AU - Guerraoui, Rachid
AU - Henzinger, Thomas A
AU - Kapalka, Michal
AU - Singh, Vasu
ID - 4382
TI - Transactions in the jungle
ER -
TY - CONF
AB - Digital components play a central role in the design of complex embedded systems. These components are interconnected with other, possibly analog, devices and the physical environment. This environment cannot be entirely captured and can provide inaccurate input data to the component. It is thus important for digital components to have a robust behavior, i.e., a small change in the input sequences should not result in a drastic change in the output sequences. In this paper, we study a notion of robustness for sequential circuits. However, since sequential circuits may have parts that are naturally discontinuous (e.g., digital controllers with switching behavior), we need a flexible framework that accommodates this fact and leaves discontinuous parts of the circuit out of the robustness analysis. As a consequence, we consider sequential circuits that have their input variables partitioned into two disjoint sets: control and disturbance variables. Our contributions are (1) a definition of robustness for sequential circuits as a form of continuity with respect to disturbance variables, (2) the characterization of the exact class of sequential circuits that are robust according to our definition, and (3) an algorithm to decide whether a sequential circuit is robust or not.
AU - Doyen, Laurent
AU - Henzinger, Thomas A
AU - Legay, Axel
AU - Nickovic, Dejan
ID - 4389
TI - Robustness of sequential circuits
ER -
TY - CHAP
AB - While a boolean notion of correctness is given by a preorder on systems and properties, a quantitative notion of correctness is defined by a distance function on systems and properties, where the distance between a system and a property provides a measure of “fit” or “desirability.” In this article, we explore several ways how the simulation preorder can be generalized to a distance function. This is done by equipping the classical simulation game between a system and a property with quantitative objectives. In particular, for systems that satisfy a property, a quantitative simulation game can measure the “robustness” of the satisfaction, that is, how much the system can deviate from its nominal behavior while still satisfying the property. For systems that violate a property, a quantitative simulation game can measure the “seriousness” of the violation, that is, how much the property has to be modified so that it is satisfied by the system. These distances can be computed in polynomial time, since the computation reduces to the value problem in limit average games with constant weights. Finally, we demonstrate how the robustness distance can be used to measure how many transmission errors are tolerated by error correcting codes.
AU - Cerny, Pavol
AU - Henzinger, Thomas A
AU - Radhakrishna, Arjun
ED - Manna, Zohar
ED - Peled, Doron
ID - 4392
T2 - Time For Verification: Essays in Memory of Amir Pnueli
TI - Quantitative Simulation Games
VL - 6200
ER -
TY - CONF
AB - Streaming string transducers [1] define (partial) functions from input strings to output strings. A streaming string transducer makes a single pass through the input string and uses a finite set of variables that range over strings from the output alphabet. At every step, the transducer processes an input symbol, and updates all the variables in parallel using assignments whose right-hand-sides are concatenations of output symbols and variables with the restriction that a variable can be used at most once in a right-hand-side expression. It has been shown that streaming string transducers operating on strings over infinite data domains are of interest in algorithmic verification of list-processing programs, as they lead to PSPACE decision procedures for checking pre/post conditions and for checking semantic equivalence, for a well-defined class of heap-manipulating programs. In order to understand the theoretical expressiveness of streaming transducers, we focus on streaming transducers processing strings over finite alphabets, given the existence of a robust and well-studied class of "regular" transductions for this case. Such regular transductions can be defined either by two-way deterministic finite-state transducers, or using a logical MSO-based characterization. Our main result is that the expressiveness of streaming string transducers coincides exactly with this class of regular transductions.
AU - Alur, Rajeev
AU - Cerny, Pavol
ID - 488
TI - Expressiveness of streaming string transducers
VL - 8
ER -
TY - JOUR
AB - Any programming error that can be revealed before compiling a program saves precious time for the programmer. While integrated development environments already do a good job by detecting, e.g., data-flow abnormalities, current static analysis tools suffer from false positives ("noise") or require strong user interaction. We propose to avoid this deficiency by defining a new class of errors. A program fragment is doomed if its execution will inevitably fail, regardless of which state it is started in. We use a formal verification method to identify such errors fully automatically and, most significantly, without producing noise. We report on experiments with a prototype tool.
AU - Hoenicke, Jochen
AU - Leino, Kari
AU - Podelski, Andreas
AU - Schäf, Martin
AU - Wies, Thomas
ID - 533
IS - 2-3
JF - Formal Methods in System Design
TI - Doomed program points
VL - 37
ER -
TY - GEN
AB - Concurrent data structures with fine-grained synchronization are notoriously difficult to implement correctly. The difficulty of reasoning about these implementations does not stem from the number of variables or the program size, but rather from the large number of possible interleavings. These implementations are therefore prime candidates for model checking. We introduce an algorithm for verifying linearizability of singly-linked heap-based concurrent data structures. We consider a model consisting of an unbounded heap where each node stores an element from an unbounded data domain, with a restricted set of operations for testing and updating pointers and data elements. Our main result is that linearizability is decidable for programs that invoke a fixed number of methods, possibly in parallel. This decidable fragment covers many of the common implementation techniques — fine-grained locking, lazy synchronization, and lock-free synchronization. We also show how the technique can be used to verify optimistic implementations with the help of programmer annotations. We developed a verification tool CoLT and evaluated it on a representative sample of Java implementations of the concurrent set data structure. The tool verified linearizability of a number of implementations, found a known error in a lock-free implementation and proved that the corrected version is linearizable.
AU - Cerny, Pavol
AU - Radhakrishna, Arjun
AU - Zufferey, Damien
AU - Chaudhuri, Swarat
AU - Alur, Rajeev
ID - 5391
SN - 2664-1690
TI - Model checking of linearizability of concurrent list implementations
ER -
TY - CONF
AB - Boolean notions of correctness are formalized by preorders on systems. Quantitative measures of correctness can be formalized by real-valued distance functions between systems, where the distance between implementation and specification provides a measure of “fit” or “desirability.” We extend the simulation preorder to the quantitative setting, by making each player of a simulation game pay a certain price for her choices. We use the resulting games with quantitative objectives to define three different simulation distances. The correctness distance measures how much the specification must be changed in order to be satisfied by the implementation. The coverage distance measures how much the implementation restricts the degrees of freedom offered by the specification. The robustness distance measures how much a system can deviate from the implementation description without violating the specification. We consider these distances for safety as well as liveness specifications. The distances can be computed in polynomial time for safety specifications, and for liveness specifications given by weak fairness constraints. We show that the distance functions satisfy the triangle inequality, that the distance between two systems does not increase under parallel composition with a third system, and that the distance between two systems can be bounded from above and below by distances between abstractions of the two systems. These properties suggest that our simulation distances provide an appropriate basis for a quantitative theory of discrete systems. We also demonstrate how the robustness distance can be used to measure how many transmission errors are tolerated by error correcting codes.
AU - Cerny, Pavol
AU - Henzinger, Thomas A
AU - Radhakrishna, Arjun
ID - 4393
TI - Simulation distances
VL - 6269
ER -
TY - CONF
AB - Concurrent data structures with fine-grained synchronization are notoriously difficult to implement correctly. The difficulty of reasoning about these implementations does not stem from the number of variables or the program size, but rather from the large number of possible interleavings. These implementations are therefore prime candidates for model checking. We introduce an algorithm for verifying linearizability of singly-linked heap-based concurrent data structures. We consider a model consisting of an unbounded heap where each vertex stores an element from an unbounded data domain, with a restricted set of operations for testing and updating pointers and data elements. Our main result is that linearizability is decidable for programs that invoke a fixed number of methods, possibly in parallel. This decidable fragment covers many of the common implementation techniques — fine-grained locking, lazy synchronization, and lock-free synchronization. We also show how the technique can be used to verify optimistic implementations with the help of programmer annotations. We developed a verification tool CoLT and evaluated it on a representative sample of Java implementations of the concurrent set data structure. The tool verified linearizability of a number of implementations, found a known error in a lock-free implementation and proved that the corrected version is linearizable.
AU - Cerny, Pavol
AU - Radhakrishna, Arjun
AU - Zufferey, Damien
AU - Chaudhuri, Swarat
AU - Alur, Rajeev
ID - 4390
TI - Model checking of linearizability of concurrent list implementations
VL - 6174
ER -
TY - GEN
AB - We present an algorithmic method for the synthesis of concurrent programs that are optimal with respect to quantitative performance measures. The input consists of a sequential sketch, that is, a program that does not contain synchronization constructs, and of a parametric performance model that assigns costs to actions such as locking, context switching, and idling. The quantitative synthesis problem is to automatically introduce synchronization constructs into the sequential sketch so that both correctness is guaranteed and worst-case (or average-case) performance is optimized. Correctness is formalized as race freedom or linearizability.
We show that for worst-case performance, the problem can be modeled as a 2-player graph game with quantitative (limit-average) objectives, and for average-case performance, as a 2 1/2-player graph game (with probabilistic transitions). In both cases, the optimal correct program is derived from an optimal strategy in the corresponding quantitative game. We prove that the respective game problems are computationally expensive (NP-complete), and present several techniques that overcome the theoretical difficulty in cases of concurrent programs of practical interest.
We have implemented a prototype tool and used it for the automatic synthesis of programs that access a concurrent list. For certain parameter values, our method automatically synthesizes various classical synchronization schemes for implementing a concurrent list, such as fine-grained locking or a lazy algorithm. For other parameter values, a new, hybrid synchronization style is synthesized, which uses both the lazy approach and coarse-grained locks (instead of standard fine-grained locks). The trade-off occurs because while fine-grained locking tends to decrease the cost that is due to waiting for locks, it increases cache size requirements.
AU - Chatterjee, Krishnendu
AU - Cerny, Pavol
AU - Henzinger, Thomas A
AU - Radhakrishna, Arjun
AU - Singh, Rohit
ID - 5388
SN - 2664-1690
TI - Quantitative synthesis for concurrent programs
ER -
TY - GEN
AB - Boolean notions of correctness are formalized by preorders on systems. Quantitative measures of correctness can be formalized by real-valued distance functions between systems, where the distance between implementation and specification provides a measure of “fit” or “desirability.” We extend the simulation preorder to the quantitative setting, by making each player of a simulation game pay a certain price for her choices. We use the resulting games with quantitative objectives to define three different simulation distances. The correctness distance measures how much the specification must be changed in order to be satisfied by the implementation. The coverage distance measures how much the implementation restricts the degrees of freedom offered by the specification. The robustness distance measures how much a system can deviate from the implementation description without violating the specification. We consider these distances for safety as well as liveness specifications. The distances can be computed in polynomial time for safety specifications, and for liveness specifications given by weak fairness constraints. We show that the distance functions satisfy the triangle inequality, that the distance between two systems does not increase under parallel composition with a third system, and that the distance between two systems can be bounded from above and below by distances between abstractions of the two systems. These properties suggest that our simulation distances provide an appropriate basis for a quantitative theory of discrete systems. We also demonstrate how the robustness distance can be used to measure how many transmission errors are tolerated by error correcting codes.
AU - Cerny, Pavol
AU - Henzinger, Thomas A
AU - Radhakrishna, Arjun
ID - 5389
SN - 2664-1690
TI - Simulation distances
ER -
TY - CONF
AB - We study observation-based strategies for partially-observable Markov decision processes (POMDPs) with parity objectives. An observation-based strategy relies on partial information about the history of a play, namely, on the past sequence of observations. We consider qualitative analysis problems: given a POMDP with a parity objective, decide whether there exists an observation-based strategy to achieve the objective with probability 1 (almost-sure winning), or with positive probability (positive winning). Our main results are twofold. First, we present a complete picture of the computational complexity of the qualitative analysis problem for POMDPs with parity objectives and its subclasses: safety, reachability, Büchi, and coBüchi objectives. We establish several upper and lower bounds that were not known in the literature. Second, we give optimal bounds (matching upper and lower bounds) for the memory required by pure and randomized observation-based strategies for each class of objectives.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Henzinger, Thomas A
ID - 3855
TI - Qualitative analysis of partially-observable Markov Decision Processes
VL - 6281
ER -
TY - CONF
AB - GIST is a tool that (a) solves the qualitative analysis problem of turn-based probabilistic games with ω-regular objectives; and (b) synthesizes reasonable environment assumptions for synthesis of unrealizable specifications. Our tool provides the first efficient implementations of several reduction-based techniques to solve turn-based probabilistic games, and uses the analysis of turn-based probabilistic games for synthesizing environment assumptions for unrealizable specifications.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Jobstmann, Barbara
AU - Radhakrishna, Arjun
ID - 4388
TI - GIST: A solver for probabilistic games
VL - 6174
ER -
TY - CONF
AB - We consider probabilistic automata on infinite words with acceptance defined by safety, reachability, Büchi, coBüchi, and limit-average conditions. We consider quantitative and qualitative decision problems. We present extensions and adaptations of proofs for probabilistic finite automata and present an almost complete characterization of the decidability and undecidability frontier of the quantitative and qualitative decision problems for probabilistic automata on infinite words.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
ID - 3857
TI - Probabilistic Automata on infinite words: decidability and undecidability results
VL - 6252
ER -
TY - CONF
AB - In this paper we extend the work of de Alfaro, Henzinger et al. on interface theories for component-based design. Existing interface theories often fail to capture functional relations between the inputs and outputs of an interface. For example, a simple synchronous interface that takes as input a number n ≥ 0 and returns, at the same time, as output n + 1, cannot be expressed in existing theories. In this paper we provide a theory of relational interfaces, where such input-output relations can be captured. Our theory supports synchronous interfaces, both stateless and stateful. It includes explicit notions of environments and pluggability, and satisfies fundamental properties such as preservation of refinement by composition, and characterization of pluggability by refinement. We achieve these properties by making reasonable restrictions on feedback loops in interface compositions.
AU - Tripakis, Stavros
AU - Lickly, Ben
AU - Henzinger, Thomas A
AU - Lee, Edward
ID - 3837
T2 - EMSOFT '09 Proceedings of the seventh ACM international conference on Embedded software
TI - On relational interfaces
ER -
TY - CONF
AB - We compare several languages for specifying Markovian population models such as queuing networks and chemical reaction networks. These languages —matrix descriptions, stochastic Petri nets, stoichiometric equations, stochastic process algebras, and guarded command models— all describe continuous-time Markov chains, but they differ according to important properties, such as compositionality, expressiveness and succinctness, executability, ease of use, and the support they provide for checking the well-formedness of a model and for analyzing a model.
AU - Henzinger, Thomas A
AU - Jobstmann, Barbara
AU - Wolf, Verena
ID - 3841
TI - Formalisms for specifying Markovian population models
VL - 5797
ER -
TY - CONF
AB - Within systems biology there is an increasing interest in the stochastic behavior of biochemical reaction networks. An appropriate stochastic description is provided by the chemical master equation, which represents a continuous-time Markov chain (CTMC).
Standard Uniformization (SU) is an efficient method for the transient analysis of CTMCs. For systems with very different time scales, such as biochemical reaction networks, SU is computationally expensive. In these cases, a variant of SU, called adaptive uniformization (AU), is known to reduce the large number of iterations needed by SU. The additional difficulty of AU is that it requires the solution of a birth process.
In this paper we present an on-the-fly variant of AU, where we improve the original algorithm for AU at the cost of a small approximation error. By means of several examples, we show that our approach is particularly well-suited for biochemical reaction networks.
AU - Didier, Frédéric
AU - Henzinger, Thomas A
AU - Mateescu, Maria
AU - Wolf, Verena
ID - 3843
IS - 6
TI - Fast adaptive uniformization of the chemical master equation
VL - 4
ER -
TY - CONF
AB - The Hierarchical Timing Language (HTL) is a real-time coordination language for distributed control systems. HTL programs must be checked for well-formedness, race freedom, transmission safety (schedulability of inter-host communication), and time safety (schedulability of host computation). We present a modular abstract syntax and semantics for HTL, modular checks of well-formedness, race freedom, and transmission safety, and modular code distribution. Our contributions here complement previous results on HTL time safety and modular code generation. Modularity in HTL can be utilized in easy program composition as well as fast program analysis and code generation, but also in so-called runtime patching, where program components may be modified at runtime.
AU - Henzinger, Thomas A
AU - Kirsch, Christoph
AU - Marques, Eduardo
AU - Sokolova, Ana
ID - 3844
TI - Distributed, modular HTL
ER -
TY - GEN
AB - Gist is a tool that (a) solves the qualitative analysis problem of turn-based probabilistic games with ω-regular objectives; and (b) synthesizes reasonable environment assumptions for synthesis of unrealizable specifications. Our tool provides efficient implementations of several reduction-based techniques to solve turn-based probabilistic games, and uses the analysis of turn-based probabilistic games for synthesizing environment assumptions for unrealizable specifications.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Jobstmann, Barbara
AU - Radhakrishna, Arjun
ID - 5393
SN - 2664-1690
TI - Gist: A solver for probabilistic games
ER -
TY - GEN
AB - We consider two-player games played on graphs with request-response and finitary Streett objectives. We show these games are PSPACE-hard, improving the previously known NP-hardness. We also improve the lower bounds on memory required by the winning strategies for the players.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Horn, Florian
ID - 5394
SN - 2664-1690
TI - Improved lower bounds for request-response and finitary Streett games
ER -
TY - GEN
AB - We study observation-based strategies for partially-observable Markov decision processes (POMDPs) with omega-regular objectives. An observation-based strategy relies on partial information about the history of a play, namely, on the past sequence of observations. We consider the qualitative analysis problem: given a POMDP with an omega-regular objective, decide whether there is an observation-based strategy to achieve the objective with probability 1 (almost-sure winning), or with positive probability (positive winning). Our main results are twofold. First, we present a complete picture of the computational complexity of the qualitative analysis of POMDPs with parity objectives (a canonical form to express omega-regular objectives) and its subclasses. Our contribution consists in establishing several upper and lower bounds that were not known in the literature. Second, we present optimal bounds (matching upper and lower bounds) on the memory required by pure and randomized observation-based strategies for the qualitative analysis of POMDPs with parity objectives and its subclasses.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Henzinger, Thomas A
ID - 5395
SN - 2664-1690
TI - Qualitative analysis of partially-observable Markov decision processes
ER -