---
res:
bibo_abstract:
- We consider Markov Decision Processes (MDPs) with mean-payoff parity and energy
parity objectives. In system design, the parity objective is used to encode ω-regular
specifications, and the mean-payoff and energy objectives can be used to model
quantitative resource constraints. The energy condition requires that the resource
level never drops below 0, and the mean-payoff condition requires that the limit-average
value of the resource consumption is within a threshold. While these two (energy
and mean-payoff) classical conditions are equivalent for two-player games, we
show that they differ for MDPs. We show that the problem of deciding whether a
state is almost-sure winning (i.e., winning with probability 1) in energy parity
MDPs is in NP ∩ coNP, while for mean-payoff parity MDPs, the problem is solvable
in polynomial time, improving a recent PSPACE bound.@eng
bibo_authorlist:
- foaf_Person:
foaf_givenName: Krishnendu
foaf_name: Chatterjee, Krishnendu
foaf_surname: Chatterjee
foaf_workInfoHomepage: http://www.librecat.org/personId=2E5DCA20-F248-11E8-B48F-1D18A9856A87
orcid: 0000-0002-4561-241X
- foaf_Person:
foaf_givenName: Laurent
foaf_name: Doyen, Laurent
foaf_surname: Doyen
bibo_doi: 10.1007/978-3-642-22993-0_21
bibo_volume: 6907
dct_date: 2011^xs_gYear
dct_language: eng
dct_publisher: Springer@
dct_title: Energy and mean-payoff parity Markov Decision Processes@
...