---
res:
bibo_abstract:
- "We consider Markov decision processes (MDPs) with multiple limit-average (or
  mean-payoff) objectives. There have been two different views: (i) the expectation
  semantics, where the goal is to optimize the expected mean-payoff objective, and
  (ii) the satisfaction semantics, where the goal is to maximize the probability
  of runs such that the mean-payoff value stays above a given vector. We consider
  the problem where the goal is to optimize the expectation under the constraint
  that the satisfaction semantics is ensured, and thus consider a generalization
  that unifies the existing semantics. Our problem captures the notion of optimization
  with respect to strategies that are risk-averse (i.e., ensure a certain probabilistic
  guarantee). Our main results are algorithms for the decision problem that are
  always polynomial in the size of the MDP. We also show that an approximation
  of the Pareto curve can be computed in time polynomial in the size of the MDP
  and the approximation factor, but exponential in the number of dimensions. Finally,
  we present a complete characterization of the strategy complexity (in terms of
  memory bounds and randomization) required to solve our problem.@eng"
bibo_authorlist:
- foaf_Person:
foaf_givenName: Krishnendu
foaf_name: Chatterjee, Krishnendu
foaf_surname: Chatterjee
foaf_workInfoHomepage: http://www.librecat.org/personId=2E5DCA20-F248-11E8-B48F-1D18A9856A87
orcid: 0000-0002-4561-241X
- foaf_Person:
foaf_givenName: Zuzana
foaf_name: Komarkova, Zuzana
foaf_surname: Komarkova
- foaf_Person:
foaf_givenName: Jan
foaf_name: Kretinsky, Jan
foaf_surname: Kretinsky
foaf_workInfoHomepage: http://www.librecat.org/personId=44CEF464-F248-11E8-B48F-1D18A9856A87
orcid: 0000-0002-8122-2881
bibo_doi: 10.15479/AT:IST-2015-318-v1-1
dct_date: 2015^xs_gYear
dct_isPartOf:
- http://id.crossref.org/issn/2664-1690
dct_language: eng
dct_publisher: IST Austria@
dct_title: Unifying two views on multiple mean-payoff objectives in Markov decision
processes@
...