---
res:
bibo_abstract:
- A standard objective in partially observable Markov decision processes (POMDPs)
  is to find a policy that maximizes the expected discounted-sum payoff. However,
  such policies may still permit unlikely but highly undesirable outcomes, which
  is especially problematic in safety-critical applications. Recently, there has
  been a surge of interest in POMDPs where the goal is to maximize the probability
  that the payoff is at least a given threshold, but these approaches do not
  consider any optimization beyond satisfying this threshold constraint. In this
  work we go beyond both the “expectation” and “threshold” approaches and consider
  a “guaranteed payoff optimization (GPO)” problem for POMDPs, where we are given
  a threshold t and the objective is to find a policy σ such that a) each possible
  outcome of σ yields a discounted-sum payoff of at least t, and b) the expected
  discounted-sum payoff of σ is optimal (or near-optimal) among all policies
  satisfying a). We present a practical approach to tackling the GPO problem and
  evaluate it on standard POMDP benchmarks.@eng
bibo_authorlist:
- foaf_Person:
foaf_givenName: Krishnendu
foaf_name: Chatterjee, Krishnendu
foaf_surname: Chatterjee
foaf_workInfoHomepage: http://www.librecat.org/personId=2E5DCA20-F248-11E8-B48F-1D18A9856A87
orcid: 0000-0002-4561-241X
- foaf_Person:
foaf_givenName: Petr
foaf_name: Novotny, Petr
foaf_surname: Novotny
foaf_workInfoHomepage: http://www.librecat.org/personId=3CC3B868-F248-11E8-B48F-1D18A9856A87
- foaf_Person:
foaf_givenName: Guillermo
foaf_name: Pérez, Guillermo
foaf_surname: Pérez
- foaf_Person:
foaf_givenName: Jean
foaf_name: Raskin, Jean
foaf_surname: Raskin
- foaf_Person:
foaf_givenName: Djordje
foaf_name: Zikelic, Djordje
foaf_surname: Zikelic
bibo_volume: 5
dct_date: 2017^xs_gYear
dct_language: eng
dct_publisher: AAAI Press@eng
dct_title: Optimizing expectation with guarantees in POMDPs@eng
...