Optimizing expectation with guarantees in POMDPs
Chatterjee, Krishnendu
Novotny, Petr
Pérez, Guillermo
Raskin, Jean
Zikelic, Djordje
A standard objective in partially observable Markov decision processes (POMDPs) is to find a policy that maximizes the expected discounted-sum payoff. However, such policies may still permit unlikely but highly undesirable outcomes, which is especially problematic in safety-critical applications. Recently, there has been a surge of interest in POMDPs where the goal is to maximize the probability that the payoff is at least a given threshold, but these approaches do not consider any optimization beyond satisfying this threshold constraint. In this work we go beyond both the “expectation” and “threshold” approaches and consider a “guaranteed payoff optimization (GPO)” problem for POMDPs, where we are given a threshold t and the objective is to find a policy σ such that (a) each possible outcome of σ yields a discounted-sum payoff of at least t, and (b) the expected discounted-sum payoff of σ is optimal (or near-optimal) among all policies satisfying (a). We present a practical approach to tackle the GPO problem and evaluate it on standard POMDP benchmarks.
AAAI Press
2017
info:eu-repo/semantics/conferenceObject
doc-type:conferenceObject
text
http://purl.org/coar/resource_type/c_5794
https://research-explorer.app.ist.ac.at/record/1009
Chatterjee K, Novotny P, Pérez G, Raskin J, Zikelic D. Optimizing expectation with guarantees in POMDPs. In: <i>Proceedings of the 31st AAAI Conference on Artificial Intelligence</i>. Vol 5. AAAI Press; 2017:3725-3732.
eng
info:eu-repo/grantAgreement/FWF//S11407
info:eu-repo/grantAgreement/EC/FP7/279307
info:eu-repo/grantAgreement/EC/FP7/291734
info:eu-repo/grantAgreement/FWF//ICT15-003
info:eu-repo/semantics/openAccess