Planning in adversarial and uncertain environments can be modeled as the problem of devising strategies in stochastic perfect information games. These games are generalizations of Markov decision processes (MDPs): there are two (adversarial) players, and a source of randomness. The main practical obstacle to computing winning strategies in such games is the size of the state space. In practice, therefore, one typically works with abstractions of the model. The difficulty is to come up with an abstraction that is neither so coarse that it removes all winning strategies (plans), nor so fine that it is intractable. In verification, the paradigm of counterexample-guided abstraction refinement has been successful in automatically constructing abstractions that are useful yet parsimonious. We extend this paradigm to probabilistic models (namely, perfect information games and, as a special case, MDPs). This allows us to apply the counterexample-guided abstraction paradigm to the AI planning problem. As special cases, we get planning algorithms for MDPs and deterministic systems that automatically construct system abstractions.
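The counterexample-guided abstraction refinement loop the abstract refers to can be illustrated on a toy example. The sketch below is specialized to a tiny deterministic transition system (one of the special cases mentioned above), not the paper's game construction; the example system, the block-splitting heuristic, and all names are illustrative assumptions.

```python
# Toy sketch of a CEGAR loop on a small deterministic transition system.
# The concrete system, BAD set, and splitting rule are illustrative only.

SUCC = {0: 1, 1: 2, 2: 3, 3: 3, 4: 5, 5: 4}   # deterministic successors
INIT = 0
BAD = {5}                                      # question: is 5 reachable from 0?

def abstract_edges(partition):
    """Block i -> block j if some concrete state in block i steps into block j."""
    block_of = {s: i for i, blk in enumerate(partition) for s in blk}
    return block_of, {(block_of[s], block_of[t]) for s, t in SUCC.items()}

def abstract_counterexample(partition):
    """BFS in the abstraction for a path from INIT's block to a bad block."""
    block_of, edges = abstract_edges(partition)
    bad_blocks = {block_of[b] for b in BAD}
    prev = {block_of[INIT]: None}
    queue = [block_of[INIT]]
    while queue:
        a = queue.pop(0)
        if a in bad_blocks:                    # reconstruct the abstract path
            path = []
            while a is not None:
                path.append(a)
                a = prev[a]
            return path[::-1]
        for x, y in edges:
            if x == a and y not in prev:
                prev[y] = a
                queue.append(y)
    return None

def first_spurious_step(path, partition):
    """Replay the abstract path concretely; return the index of the step where
    it breaks, or None if the counterexample is real. (Blocks stay pure with
    respect to BAD, so concretely reaching the last block reaches a bad state.)"""
    reachable = {INIT}
    for i in range(1, len(path)):
        reachable = {SUCC[s] for s in reachable} & set(partition[path[i]])
        if not reachable:
            return i
    return None

def split(partition, blk, nxt):
    """Refine: separate the states of block blk that step into block nxt from
    those that do not (both halves are nonempty when the step was spurious)."""
    target = set(partition[nxt])
    yes = [s for s in partition[blk] if SUCC[s] in target]
    no = [s for s in partition[blk] if SUCC[s] not in target]
    return partition[:blk] + [yes, no] + partition[blk + 1:]

def cegar():
    # Start from the coarsest partition that respects the property (bad vs. not).
    partition = [sorted(set(SUCC) - BAD), sorted(BAD)]
    while True:
        path = abstract_counterexample(partition)
        if path is None:
            return "safe", partition          # no abstract counterexample left
        i = first_spurious_step(path, partition)
        if i is None:
            return "unsafe", path             # the counterexample is real
        partition = split(partition, path[i - 1], path[i])

print(cegar())
```

On this example the first abstract counterexample is spurious; one refinement splits off state 4, after which no abstract path to the bad block remains and the loop reports "safe" without ever enumerating the concrete state space exhaustively.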
UAI: Uncertainty in Artificial Intelligence
Chatterjee K, Henzinger TA, Jhala R, Majumdar R. Counterexample-guided planning. In: AUAI Press; 2005:104-111.