{"_id":"3339","oa":1,"type":"preprint","page":"17","publication_status":"published","language":[{"iso":"eng"}],"publication":"arXiv","status":"public","external_id":{"arxiv":["1107.2132"]},"date_created":"2018-12-11T12:02:46Z","author":[{"full_name":"Chatterjee, Krishnendu","id":"2E5DCA20-F248-11E8-B48F-1D18A9856A87","orcid":"0000-0002-4561-241X","first_name":"Krishnendu","last_name":"Chatterjee"},{"first_name":"Luca","last_name":"De Alfaro","full_name":"De Alfaro, Luca"},{"full_name":"Roy, Pritam","first_name":"Pritam","last_name":"Roy"}],"oa_version":"Preprint","abstract":[{"text":"Turn-based stochastic games and their important subclass, Markov decision processes (MDPs), provide models for systems with both probabilistic and nondeterministic behaviors. We consider turn-based stochastic games with two classical quantitative objectives: discounted-sum and long-run average objectives. The game models and the quantitative objectives are widely used in probabilistic verification, planning, optimal inventory control, network protocols, and performance analysis. Games and MDPs that model realistic systems often have very large state spaces, and probabilistic abstraction techniques are necessary to handle the state-space explosion. The commonly used full-abstraction techniques do not yield space savings for systems that have many states with similar value but not necessarily similar transition structure. A semi-abstraction technique, namely magnifying-lens abstraction (MLA), which clusters states based on value only, disregarding differences in their transition relations, was proposed for qualitative objectives (reachability and safety objectives). In this paper we extend the MLA technique to solve stochastic games with discounted-sum and long-run average objectives. We present an MLA-based abstraction-refinement algorithm for stochastic games and MDPs with discounted-sum objectives. 
For long-run average objectives, our solution works for all MDPs and for a subclass of stochastic games where every state has the same value. ","lang":"eng"}],"publist_id":"3286","date_updated":"2021-01-12T07:42:46Z","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","month":"07","year":"2011","day":"11","publisher":"ArXiv","department":[{"_id":"KrCh"}],"title":"Magnifying lens abstraction for stochastic games with discounted and long-run average objectives","main_file_link":[{"open_access":"1","url":"http://arxiv.org/abs/1107.2132"}],"date_published":"2011-07-11T00:00:00Z","citation":{"chicago":"Chatterjee, Krishnendu, Luca De Alfaro, and Pritam Roy. “Magnifying Lens Abstraction for Stochastic Games with Discounted and Long-Run Average Objectives.” ArXiv. ArXiv, 2011.","apa":"Chatterjee, K., De Alfaro, L., & Roy, P. (2011). Magnifying lens abstraction for stochastic games with discounted and long-run average objectives. arXiv. ArXiv.","ista":"Chatterjee K, De Alfaro L, Roy P. 2011. Magnifying lens abstraction for stochastic games with discounted and long-run average objectives. arXiv.","short":"K. Chatterjee, L. De Alfaro, P. Roy, ArXiv (2011).","mla":"Chatterjee, Krishnendu, et al. “Magnifying Lens Abstraction for Stochastic Games with Discounted and Long-Run Average Objectives.” ArXiv, ArXiv, 2011.","ama":"Chatterjee K, De Alfaro L, Roy P. Magnifying lens abstraction for stochastic games with discounted and long-run average objectives. arXiv. 2011.","ieee":"K. Chatterjee, L. De Alfaro, and P. Roy, “Magnifying lens abstraction for stochastic games with discounted and long-run average objectives,” arXiv. ArXiv, 2011."}}