{"citation":{"ieee":"K. Chatterjee, R. J. Saona Urmeneta, and B. Ziliotto, “Finite-memory strategies in POMDPs with long-run average objectives,” Mathematics of Operations Research, vol. 47, no. 1. Institute for Operations Research and the Management Sciences, pp. 100–119, 2022.","short":"K. Chatterjee, R.J. Saona Urmeneta, B. Ziliotto, Mathematics of Operations Research 47 (2022) 100–119.","chicago":"Chatterjee, Krishnendu, Raimundo J Saona Urmeneta, and Bruno Ziliotto. “Finite-Memory Strategies in POMDPs with Long-Run Average Objectives.” Mathematics of Operations Research. Institute for Operations Research and the Management Sciences, 2022. https://doi.org/10.1287/moor.2020.1116.","ista":"Chatterjee K, Saona Urmeneta RJ, Ziliotto B. 2022. Finite-memory strategies in POMDPs with long-run average objectives. Mathematics of Operations Research. 47(1), 100–119.","ama":"Chatterjee K, Saona Urmeneta RJ, Ziliotto B. Finite-memory strategies in POMDPs with long-run average objectives. Mathematics of Operations Research. 2022;47(1):100-119. doi:10.1287/moor.2020.1116","apa":"Chatterjee, K., Saona Urmeneta, R. J., & Ziliotto, B. (2022). Finite-memory strategies in POMDPs with long-run average objectives. Mathematics of Operations Research. Institute for Operations Research and the Management Sciences. https://doi.org/10.1287/moor.2020.1116","mla":"Chatterjee, Krishnendu, et al. “Finite-Memory Strategies in POMDPs with Long-Run Average Objectives.” Mathematics of Operations Research, vol. 47, no. 1, Institute for Operations Research and the Management Sciences, 2022, pp. 100–19, doi:10.1287/moor.2020.1116."},"date_published":"2022-02-01T00:00:00Z","year":"2022","department":[{"_id":"GradSch"},{"_id":"KrCh"}],"keyword":["Management Science and Operations Research","General Mathematics","Computer Science Applications"],"publication_status":"published","acknowledgement":"Partially supported by Austrian Science Fund (FWF) NFN Grant No RiSE/SHiNE S11407, by CONICYT Chile through grant PII 20150140, and by ECOS-CONICYT through grant C15E03.\r\n","article_type":"original","language":[{"iso":"eng"}],"title":"Finite-memory strategies in POMDPs with long-run average objectives","date_updated":"2023-09-05T13:16:11Z","intvolume":" 47","page":"100-119","month":"02","project":[{"call_identifier":"FWF","name":"Game Theory","grant_number":"S11407","_id":"25863FF4-B435-11E9-9278-68D0E5697425"}],"publication_identifier":{"issn":["0364-765X"],"eissn":["1526-5471"]},"main_file_link":[{"url":"https://arxiv.org/abs/1904.13360","open_access":"1"}],"doi":"10.1287/moor.2020.1116","publication":"Mathematics of Operations Research","date_created":"2021-04-08T09:33:31Z","type":"journal_article","external_id":{"arxiv":["1904.13360"],"isi":["000731918100001"]},"abstract":[{"lang":"eng","text":"Partially observable Markov decision processes (POMDPs) are standard models for dynamic systems with probabilistic and nondeterministic behaviour in uncertain environments. We prove that in POMDPs with long-run average objective, the decision maker has approximately optimal strategies with finite memory. This implies notably that approximating the long-run value is recursively enumerable, as well as a weak continuity property of the value with respect to the transition function. 
"}],"issue":"1","day":"01","_id":"9311","user_id":"c635000d-4b10-11ee-a964-aac5a93f6ac1","scopus_import":"1","article_processing_charge":"No","quality_controlled":"1","isi":1,"author":[{"full_name":"Chatterjee, Krishnendu","orcid":"0000-0002-4561-241X","first_name":"Krishnendu","last_name":"Chatterjee","id":"2E5DCA20-F248-11E8-B48F-1D18A9856A87"},{"full_name":"Saona Urmeneta, Raimundo J","orcid":"0000-0001-5103-038X","first_name":"Raimundo J","last_name":"Saona Urmeneta","id":"BD1DF4C4-D767-11E9-B658-BC13E6697425"},{"first_name":"Bruno","last_name":"Ziliotto","full_name":"Ziliotto, Bruno"}],"volume":47,"oa_version":"Preprint","oa":1,"publisher":"Institute for Operations Research and the Management Sciences","status":"public"}