{"isi":1,"publication_status":"published","ddc":["000"],"volume":11561,"scopus_import":"1","month":"07","publication":"31st International Conference on Computer-Aided Verification","citation":{"ista":"Avni G, Bloem R, Chatterjee K, Henzinger TA, Konighofer B, Pranger S. 2019. Run-time optimization for learned controllers through quantitative games. 31st International Conference on Computer-Aided Verification. CAV: Computer Aided Verification, LNCS, vol. 11561, 630–649.","mla":"Avni, Guy, et al. “Run-Time Optimization for Learned Controllers through Quantitative Games.” 31st International Conference on Computer-Aided Verification, vol. 11561, Springer, 2019, pp. 630–49, doi:10.1007/978-3-030-25540-4_36.","chicago":"Avni, Guy, Roderick Bloem, Krishnendu Chatterjee, Thomas A Henzinger, Bettina Konighofer, and Stefan Pranger. “Run-Time Optimization for Learned Controllers through Quantitative Games.” In 31st International Conference on Computer-Aided Verification, 11561:630–49. Springer, 2019. https://doi.org/10.1007/978-3-030-25540-4_36.","ama":"Avni G, Bloem R, Chatterjee K, Henzinger TA, Konighofer B, Pranger S. Run-time optimization for learned controllers through quantitative games. In: 31st International Conference on Computer-Aided Verification. Vol 11561. Springer; 2019:630-649. doi:10.1007/978-3-030-25540-4_36","apa":"Avni, G., Bloem, R., Chatterjee, K., Henzinger, T. A., Konighofer, B., & Pranger, S. (2019). Run-time optimization for learned controllers through quantitative games. In 31st International Conference on Computer-Aided Verification (Vol. 11561, pp. 630–649). New York, NY, United States: Springer. https://doi.org/10.1007/978-3-030-25540-4_36","ieee":"G. Avni, R. Bloem, K. Chatterjee, T. A. Henzinger, B. Konighofer, and S. Pranger, “Run-time optimization for learned controllers through quantitative games,” in 31st International Conference on Computer-Aided Verification, New York, NY, United States, 2019, vol. 11561, pp. 630–649.","short":"G. Avni, R. Bloem, K. Chatterjee, T.A. Henzinger, B. Konighofer, S. Pranger, in:, 31st International Conference on Computer-Aided Verification, Springer, 2019, pp. 
630–649."},"status":"public","external_id":{"isi":["000491468000036"]},"author":[{"full_name":"Avni, Guy","first_name":"Guy","last_name":"Avni","id":"463C8BC2-F248-11E8-B48F-1D18A9856A87","orcid":"0000-0001-5588-8287"},{"last_name":"Bloem","first_name":"Roderick","full_name":"Bloem, Roderick"},{"first_name":"Krishnendu","orcid":"0000-0002-4561-241X","last_name":"Chatterjee","id":"2E5DCA20-F248-11E8-B48F-1D18A9856A87","full_name":"Chatterjee, Krishnendu"},{"first_name":"Thomas A","orcid":"0000−0002−2985−7724","last_name":"Henzinger","id":"40876CD8-F248-11E8-B48F-1D18A9856A87","full_name":"Henzinger, Thomas A"},{"last_name":"Konighofer","first_name":"Bettina","full_name":"Konighofer, Bettina"},{"last_name":"Pranger","first_name":"Stefan","full_name":"Pranger, Stefan"}],"has_accepted_license":"1","date_updated":"2023-08-25T10:33:27Z","user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","year":"2019","_id":"6462","doi":"10.1007/978-3-030-25540-4_36","quality_controlled":"1","alternative_title":["LNCS"],"type":"conference","tmp":{"legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode","image":"/images/cc_by.png","short":"CC BY (4.0)","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)"},"file":[{"file_id":"6816","access_level":"open_access","date_created":"2019-08-14T09:35:24Z","file_size":659766,"date_updated":"2020-07-14T12:47:31Z","checksum":"c231579f2485c6fd4df17c9443a4d80b","creator":"dernst","relation":"main_file","file_name":"2019_CAV_Avni.pdf","content_type":"application/pdf"}],"publisher":"Springer","department":[{"_id":"ToHe"},{"_id":"KrCh"}],"oa":1,"abstract":[{"text":"A controller is a device that interacts with a plant. At each time point,it reads the plant’s state and issues commands with the goal that the plant oper-ates optimally. Constructing optimal controllers is a fundamental and challengingproblem. Machine learning techniques have recently been successfully applied totrain controllers, yet they have limitations. Learned controllers are monolithic andhard to reason about. In particular, it is difficult to add features without retraining,to guarantee any level of performance, and to achieve acceptable performancewhen encountering untrained scenarios. These limitations can be addressed bydeploying quantitative run-timeshieldsthat serve as a proxy for the controller.At each time point, the shield reads the command issued by the controller andmay choose to alter it before passing it on to the plant. We show how optimalshields that interfere as little as possible while guaranteeing a desired level ofcontroller performance, can be generated systematically and automatically usingreactive synthesis. First, we abstract the plant by building a stochastic model.Second, we consider the learned controller to be a black box. Third, we mea-surecontroller performanceandshield interferenceby two quantitative run-timemeasures that are formally defined using weighted automata. Then, the problemof constructing a shield that guarantees maximal performance with minimal inter-ference is the problem of finding an optimal strategy in a stochastic2-player game“controller versus shield” played on the abstract state space of the plant with aquantitative objective obtained from combining the performance and interferencemeasures. We illustrate the effectiveness of our approach by automatically con-structing lightweight shields for learned traffic-light controllers in various roadnetworks. 
The shields we generate avoid liveness bugs, improve controller per-formance in untrained and changing traffic situations, and add features to learnedcontrollers, such as giving priority to emergency vehicles.","lang":"eng"}],"conference":{"name":"CAV: Computer Aided Verification","location":"New York, NY, United States","start_date":"2019-07-13","end_date":"2019-07-18"},"date_published":"2019-07-12T00:00:00Z","oa_version":"Published Version","file_date_updated":"2020-07-14T12:47:31Z","publication_identifier":{"isbn":["9783030255398"],"issn":["0302-9743"]},"date_created":"2019-05-16T11:22:30Z","language":[{"iso":"eng"}],"article_processing_charge":"No","title":"Run-time optimization for learned controllers through quantitative games","page":"630-649","day":"12","project":[{"call_identifier":"FWF","grant_number":"M02369","_id":"264B3912-B435-11E9-9278-68D0E5697425","name":"Formal Methods meets Algorithmic Game Theory"},{"call_identifier":"FWF","name":"The Wittgenstein Prize","grant_number":"Z211","_id":"25F42A32-B435-11E9-9278-68D0E5697425"},{"call_identifier":"FWF","_id":"25832EC2-B435-11E9-9278-68D0E5697425","grant_number":"S 11407_N23","name":"Rigorous Systems Engineering"}],"intvolume":" 11561"}