{"ec_funded":1,"oa_version":"Preprint","quality_controlled":"1","date_created":"2021-12-16T12:41:20Z","title":"One-sided Frank-Wolfe algorithms for saddle problems","publication_status":"published","year":"2021","oa":1,"publication":"38th International Conference on Machine Learning","article_processing_charge":"No","date_published":"2021-07-01T00:00:00Z","_id":"10552","main_file_link":[{"url":"https://arxiv.org/abs/2101.12617","open_access":"1"}],"conference":{"name":"ICML: International Conference on Machine Learning","end_date":"2021-07-24","location":"Virtual","start_date":"2021-07-18"},"language":[{"iso":"eng"}],"month":"07","abstract":[{"lang":"eng","text":"We study a class of convex-concave saddle-point problems of the form minxmaxy⟨Kx,y⟩+fP(x)−h∗(y) where K is a linear operator, fP is the sum of a convex function f with a Lipschitz-continuous gradient and the indicator function of a bounded convex polytope P, and h∗ is a convex (possibly nonsmooth) function. Such problem arises, for example, as a Lagrangian relaxation of various discrete optimization problems. Our main assumptions are the existence of an efficient linear minimization oracle (lmo) for fP and an efficient proximal map for h∗ which motivate the solution via a blend of proximal primal-dual algorithms and Frank-Wolfe algorithms. In case h∗ is the indicator function of a linear constraint and function f is quadratic, we show a O(1/n2) convergence rate on the dual objective, requiring O(nlogn) calls of lmo. If the problem comes from the constrained optimization problem minx∈Rd{fP(x)|Ax−b=0} then we additionally get bound O(1/n2) both on the primal gap and on the infeasibility gap. In the most general case, we show a O(1/n) convergence rate of the primal-dual gap again requiring O(nlogn) calls of lmo. To the best of our knowledge, this improves on the known convergence rates for the considered class of saddle-point problems. We show applications to labeling problems frequently appearing in machine learning and computer vision."}],"project":[{"grant_number":"616160","call_identifier":"FP7","_id":"25FBA906-B435-11E9-9278-68D0E5697425","name":"Discrete Optimization in Computer Vision: Theory and Practice"}],"citation":{"mla":"Kolmogorov, Vladimir, and Thomas Pock. “One-Sided Frank-Wolfe Algorithms for Saddle Problems.” 38th International Conference on Machine Learning, 2021.","apa":"Kolmogorov, V., & Pock, T. (2021). One-sided Frank-Wolfe algorithms for saddle problems. In 38th International Conference on Machine Learning. Virtual.","ieee":"V. Kolmogorov and T. Pock, “One-sided Frank-Wolfe algorithms for saddle problems,” in 38th International Conference on Machine Learning, Virtual, 2021.","ista":"Kolmogorov V, Pock T. 2021. One-sided Frank-Wolfe algorithms for saddle problems. 38th International Conference on Machine Learning. ICML: International Conference on Machine Learning.","ama":"Kolmogorov V, Pock T. One-sided Frank-Wolfe algorithms for saddle problems. In: 38th International Conference on Machine Learning. ; 2021.","short":"V. Kolmogorov, T. Pock, in:, 38th International Conference on Machine Learning, 2021.","chicago":"Kolmogorov, Vladimir, and Thomas Pock. 
“One-Sided Frank-Wolfe Algorithms for Saddle Problems.” In 38th International Conference on Machine Learning, 2021."},"status":"public","department":[{"_id":"VlKo"}],"day":"01","date_updated":"2021-12-17T09:06:46Z","author":[{"id":"3D50B0BA-F248-11E8-B48F-1D18A9856A87","last_name":"Kolmogorov","first_name":"Vladimir","full_name":"Kolmogorov, Vladimir"},{"first_name":"Thomas","full_name":"Pock, Thomas","last_name":"Pock"}],"acknowledgement":"Vladimir Kolmogorov was supported by the European Research Council under the European Unions Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no 616160. Thomas Pock acknowledges support by an ERC grant HOMOVIS, no 640156.","user_id":"8b945eb4-e2f2-11eb-945a-df72226e66a9","type":"conference","external_id":{"arxiv":["2101.12617"]}}
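For readability, the saddle-point formulation and the rates stated in the abstract can also be written out in LaTeX; the block below is only a transcription of the abstract's own notation (K, f, P, h*, A, b, n as defined there) and adds nothing beyond what the record states.

% Saddle-point problem studied in the paper, as described in the abstract:
%   f_P = f + \delta_P, where f is convex with Lipschitz-continuous gradient and
%   \delta_P is the indicator function of a bounded convex polytope P;
%   h^* is convex and possibly nonsmooth.
\[
  \min_{x}\;\max_{y}\; \langle Kx, y\rangle + f_P(x) - h^*(y)
\]
% Special case considered in the abstract: the saddle problem arising from the
% linearly constrained problem
\[
  \min_{x \in \mathbb{R}^d} \bigl\{\, f_P(x) \;\big|\; Ax - b = 0 \,\bigr\}
\]
% Rates reported in the abstract:
%   - h^* the indicator of a linear constraint and f quadratic: O(1/n^2) on the
%     dual objective with O(n \log n) lmo calls; for the constrained problem above,
%     additionally O(1/n^2) on both the primal gap and the infeasibility gap.
%   - General case: O(1/n) on the primal-dual gap with O(n \log n) lmo calls.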