{"month":"01","page":"33-49","date_updated":"2023-09-07T13:16:18Z","date_created":"2020-07-05T22:00:46Z","department":[{"_id":"ChLa"}],"_id":"8092","title":"XGAN: Unsupervised image-to-image translation for many-to-many mappings","author":[{"orcid":"0000-0002-8407-0705","full_name":"Royer, Amélie","last_name":"Royer","id":"3811D890-F248-11E8-B48F-1D18A9856A87","first_name":"Amélie"},{"first_name":"Konstantinos","last_name":"Bousmalis","full_name":"Bousmalis, Konstantinos"},{"full_name":"Gouws, Stephan","last_name":"Gouws","first_name":"Stephan"},{"first_name":"Fred","last_name":"Bertsch","full_name":"Bertsch, Fred"},{"first_name":"Inbar","last_name":"Mosseri","full_name":"Mosseri, Inbar"},{"first_name":"Forrester","last_name":"Cole","full_name":"Cole, Forrester"},{"first_name":"Kevin","full_name":"Murphy, Kevin","last_name":"Murphy"}],"type":"book_chapter","year":"2020","publication_status":"published","editor":[{"full_name":"Singh, Richa","last_name":"Singh","first_name":"Richa"},{"full_name":"Vatsa, Mayank","last_name":"Vatsa","first_name":"Mayank"},{"first_name":"Vishal M.","full_name":"Patel, Vishal M.","last_name":"Patel"},{"full_name":"Ratha, Nalini","last_name":"Ratha","first_name":"Nalini"}],"oa_version":"Preprint","oa":1,"publication":"Domain Adaptation for Visual Understanding","external_id":{"arxiv":["1711.05139"]},"quality_controlled":"1","main_file_link":[{"open_access":"1","url":"https://arxiv.org/abs/1711.05139"}],"publication_identifier":{"isbn":["9783030306717"]},"date_published":"2020-01-08T00:00:00Z","day":"08","status":"public","citation":{"chicago":"Royer, Amélie, Konstantinos Bousmalis, Stephan Gouws, Fred Bertsch, Inbar Mosseri, Forrester Cole, and Kevin Murphy. “XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings.” In Domain Adaptation for Visual Understanding, edited by Richa Singh, Mayank Vatsa, Vishal M. Patel, and Nalini Ratha, 33–49. Springer Nature, 2020. https://doi.org/10.1007/978-3-030-30671-7_3.","ieee":"A. 
Royer et al., “XGAN: Unsupervised image-to-image translation for many-to-many mappings,” in Domain Adaptation for Visual Understanding, R. Singh, M. Vatsa, V. M. Patel, and N. Ratha, Eds. Springer Nature, 2020, pp. 33–49.","ama":"Royer A, Bousmalis K, Gouws S, et al. XGAN: Unsupervised image-to-image translation for many-to-many mappings. In: Singh R, Vatsa M, Patel VM, Ratha N, eds. Domain Adaptation for Visual Understanding. Springer Nature; 2020:33-49. doi:10.1007/978-3-030-30671-7_3","short":"A. Royer, K. Bousmalis, S. Gouws, F. Bertsch, I. Mosseri, F. Cole, K. Murphy, in: R. Singh, M. Vatsa, V.M. Patel, N. Ratha (Eds.), Domain Adaptation for Visual Understanding, Springer Nature, 2020, pp. 33–49.","apa":"Royer, A., Bousmalis, K., Gouws, S., Bertsch, F., Mosseri, I., Cole, F., & Murphy, K. (2020). XGAN: Unsupervised image-to-image translation for many-to-many mappings. In R. Singh, M. Vatsa, V. M. Patel, & N. Ratha (Eds.), Domain Adaptation for Visual Understanding (pp. 33–49). Springer Nature. https://doi.org/10.1007/978-3-030-30671-7_3","ista":"Royer A, Bousmalis K, Gouws S, Bertsch F, Mosseri I, Cole F, Murphy K. 2020. XGAN: Unsupervised image-to-image translation for many-to-many mappings. In: Domain Adaptation for Visual Understanding, 33–49.","mla":"Royer, Amélie, et al. “XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings.” Domain Adaptation for Visual Understanding, edited by Richa Singh et al., Springer Nature, 2020, pp. 33–49, doi:10.1007/978-3-030-30671-7_3."},"doi":"10.1007/978-3-030-30671-7_3","scopus_import":"1","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","related_material":{"record":[{"status":"deleted","id":"8331","relation":"dissertation_contains"},{"relation":"dissertation_contains","status":"public","id":"8390"}]},"article_processing_charge":"No","language":[{"iso":"eng"}],"abstract":[{"lang":"eng","text":"Image translation refers to the task of mapping images from one visual domain to another. 
Given two unpaired collections of images, we aim to learn a mapping between the corpus-level style of each collection, while preserving semantic content shared across the two domains. We introduce XGAN, a dual adversarial auto-encoder, which captures a shared representation of the common domain semantic content in an unsupervised way, while jointly learning the domain-to-domain image translations in both directions. We exploit ideas from the domain adaptation literature and define a semantic consistency loss which encourages the learned embedding to preserve semantics shared across domains. We report promising qualitative results for the task of face-to-cartoon translation. The cartoon dataset we collected for this purpose, “CartoonSet”, is also publicly available as a new benchmark for semantic style transfer at https://google.github.io/cartoonset/index.html."}],"publisher":"Springer Nature"}