{"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","author":[{"first_name":"Paul M","id":"13C09E74-18D9-11E9-8878-32CFE5697425","last_name":"Henderson","orcid":"0000-0002-5198-7445","full_name":"Henderson, Paul M"},{"first_name":"Vagia","last_name":"Tsiminaki","full_name":"Tsiminaki, Vagia"},{"first_name":"Christoph","orcid":"0000-0001-8622-7887","id":"40C20FD2-F248-11E8-B48F-1D18A9856A87","last_name":"Lampert","full_name":"Lampert, Christoph"}],"external_id":{"arxiv":["2004.04180"]},"date_updated":"2023-10-17T07:37:11Z","has_accepted_license":"1","doi":"10.1109/CVPR42600.2020.00752","quality_controlled":"1","_id":"8186","year":"2020","main_file_link":[{"open_access":"1","url":"https://openaccess.thecvf.com/content_CVPR_2020/papers/Henderson_Leveraging_2D_Data_to_Learn_Textured_3D_Mesh_Generation_CVPR_2020_paper.pdf"}],"scopus_import":"1","ddc":["004"],"citation":{"mla":"Henderson, Paul M., et al. “Leveraging 2D Data to Learn Textured 3D Mesh Generation.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2020, pp. 7498–507, doi:10.1109/CVPR42600.2020.00752.","ista":"Henderson PM, Tsiminaki V, Lampert C. 2020. Leveraging 2D data to learn textured 3D mesh generation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR: Conference on Computer Vision and Pattern Recognition, 7498–7507.","short":"P.M. Henderson, V. Tsiminaki, C. Lampert, in:, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2020, pp. 7498–7507.","ieee":"P. M. Henderson, V. Tsiminaki, and C. Lampert, “Leveraging 2D data to learn textured 3D mesh generation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 2020, pp. 7498–7507.","apa":"Henderson, P. M., Tsiminaki, V., & Lampert, C. (2020). Leveraging 2D data to learn textured 3D mesh generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7498–7507). Virtual: IEEE. https://doi.org/10.1109/CVPR42600.2020.00752","chicago":"Henderson, Paul M, Vagia Tsiminaki, and Christoph Lampert. “Leveraging 2D Data to Learn Textured 3D Mesh Generation.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7498–7507. IEEE, 2020. https://doi.org/10.1109/CVPR42600.2020.00752.","ama":"Henderson PM, Tsiminaki V, Lampert C. Leveraging 2D data to learn textured 3D mesh generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE; 2020:7498-7507. doi:10.1109/CVPR42600.2020.00752"},"publication":"Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition","month":"07","publication_status":"published","status":"public","title":"Leveraging 2D data to learn textured 3D mesh generation","article_processing_charge":"No","publication_identifier":{"eissn":["2575-7075"],"eisbn":["9781728171685"]},"file_date_updated":"2020-07-31T16:57:12Z","language":[{"iso":"eng"}],"date_created":"2020-07-31T16:53:49Z","page":"7498-7507","day":"01","file":[{"file_name":"paper.pdf","relation":"main_file","content_type":"application/pdf","date_updated":"2020-07-31T16:57:12Z","date_created":"2020-07-31T16:57:12Z","file_size":10262773,"creator":"phenders","success":1,"access_level":"open_access","file_id":"8187"}],"publisher":"IEEE","type":"conference","abstract":[{"lang":"eng","text":"Numerous methods have been proposed for probabilistic generative modelling of\r\n3D objects. 
However, none of these is able to produce textured objects, which\r\nrenders them of limited use for practical tasks. In this work, we present the\r\nfirst generative model of textured 3D meshes. Training such a model would\r\ntraditionally require a large dataset of textured meshes, but unfortunately,\r\nexisting datasets of meshes lack detailed textures. We instead propose a new\r\ntraining methodology that allows learning from collections of 2D images without\r\nany 3D information. To do so, we train our model to explain a distribution of\r\nimages by modelling each image as a 3D foreground object placed in front of a\r\n2D background. Thus, it learns to generate meshes that when rendered, produce\r\nimages similar to those in its training set.\r\n A well-known problem when generating meshes with deep networks is the\r\nemergence of self-intersections, which are problematic for many use-cases. As a\r\nsecond contribution we therefore introduce a new generation process for 3D\r\nmeshes that guarantees no self-intersections arise, based on the physical\r\nintuition that faces should push one another out of the way as they move.\r\n We conduct extensive experiments on our approach, reporting quantitative and\r\nqualitative results on both synthetic data and natural images. These show our\r\nmethod successfully learns to generate plausible and diverse textured 3D\r\nsamples for five challenging object classes."}],"oa":1,"date_published":"2020-07-01T00:00:00Z","oa_version":"Submitted Version","conference":{"location":"Virtual","end_date":"2020-06-19","start_date":"2020-06-14","name":"CVPR: Conference on Computer Vision and Pattern Recognition"},"department":[{"_id":"ChLa"}]}