---
res:
  bibo_abstract:
    - 'Fine-tuning is a popular way of exploiting knowledge contained in a pre-trained convolutional network for a new visual recognition task. However, the orthogonal setting of transferring knowledge from a pretrained network to a visually different yet semantically close source is rarely considered: This commonly happens with real-life data, which is not necessarily as clean as the training source (noise, geometric transformations, different modalities, etc.). To tackle such scenarios, we introduce a new, generalized form of fine-tuning, called flex-tuning, in which any individual unit (e.g. layer) of a network can be tuned, and the most promising one is chosen automatically. In order to make the method appealing for practical use, we propose two lightweight and faster selection procedures that prove to be good approximations in practice. We study these selection criteria empirically across a variety of domain shifts and data scarcity scenarios, and show that fine-tuning individual units, despite its simplicity, yields very good results as an adaptation technique. As it turns out, in contrast to common practice, rather than the last fully-connected unit it is best to tune an intermediate or early one in many domain-shift scenarios, which is accurately detected by flex-tuning.@eng'
  bibo_authorlist:
    - foaf_Person:
        foaf_givenName: Amélie
        foaf_name: Royer, Amélie
        foaf_surname: Royer
        foaf_workInfoHomepage: http://www.librecat.org/personId=3811D890-F248-11E8-B48F-1D18A9856A87
        orcid: 0000-0002-8407-0705
    - foaf_Person:
        foaf_givenName: Christoph
        foaf_name: Lampert, Christoph
        foaf_surname: Lampert
        foaf_workInfoHomepage: http://www.librecat.org/personId=40C20FD2-F248-11E8-B48F-1D18A9856A87
        orcid: 0000-0001-8622-7887
  bibo_doi: 10.1109/WACV45572.2020.9093635
  dct_date: 2020^xs_gYear
  dct_isPartOf:
    - http://id.crossref.org/issn/9781728165530
  dct_language: eng
  dct_publisher: IEEE@
  dct_title: A flexible selection scheme for minimum-effort transfer learning@
...