---
_id: '10803'
abstract:
- lang: eng
text: Given the abundance of applications of ranking in recent years, addressing
fairness concerns around automated ranking systems becomes necessary for increasing
the trust among end-users. Previous work on fair ranking has mostly focused on
application-specific fairness notions, often tailored to online advertising, and
it rarely considers learning as part of the process. In this work, we show how
to transfer numerous fairness notions from binary classification to a learning
to rank setting. Our formalism allows us to design methods for incorporating fairness
objectives with provable generalization guarantees. An extensive experimental
evaluation shows that our method can improve ranking fairness substantially with
no or only little loss of model quality.
article_number: '2102.05996'
article_processing_charge: No
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0002-4561-241X
citation:
ama: Konstantinov NH, Lampert C. Fairness through regularization for learning to
rank. arXiv. doi:10.48550/arXiv.2102.05996
apa: Konstantinov, N. H., & Lampert, C. (n.d.). Fairness through regularization
for learning to rank. arXiv. https://doi.org/10.48550/arXiv.2102.05996
chicago: Konstantinov, Nikola H, and Christoph Lampert. “Fairness through Regularization
for Learning to Rank.” ArXiv, n.d. https://doi.org/10.48550/arXiv.2102.05996.
ieee: N. H. Konstantinov and C. Lampert, “Fairness through regularization for learning
to rank,” arXiv.
ista: Konstantinov NH, Lampert C. Fairness through regularization for learning to
rank. arXiv, 2102.05996.
mla: Konstantinov, Nikola H., and Christoph Lampert. “Fairness through Regularization
for Learning to Rank.” ArXiv, 2102.05996, doi:10.48550/arXiv.2102.05996.
short: N.H. Konstantinov, C. Lampert, ArXiv (n.d.).
date_created: 2022-02-28T14:13:59Z
date_published: 2021-06-07T00:00:00Z
date_updated: 2023-09-07T13:42:08Z
day: '07'
department:
- _id: ChLa
doi: 10.48550/arXiv.2102.05996
external_id:
arxiv:
- '2102.05996'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2102.05996
month: '06'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
related_material:
record:
- id: '10799'
relation: dissertation_contains
status: public
status: public
title: Fairness through regularization for learning to rank
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2021'
...
---
_id: '9418'
abstract:
- lang: eng
text: "Deep learning is best known for its empirical success across a wide range
of applications\r\nspanning computer vision, natural language processing and speech.
Of equal significance,\r\nthough perhaps less known, are its ramifications for
learning theory: deep networks have\r\nbeen observed to perform surprisingly well
in the high-capacity regime, aka the overfitting\r\nor underspecified regime.
Classically, this regime on the far right of the bias-variance curve\r\nis associated
with poor generalisation; however, recent experiments with deep networks\r\nchallenge
this view.\r\n\r\nThis thesis is devoted to investigating various aspects of underspecification
in deep learning.\r\nFirst, we argue that deep learning models are underspecified
on two levels: a) any given\r\ntraining dataset can be fit by many different functions,
and b) any given function can be\r\nexpressed by many different parameter configurations.
We refer to the second kind of\r\nunderspecification as parameterisation redundancy
and we precisely characterise its extent.\r\nSecond, we characterise the implicit
criteria (the inductive bias) that guide learning in the\r\nunderspecified regime.
Specifically, we consider a nonlinear but tractable classification\r\nsetting,
and show that given the choice, neural networks learn classifiers with a large
margin.\r\nThird, we consider learning scenarios where the inductive bias is not
by itself sufficient to\r\ndeal with underspecification. We then study different
ways of ‘tightening the specification’: i)\r\nIn the setting of representation
learning with variational autoencoders, we propose a hand-crafted regulariser
based on mutual information. ii) In the setting of binary classification, we\r\nconsider
soft-label (real-valued) supervision. We derive a generalisation bound for linear\r\nnetworks
supervised in this way and verify that soft labels facilitate fast learning. Finally,
we\r\nexplore an application of soft-label supervision to the training of multi-exit
models."
acknowledged_ssus:
- _id: ScienComp
- _id: CampIT
- _id: E-Lib
alternative_title:
- ISTA Thesis
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
citation:
ama: Phuong M. Underspecification in deep learning. 2021. doi:10.15479/AT:ISTA:9418
apa: Phuong, M. (2021). Underspecification in deep learning. Institute of
Science and Technology Austria. https://doi.org/10.15479/AT:ISTA:9418
chicago: Phuong, Mary. “Underspecification in Deep Learning.” Institute of Science
and Technology Austria, 2021. https://doi.org/10.15479/AT:ISTA:9418.
ieee: M. Phuong, “Underspecification in deep learning,” Institute of Science and
Technology Austria, 2021.
ista: Phuong M. 2021. Underspecification in deep learning. Institute of Science
and Technology Austria.
mla: Phuong, Mary. Underspecification in Deep Learning. Institute of Science
and Technology Austria, 2021, doi:10.15479/AT:ISTA:9418.
short: M. Phuong, Underspecification in Deep Learning, Institute of Science and
Technology Austria, 2021.
date_created: 2021-05-24T13:06:23Z
date_published: 2021-05-30T00:00:00Z
date_updated: 2023-09-08T11:11:12Z
day: '30'
ddc:
- '000'
degree_awarded: PhD
department:
- _id: GradSch
- _id: ChLa
doi: 10.15479/AT:ISTA:9418
file:
- access_level: open_access
checksum: 4f0abe64114cfed264f9d36e8d1197e3
content_type: application/pdf
creator: bphuong
date_created: 2021-05-24T11:22:29Z
date_updated: 2021-05-24T11:22:29Z
file_id: '9419'
file_name: mph-thesis-v519-pdfimages.pdf
file_size: 2673905
relation: main_file
success: 1
- access_level: closed
checksum: f5699e876bc770a9b0df8345a77720a2
content_type: application/zip
creator: bphuong
date_created: 2021-05-24T11:56:02Z
date_updated: 2021-05-24T11:56:02Z
file_id: '9420'
file_name: thesis.zip
file_size: 92995100
relation: source_file
file_date_updated: 2021-05-24T11:56:02Z
has_accepted_license: '1'
language:
- iso: eng
month: '05'
oa: 1
oa_version: Published Version
page: '125'
publication_identifier:
issn:
- 2663-337X
publication_status: published
publisher: Institute of Science and Technology Austria
related_material:
record:
- id: '7435'
relation: part_of_dissertation
status: deleted
- id: '7481'
relation: part_of_dissertation
status: public
- id: '9416'
relation: part_of_dissertation
status: public
- id: '7479'
relation: part_of_dissertation
status: public
status: public
supervisor:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
title: Underspecification in deep learning
type: dissertation
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2021'
...
---
_id: '14987'
abstract:
- lang: eng
text: "The goal of zero-shot learning is to construct a classifier that can identify
object classes for which no training examples are available. When training data
for some of the object classes is available but not for others, the name generalized
zero-shot learning is commonly used.\r\nIn a wider sense, the phrase zero-shot
is also used to describe other machine learning-based approaches that require
no training data from the problem of interest, such as zero-shot action recognition
or zero-shot machine translation."
article_processing_charge: No
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Lampert C. Zero-Shot Learning. In: Ikeuchi K, ed. Computer Vision.
2nd ed. Cham: Springer; 2021:1395-1397. doi:10.1007/978-3-030-63416-2_874'
apa: 'Lampert, C. (2021). Zero-Shot Learning. In K. Ikeuchi (Ed.), Computer Vision
(2nd ed., pp. 1395–1397). Cham: Springer. https://doi.org/10.1007/978-3-030-63416-2_874'
chicago: 'Lampert, Christoph. “Zero-Shot Learning.” In Computer Vision, edited
by Katsushi Ikeuchi, 2nd ed., 1395–97. Cham: Springer, 2021. https://doi.org/10.1007/978-3-030-63416-2_874.'
ieee: 'C. Lampert, “Zero-Shot Learning,” in Computer Vision, 2nd ed., K.
Ikeuchi, Ed. Cham: Springer, 2021, pp. 1395–1397.'
ista: 'Lampert C. 2021. Zero-Shot Learning. In: Computer Vision, 1395–1397.'
mla: Lampert, Christoph. “Zero-Shot Learning.” Computer Vision, edited by
Katsushi Ikeuchi, 2nd ed., Springer, 2021, pp. 1395–97, doi:10.1007/978-3-030-63416-2_874.
short: C. Lampert, in:, K. Ikeuchi (Ed.), Computer Vision, 2nd ed., Springer, Cham,
2021, pp. 1395–1397.
date_created: 2024-02-14T14:05:32Z
date_published: 2021-10-13T00:00:00Z
date_updated: 2024-02-19T10:59:04Z
day: '13'
department:
- _id: ChLa
doi: 10.1007/978-3-030-63416-2_874
edition: '2'
editor:
- first_name: Katsushi
full_name: Ikeuchi, Katsushi
last_name: Ikeuchi
language:
- iso: eng
month: '10'
oa_version: None
page: 1395-1397
place: Cham
publication: Computer Vision
publication_identifier:
eisbn:
- '9783030634162'
isbn:
- '9783030634155'
publication_status: published
publisher: Springer
quality_controlled: '1'
status: public
title: Zero-Shot Learning
type: book_chapter
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2021'
...
---
_id: '8063'
abstract:
- lang: eng
text: "We present a generative model of images that explicitly reasons over the
set\r\nof objects they show. Our model learns a structured latent representation
that\r\nseparates objects from each other and from the background; unlike prior
works,\r\nit explicitly represents the 2D position and depth of each object, as
well as\r\nan embedding of its segmentation mask and appearance. The model can
be trained\r\nfrom images alone in a purely unsupervised fashion without the need
for object\r\nmasks or depth information. Moreover, it always generates complete
objects,\r\neven though a significant fraction of training images contain occlusions.\r\nFinally,
we show that our model can infer decompositions of novel images into\r\ntheir
constituent objects, including accurate prediction of depth ordering and\r\nsegmentation
of occluded parts."
article_number: '2004.00642'
article_processing_charge: No
author:
- first_name: Titas
full_name: Anciukevicius, Titas
last_name: Anciukevicius
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Paul M
full_name: Henderson, Paul M
id: 13C09E74-18D9-11E9-8878-32CFE5697425
last_name: Henderson
orcid: 0000-0002-5198-7445
citation:
ama: Anciukevicius T, Lampert C, Henderson PM. Object-centric image generation with
factored depths, locations, and appearances. arXiv.
apa: Anciukevicius, T., Lampert, C., & Henderson, P. M. (n.d.). Object-centric
image generation with factored depths, locations, and appearances. arXiv.
chicago: Anciukevicius, Titas, Christoph Lampert, and Paul M Henderson. “Object-Centric
Image Generation with Factored Depths, Locations, and Appearances.” ArXiv,
n.d.
ieee: T. Anciukevicius, C. Lampert, and P. M. Henderson, “Object-centric image generation
with factored depths, locations, and appearances,” arXiv.
ista: Anciukevicius T, Lampert C, Henderson PM. Object-centric image generation
with factored depths, locations, and appearances. arXiv, 2004.00642.
mla: Anciukevicius, Titas, et al. “Object-Centric Image Generation with Factored
Depths, Locations, and Appearances.” ArXiv, 2004.00642.
short: T. Anciukevicius, C. Lampert, P.M. Henderson, ArXiv (n.d.).
date_created: 2020-06-29T23:55:23Z
date_published: 2020-04-01T00:00:00Z
date_updated: 2021-01-12T08:16:44Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
external_id:
arxiv:
- '2004.00642'
language:
- iso: eng
license: https://creativecommons.org/licenses/by-sa/4.0/
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2004.00642
month: '04'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Object-centric image generation with factored depths, locations, and appearances
tmp:
image: /images/cc_by_sa.png
legal_code_url: https://creativecommons.org/licenses/by-sa/4.0/legalcode
name: Creative Commons Attribution-ShareAlike 4.0 International Public License (CC
BY-SA 4.0)
short: CC BY-SA (4.0)
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '8188'
abstract:
- lang: eng
text: "A natural approach to generative modeling of videos is to represent them
as a composition of moving objects. Recent works model a set of 2D sprites over
a slowly-varying background, but without considering the underlying 3D scene that\r\ngives
rise to them. We instead propose to model a video as the view seen while moving
through a scene with multiple 3D objects and a 3D background. Our model is trained
from monocular videos without any supervision, yet learns to\r\ngenerate coherent
3D scenes containing several moving objects. We conduct detailed experiments on
two datasets, going beyond the visual complexity supported by state-of-the-art
generative approaches. We evaluate our method on\r\ndepth-prediction and 3D object
detection---tasks which cannot be addressed by those earlier works---and show
it out-performs them even on 2D instance segmentation and tracking."
acknowledged_ssus:
- _id: ScienComp
acknowledgement: "This research was supported by the Scientific Service Units (SSU)
of IST Austria through resources\r\nprovided by Scientific Computing (SciComp).
PH is employed part-time by Blackford Analysis, but\r\nthey did not support this
project in any way."
article_processing_charge: No
author:
- first_name: Paul M
full_name: Henderson, Paul M
id: 13C09E74-18D9-11E9-8878-32CFE5697425
last_name: Henderson
orcid: 0000-0002-5198-7445
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Henderson PM, Lampert C. Unsupervised object-centric video generation and
decomposition in 3D. In: 34th Conference on Neural Information Processing Systems.
Vol 33. Curran Associates; 2020:3106–3117.'
apa: 'Henderson, P. M., & Lampert, C. (2020). Unsupervised object-centric video
generation and decomposition in 3D. In 34th Conference on Neural Information
Processing Systems (Vol. 33, pp. 3106–3117). Vancouver, Canada: Curran Associates.'
chicago: Henderson, Paul M, and Christoph Lampert. “Unsupervised Object-Centric
Video Generation and Decomposition in 3D.” In 34th Conference on Neural Information
Processing Systems, 33:3106–3117. Curran Associates, 2020.
ieee: P. M. Henderson and C. Lampert, “Unsupervised object-centric video generation
and decomposition in 3D,” in 34th Conference on Neural Information Processing
Systems, Vancouver, Canada, 2020, vol. 33, pp. 3106–3117.
ista: 'Henderson PM, Lampert C. 2020. Unsupervised object-centric video generation
and decomposition in 3D. 34th Conference on Neural Information Processing Systems.
NeurIPS: Neural Information Processing Systems vol. 33, 3106–3117.'
mla: Henderson, Paul M., and Christoph Lampert. “Unsupervised Object-Centric Video
Generation and Decomposition in 3D.” 34th Conference on Neural Information
Processing Systems, vol. 33, Curran Associates, 2020, pp. 3106–3117.
short: P.M. Henderson, C. Lampert, in:, 34th Conference on Neural Information Processing
Systems, Curran Associates, 2020, pp. 3106–3117.
conference:
end_date: 2020-12-12
location: Vancouver, Canada
name: 'NeurIPS: Neural Information Processing Systems'
start_date: 2020-12-06
date_created: 2020-07-31T16:59:19Z
date_published: 2020-07-07T00:00:00Z
date_updated: 2023-04-25T09:49:58Z
day: '07'
department:
- _id: ChLa
external_id:
arxiv:
- '2007.06705'
intvolume: ' 33'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2007.06705
month: '07'
oa: 1
oa_version: Preprint
page: 3106–3117
publication: 34th Conference on Neural Information Processing Systems
publication_identifier:
isbn:
- '9781713829546'
publication_status: published
publisher: Curran Associates
quality_controlled: '1'
status: public
title: Unsupervised object-centric video generation and decomposition in 3D
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 33
year: '2020'
...
---
_id: '6952'
abstract:
- lang: eng
text: 'We present a unified framework tackling two problems: class-specific 3D reconstruction
from a single image, and generation of new 3D shape samples. These tasks have
received considerable attention recently; however, most existing approaches rely
on 3D supervision, annotation of 2D images with keypoints or poses, and/or training
with multiple views of each object instance. Our framework is very general: it
can be trained in similar settings to existing approaches, while also supporting
weaker supervision. Importantly, it can be trained purely from 2D images, without
pose annotations, and with only a single view per instance. We employ meshes as
an output representation, instead of voxels used in most prior work. This allows
us to reason over lighting parameters and exploit shading information during training,
which previous 2D-supervised methods cannot. Thus, our method can learn to generate
and reconstruct concave object classes. We evaluate our approach in various settings,
showing that: (i) it learns to disentangle shape from pose and lighting; (ii)
using shading in the loss improves performance compared to just silhouettes; (iii)
when using a standard single white light, our model outperforms state-of-the-art
2D-supervised methods, both with and without pose supervision, thanks to exploiting
shading cues; (iv) performance improves further when using multiple coloured lights,
even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced
by our model capture smooth surfaces and fine details better than voxel-based
approaches; and (vi) our approach supports concave classes such as bathtubs and
sofas, which methods based on silhouettes cannot learn.'
acknowledgement: Open access funding provided by Institute of Science and Technology
(IST Austria).
article_processing_charge: Yes (via OA deal)
article_type: original
author:
- first_name: Paul M
full_name: Henderson, Paul M
id: 13C09E74-18D9-11E9-8878-32CFE5697425
last_name: Henderson
orcid: 0000-0002-5198-7445
- first_name: Vittorio
full_name: Ferrari, Vittorio
last_name: Ferrari
citation:
ama: Henderson PM, Ferrari V. Learning single-image 3D reconstruction by generative
modelling of shape, pose and shading. International Journal of Computer Vision.
2020;128:835-854. doi:10.1007/s11263-019-01219-8
apa: Henderson, P. M., & Ferrari, V. (2020). Learning single-image 3D reconstruction
by generative modelling of shape, pose and shading. International Journal of
Computer Vision. Springer Nature. https://doi.org/10.1007/s11263-019-01219-8
chicago: Henderson, Paul M, and Vittorio Ferrari. “Learning Single-Image 3D Reconstruction
by Generative Modelling of Shape, Pose and Shading.” International Journal
of Computer Vision. Springer Nature, 2020. https://doi.org/10.1007/s11263-019-01219-8.
ieee: P. M. Henderson and V. Ferrari, “Learning single-image 3D reconstruction by
generative modelling of shape, pose and shading,” International Journal of
Computer Vision, vol. 128. Springer Nature, pp. 835–854, 2020.
ista: Henderson PM, Ferrari V. 2020. Learning single-image 3D reconstruction by
generative modelling of shape, pose and shading. International Journal of Computer
Vision. 128, 835–854.
mla: Henderson, Paul M., and Vittorio Ferrari. “Learning Single-Image 3D Reconstruction
by Generative Modelling of Shape, Pose and Shading.” International Journal
of Computer Vision, vol. 128, Springer Nature, 2020, pp. 835–54, doi:10.1007/s11263-019-01219-8.
short: P.M. Henderson, V. Ferrari, International Journal of Computer Vision 128
(2020) 835–854.
date_created: 2019-10-17T13:38:20Z
date_published: 2020-04-01T00:00:00Z
date_updated: 2023-08-17T14:01:16Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.1007/s11263-019-01219-8
external_id:
arxiv:
- '1901.06447'
isi:
- '000491042100002'
file:
- access_level: open_access
checksum: a0f05dd4f5f64e4f713d8d9d4b5b1e3f
content_type: application/pdf
creator: dernst
date_created: 2019-10-25T10:28:29Z
date_updated: 2020-07-14T12:47:46Z
file_id: '6973'
file_name: 2019_CompVision_Henderson.pdf
file_size: 2243134
relation: main_file
file_date_updated: 2020-07-14T12:47:46Z
has_accepted_license: '1'
intvolume: ' 128'
isi: 1
language:
- iso: eng
license: https://creativecommons.org/licenses/by/4.0/
month: '04'
oa: 1
oa_version: Published Version
page: 835-854
project:
- _id: B67AFEDC-15C9-11EA-A837-991A96BB2854
name: IST Austria Open Access Fund
publication: International Journal of Computer Vision
publication_identifier:
eissn:
- 1573-1405
issn:
- 0920-5691
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Learning single-image 3D reconstruction by generative modelling of shape, pose
and shading
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 128
year: '2020'
...
---
_id: '7936'
abstract:
- lang: eng
text: 'State-of-the-art detection systems are generally evaluated on their ability
to exhaustively retrieve objects densely distributed in the image, across a wide
variety of appearances and semantic categories. Orthogonal to this, many real-life
object detection applications, for example in remote sensing, instead require
dealing with large images that contain only a few small objects of a single class,
scattered heterogeneously across the space. In addition, they are often subject
to strict computational constraints, such as limited battery capacity and computing
power. To tackle these more practical scenarios, we propose a novel flexible detection
scheme that efficiently adapts to variable object sizes and densities: We rely
on a sequence of detection stages, each of which has the ability to predict groups
of objects as well as individuals. Similar to a detection cascade, this multi-stage
architecture spares computational effort by discarding large irrelevant regions
of the image early during the detection process. The ability to group objects
provides further computational and memory savings, as it allows working with lower
image resolutions in early stages, where groups are more easily detected than
individuals, as they are more salient. We report experimental results on two aerial
image datasets, and show that the proposed method is as accurate yet computationally
more efficient than standard single-shot detectors, consistently across three
different backbone architectures.'
article_number: 1716-1725
article_processing_charge: No
author:
- first_name: Amélie
full_name: Royer, Amélie
id: 3811D890-F248-11E8-B48F-1D18A9856A87
last_name: Royer
orcid: 0000-0002-8407-0705
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Royer A, Lampert C. Localizing grouped instances for efficient detection in
low-resource scenarios. In: IEEE Winter Conference on Applications of Computer
Vision. IEEE; 2020. doi:10.1109/WACV45572.2020.9093288'
apa: 'Royer, A., & Lampert, C. (2020). Localizing grouped instances for efficient
detection in low-resource scenarios. In IEEE Winter Conference on Applications
of Computer Vision. Snowmass Village, CO, United States: IEEE. https://doi.org/10.1109/WACV45572.2020.9093288'
chicago: Royer, Amélie, and Christoph Lampert. “Localizing Grouped Instances for
Efficient Detection in Low-Resource Scenarios.” In IEEE Winter Conference on
Applications of Computer Vision. IEEE, 2020. https://doi.org/10.1109/WACV45572.2020.9093288.
ieee: A. Royer and C. Lampert, “Localizing grouped instances for efficient detection
in low-resource scenarios,” in IEEE Winter Conference on Applications of Computer
Vision, Snowmass Village, CO, United States, 2020.
ista: 'Royer A, Lampert C. 2020. Localizing grouped instances for efficient detection
in low-resource scenarios. IEEE Winter Conference on Applications of Computer
Vision. WACV: Winter Conference on Applications of Computer Vision, 1716–1725.'
mla: Royer, Amélie, and Christoph Lampert. “Localizing Grouped Instances for Efficient
Detection in Low-Resource Scenarios.” IEEE Winter Conference on Applications
of Computer Vision, 1716–1725, IEEE, 2020, doi:10.1109/WACV45572.2020.9093288.
short: A. Royer, C. Lampert, in:, IEEE Winter Conference on Applications of Computer
Vision, IEEE, 2020.
conference:
end_date: 2020-03-05
location: 'Snowmass Village, CO, United States'
name: 'WACV: Winter Conference on Applications of Computer Vision'
start_date: 2020-03-01
date_created: 2020-06-07T22:00:53Z
date_published: 2020-03-01T00:00:00Z
date_updated: 2023-09-07T13:16:17Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/WACV45572.2020.9093288
external_id:
arxiv:
- '2004.12623'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2004.12623
month: '03'
oa: 1
oa_version: Preprint
publication: IEEE Winter Conference on Applications of Computer Vision
publication_identifier:
isbn:
- '9781728165530'
publication_status: published
publisher: IEEE
quality_controlled: '1'
related_material:
record:
- id: '8331'
relation: dissertation_contains
status: deleted
- id: '8390'
relation: dissertation_contains
status: public
scopus_import: 1
status: public
title: Localizing grouped instances for efficient detection in low-resource scenarios
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '7937'
abstract:
- lang: eng
text: 'Fine-tuning is a popular way of exploiting knowledge contained in a pre-trained
convolutional network for a new visual recognition task. However, the orthogonal
setting of transferring knowledge from a pretrained network to a visually different
yet semantically close source is rarely considered: This commonly happens with
real-life data, which is not necessarily as clean as the training source (noise,
geometric transformations, different modalities, etc.).To tackle such scenarios,
we introduce a new, generalized form of fine-tuning, called flex-tuning, in which
any individual unit (e.g. layer) of a network can be tuned, and the most promising
one is chosen automatically. In order to make the method appealing for practical
use, we propose two lightweight and faster selection procedures that prove to
be good approximations in practice. We study these selection criteria empirically
across a variety of domain shifts and data scarcity scenarios, and show that fine-tuning
individual units, despite its simplicity, yields very good results as an adaptation
technique. As it turns out, in contrast to common practice, rather than the last
fully-connected unit it is best to tune an intermediate or early one in many domain-shift
scenarios, which is accurately detected by flex-tuning.'
article_number: 2180-2189
article_processing_charge: No
author:
- first_name: Amélie
full_name: Royer, Amélie
id: 3811D890-F248-11E8-B48F-1D18A9856A87
last_name: Royer
orcid: 0000-0002-8407-0705
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Royer A, Lampert C. A flexible selection scheme for minimum-effort transfer
learning. In: 2020 IEEE Winter Conference on Applications of Computer Vision.
IEEE; 2020. doi:10.1109/WACV45572.2020.9093635'
apa: 'Royer, A., & Lampert, C. (2020). A flexible selection scheme for minimum-effort
transfer learning. In 2020 IEEE Winter Conference on Applications of Computer
Vision. Snowmass Village, CO, United States: IEEE. https://doi.org/10.1109/WACV45572.2020.9093635'
chicago: Royer, Amélie, and Christoph Lampert. “A Flexible Selection Scheme for
Minimum-Effort Transfer Learning.” In 2020 IEEE Winter Conference on Applications
of Computer Vision. IEEE, 2020. https://doi.org/10.1109/WACV45572.2020.9093635.
ieee: A. Royer and C. Lampert, “A flexible selection scheme for minimum-effort transfer
learning,” in 2020 IEEE Winter Conference on Applications of Computer Vision,
Snowmass Village, CO, United States, 2020.
ista: 'Royer A, Lampert C. 2020. A flexible selection scheme for minimum-effort
transfer learning. 2020 IEEE Winter Conference on Applications of Computer Vision.
WACV: Winter Conference on Applications of Computer Vision, 2180–2189.'
mla: Royer, Amélie, and Christoph Lampert. “A Flexible Selection Scheme for Minimum-Effort
Transfer Learning.” 2020 IEEE Winter Conference on Applications of Computer
Vision, 2180–2189, IEEE, 2020, doi:10.1109/WACV45572.2020.9093635.
short: A. Royer, C. Lampert, in:, 2020 IEEE Winter Conference on Applications of
Computer Vision, IEEE, 2020.
conference:
end_date: 2020-03-05
location: Snowmass Village, CO, United States
name: 'WACV: Winter Conference on Applications of Computer Vision'
start_date: 2020-03-01
date_created: 2020-06-07T22:00:53Z
date_published: 2020-03-01T00:00:00Z
date_updated: 2023-09-07T13:16:17Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/WACV45572.2020.9093635
external_id:
arxiv:
- '2008.11995'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://arxiv.org/abs/2008.11995
month: '03'
oa: 1
oa_version: Preprint
publication: 2020 IEEE Winter Conference on Applications of Computer Vision
publication_identifier:
isbn:
- '9781728165530'
publication_status: published
publisher: IEEE
quality_controlled: '1'
related_material:
record:
- id: '8331'
relation: dissertation_contains
status: deleted
- id: '8390'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: A flexible selection scheme for minimum-effort transfer learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '8092'
abstract:
- lang: eng
text: Image translation refers to the task of mapping images from one visual domain
to another. Given two unpaired collections of images, we aim to learn a mapping
between the corpus-level style of each collection, while preserving semantic content
shared across the two domains. We introduce XGAN, a dual adversarial auto-encoder,
which captures a shared representation of the common domain semantic content in
an unsupervised way, while jointly learning the domain-to-domain image translations
in both directions. We exploit ideas from the domain adaptation literature and
define a semantic consistency loss which encourages the learned embedding to preserve
semantics shared across domains. We report promising qualitative results for the
task of face-to-cartoon translation. The cartoon dataset we collected for this
purpose, “CartoonSet”, is also publicly available as a new benchmark for semantic
style transfer at https://google.github.io/cartoonset/index.html.
article_processing_charge: No
author:
- first_name: Amélie
full_name: Royer, Amélie
id: 3811D890-F248-11E8-B48F-1D18A9856A87
last_name: Royer
orcid: 0000-0002-8407-0705
- first_name: Konstantinos
full_name: Bousmalis, Konstantinos
last_name: Bousmalis
- first_name: Stephan
full_name: Gouws, Stephan
last_name: Gouws
- first_name: Fred
full_name: Bertsch, Fred
last_name: Bertsch
- first_name: Inbar
full_name: Mosseri, Inbar
last_name: Mosseri
- first_name: Forrester
full_name: Cole, Forrester
last_name: Cole
- first_name: Kevin
full_name: Murphy, Kevin
last_name: Murphy
citation:
ama: 'Royer A, Bousmalis K, Gouws S, et al. XGAN: Unsupervised image-to-image translation
for many-to-many mappings. In: Singh R, Vatsa M, Patel VM, Ratha N, eds. Domain
Adaptation for Visual Understanding. Springer Nature; 2020:33-49. doi:10.1007/978-3-030-30671-7_3'
apa: 'Royer, A., Bousmalis, K., Gouws, S., Bertsch, F., Mosseri, I., Cole, F., &
Murphy, K. (2020). XGAN: Unsupervised image-to-image translation for many-to-many
mappings. In R. Singh, M. Vatsa, V. M. Patel, & N. Ratha (Eds.), Domain
Adaptation for Visual Understanding (pp. 33–49). Springer Nature. https://doi.org/10.1007/978-3-030-30671-7_3'
chicago: 'Royer, Amélie, Konstantinos Bousmalis, Stephan Gouws, Fred Bertsch, Inbar
Mosseri, Forrester Cole, and Kevin Murphy. “XGAN: Unsupervised Image-to-Image
Translation for Many-to-Many Mappings.” In Domain Adaptation for Visual Understanding,
edited by Richa Singh, Mayank Vatsa, Vishal M. Patel, and Nalini Ratha, 33–49.
Springer Nature, 2020. https://doi.org/10.1007/978-3-030-30671-7_3.'
ieee: 'A. Royer et al., “XGAN: Unsupervised image-to-image translation for
many-to-many mappings,” in Domain Adaptation for Visual Understanding,
R. Singh, M. Vatsa, V. M. Patel, and N. Ratha, Eds. Springer Nature, 2020, pp.
33–49.'
ista: 'Royer A, Bousmalis K, Gouws S, Bertsch F, Mosseri I, Cole F, Murphy K. 2020. XGAN:
Unsupervised image-to-image translation for many-to-many mappings. In: Domain
Adaptation for Visual Understanding, 33–49.'
mla: 'Royer, Amélie, et al. “XGAN: Unsupervised Image-to-Image Translation for Many-to-Many
Mappings.” Domain Adaptation for Visual Understanding, edited by Richa
Singh et al., Springer Nature, 2020, pp. 33–49, doi:10.1007/978-3-030-30671-7_3.'
short: A. Royer, K. Bousmalis, S. Gouws, F. Bertsch, I. Mosseri, F. Cole, K. Murphy,
in:, R. Singh, M. Vatsa, V.M. Patel, N. Ratha (Eds.), Domain Adaptation for Visual
Understanding, Springer Nature, 2020, pp. 33–49.
date_created: 2020-07-05T22:00:46Z
date_published: 2020-01-08T00:00:00Z
date_updated: 2023-09-07T13:16:18Z
day: '08'
department:
- _id: ChLa
doi: 10.1007/978-3-030-30671-7_3
editor:
- first_name: Richa
full_name: Singh, Richa
last_name: Singh
- first_name: Mayank
full_name: Vatsa, Mayank
last_name: Vatsa
- first_name: Vishal M.
full_name: Patel, Vishal M.
last_name: Patel
- first_name: Nalini
full_name: Ratha, Nalini
last_name: Ratha
external_id:
arxiv:
- '1711.05139'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1711.05139
month: '01'
oa: 1
oa_version: Preprint
page: 33-49
publication: Domain Adaptation for Visual Understanding
publication_identifier:
isbn:
- '9783030306717'
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
record:
- id: '8331'
relation: dissertation_contains
status: deleted
- id: '8390'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: 'XGAN: Unsupervised image-to-image translation for many-to-many mappings'
type: book_chapter
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '7481'
abstract:
- lang: eng
text: 'We address the following question: How redundant is the parameterisation
of ReLU networks? Specifically, we consider transformations of the weight space
which leave the function implemented by the network intact. Two such transformations
are known for feed-forward architectures: permutation of neurons within a layer,
and positive scaling of all incoming weights of a neuron coupled with inverse
scaling of its outgoing weights. In this work, we show for architectures with
non-increasing widths that permutation and scaling are in fact the only function-preserving
weight transformations. For any eligible architecture we give an explicit construction
of a neural network such that any other network that implements the same function
can be obtained from the original one by the application of permutations and rescaling. The
proof relies on a geometric understanding of boundaries between linear regions
of ReLU networks, and we hope the developed mathematical tools are of independent
interest.'
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Phuong M, Lampert C. Functional vs. parametric equivalence of ReLU networks.
In: 8th International Conference on Learning Representations; 2020.'
apa: Phuong, M., & Lampert, C. (2020). Functional vs. parametric equivalence
of ReLU networks. In 8th International Conference on Learning Representations.
Online.
chicago: Phuong, Mary, and Christoph Lampert. “Functional vs. Parametric Equivalence
of ReLU Networks.” In 8th International Conference on Learning Representations,
2020.
ieee: M. Phuong and C. Lampert, “Functional vs. parametric equivalence of ReLU networks,”
in 8th International Conference on Learning Representations, Online, 2020.
ista: 'Phuong M, Lampert C. 2020. Functional vs. parametric equivalence of ReLU
networks. 8th International Conference on Learning Representations. ICLR: International
Conference on Learning Representations.'
mla: Phuong, Mary, and Christoph Lampert. “Functional vs. Parametric Equivalence
of ReLU Networks.” 8th International Conference on Learning Representations,
2020.
short: M. Phuong, C. Lampert, in:, 8th International Conference on Learning Representations,
2020.
conference:
end_date: 2020-04-30
location: Online
name: 'ICLR: International Conference on Learning Representations'
start_date: 2020-04-27
date_created: 2020-02-11T09:07:37Z
date_published: 2020-04-26T00:00:00Z
date_updated: 2023-09-07T13:29:50Z
day: '26'
ddc:
- '000'
department:
- _id: ChLa
file:
- access_level: open_access
checksum: 8d372ea5defd8cb8fdc430111ed754a9
content_type: application/pdf
creator: bphuong
date_created: 2020-02-11T09:07:27Z
date_updated: 2020-07-14T12:47:59Z
file_id: '7482'
file_name: main.pdf
file_size: 405469
relation: main_file
file_date_updated: 2020-07-14T12:47:59Z
has_accepted_license: '1'
language:
- iso: eng
month: '04'
oa: 1
oa_version: Published Version
publication: 8th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
related_material:
link:
- relation: supplementary_material
url: https://iclr.cc/virtual_2020/poster_Bylx-TNKvH.html
record:
- id: '9418'
relation: dissertation_contains
status: public
status: public
title: Functional vs. parametric equivalence of ReLU networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '8724'
abstract:
- lang: eng
text: "We study the problem of learning from multiple untrusted data sources, a
scenario of increasing practical relevance given the recent emergence of crowdsourcing
and collaborative learning paradigms. Specifically, we analyze the situation in
which a learning system obtains datasets from multiple sources, some of which
might be biased or even adversarially perturbed. It is\r\nknown that in the single-source
case, an adversary with the power to corrupt a fixed fraction of the training
data can prevent PAC-learnability, that is, even in the limit of infinitely much
training data, no learning system can approach the optimal test error. In this
work we show that, surprisingly, the same is not true in the multi-source setting,
where the adversary can arbitrarily\r\ncorrupt a fixed fraction of the data sources.
Our main results are a generalization bound that provides finite-sample guarantees
for this learning setting, as well as corresponding lower bounds. Besides establishing
PAC-learnability our results also show that in a cooperative learning setting
sharing data with other parties has provable benefits, even if some\r\nparticipants
are malicious. "
acknowledged_ssus:
- _id: ScienComp
acknowledgement: Dan Alistarh is supported in part by the European Research Council
(ERC) under the European Union’s Horizon 2020 research and innovation programme
(grant agreement No 805223 ScaleML). This research was supported by the Scientific
Service Units (SSU) of IST Austria through resources provided by Scientific Computing
(SciComp).
article_processing_charge: No
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Elias
full_name: Frantar, Elias
id: 09a8f98d-ec99-11ea-ae11-c063a7b7fe5f
last_name: Frantar
- first_name: Dan-Adrian
full_name: Alistarh, Dan-Adrian
id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
last_name: Alistarh
orcid: 0000-0003-3650-940X
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Konstantinov NH, Frantar E, Alistarh D-A, Lampert C. On the sample complexity
of adversarial multi-source PAC learning. In: Proceedings of the 37th International
Conference on Machine Learning. Vol 119. ML Research Press; 2020:5416-5425.'
apa: 'Konstantinov, N. H., Frantar, E., Alistarh, D.-A., & Lampert, C. (2020).
On the sample complexity of adversarial multi-source PAC learning. In Proceedings
of the 37th International Conference on Machine Learning (Vol. 119, pp. 5416–5425).
Online: ML Research Press.'
chicago: Konstantinov, Nikola H, Elias Frantar, Dan-Adrian Alistarh, and Christoph
Lampert. “On the Sample Complexity of Adversarial Multi-Source PAC Learning.”
In Proceedings of the 37th International Conference on Machine Learning,
119:5416–25. ML Research Press, 2020.
ieee: N. H. Konstantinov, E. Frantar, D.-A. Alistarh, and C. Lampert, “On the sample
complexity of adversarial multi-source PAC learning,” in Proceedings of the
37th International Conference on Machine Learning, Online, 2020, vol. 119,
pp. 5416–5425.
ista: 'Konstantinov NH, Frantar E, Alistarh D-A, Lampert C. 2020. On the sample
complexity of adversarial multi-source PAC learning. Proceedings of the 37th International
Conference on Machine Learning. ICML: International Conference on Machine Learning
vol. 119, 5416–5425.'
mla: Konstantinov, Nikola H., et al. “On the Sample Complexity of Adversarial Multi-Source
PAC Learning.” Proceedings of the 37th International Conference on Machine
Learning, vol. 119, ML Research Press, 2020, pp. 5416–25.
short: N.H. Konstantinov, E. Frantar, D.-A. Alistarh, C. Lampert, in:, Proceedings
of the 37th International Conference on Machine Learning, ML Research Press, 2020,
pp. 5416–5425.
conference:
end_date: 2020-07-18
location: Online
name: 'ICML: International Conference on Machine Learning'
start_date: 2020-07-12
date_created: 2020-11-05T15:25:58Z
date_published: 2020-07-12T00:00:00Z
date_updated: 2023-09-07T13:42:08Z
day: '12'
ddc:
- '000'
department:
- _id: DaAl
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '2002.10384'
file:
- access_level: open_access
checksum: cc755d0054bc4b2be778ea7aa7884d2f
content_type: application/pdf
creator: dernst
date_created: 2021-02-15T09:00:01Z
date_updated: 2021-02-15T09:00:01Z
file_id: '9120'
file_name: 2020_PMLR_Konstantinov.pdf
file_size: 281286
relation: main_file
success: 1
file_date_updated: 2021-02-15T09:00:01Z
has_accepted_license: '1'
intvolume: ' 119'
language:
- iso: eng
month: '07'
oa: 1
oa_version: Published Version
page: 5416-5425
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '805223'
name: Elastic Coordination for Scalable Machine Learning
publication: Proceedings of the 37th International Conference on Machine Learning
publication_identifier:
issn:
- 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
link:
- relation: supplementary_material
url: http://proceedings.mlr.press/v119/konstantinov20a/konstantinov20a-supp.pdf
record:
- id: '10799'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: On the sample complexity of adversarial multi-source PAC learning
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 119
year: '2020'
...
---
_id: '8390'
abstract:
- lang: eng
text: "Deep neural networks have established a new standard for data-dependent feature
extraction pipelines in the Computer Vision literature. Despite their remarkable
performance in the standard supervised learning scenario, i.e. when models are
trained with labeled data and tested on samples that follow a similar distribution,
neural networks have been shown to struggle with more advanced generalization
abilities, such as transferring knowledge across visually different domains, or
generalizing to new unseen combinations of known concepts. In this thesis we argue
that, in contrast to the usual black-box behavior of neural networks, leveraging
more structured internal representations is a promising direction\r\nfor tackling
such problems. In particular, we focus on two forms of structure. First, we tackle
modularity: We show that (i) compositional architectures are a natural tool for
modeling reasoning tasks, in that they efficiently capture their combinatorial
nature, which is key for generalizing beyond the compositions seen during training.
We investigate how to learn such models, both formally and experimentally,
for the task of abstract visual reasoning. Then, we show that (ii) in some settings,
modularity allows us to efficiently break down complex tasks into smaller, easier,
modules, thereby improving computational efficiency; We study this behavior in
the context of generative models for colorization, as well as for small object
detection. Secondly, we investigate the inherently layered structure of representations
learned by neural networks, and analyze its role in the context of transfer learning
and domain adaptation across visually\r\ndissimilar domains. "
acknowledged_ssus:
- _id: CampIT
- _id: ScienComp
acknowledgement: Last but not least, I would like to acknowledge the support of the
IST IT and scientific computing team for helping provide a great work environment.
alternative_title:
- ISTA Thesis
article_processing_charge: No
author:
- first_name: Amélie
full_name: Royer, Amélie
id: 3811D890-F248-11E8-B48F-1D18A9856A87
last_name: Royer
orcid: 0000-0002-8407-0705
citation:
ama: Royer A. Leveraging structure in Computer Vision tasks for flexible Deep Learning
models. 2020. doi:10.15479/AT:ISTA:8390
apa: Royer, A. (2020). Leveraging structure in Computer Vision tasks for flexible
Deep Learning models. Institute of Science and Technology Austria. https://doi.org/10.15479/AT:ISTA:8390
chicago: Royer, Amélie. “Leveraging Structure in Computer Vision Tasks for Flexible
Deep Learning Models.” Institute of Science and Technology Austria, 2020. https://doi.org/10.15479/AT:ISTA:8390.
ieee: A. Royer, “Leveraging structure in Computer Vision tasks for flexible Deep
Learning models,” Institute of Science and Technology Austria, 2020.
ista: Royer A. 2020. Leveraging structure in Computer Vision tasks for flexible
Deep Learning models. Institute of Science and Technology Austria.
mla: Royer, Amélie. Leveraging Structure in Computer Vision Tasks for Flexible
Deep Learning Models. Institute of Science and Technology Austria, 2020, doi:10.15479/AT:ISTA:8390.
short: A. Royer, Leveraging Structure in Computer Vision Tasks for Flexible Deep
Learning Models, Institute of Science and Technology Austria, 2020.
date_created: 2020-09-14T13:42:09Z
date_published: 2020-09-14T00:00:00Z
date_updated: 2023-10-16T10:04:02Z
day: '14'
ddc:
- '000'
degree_awarded: PhD
department:
- _id: ChLa
doi: 10.15479/AT:ISTA:8390
file:
- access_level: open_access
checksum: c914d2f88846032f3d8507734861b6ee
content_type: application/pdf
creator: dernst
date_created: 2020-09-14T13:39:14Z
date_updated: 2020-09-14T13:39:14Z
file_id: '8391'
file_name: 2020_Thesis_Royer.pdf
file_size: 30224591
relation: main_file
success: 1
- access_level: closed
checksum: ae98fb35d912cff84a89035ae5794d3c
content_type: application/x-zip-compressed
creator: dernst
date_created: 2020-09-14T13:39:17Z
date_updated: 2020-09-14T13:39:17Z
file_id: '8392'
file_name: thesis_sources.zip
file_size: 74227627
relation: main_file
file_date_updated: 2020-09-14T13:39:17Z
has_accepted_license: '1'
language:
- iso: eng
license: https://creativecommons.org/licenses/by-nc-sa/4.0/
month: '09'
oa: 1
oa_version: Published Version
page: '197'
publication_identifier:
isbn:
- 978-3-99078-007-7
issn:
- 2663-337X
publication_status: published
publisher: Institute of Science and Technology Austria
related_material:
record:
- id: '7936'
relation: part_of_dissertation
status: public
- id: '7937'
relation: part_of_dissertation
status: public
- id: '8193'
relation: part_of_dissertation
status: public
- id: '8092'
relation: part_of_dissertation
status: public
- id: '911'
relation: part_of_dissertation
status: public
status: public
supervisor:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
title: Leveraging structure in Computer Vision tasks for flexible Deep Learning models
tmp:
image: /images/cc_by_nc_sa.png
legal_code_url: https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode
name: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC
BY-NC-SA 4.0)
short: CC BY-NC-SA (4.0)
type: dissertation
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2020'
...
---
_id: '8186'
abstract:
- lang: eng
text: "Numerous methods have been proposed for probabilistic generative modelling
of\r\n3D objects. However, none of these is able to produce textured objects,
which\r\nrenders them of limited use for practical tasks. In this work, we present
the\r\nfirst generative model of textured 3D meshes. Training such a model would\r\ntraditionally
require a large dataset of textured meshes, but unfortunately,\r\nexisting datasets
of meshes lack detailed textures. We instead propose a new\r\ntraining methodology
that allows learning from collections of 2D images without\r\nany 3D information.
To do so, we train our model to explain a distribution of\r\nimages by modelling
each image as a 3D foreground object placed in front of a\r\n2D background. Thus,
it learns to generate meshes that when rendered, produce\r\nimages similar to
those in its training set.\r\n A well-known problem when generating meshes with
deep networks is the\r\nemergence of self-intersections, which are problematic
for many use-cases. As a\r\nsecond contribution we therefore introduce a new generation
process for 3D\r\nmeshes that guarantees no self-intersections arise, based on
the physical\r\nintuition that faces should push one another out of the way as
they move.\r\n We conduct extensive experiments on our approach, reporting quantitative
and\r\nqualitative results on both synthetic data and natural images. These show
our\r\nmethod successfully learns to generate plausible and diverse textured 3D\r\nsamples
for five challenging object classes."
article_processing_charge: No
author:
- first_name: Paul M
full_name: Henderson, Paul M
id: 13C09E74-18D9-11E9-8878-32CFE5697425
last_name: Henderson
orcid: 0000-0002-5198-7445
- first_name: Vagia
full_name: Tsiminaki, Vagia
last_name: Tsiminaki
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Henderson PM, Tsiminaki V, Lampert C. Leveraging 2D data to learn textured
3D mesh generation. In: Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition. IEEE; 2020:7498-7507. doi:10.1109/CVPR42600.2020.00752'
apa: 'Henderson, P. M., Tsiminaki, V., & Lampert, C. (2020). Leveraging 2D data
to learn textured 3D mesh generation. In Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition (pp. 7498–7507). Virtual: IEEE.
https://doi.org/10.1109/CVPR42600.2020.00752'
chicago: Henderson, Paul M, Vagia Tsiminaki, and Christoph Lampert. “Leveraging
2D Data to Learn Textured 3D Mesh Generation.” In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, 7498–7507. IEEE, 2020.
https://doi.org/10.1109/CVPR42600.2020.00752.
ieee: P. M. Henderson, V. Tsiminaki, and C. Lampert, “Leveraging 2D data to learn
textured 3D mesh generation,” in Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, Virtual, 2020, pp. 7498–7507.
ista: 'Henderson PM, Tsiminaki V, Lampert C. 2020. Leveraging 2D data to learn textured
3D mesh generation. Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition. CVPR: Conference on Computer Vision and Pattern Recognition,
7498–7507.'
mla: Henderson, Paul M., et al. “Leveraging 2D Data to Learn Textured 3D Mesh Generation.”
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,
IEEE, 2020, pp. 7498–507, doi:10.1109/CVPR42600.2020.00752.
short: P.M. Henderson, V. Tsiminaki, C. Lampert, in:, Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, IEEE, 2020, pp. 7498–7507.
conference:
end_date: 2020-06-19
location: Virtual
name: 'CVPR: Conference on Computer Vision and Pattern Recognition'
start_date: 2020-06-14
date_created: 2020-07-31T16:53:49Z
date_published: 2020-07-01T00:00:00Z
date_updated: 2023-10-17T07:37:11Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.1109/CVPR42600.2020.00752
external_id:
arxiv:
- '2004.04180'
file:
- access_level: open_access
content_type: application/pdf
creator: phenders
date_created: 2020-07-31T16:57:12Z
date_updated: 2020-07-31T16:57:12Z
file_id: '8187'
file_name: paper.pdf
file_size: 10262773
relation: main_file
success: 1
file_date_updated: 2020-07-31T16:57:12Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://openaccess.thecvf.com/content_CVPR_2020/papers/Henderson_Leveraging_2D_Data_to_Learn_Textured_3D_Mesh_Generation_CVPR_2020_paper.pdf
month: '07'
oa: 1
oa_version: Submitted Version
page: 7498-7507
publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
Recognition
publication_identifier:
eisbn:
- '9781728171685'
eissn:
- 2575-7075
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: Leveraging 2D data to learn textured 3D mesh generation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '6944'
abstract:
- lang: eng
text: 'We study the problem of automatically detecting if a given multi-class classifier
operates outside of its specifications (out-of-specs), i.e. on input data from
a different distribution than what it was trained for. This is an important problem
to solve on the road towards creating reliable computer vision systems for real-world
applications, because the quality of a classifier’s predictions cannot be guaranteed
if it operates out-of-specs. Previously proposed methods for out-of-specs detection
make decisions on the level of single inputs. This, however, is insufficient to
achieve low false positive rates and high true positive rates at the same time.
In this work, we describe a new procedure named KS(conf), based on statistical
reasoning. Its main component is a classical Kolmogorov–Smirnov test that is applied
to the set of predicted confidence values for batches of samples. Working with
batches instead of single samples allows increasing the true positive rate without
negatively affecting the false positive rate, thereby overcoming a crucial limitation
of single sample tests. We show by extensive experiments using a variety of convolutional
network architectures and datasets that KS(conf) reliably detects out-of-specs
situations even under conditions where other tests fail. It furthermore has a
number of properties that make it an excellent candidate for practical deployment:
it is easy to implement, adds almost no overhead to the system, works with any
classifier that outputs confidence scores, and requires no a priori knowledge
about how the data distribution could change.'
article_processing_charge: Yes (via OA deal)
article_type: original
author:
- first_name: Rémy
full_name: Sun, Rémy
last_name: Sun
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Sun R, Lampert C. KS(conf): A light-weight test if a multiclass classifier
operates outside of its specifications. International Journal of Computer Vision.
2020;128(4):970-995. doi:10.1007/s11263-019-01232-x'
apa: 'Sun, R., & Lampert, C. (2020). KS(conf): A light-weight test if a multiclass
classifier operates outside of its specifications. International Journal of
Computer Vision. Springer Nature. https://doi.org/10.1007/s11263-019-01232-x'
chicago: 'Sun, Rémy, and Christoph Lampert. “KS(Conf): A Light-Weight Test If a
Multiclass Classifier Operates Outside of Its Specifications.” International
Journal of Computer Vision. Springer Nature, 2020. https://doi.org/10.1007/s11263-019-01232-x.'
ieee: 'R. Sun and C. Lampert, “KS(conf): A light-weight test if a multiclass classifier
operates outside of its specifications,” International Journal of Computer
Vision, vol. 128, no. 4. Springer Nature, pp. 970–995, 2020.'
ista: 'Sun R, Lampert C. 2020. KS(conf): A light-weight test if a multiclass classifier
operates outside of its specifications. International Journal of Computer Vision.
128(4), 970–995.'
mla: 'Sun, Rémy, and Christoph Lampert. “KS(Conf): A Light-Weight Test If a Multiclass
Classifier Operates Outside of Its Specifications.” International Journal of
Computer Vision, vol. 128, no. 4, Springer Nature, 2020, pp. 970–95, doi:10.1007/s11263-019-01232-x.'
short: R. Sun, C. Lampert, International Journal of Computer Vision 128 (2020) 970–995.
date_created: 2019-10-14T09:14:28Z
date_published: 2020-04-01T00:00:00Z
date_updated: 2024-02-22T14:57:30Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.1007/s11263-019-01232-x
ec_funded: 1
external_id:
isi:
- '000494406800001'
file:
- access_level: open_access
checksum: 155e63edf664dcacb3bdc1c2223e606f
content_type: application/pdf
creator: dernst
date_created: 2019-11-26T10:30:02Z
date_updated: 2020-07-14T12:47:45Z
file_id: '7110'
file_name: 2019_IJCV_Sun.pdf
file_size: 1715072
relation: main_file
file_date_updated: 2020-07-14T12:47:45Z
has_accepted_license: '1'
intvolume: ' 128'
isi: 1
issue: '4'
language:
- iso: eng
month: '04'
oa: 1
oa_version: Published Version
page: 970-995
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
- _id: B67AFEDC-15C9-11EA-A837-991A96BB2854
name: IST Austria Open Access Fund
publication: International Journal of Computer Vision
publication_identifier:
eissn:
- 1573-1405
issn:
- 0920-5691
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
link:
- relation: erratum
url: https://doi.org/10.1007/s11263-019-01262-5
record:
- id: '6482'
relation: earlier_version
status: public
scopus_import: '1'
status: public
title: 'KS(conf): A light-weight test if a multiclass classifier operates outside
of its specifications'
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: journal_article
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 128
year: '2020'
...
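The KS(conf) record above describes applying a Kolmogorov–Smirnov test to the predicted confidence values of batches of samples. The minimal sketch below illustrates that batch-level idea with a two-sample KS test from SciPy; the reference-sample construction, the significance threshold, and all function names are assumptions made for illustration, not the authors' exact calibration procedure.

```python
# Minimal sketch of the batch-level idea behind KS(conf): compare the confidence
# values a classifier produces on a new batch against a reference sample collected
# on in-specs (validation) data, using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp


def is_out_of_specs(reference_conf: np.ndarray,
                    batch_conf: np.ndarray,
                    alpha: float = 0.01) -> bool:
    """Flag a batch as out-of-specs if its confidence distribution differs
    significantly from the reference distribution."""
    statistic, p_value = ks_2samp(reference_conf, batch_conf)
    return p_value < alpha


# Example usage with synthetic confidence scores:
rng = np.random.default_rng(0)
reference = rng.beta(8, 2, size=5000)      # confidences on in-specs data
in_specs_batch = rng.beta(8, 2, size=100)  # same distribution
shifted_batch = rng.beta(3, 3, size=100)   # distribution shift

print(is_out_of_specs(reference, in_specs_batch))  # typically False
print(is_out_of_specs(reference, shifted_batch))   # typically True
```

Working on batches rather than single confidence values is what lets the p-value-based decision trade off detection power against false alarms, as described in the abstract.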
---
_id: '7171'
abstract:
- lang: ger
text: "Wissen Sie, was sich hinter künstlicher Intelligenz und maschinellem Lernen
verbirgt? \r\nDieses Sachbuch erklärt Ihnen leicht verständlich und ohne komplizierte
Formeln die grundlegenden Methoden und Vorgehensweisen des maschinellen Lernens.
Mathematisches Vorwissen ist dafür nicht nötig. Kurzweilig und informativ illustriert
Lisa, die Protagonistin des Buches, diese anhand von Alltagssituationen. \r\nEin
Buch für alle, die in Diskussionen über Chancen und Risiken der aktuellen Entwicklung
der künstlichen Intelligenz und des maschinellen Lernens mit Faktenwissen punkten
möchten. Auch für Schülerinnen und Schüler geeignet!"
article_processing_charge: No
citation:
ama: 'Kersting K, Lampert C, Rothkopf C, eds. Wie Maschinen Lernen: Künstliche
Intelligenz Verständlich Erklärt. 1st ed. Wiesbaden: Springer Nature; 2019.
doi:10.1007/978-3-658-26763-6'
apa: 'Kersting, K., Lampert, C., & Rothkopf, C. (Eds.). (2019). Wie Maschinen
Lernen: Künstliche Intelligenz Verständlich Erklärt (1st ed.). Wiesbaden:
Springer Nature. https://doi.org/10.1007/978-3-658-26763-6'
chicago: 'Kersting, Kristian, Christoph Lampert, and Constantin Rothkopf, eds. Wie
Maschinen Lernen: Künstliche Intelligenz Verständlich Erklärt. 1st ed. Wiesbaden:
Springer Nature, 2019. https://doi.org/10.1007/978-3-658-26763-6.'
ieee: 'K. Kersting, C. Lampert, and C. Rothkopf, Eds., Wie Maschinen Lernen:
Künstliche Intelligenz Verständlich Erklärt, 1st ed. Wiesbaden: Springer Nature,
2019.'
ista: 'Kersting K, Lampert C, Rothkopf C eds. 2019. Wie Maschinen Lernen: Künstliche
Intelligenz Verständlich Erklärt 1st ed., Wiesbaden: Springer Nature, XIV, 245p.'
mla: 'Kersting, Kristian, et al., editors. Wie Maschinen Lernen: Künstliche Intelligenz
Verständlich Erklärt. 1st ed., Springer Nature, 2019, doi:10.1007/978-3-658-26763-6.'
short: 'K. Kersting, C. Lampert, C. Rothkopf, eds., Wie Maschinen Lernen: Künstliche
Intelligenz Verständlich Erklärt, 1st ed., Springer Nature, Wiesbaden, 2019.'
date_created: 2019-12-11T14:15:56Z
date_published: 2019-10-30T00:00:00Z
date_updated: 2021-12-22T14:40:58Z
day: '30'
department:
- _id: ChLa
doi: 10.1007/978-3-658-26763-6
edition: '1'
editor:
- first_name: Kristian
full_name: Kersting, Kristian
last_name: Kersting
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Constantin
full_name: Rothkopf, Constantin
last_name: Rothkopf
language:
- iso: ger
month: '10'
oa_version: None
page: XIV, 245
place: Wiesbaden
publication_identifier:
eisbn:
- 978-3-658-26763-6
isbn:
- 978-3-658-26762-9
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
link:
- description: News on IST Website
relation: press_release
url: https://ist.ac.at/en/news/book-release-how-machines-learn/
status: public
title: 'Wie Maschinen Lernen: Künstliche Intelligenz Verständlich Erklärt'
type: book_editor
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
year: '2019'
...
---
_id: '6942'
abstract:
- lang: eng
text: "Graph games and Markov decision processes (MDPs) are standard models in reactive
synthesis and verification of probabilistic systems with nondeterminism. The class
of \U0001D714-regular winning conditions, e.g., safety, reachability, liveness, and
parity conditions, provides a robust and expressive specification formalism for
properties that arise in the analysis of reactive systems. The resolutions of nondeterminism
in games and MDPs are represented as strategies, and we consider succinct representation
of such strategies. The decision-tree data structure from machine learning retains
the flavor of decisions of strategies and allows entropy-based minimization to
obtain succinct trees. However, in contrast to traditional machine-learning problems
where small errors are allowed, for winning strategies in graph games and MDPs
no error is allowed, and the decision tree must represent the entire strategy.
In this work we propose decision trees with linear classifiers for representation
of strategies in graph games and MDPs. We have implemented strategy representation
using this data structure and we present experimental results for problems on
graph games and MDPs, which show that this new data structure provides a much
more efficient strategy representation than standard decision trees."
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Pranav
full_name: Ashok, Pranav
last_name: Ashok
- first_name: Tomáš
full_name: Brázdil, Tomáš
last_name: Brázdil
- first_name: Krishnendu
full_name: Chatterjee, Krishnendu
id: 2E5DCA20-F248-11E8-B48F-1D18A9856A87
last_name: Chatterjee
orcid: 0000-0002-4561-241X
- first_name: Jan
full_name: Křetínský, Jan
last_name: Křetínský
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Viktor
full_name: Toman, Viktor
id: 3AF3DA7C-F248-11E8-B48F-1D18A9856A87
last_name: Toman
orcid: 0000-0001-9036-063X
citation:
ama: 'Ashok P, Brázdil T, Chatterjee K, Křetínský J, Lampert C, Toman V. Strategy
representation by decision trees with linear classifiers. In: 16th International
Conference on Quantitative Evaluation of Systems. Vol 11785. Springer Nature;
2019:109-128. doi:10.1007/978-3-030-30281-8_7'
apa: 'Ashok, P., Brázdil, T., Chatterjee, K., Křetínský, J., Lampert, C., &
Toman, V. (2019). Strategy representation by decision trees with linear classifiers.
In 16th International Conference on Quantitative Evaluation of Systems
(Vol. 11785, pp. 109–128). Glasgow, United Kingdom: Springer Nature. https://doi.org/10.1007/978-3-030-30281-8_7'
chicago: Ashok, Pranav, Tomáš Brázdil, Krishnendu Chatterjee, Jan Křetínský, Christoph
Lampert, and Viktor Toman. “Strategy Representation by Decision Trees with Linear
Classifiers.” In 16th International Conference on Quantitative Evaluation of
Systems, 11785:109–28. Springer Nature, 2019. https://doi.org/10.1007/978-3-030-30281-8_7.
ieee: P. Ashok, T. Brázdil, K. Chatterjee, J. Křetínský, C. Lampert, and V. Toman,
“Strategy representation by decision trees with linear classifiers,” in 16th
International Conference on Quantitative Evaluation of Systems, Glasgow, United
Kingdom, 2019, vol. 11785, pp. 109–128.
ista: 'Ashok P, Brázdil T, Chatterjee K, Křetínský J, Lampert C, Toman V. 2019.
Strategy representation by decision trees with linear classifiers. 16th International
Conference on Quantitative Evaluation of Systems. QEST: Quantitative Evaluation
of Systems, LNCS, vol. 11785, 109–128.'
mla: Ashok, Pranav, et al. “Strategy Representation by Decision Trees with Linear
Classifiers.” 16th International Conference on Quantitative Evaluation of Systems,
vol. 11785, Springer Nature, 2019, pp. 109–28, doi:10.1007/978-3-030-30281-8_7.
short: P. Ashok, T. Brázdil, K. Chatterjee, J. Křetínský, C. Lampert, V. Toman,
in:, 16th International Conference on Quantitative Evaluation of Systems, Springer
Nature, 2019, pp. 109–128.
conference:
end_date: 2019-09-12
location: Glasgow, United Kingdom
name: 'QEST: Quantitative Evaluation of Systems'
start_date: 2019-09-10
date_created: 2019-10-14T06:57:49Z
date_published: 2019-09-04T00:00:00Z
date_updated: 2023-08-30T06:59:36Z
day: '04'
department:
- _id: KrCh
- _id: ChLa
doi: 10.1007/978-3-030-30281-8_7
external_id:
arxiv:
- '1906.08178'
isi:
- '000679281300007'
intvolume: ' 11785'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1906.08178
month: '09'
oa: 1
oa_version: Preprint
page: 109-128
project:
- _id: 25863FF4-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: S11407
name: Game Theory
- _id: 25F2ACDE-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: S11402-N23
name: Rigorous Systems Engineering
- _id: 25892FC0-B435-11E9-9278-68D0E5697425
grant_number: ICT15-003
name: Efficient Algorithms for Computer Aided Verification
publication: 16th International Conference on Quantitative Evaluation of Systems
publication_identifier:
eisbn:
- '9783030302818'
isbn:
- '9783030302801'
issn:
- 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Strategy representation by decision trees with linear classifiers
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 11785
year: '2019'
...
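The abstract above proposes decision trees whose leaves carry linear classifiers as a succinct representation of strategies in graph games and MDPs. Below is a rough sketch of such a data structure; the class layout, field names, and the toy example are assumptions made for illustration and do not reproduce the paper's construction or its entropy-based minimization.

```python
# Illustrative sketch: a decision tree whose inner nodes test single features and
# whose leaves hold linear classifiers that decide whether a state-action vector
# belongs to the represented strategy.
from dataclasses import dataclass
from typing import Optional
import numpy as np


@dataclass
class Node:
    # Inner node: feature index and threshold; leaf: weight vector and bias.
    feature: Optional[int] = None
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    weights: Optional[np.ndarray] = None
    bias: float = 0.0

    def decide(self, x: np.ndarray) -> bool:
        """Return True if the state-action vector x is part of the strategy."""
        if self.weights is not None:  # leaf with a linear classifier
            return float(np.dot(self.weights, x)) + self.bias >= 0.0
        child = self.left if x[self.feature] <= self.threshold else self.right
        return child.decide(x)


# Tiny example: one split, two linear leaves.
tree = Node(feature=0, threshold=0.5,
            left=Node(weights=np.array([1.0, -1.0]), bias=0.0),
            right=Node(weights=np.array([-1.0, 1.0]), bias=0.2))
print(tree.decide(np.array([0.3, 0.1])))  # True: 0.3 - 0.1 >= 0
```

Because winning strategies admit no error, a tree like this must reproduce the strategy exactly; the linear leaves are what allow it to stay small compared to standard decision trees.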
---
_id: '6554'
abstract:
- lang: eng
text: Due to the importance of zero-shot learning, i.e. classifying images where
there is a lack of labeled training data, the number of proposed approaches has
recently increased steadily. We argue that it is time to take a step back and
to analyze the status quo of the area. The purpose of this paper is three-fold.
First, given that there is no agreed-upon zero-shot learning benchmark, we define
a new benchmark by unifying both the evaluation protocols and
data splits of publicly available datasets used for this task. This is an important
contribution as published results are often not comparable and sometimes even
flawed due to, e.g. pre-training on zero-shot test classes. Moreover, we propose
a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset
which we make publicly available both in terms of image features and the images
themselves. Second, we compare and analyze a significant number of the state-of-the-art
methods in depth, both in the classic zero-shot setting and in the more realistic
generalized zero-shot setting. Finally, we discuss in detail the limitations of
the current status of the area, which can be taken as a basis for advancing it.
article_processing_charge: No
article_type: original
author:
- first_name: Yongqin
full_name: Xian, Yongqin
last_name: Xian
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Bernt
full_name: Schiele, Bernt
last_name: Schiele
- first_name: Zeynep
full_name: Akata, Zeynep
last_name: Akata
citation:
ama: Xian Y, Lampert C, Schiele B, Akata Z. Zero-shot learning - A comprehensive
evaluation of the good, the bad and the ugly. IEEE Transactions on Pattern
Analysis and Machine Intelligence. 2019;41(9):2251-2265. doi:10.1109/tpami.2018.2857768
apa: Xian, Y., Lampert, C., Schiele, B., & Akata, Z. (2019). Zero-shot learning
- A comprehensive evaluation of the good, the bad and the ugly. IEEE Transactions
on Pattern Analysis and Machine Intelligence. Institute of Electrical and
Electronics Engineers (IEEE). https://doi.org/10.1109/tpami.2018.2857768
chicago: Xian, Yongqin, Christoph Lampert, Bernt Schiele, and Zeynep Akata. “Zero-Shot
Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly.” IEEE
Transactions on Pattern Analysis and Machine Intelligence. Institute of Electrical
and Electronics Engineers (IEEE), 2019. https://doi.org/10.1109/tpami.2018.2857768.
ieee: Y. Xian, C. Lampert, B. Schiele, and Z. Akata, “Zero-shot learning - A comprehensive
evaluation of the good, the bad and the ugly,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 41, no. 9. Institute of Electrical
and Electronics Engineers (IEEE), pp. 2251–2265, 2019.
ista: Xian Y, Lampert C, Schiele B, Akata Z. 2019. Zero-shot learning - A comprehensive
evaluation of the good, the bad and the ugly. IEEE Transactions on Pattern Analysis
and Machine Intelligence. 41(9), 2251–2265.
mla: Xian, Yongqin, et al. “Zero-Shot Learning - A Comprehensive Evaluation of the
Good, the Bad and the Ugly.” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 41, no. 9, Institute of Electrical and Electronics Engineers
(IEEE), 2019, pp. 2251–65, doi:10.1109/tpami.2018.2857768.
short: Y. Xian, C. Lampert, B. Schiele, Z. Akata, IEEE Transactions on Pattern Analysis
and Machine Intelligence 41 (2019) 2251–2265.
date_created: 2019-06-11T14:05:59Z
date_published: 2019-09-01T00:00:00Z
date_updated: 2023-09-05T13:18:09Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/tpami.2018.2857768
external_id:
arxiv:
- '1707.00600'
isi:
- '000480343900015'
intvolume: ' 41'
isi: 1
issue: '9'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1707.00600
month: '09'
oa: 1
oa_version: Preprint
page: 2251-2265
publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
publication_identifier:
eissn:
- 1939-3539
issn:
- 0162-8828
publication_status: published
publisher: Institute of Electrical and Electronics Engineers (IEEE)
quality_controlled: '1'
scopus_import: '1'
status: public
title: Zero-shot learning - A comprehensive evaluation of the good, the bad and the
ugly
type: journal_article
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 41
year: '2019'
...
---
_id: '7479'
abstract:
- lang: eng
text: "Multi-exit architectures, in which a stack of processing layers is interleaved
with early output layers, allow the processing of a test example to stop early
and thus save computation time and/or energy. In this work, we propose a new
training procedure for multi-exit architectures based on the principle of knowledge
distillation. The method encourage searly exits to mimic later, more accurate
exits, by matching their output probabilities.\r\nExperiments on CIFAR100 and
\ ImageNet show that distillation-based training significantly improves the
accuracy of early exits while maintaining state-of-the-art accuracy for late
\ ones. The method is particularly beneficial when training data is limited
\ and it allows a straightforward extension to semi-supervised learning,i.e.
making use of unlabeled data at training time. Moreover, it takes only afew lines
to implement and incurs almost no computational overhead at training time, and
none at all at test time."
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Phuong M, Lampert C. Distillation-based training for multi-exit architectures.
In: IEEE International Conference on Computer Vision. Vol 2019-October.
IEEE; 2019:1355-1364. doi:10.1109/ICCV.2019.00144'
apa: 'Phuong, M., & Lampert, C. (2019). Distillation-based training for multi-exit
architectures. In IEEE International Conference on Computer Vision (Vol.
2019–October, pp. 1355–1364). Seoul, Korea: IEEE. https://doi.org/10.1109/ICCV.2019.00144'
chicago: Phuong, Mary, and Christoph Lampert. “Distillation-Based Training for Multi-Exit
Architectures.” In IEEE International Conference on Computer Vision, 2019–October:1355–64.
IEEE, 2019. https://doi.org/10.1109/ICCV.2019.00144.
ieee: M. Phuong and C. Lampert, “Distillation-based training for multi-exit architectures,”
in IEEE International Conference on Computer Vision, Seoul, Korea, 2019,
vol. 2019–October, pp. 1355–1364.
ista: 'Phuong M, Lampert C. 2019. Distillation-based training for multi-exit architectures.
IEEE International Conference on Computer Vision. ICCV: International Conference
on Computer Vision vol. 2019–October, 1355–1364.'
mla: Phuong, Mary, and Christoph Lampert. “Distillation-Based Training for Multi-Exit
Architectures.” IEEE International Conference on Computer Vision, vol.
2019–October, IEEE, 2019, pp. 1355–64, doi:10.1109/ICCV.2019.00144.
short: M. Phuong, C. Lampert, in:, IEEE International Conference on Computer Vision,
IEEE, 2019, pp. 1355–1364.
conference:
end_date: 2019-11-02
location: Seoul, Korea
name: 'ICCV: International Conference on Computer Vision'
start_date: 2019-10-27
date_created: 2020-02-11T09:06:57Z
date_published: 2019-10-01T00:00:00Z
date_updated: 2023-09-08T11:11:12Z
day: '01'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1109/ICCV.2019.00144
ec_funded: 1
external_id:
isi:
- '000531438101047'
file:
- access_level: open_access
checksum: 7b77fb5c2d27c4c37a7612ba46a66117
content_type: application/pdf
creator: bphuong
date_created: 2020-02-11T09:06:39Z
date_updated: 2020-07-14T12:47:59Z
file_id: '7480'
file_name: main.pdf
file_size: 735768
relation: main_file
file_date_updated: 2020-07-14T12:47:59Z
has_accepted_license: '1'
isi: 1
language:
- iso: eng
month: '10'
oa: 1
oa_version: Submitted Version
page: 1355-1364
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: IEEE International Conference on Computer Vision
publication_identifier:
isbn:
- '9781728148038'
issn:
- '15505499'
publication_status: published
publisher: IEEE
quality_controlled: '1'
related_material:
record:
- id: '9418'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: Distillation-based training for multi-exit architectures
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 2019-October
year: '2019'
...
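The abstract above trains early exits of a multi-exit architecture to mimic the output probabilities of later, more accurate exits. The sketch below shows one plausible form of such a distillation loss in PyTorch; the temperature, the loss weighting, and the choice of the last exit as teacher are illustrative assumptions rather than the paper's exact recipe.

```python
# Sketch of a distillation-style training signal for multi-exit networks: each
# early exit is trained both on ground-truth labels and to match the softened
# output distribution of the last (most accurate) exit.
import torch
import torch.nn.functional as F


def multi_exit_distillation_loss(exit_logits, labels, temperature=3.0, alpha=0.5):
    """exit_logits: list of [batch, classes] tensors, ordered early -> late."""
    teacher = exit_logits[-1].detach()                  # last exit acts as teacher
    teacher_probs = F.softmax(teacher / temperature, dim=1)
    loss = F.cross_entropy(exit_logits[-1], labels)     # last exit: labels only
    for logits in exit_logits[:-1]:
        ce = F.cross_entropy(logits, labels)
        kd = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                      teacher_probs, reduction="batchmean") * temperature ** 2
        loss = loss + alpha * ce + (1 - alpha) * kd
    return loss


# Example with random tensors standing in for three exits of a network:
logits = [torch.randn(8, 100) for _ in range(3)]
labels = torch.randint(0, 100, (8,))
print(multi_exit_distillation_loss(logits, labels).item())
```

The extra loss terms add essentially no overhead at training time and nothing at test time, which matches the claim in the abstract.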
---
_id: '7640'
abstract:
- lang: eng
text: We propose a new model for detecting visual relationships, such as "person
riding motorcycle" or "bottle on table". This task is an important step towards
comprehensive structured image understanding, going beyond detecting individual
objects. Our main novelty is a Box Attention mechanism that allows modeling pairwise
interactions between objects using standard object detection pipelines. The resulting
model is conceptually clean, expressive and relies on well-justified training
and prediction procedures. Moreover, unlike previously proposed approaches, our
model does not introduce any additional complex components or hyperparameters
on top of those already required by the underlying detection model. We conduct
an experimental evaluation on two datasets, V-COCO and Open Images, demonstrating
strong quantitative and qualitative results.
article_number: 1749-1753
article_processing_charge: No
author:
- first_name: Alexander
full_name: Kolesnikov, Alexander
id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
last_name: Kolesnikov
- first_name: Alina
full_name: Kuznetsova, Alina
last_name: Kuznetsova
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Vittorio
full_name: Ferrari, Vittorio
last_name: Ferrari
citation:
ama: 'Kolesnikov A, Kuznetsova A, Lampert C, Ferrari V. Detecting visual relationships
using box attention. In: Proceedings of the 2019 International Conference on
Computer Vision Workshop. IEEE; 2019. doi:10.1109/ICCVW.2019.00217'
apa: 'Kolesnikov, A., Kuznetsova, A., Lampert, C., & Ferrari, V. (2019). Detecting
visual relationships using box attention. In Proceedings of the 2019 International
Conference on Computer Vision Workshop. Seoul, South Korea: IEEE. https://doi.org/10.1109/ICCVW.2019.00217'
chicago: Kolesnikov, Alexander, Alina Kuznetsova, Christoph Lampert, and Vittorio
Ferrari. “Detecting Visual Relationships Using Box Attention.” In Proceedings
of the 2019 International Conference on Computer Vision Workshop. IEEE, 2019.
https://doi.org/10.1109/ICCVW.2019.00217.
ieee: A. Kolesnikov, A. Kuznetsova, C. Lampert, and V. Ferrari, “Detecting visual
relationships using box attention,” in Proceedings of the 2019 International
Conference on Computer Vision Workshop, Seoul, South Korea, 2019.
ista: 'Kolesnikov A, Kuznetsova A, Lampert C, Ferrari V. 2019. Detecting visual
relationships using box attention. Proceedings of the 2019 International Conference
on Computer Vision Workshop. ICCVW: International Conference on Computer Vision
Workshop, 1749–1753.'
mla: Kolesnikov, Alexander, et al. “Detecting Visual Relationships Using Box Attention.”
Proceedings of the 2019 International Conference on Computer Vision Workshop,
1749–1753, IEEE, 2019, doi:10.1109/ICCVW.2019.00217.
short: A. Kolesnikov, A. Kuznetsova, C. Lampert, V. Ferrari, in:, Proceedings of
the 2019 International Conference on Computer Vision Workshop, IEEE, 2019.
conference:
end_date: 2019-10-28
location: Seoul, South Korea
name: 'ICCVW: International Conference on Computer Vision Workshop'
start_date: 2019-10-27
date_created: 2020-04-05T22:00:51Z
date_published: 2019-10-01T00:00:00Z
date_updated: 2023-09-08T11:18:37Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/ICCVW.2019.00217
ec_funded: 1
external_id:
arxiv:
- '1807.02136'
isi:
- '000554591601098'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1807.02136
month: '10'
oa: 1
oa_version: Preprint
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: Proceedings of the 2019 International Conference on Computer Vision Workshop
publication_identifier:
isbn:
- '9781728150239'
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: Detecting visual relationships using box attention
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2019'
...
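The abstract above introduces a Box Attention mechanism that lets a standard detection pipeline model pairwise interactions with a query object. A rough sketch of the underlying conditioning idea is given below: the query box is rendered as an extra input channel. The exact channel layout, shapes, and function name are assumptions for illustration, not the paper's encoding.

```python
# Rough sketch of box conditioning: render a query box as a binary attention-map
# channel and concatenate it to the image, so an otherwise standard detection
# network can reason about the pair (query object, other objects).
import torch


def add_box_attention_channel(images: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
    """images: [B, 3, H, W]; boxes: [B, 4] as (x1, y1, x2, y2) in pixels.
    Returns [B, 4, H, W] with a binary box mask as the extra channel."""
    b, _, h, w = images.shape
    mask = torch.zeros(b, 1, h, w, dtype=images.dtype, device=images.device)
    for i, (x1, y1, x2, y2) in enumerate(boxes.long()):
        mask[i, 0, y1:y2, x1:x2] = 1.0
    return torch.cat([images, mask], dim=1)


# Example: one 64x64 image and a query box covering its center region.
imgs = torch.rand(1, 3, 64, 64)
query = torch.tensor([[16, 16, 48, 48]])
print(add_box_attention_channel(imgs, query).shape)  # torch.Size([1, 4, 64, 64])
```

Conditioning through the input in this way adds no extra hyperparameters beyond those of the underlying detector, in line with the abstract.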
---
_id: '6569'
abstract:
- lang: eng
text: 'Knowledge distillation, i.e. one classifier being trained on the outputs
of another classifier, is an empirically very successful technique for knowledge
transfer between classifiers. It has even been observed that classifiers learn
much faster and more reliably if trained with the outputs of another classifier
as soft labels, instead of from ground truth data. So far, however, there is no
satisfactory theoretical explanation of this phenomenon. In this work, we provide
the first insights into the working mechanisms of distillation by studying the
special case of linear and deep linear classifiers. Specifically, we prove a
generalization bound that establishes fast convergence of the expected risk of
a distillation-trained linear classifier. From the bound and its proof we extract
three key factors that determine the success of distillation: data geometry – geometric
properties of the data distribution, in particular class separation, have an immediate
influence on the convergence speed of the risk; optimization bias – gradient descent optimization
finds a very favorable minimum of the distillation objective; and strong monotonicity –
the expected risk of the student classifier always decreases when the size of
the training set grows.'
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Phuong M, Lampert C. Towards understanding knowledge distillation. In: Proceedings
of the 36th International Conference on Machine Learning. Vol 97. ML Research
Press; 2019:5142-5151.'
apa: 'Phuong, M., & Lampert, C. (2019). Towards understanding knowledge distillation.
In Proceedings of the 36th International Conference on Machine Learning
(Vol. 97, pp. 5142–5151). Long Beach, CA, United States: ML Research Press.'
chicago: Phuong, Mary, and Christoph Lampert. “Towards Understanding Knowledge Distillation.”
In Proceedings of the 36th International Conference on Machine Learning,
97:5142–51. ML Research Press, 2019.
ieee: M. Phuong and C. Lampert, “Towards understanding knowledge distillation,”
in Proceedings of the 36th International Conference on Machine Learning,
Long Beach, CA, United States, 2019, vol. 97, pp. 5142–5151.
ista: 'Phuong M, Lampert C. 2019. Towards understanding knowledge distillation.
Proceedings of the 36th International Conference on Machine Learning. ICML: International
Conference on Machine Learning vol. 97, 5142–5151.'
mla: Phuong, Mary, and Christoph Lampert. “Towards Understanding Knowledge Distillation.”
Proceedings of the 36th International Conference on Machine Learning, vol.
97, ML Research Press, 2019, pp. 5142–51.
short: M. Phuong, C. Lampert, in:, Proceedings of the 36th International Conference
on Machine Learning, ML Research Press, 2019, pp. 5142–5151.
conference:
end_date: 2019-06-15
location: Long Beach, CA, United States
name: 'ICML: International Conference on Machine Learning'
start_date: 2019-06-10
date_created: 2019-06-20T18:23:03Z
date_published: 2019-06-13T00:00:00Z
date_updated: 2023-10-17T12:31:38Z
day: '13'
ddc:
- '000'
department:
- _id: ChLa
file:
- access_level: open_access
checksum: a66d00e2694d749250f8507f301320ca
content_type: application/pdf
creator: bphuong
date_created: 2019-06-20T18:22:56Z
date_updated: 2020-07-14T12:47:33Z
file_id: '6570'
file_name: paper.pdf
file_size: 686432
relation: main_file
file_date_updated: 2020-07-14T12:47:33Z
has_accepted_license: '1'
intvolume: ' 97'
language:
- iso: eng
month: '06'
oa: 1
oa_version: Published Version
page: 5142-5151
publication: Proceedings of the 36th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
scopus_import: '1'
status: public
title: Towards understanding knowledge distillation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 97
year: '2019'
...
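The abstract above analyses distillation for linear classifiers, where a student is trained on the soft outputs of a teacher rather than on ground-truth labels. The small numerical sketch below illustrates that setting with plain gradient descent; the data generation, step size, and iteration count are arbitrary illustrative choices and not taken from the paper.

```python
# Numerical illustration of distillation in the linear case: a linear "student"
# is fit to the soft outputs of a linear "teacher" by minimising the
# cross-entropy between teacher and student probabilities with gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_teacher = rng.normal(size=d)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


teacher_probs = sigmoid(X @ w_teacher)       # soft labels from the teacher

w_student = np.zeros(d)
for _ in range(2000):                        # gradient descent on cross-entropy
    student_probs = sigmoid(X @ w_student)
    grad = X.T @ (student_probs - teacher_probs) / n
    w_student -= 0.5 * grad

# The student's direction should align closely with the teacher's.
cos = w_student @ w_teacher / (np.linalg.norm(w_student) * np.linalg.norm(w_teacher))
print(round(float(cos), 3))
```

In this toy setup the soft labels carry the teacher's full decision geometry, which is the intuition behind the fast-convergence bound described in the abstract.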