---
_id: '14771'
abstract:
- lang: eng
text: Pruning—that is, setting a significant subset of the parameters of a neural
network to zero—is one of the most popular methods of model compression. Yet,
several recent works have raised the issue that pruning may induce or exacerbate
bias in the output of the compressed model. Despite existing evidence for this
phenomenon, the relationship between neural network pruning and induced bias is
not well-understood. In this work, we systematically investigate and characterize
this phenomenon in Convolutional Neural Networks for computer vision. First, we
show that it is in fact possible to obtain highly-sparse models, e.g. with less
than 10% remaining weights, which do not decrease in accuracy nor substantially
increase in bias when compared to dense models. At the same time, we also find
that, at higher sparsities, pruned models exhibit higher uncertainty in their
outputs, as well as increased correlations, which we directly link to increased
bias. We propose easy-to-use criteria which, based only on the uncompressed model,
establish whether bias will increase with pruning, and identify the samples most
susceptible to biased predictions post-compression. Our code can be found at https://github.com/IST-DASLab/pruned-vision-model-bias.
acknowledgement: The authors would like to sincerely thank Sara Hooker for her feedback
during the development of this work. EI was supported in part by the FWF DK VGSCO,
grant agreement number W1260-N35. AP and DA acknowledge generous ERC support, via
Starting Grant 805223 ScaleML.
article_processing_charge: No
author:
- first_name: Eugenia B
full_name: Iofinova, Eugenia B
id: f9a17499-f6e0-11ea-865d-fdf9a3f77117
last_name: Iofinova
orcid: 0000-0002-7778-3221
- first_name: Elena-Alexandra
full_name: Peste, Elena-Alexandra
id: 32D78294-F248-11E8-B48F-1D18A9856A87
last_name: Peste
- first_name: Dan-Adrian
full_name: Alistarh, Dan-Adrian
id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
last_name: Alistarh
orcid: 0000-0003-3650-940X
citation:
ama: 'Iofinova EB, Peste E-A, Alistarh D-A. Bias in pruned vision models: In-depth
analysis and countermeasures. In: 2023 IEEE/CVF Conference on Computer Vision
and Pattern Recognition. IEEE; 2023:24364-24373. doi:10.1109/cvpr52729.2023.02334'
apa: 'Iofinova, E. B., Peste, E.-A., & Alistarh, D.-A. (2023). Bias in pruned
vision models: In-depth analysis and countermeasures. In 2023 IEEE/CVF Conference
on Computer Vision and Pattern Recognition (pp. 24364–24373). Vancouver, BC,
Canada: IEEE. https://doi.org/10.1109/cvpr52729.2023.02334'
chicago: 'Iofinova, Eugenia B, Elena-Alexandra Peste, and Dan-Adrian Alistarh. “Bias
in Pruned Vision Models: In-Depth Analysis and Countermeasures.” In 2023 IEEE/CVF
Conference on Computer Vision and Pattern Recognition, 24364–73. IEEE, 2023.
https://doi.org/10.1109/cvpr52729.2023.02334.'
ieee: 'E. B. Iofinova, E.-A. Peste, and D.-A. Alistarh, “Bias in pruned vision models:
In-depth analysis and countermeasures,” in 2023 IEEE/CVF Conference on Computer
Vision and Pattern Recognition, Vancouver, BC, Canada, 2023, pp. 24364–24373.'
ista: 'Iofinova EB, Peste E-A, Alistarh D-A. 2023. Bias in pruned vision models:
In-depth analysis and countermeasures. 2023 IEEE/CVF Conference on Computer Vision
and Pattern Recognition. CVPR: Conference on Computer Vision and Pattern Recognition,
24364–24373.'
mla: 'Iofinova, Eugenia B., et al. “Bias in Pruned Vision Models: In-Depth Analysis
and Countermeasures.” 2023 IEEE/CVF Conference on Computer Vision and Pattern
Recognition, IEEE, 2023, pp. 24364–73, doi:10.1109/cvpr52729.2023.02334.'
short: E.B. Iofinova, E.-A. Peste, D.-A. Alistarh, in:, 2023 IEEE/CVF Conference
on Computer Vision and Pattern Recognition, IEEE, 2023, pp. 24364–24373.
conference:
end_date: 2023-06-24
location: Vancouver, BC, Canada
name: 'CVPR: Conference on Computer Vision and Pattern Recognition'
start_date: 2023-06-17
date_created: 2024-01-10T08:42:40Z
date_published: 2023-08-22T00:00:00Z
date_updated: 2024-01-10T08:59:26Z
day: '22'
department:
- _id: DaAl
- _id: ChLa
doi: 10.1109/cvpr52729.2023.02334
ec_funded: 1
external_id:
arxiv:
- '2304.12622'
isi:
- '001062531308068'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2304.12622
month: '08'
oa: 1
oa_version: Preprint
page: 24364-24373
project:
- _id: 9B9290DE-BA93-11EA-9121-9846C619BF3A
grant_number: 'W1260-N35'
name: Vienna Graduate School on Computational Optimization
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '805223'
name: Elastic Coordination for Scalable Machine Learning
publication: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition
publication_identifier:
eisbn:
- '9798350301298'
eissn:
- 2575-7075
publication_status: published
publisher: IEEE
quality_controlled: '1'
related_material:
link:
- relation: software
url: https://github.com/IST-DASLab/pruned-vision-model-bias
status: public
title: 'Bias in pruned vision models: In-depth analysis and countermeasures'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14921'
abstract:
- lang: eng
text: Neural collapse (NC) refers to the surprising structure of the last layer
of deep neural networks in the terminal phase of gradient descent training. Recently,
an increasing amount of experimental evidence has pointed to the propagation of
NC to earlier layers of neural networks. However, while the NC in the last layer
is well studied theoretically, much less is known about its multi-layered counterpart
- deep neural collapse (DNC). In particular, existing work focuses either on linear
layers or only on the last two layers at the price of an extra assumption. Our
paper fills this gap by generalizing the established analytical framework for
NC - the unconstrained features model - to multiple non-linear layers. Our key
technical contribution is to show that, in a deep unconstrained features model,
the unique global optimum for binary classification exhibits all the properties
typical of DNC. This explains the existing experimental evidence of DNC. We also
empirically show that (i) by optimizing deep unconstrained features models via
gradient descent, the resulting solution agrees well with our theory, and (ii)
trained networks recover the unconstrained features suitable for the occurrence
of DNC, thus supporting the validity of this modeling principle.
acknowledgement: M. M. is partially supported by the 2019 Lopez-Loreta Prize. The
authors would like to thank Eugenia Iofinova, Bernd Prach and Simone Bombari for
valuable feedback on the manuscript.
alternative_title:
- NeurIPS
article_processing_charge: No
author:
- first_name: Peter
full_name: Súkeník, Peter
id: d64d6a8d-eb8e-11eb-b029-96fd216dec3c
last_name: Súkeník
- first_name: Marco
full_name: Mondelli, Marco
id: 27EB676C-8706-11E9-9510-7717E6697425
last_name: Mondelli
orcid: 0000-0002-3242-7020
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Súkeník P, Mondelli M, Lampert C. Deep neural collapse is provably optimal
for the deep unconstrained features model. In: 37th Annual Conference on Neural
Information Processing Systems.'
apa: Súkeník, P., Mondelli, M., & Lampert, C. (n.d.). Deep neural collapse is
provably optimal for the deep unconstrained features model. In 37th Annual
Conference on Neural Information Processing Systems. New Orleans, LA, United
States.
chicago: Súkeník, Peter, Marco Mondelli, and Christoph Lampert. “Deep Neural Collapse
Is Provably Optimal for the Deep Unconstrained Features Model.” In 37th Annual
Conference on Neural Information Processing Systems, n.d.
ieee: P. Súkeník, M. Mondelli, and C. Lampert, “Deep neural collapse is provably
optimal for the deep unconstrained features model,” in 37th Annual Conference
on Neural Information Processing Systems, New Orleans, LA, United States.
ista: 'Súkeník P, Mondelli M, Lampert C. Deep neural collapse is provably optimal
for the deep unconstrained features model. 37th Annual Conference on Neural Information
Processing Systems. NeurIPS: Neural Information Processing Systems, NeurIPS.'
mla: Súkeník, Peter, et al. “Deep Neural Collapse Is Provably Optimal for the Deep
Unconstrained Features Model.” 37th Annual Conference on Neural Information
Processing Systems.
short: P. Súkeník, M. Mondelli, C. Lampert, in:, 37th Annual Conference on Neural
Information Processing Systems, n.d.
conference:
end_date: 2023-12-16
location: New Orleans, LA, United States
name: 'NeurIPS: Neural Information Processing Systems'
start_date: 2023-12-10
date_created: 2024-02-02T11:17:41Z
date_published: 2023-12-15T00:00:00Z
date_updated: 2024-02-06T07:53:26Z
day: '15'
department:
- _id: MaMo
- _id: ChLa
external_id:
arxiv:
- '2305.13165'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2305.13165
month: '12'
oa: 1
oa_version: Preprint
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
name: Prix Lopez-Loreta 2019 - Marco Mondelli
publication: 37th Annual Conference on Neural Information Processing Systems
publication_status: inpress
quality_controlled: '1'
status: public
title: Deep neural collapse is provably optimal for the deep unconstrained features
model
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '15039'
abstract:
- lang: eng
text: 'A crucial property for achieving secure, trustworthy and interpretable deep
learning systems is their robustness: small changes to a system''s inputs should
not result in large changes to its outputs. Mathematically, this means one strives
for networks with a small Lipschitz constant. Several recent works have focused
on how to construct such Lipschitz networks, typically by imposing constraints
on the weight matrices. In this work, we study an orthogonal aspect, namely the
role of the activation function. We show that commonly used activation functions,
such as MaxMin, as well as all piece-wise linear ones with two segments unnecessarily
restrict the class of representable functions, even in the simplest one-dimensional
setting. We furthermore introduce the new N-activation function that is provably
more expressive than currently popular activation functions. We provide code at
this https URL.'
article_number: '2311.06103'
article_processing_charge: No
author:
- first_name: Bernd
full_name: Prach, Bernd
id: 2D561D42-C427-11E9-89B4-9C1AE6697425
last_name: Prach
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Prach B, Lampert C. 1-Lipschitz neural networks are more expressive with N-activations.
arXiv. doi:10.48550/ARXIV.2311.06103
apa: Prach, B., & Lampert, C. (n.d.). 1-Lipschitz neural networks are more expressive
with N-activations. arXiv. https://doi.org/10.48550/ARXIV.2311.06103
chicago: Prach, Bernd, and Christoph Lampert. “1-Lipschitz Neural Networks Are More
Expressive with N-Activations.” ArXiv, n.d. https://doi.org/10.48550/ARXIV.2311.06103.
ieee: B. Prach and C. Lampert, “1-Lipschitz neural networks are more expressive
with N-activations,” arXiv.
ista: Prach B, Lampert C. 1-Lipschitz neural networks are more expressive with N-activations.
arXiv, 2311.06103.
mla: Prach, Bernd, and Christoph Lampert. “1-Lipschitz Neural Networks Are More
Expressive with N-Activations.” ArXiv, 2311.06103, doi:10.48550/ARXIV.2311.06103.
short: B. Prach, C. Lampert, ArXiv (n.d.).
date_created: 2024-02-28T17:59:32Z
date_published: 2023-11-10T00:00:00Z
date_updated: 2024-03-04T07:02:39Z
day: '10'
department:
- _id: GradSch
- _id: ChLa
doi: 10.48550/ARXIV.2311.06103
external_id:
arxiv:
- '2311.06103'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2311.06103
month: '11'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: 1-Lipschitz neural networks are more expressive with N-activations
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '12660'
abstract:
- lang: eng
text: 'We present Cross-Client Label Propagation (XCLP), a new method for transductive
federated learning. XCLP estimates a data graph jointly from the data of multiple
clients and computes labels for the unlabeled data by propagating label information
across the graph. To avoid clients having to share their data with anyone, XCLP
employs two cryptographically secure protocols: secure Hamming distance computation
and secure summation. We demonstrate two distinct applications of XCLP within
federated learning. In the first, we use it in a one-shot way to predict labels
for unseen test points. In the second, we use it to repeatedly pseudo-label unlabeled
training data in a federated semi-supervised setting. Experiments on both real
federated and standard benchmark datasets show that in both applications XCLP
achieves higher classification accuracy than alternative approaches.'
article_number: '2210.06434'
article_processing_charge: No
author:
- first_name: Jonathan A
full_name: Scott, Jonathan A
id: e499926b-f6e0-11ea-865d-9c63db0031e8
last_name: Scott
- first_name: Michelle X
full_name: Yeo, Michelle X
id: 2D82B818-F248-11E8-B48F-1D18A9856A87
last_name: Yeo
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Scott JA, Yeo MX, Lampert C. Cross-client Label Propagation for transductive
federated learning. arXiv. doi:10.48550/arXiv.2210.06434
apa: Scott, J. A., Yeo, M. X., & Lampert, C. (n.d.). Cross-client Label Propagation
for transductive federated learning. arXiv. https://doi.org/10.48550/arXiv.2210.06434
chicago: Scott, Jonathan A, Michelle X Yeo, and Christoph Lampert. “Cross-Client
Label Propagation for Transductive Federated Learning.” ArXiv, n.d. https://doi.org/10.48550/arXiv.2210.06434.
ieee: J. A. Scott, M. X. Yeo, and C. Lampert, “Cross-client Label Propagation for
transductive federated learning,” arXiv.
ista: Scott JA, Yeo MX, Lampert C. Cross-client Label Propagation for transductive
federated learning. arXiv, 2210.06434.
mla: Scott, Jonathan A., et al. “Cross-Client Label Propagation for Transductive
Federated Learning.” ArXiv, 2210.06434, doi:10.48550/arXiv.2210.06434.
short: J.A. Scott, M.X. Yeo, C. Lampert, ArXiv (n.d.).
date_created: 2023-02-20T08:21:50Z
date_published: 2022-10-12T00:00:00Z
date_updated: 2023-02-21T08:20:18Z
day: '12'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.48550/arXiv.2210.06434
external_id:
arxiv:
- '2210.06434'
file:
- access_level: open_access
checksum: 7ab20543fd4393f14fb857ce2e4f03c6
content_type: application/pdf
creator: chl
date_created: 2023-02-20T08:21:35Z
date_updated: 2023-02-20T08:21:35Z
file_id: '12661'
file_name: 2210.06434.pdf
file_size: 291893
relation: main_file
success: 1
file_date_updated: 2023-02-20T08:21:35Z
has_accepted_license: '1'
language:
- iso: eng
month: '10'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Cross-client Label Propagation for transductive federated learning
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '12662'
abstract:
- lang: eng
text: 'Modern machine learning tasks often require considering not just one but
multiple objectives. For example, besides the prediction quality, this could be
the efficiency, robustness or fairness of the learned models, or any of their
combinations. Multi-objective learning offers a natural framework for handling
such problems without having to commit to early trade-offs. Surprisingly, statistical
learning theory so far offers almost no insight into the generalization properties
of multi-objective learning. In this work, we make first steps to fill this gap:
we establish foundational generalization bounds for the multi-objective setting
as well as generalization and excess bounds for learning with scalarizations.
We also provide the first theoretical analysis of the relation between the Pareto-optimal
sets of the true objectives and the Pareto-optimal sets of their empirical approximations
from training data. In particular, we show a surprising asymmetry: all Pareto-optimal
solutions can be approximated by empirically Pareto-optimal ones, but not vice
versa.'
article_number: '2208.13499'
article_processing_charge: No
author:
- first_name: Peter
full_name: Súkeník, Peter
id: d64d6a8d-eb8e-11eb-b029-96fd216dec3c
last_name: Súkeník
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Súkeník P, Lampert C. Generalization in Multi-objective machine learning. arXiv.
doi:10.48550/arXiv.2208.13499
apa: Súkeník, P., & Lampert, C. (n.d.). Generalization in Multi-objective machine
learning. arXiv. https://doi.org/10.48550/arXiv.2208.13499
chicago: Súkeník, Peter, and Christoph Lampert. “Generalization in Multi-Objective
Machine Learning.” ArXiv, n.d. https://doi.org/10.48550/arXiv.2208.13499.
ieee: P. Súkeník and C. Lampert, “Generalization in Multi-objective machine learning,”
arXiv.
ista: Súkeník P, Lampert C. Generalization in Multi-objective machine learning.
arXiv, 2208.13499.
mla: Súkeník, Peter, and Christoph Lampert. “Generalization in Multi-Objective Machine
Learning.” ArXiv, 2208.13499, doi:10.48550/arXiv.2208.13499.
short: P. Súkeník, C. Lampert, ArXiv (n.d.).
date_created: 2023-02-20T08:23:06Z
date_published: 2022-08-29T00:00:00Z
date_updated: 2023-02-21T08:24:55Z
day: '29'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.48550/arXiv.2208.13499
external_id:
arxiv:
- '2208.13499'
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2208.13499
month: '08'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Generalization in Multi-objective machine learning
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '12495'
abstract:
- lang: eng
text: "Fairness-aware learning aims at constructing classifiers that not only make
accurate predictions, but also do not discriminate against specific groups. It
is a fast-growing area of\r\nmachine learning with far-reaching societal impact.
However, existing fair learning methods\r\nare vulnerable to accidental or malicious
artifacts in the training data, which can cause\r\nthem to unknowingly produce
unfair classifiers. In this work we address the problem of\r\nfair learning from
unreliable training data in the robust multisource setting, where the\r\navailable
training data comes from multiple sources, a fraction of which might not be representative
of the true data distribution. We introduce FLEA, a filtering-based algorithm\r\nthat
identifies and suppresses those data sources that would have a negative impact
on\r\nfairness or accuracy if they were used for training. As such, FLEA is not
a replacement of\r\nprior fairness-aware learning methods but rather an augmentation
that makes any of them\r\nrobust against unreliable training data. We show the
effectiveness of our approach by a\r\ndiverse range of experiments on multiple
datasets. Additionally, we prove formally that\r\n–given enough data– FLEA protects
the learner against corruptions as long as the fraction of\r\naffected data sources
is less than half. Our source code and documentation are available at\r\nhttps://github.com/ISTAustria-CVML/FLEA."
acknowledged_ssus:
- _id: ScienComp
acknowledgement: 'The authors would like to thank Bernd Prach, Elias Frantar, Alexandra
Peste, Mahdi Nikdan, and Peter Súkeník for their helpful feedback. This research
was supported by the Scientific Service Units (SSU) of IST Austria through resources
provided by Scientific Computing (SciComp). This publication was made possible by
an ETH AI Center postdoctoral fellowship granted to Nikola Konstantinov. Eugenia
Iofinova was supported in part by the FWF DK VGSCO, grant agreement number W1260-N35.'
article_processing_charge: No
article_type: original
author:
- first_name: Eugenia B
full_name: Iofinova, Eugenia B
id: f9a17499-f6e0-11ea-865d-fdf9a3f77117
last_name: Iofinova
orcid: 0000-0002-7778-3221
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Iofinova EB, Konstantinov NH, Lampert C. FLEA: Provably robust fair multisource
learning from unreliable training data. Transactions on Machine Learning Research.
2022.'
apa: 'Iofinova, E. B., Konstantinov, N. H., & Lampert, C. (2022). FLEA: Provably
robust fair multisource learning from unreliable training data. Transactions
on Machine Learning Research. ML Research Press.'
chicago: 'Iofinova, Eugenia B, Nikola H Konstantinov, and Christoph Lampert. “FLEA:
Provably Robust Fair Multisource Learning from Unreliable Training Data.” Transactions
on Machine Learning Research. ML Research Press, 2022.'
ieee: 'E. B. Iofinova, N. H. Konstantinov, and C. Lampert, “FLEA: Provably robust
fair multisource learning from unreliable training data,” Transactions on Machine
Learning Research. ML Research Press, 2022.'
ista: 'Iofinova EB, Konstantinov NH, Lampert C. 2022. FLEA: Provably robust fair
multisource learning from unreliable training data. Transactions on Machine Learning
Research.'
mla: 'Iofinova, Eugenia B., et al. “FLEA: Provably Robust Fair Multisource Learning
from Unreliable Training Data.” Transactions on Machine Learning Research,
ML Research Press, 2022.'
short: E.B. Iofinova, N.H. Konstantinov, C. Lampert, Transactions on Machine Learning
Research (2022).
date_created: 2023-02-02T20:29:57Z
date_published: 2022-12-22T00:00:00Z
date_updated: 2023-02-23T10:30:54Z
day: '22'
ddc:
- '000'
department:
- _id: ChLa
external_id:
arxiv:
- '2106.11732'
file:
- access_level: open_access
checksum: 97c8a8470759cab597abb973ca137a3b
content_type: application/pdf
creator: dernst
date_created: 2023-02-23T10:30:04Z
date_updated: 2023-02-23T10:30:04Z
file_id: '12673'
file_name: 2022_TMLR_Iofinova.pdf
file_size: 1948063
relation: main_file
success: 1
file_date_updated: 2023-02-23T10:30:04Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://openreview.net/forum?id=XsPopigZXV
month: '12'
oa: 1
oa_version: Published Version
project:
- _id: 9B9290DE-BA93-11EA-9121-9846C619BF3A
grant_number: 'W1260-N35'
name: Vienna Graduate School on Computational Optimization
publication: Transactions on Machine Learning Research
publication_identifier:
issn:
- 2835-8856
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
link:
- description: source code
relation: software
url: https://github.com/ISTAustria-CVML/FLEA
status: public
title: 'FLEA: Provably robust fair multisource learning from unreliable training data'
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '11839'
abstract:
- lang: eng
text: "It is a highly desirable property for deep networks to be robust against\r\nsmall
input changes. One popular way to achieve this property is by designing\r\nnetworks
with a small Lipschitz constant. In this work, we propose a new\r\ntechnique for
constructing such Lipschitz networks that has a number of\r\ndesirable properties:
it can be applied to any linear network layer\r\n(fully-connected or convolutional),
it provides formal guarantees on the\r\nLipschitz constant, it is easy to implement
and efficient to run, and it can be\r\ncombined with any training objective and
optimization method. In fact, our\r\ntechnique is the first one in the literature
that achieves all of these\r\nproperties simultaneously. Our main contribution
is a rescaling-based weight\r\nmatrix parametrization that guarantees each network
layer to have a Lipschitz\r\nconstant of at most 1 and results in the learned
weight matrices to be close to\r\northogonal. Hence we call such layers almost-orthogonal
Lipschitz (AOL).\r\nExperiments and ablation studies in the context of image classification
with\r\ncertified robust accuracy confirm that AOL layers achieve results that
are on\r\npar with most existing methods. Yet, they are simpler to implement and
more\r\nbroadly applicable, because they do not require computationally expensive\r\nmatrix
orthogonalization or inversion steps as part of the network\r\narchitecture. We
provide code at https://github.com/berndprach/AOL."
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Bernd
full_name: Prach, Bernd
id: 2D561D42-C427-11E9-89B4-9C1AE6697425
last_name: Prach
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Prach B, Lampert C. Almost-orthogonal layers for efficient general-purpose
Lipschitz networks. In: Computer Vision – ECCV 2022. Vol 13681. Springer
Nature; 2022:350-365. doi:10.1007/978-3-031-19803-8_21'
apa: 'Prach, B., & Lampert, C. (2022). Almost-orthogonal layers for efficient
general-purpose Lipschitz networks. In Computer Vision – ECCV 2022 (Vol.
13681, pp. 350–365). Tel Aviv, Israel: Springer Nature. https://doi.org/10.1007/978-3-031-19803-8_21'
chicago: Prach, Bernd, and Christoph Lampert. “Almost-Orthogonal Layers for Efficient
General-Purpose Lipschitz Networks.” In Computer Vision – ECCV 2022, 13681:350–65.
Springer Nature, 2022. https://doi.org/10.1007/978-3-031-19803-8_21.
ieee: B. Prach and C. Lampert, “Almost-orthogonal layers for efficient general-purpose
Lipschitz networks,” in Computer Vision – ECCV 2022, Tel Aviv, Israel,
2022, vol. 13681, pp. 350–365.
ista: 'Prach B, Lampert C. 2022. Almost-orthogonal layers for efficient general-purpose
Lipschitz networks. Computer Vision – ECCV 2022. ECCV: European Conference on
Computer Vision, LNCS, vol. 13681, 350–365.'
mla: Prach, Bernd, and Christoph Lampert. “Almost-Orthogonal Layers for Efficient
General-Purpose Lipschitz Networks.” Computer Vision – ECCV 2022, vol.
13681, Springer Nature, 2022, pp. 350–65, doi:10.1007/978-3-031-19803-8_21.
short: B. Prach, C. Lampert, in:, Computer Vision – ECCV 2022, Springer Nature,
2022, pp. 350–365.
conference:
end_date: 2022-10-27
location: Tel Aviv, Israel
name: 'ECCV: European Conference on Computer Vision'
start_date: 2022-10-23
date_created: 2022-08-12T15:09:47Z
date_published: 2022-10-23T00:00:00Z
date_updated: 2023-05-03T08:00:46Z
day: '23'
department:
- _id: GradSch
- _id: ChLa
doi: 10.1007/978-3-031-19803-8_21
external_id:
arxiv:
- '2208.03160'
intvolume: '13681'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2208.03160
month: '10'
oa: 1
oa_version: Preprint
page: 350-365
publication: Computer Vision – ECCV 2022
publication_identifier:
eisbn:
- '9783031198038'
isbn:
- '9783031198021'
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Almost-orthogonal layers for efficient general-purpose Lipschitz networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 13681
year: '2022'
...
---
_id: '10752'
abstract:
- lang: eng
text: 'The digitalization of almost all aspects of our everyday lives has led to
unprecedented amounts of data being freely available on the Internet. In particular
social media platforms provide rich sources of user-generated data, though typically
in unstructured form, and with high diversity, such as written in many different
languages. Automatically identifying meaningful information in such big data resources
and extracting it efficiently is one of the ongoing challenges of our time. A
common step for this is sentiment analysis, which forms the foundation for tasks
such as opinion mining or trend prediction. Unfortunately, publicly available
tools for this task are almost exclusively available for English-language texts.
Consequently, a large fraction of the Internet users, who do not communicate in
English, are ignored in automatized studies, a phenomenon called rare-language
discrimination. In this work we propose a technique to overcome this problem by
a truly multi-lingual model, which can be trained automatically without linguistic
knowledge or even the ability to read the many target languages. The main step
is to combine self-annotation, specifically the use of emoticons as a proxy for
labels, with multi-lingual sentence representations.To evaluate our method we
curated several large datasets from data obtained via the free Twitter streaming
API. The results show that our proposed multi-lingual training is able to achieve
sentiment predictions at the same quality level for rare languages as for frequent
ones, and in particular clearly better than what mono-lingual training achieves
on the same data.'
article_processing_charge: No
author:
- first_name: Jasmin
full_name: Lampert, Jasmin
last_name: Lampert
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Lampert J, Lampert C. Overcoming rare-language discrimination in multi-lingual
sentiment analysis. In: 2021 IEEE International Conference on Big Data.
IEEE; 2022:5185-5192. doi:10.1109/bigdata52589.2021.9672003'
apa: 'Lampert, J., & Lampert, C. (2022). Overcoming rare-language discrimination
in multi-lingual sentiment analysis. In 2021 IEEE International Conference
on Big Data (pp. 5185–5192). Orlando, FL, United States: IEEE. https://doi.org/10.1109/bigdata52589.2021.9672003'
chicago: Lampert, Jasmin, and Christoph Lampert. “Overcoming Rare-Language Discrimination
in Multi-Lingual Sentiment Analysis.” In 2021 IEEE International Conference
on Big Data, 5185–92. IEEE, 2022. https://doi.org/10.1109/bigdata52589.2021.9672003.
ieee: J. Lampert and C. Lampert, “Overcoming rare-language discrimination in multi-lingual
sentiment analysis,” in 2021 IEEE International Conference on Big Data,
Orlando, FL, United States, 2022, pp. 5185–5192.
ista: 'Lampert J, Lampert C. 2022. Overcoming rare-language discrimination in multi-lingual
sentiment analysis. 2021 IEEE International Conference on Big Data. Big Data:
International Conference on Big Data, 5185–5192.'
mla: Lampert, Jasmin, and Christoph Lampert. “Overcoming Rare-Language Discrimination
in Multi-Lingual Sentiment Analysis.” 2021 IEEE International Conference on
Big Data, IEEE, 2022, pp. 5185–92, doi:10.1109/bigdata52589.2021.9672003.
short: J. Lampert, C. Lampert, in:, 2021 IEEE International Conference on Big Data,
IEEE, 2022, pp. 5185–5192.
conference:
end_date: 2021-12-18
location: Orlando, FL, United States
name: 'Big Data: International Conference on Big Data'
start_date: 2021-12-15
date_created: 2022-02-10T14:08:23Z
date_published: 2022-01-13T00:00:00Z
date_updated: 2023-08-02T14:27:50Z
day: '13'
department:
- _id: ChLa
doi: 10.1109/bigdata52589.2021.9672003
external_id:
isi:
- '000800559505036'
isi: 1
language:
- iso: eng
month: '01'
oa_version: None
page: 5185-5192
publication: 2021 IEEE International Conference on Big Data
publication_identifier:
isbn:
- '9781665439022'
publication_status: published
publisher: IEEE
quality_controlled: '1'
status: public
title: Overcoming rare-language discrimination in multi-lingual sentiment analysis
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
year: '2022'
...
---
_id: '12161'
abstract:
- lang: eng
text: 'We introduce LIMES, a new method for learning with non-stationary streaming
data, inspired by the recent success of meta-learning. The main idea is not to
attempt to learn a single classifier that would have to work well across all occurring
data distributions, nor many separate classifiers, but to exploit a hybrid strategy:
we learn a single set of model parameters from which a specific classifier for
any specific data distribution is derived via classifier adaptation. Assuming
a multiclass classification setting with class-prior shift, the adaptation step
can be performed analytically with only the classifier’s bias terms being affected.
Another contribution of our work is an extrapolation step that predicts suitable
adaptation parameters for future time steps based on the previous data. In combination,
we obtain a lightweight procedure for learning from streaming data with varying
class distribution that adds no trainable parameters and almost no memory or computational
overhead compared to training a single model. Experiments on a set of exemplary
tasks using Twitter data show that LIMES achieves higher accuracy than alternative
approaches, especially with respect to the relevant real-world metric of lowest
within-day accuracy.'
article_processing_charge: No
author:
- first_name: Paulina
full_name: Tomaszewska, Paulina
last_name: Tomaszewska
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Tomaszewska P, Lampert C. Lightweight conditional model extrapolation for
streaming data under class-prior shift. In: 26th International Conference on
Pattern Recognition. Vol 2022. Institute of Electrical and Electronics Engineers;
2022:2128-2134. doi:10.1109/icpr56361.2022.9956195'
apa: 'Tomaszewska, P., & Lampert, C. (2022). Lightweight conditional model extrapolation
for streaming data under class-prior shift. In 26th International Conference
on Pattern Recognition (Vol. 2022, pp. 2128–2134). Montreal, Canada: Institute
of Electrical and Electronics Engineers. https://doi.org/10.1109/icpr56361.2022.9956195'
chicago: Tomaszewska, Paulina, and Christoph Lampert. “Lightweight Conditional Model
Extrapolation for Streaming Data under Class-Prior Shift.” In 26th International
Conference on Pattern Recognition, 2022:2128–34. Institute of Electrical and
Electronics Engineers, 2022. https://doi.org/10.1109/icpr56361.2022.9956195.
ieee: P. Tomaszewska and C. Lampert, “Lightweight conditional model extrapolation
for streaming data under class-prior shift,” in 26th International Conference
on Pattern Recognition, Montreal, Canada, 2022, vol. 2022, pp. 2128–2134.
ista: 'Tomaszewska P, Lampert C. 2022. Lightweight conditional model extrapolation
for streaming data under class-prior shift. 26th International Conference on Pattern
Recognition. ICPR: International Conference on Pattern Recognition vol. 2022,
2128–2134.'
mla: Tomaszewska, Paulina, and Christoph Lampert. “Lightweight Conditional Model
Extrapolation for Streaming Data under Class-Prior Shift.” 26th International
Conference on Pattern Recognition, vol. 2022, Institute of Electrical and
Electronics Engineers, 2022, pp. 2128–34, doi:10.1109/icpr56361.2022.9956195.
short: P. Tomaszewska, C. Lampert, in:, 26th International Conference on Pattern
Recognition, Institute of Electrical and Electronics Engineers, 2022, pp. 2128–2134.
conference:
end_date: 2022-08-25
location: Montreal, Canada
name: 'ICPR: International Conference on Pattern Recognition'
start_date: 2022-08-21
date_created: 2023-01-12T12:09:38Z
date_published: 2022-11-29T00:00:00Z
date_updated: 2023-08-04T09:06:34Z
day: '29'
department:
- _id: ChLa
doi: 10.1109/icpr56361.2022.9956195
external_id:
arxiv:
- '2206.05181'
isi:
- '000897707602018'
intvolume: ' 2022'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2206.05181
month: '11'
oa: 1
oa_version: Preprint
page: 2128-2134
publication: 26th International Conference on Pattern Recognition
publication_identifier:
eisbn:
- '9781665490627'
eissn:
- 2831-7475
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
scopus_import: '1'
status: public
title: Lightweight conditional model extrapolation for streaming data under class-prior
shift
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 2022
year: '2022'
...
---
_id: '12299'
abstract:
- lang: eng
text: 'Transfer learning is a classic paradigm by which models pretrained on large
“upstream” datasets are adapted to yield good results on “downstream” specialized
datasets. Generally, more accurate models on the “upstream” dataset tend to provide
better transfer accuracy “downstream”. In this work, we perform an in-depth investigation
of this phenomenon in the context of convolutional neural networks (CNNs) trained
on the ImageNet dataset, which have been pruned, that is, compressed by sparsifying
their connections. We consider transfer using unstructured pruned models obtained
by applying several state-of-the-art pruning methods, including magnitude-based,
second-order, regrowth, lottery-ticket, and regularization approaches, in the
context of twelve standard transfer tasks. In a nutshell, our study shows that
sparse models can match or even outperform the transfer performance of dense models,
even at high sparsities, and, while doing so, can lead to significant inference
and even training speedups. At the same time, we observe and analyze significant
differences in the behaviour of different pruning methods. The code is available
at: https://github.com/IST-DASLab/sparse-imagenet-transfer.'
acknowledgement: The authors would like to sincerely thank Christoph Lampert and Nir
Shavit for fruitful discussions during the development of this work, and Eldar Kurtic
for experimental support. EI was supported in part by the FWF DK VGSCO, grant agreement
number W1260-N35, while AP and DA acknowledge generous support by the ERC, via Starting
Grant 805223 ScaleML.
article_processing_charge: No
author:
- first_name: Eugenia B
full_name: Iofinova, Eugenia B
id: f9a17499-f6e0-11ea-865d-fdf9a3f77117
last_name: Iofinova
orcid: 0000-0002-7778-3221
- first_name: Elena-Alexandra
full_name: Peste, Elena-Alexandra
id: 32D78294-F248-11E8-B48F-1D18A9856A87
last_name: Peste
- first_name: Mark
full_name: Kurtz, Mark
last_name: Kurtz
- first_name: Dan-Adrian
full_name: Alistarh, Dan-Adrian
id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
last_name: Alistarh
orcid: 0000-0003-3650-940X
citation:
ama: 'Iofinova EB, Peste E-A, Kurtz M, Alistarh D-A. How well do sparse ImageNet
models transfer? In: 2022 IEEE/CVF Conference on Computer Vision and Pattern
Recognition. Institute of Electrical and Electronics Engineers; 2022:12256-12266.
doi:10.1109/cvpr52688.2022.01195'
apa: 'Iofinova, E. B., Peste, E.-A., Kurtz, M., & Alistarh, D.-A. (2022). How
well do sparse ImageNet models transfer? In 2022 IEEE/CVF Conference on Computer
Vision and Pattern Recognition (pp. 12256–12266). New Orleans, LA, United
States: Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/cvpr52688.2022.01195'
chicago: Iofinova, Eugenia B, Elena-Alexandra Peste, Mark Kurtz, and Dan-Adrian
Alistarh. “How Well Do Sparse ImageNet Models Transfer?” In 2022 IEEE/CVF Conference
on Computer Vision and Pattern Recognition, 12256–66. Institute of Electrical
and Electronics Engineers, 2022. https://doi.org/10.1109/cvpr52688.2022.01195.
ieee: E. B. Iofinova, E.-A. Peste, M. Kurtz, and D.-A. Alistarh, “How well do sparse
ImageNet models transfer?,” in 2022 IEEE/CVF Conference on Computer Vision
and Pattern Recognition, New Orleans, LA, United States, 2022, pp. 12256–12266.
ista: 'Iofinova EB, Peste E-A, Kurtz M, Alistarh D-A. 2022. How well do sparse ImageNet
models transfer? 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
CVPR: Computer Vision and Pattern Recognition, 12256–12266.'
mla: Iofinova, Eugenia B., et al. “How Well Do Sparse ImageNet Models Transfer?”
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Institute
of Electrical and Electronics Engineers, 2022, pp. 12256–66, doi:10.1109/cvpr52688.2022.01195.
short: E.B. Iofinova, E.-A. Peste, M. Kurtz, D.-A. Alistarh, in:, 2022 IEEE/CVF
Conference on Computer Vision and Pattern Recognition, Institute of Electrical
and Electronics Engineers, 2022, pp. 12256–12266.
conference:
end_date: 2022-06-24
location: New Orleans, LA, United States
name: 'CVPR: Computer Vision and Pattern Recognition'
start_date: 2022-06-18
date_created: 2023-01-16T10:06:00Z
date_published: 2022-09-27T00:00:00Z
date_updated: 2023-08-04T10:33:28Z
day: '27'
department:
- _id: DaAl
- _id: ChLa
doi: 10.1109/cvpr52688.2022.01195
ec_funded: 1
external_id:
arxiv:
- '2111.13445'
isi:
- '000870759105034'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2111.13445
month: '09'
oa: 1
oa_version: Preprint
page: 12256-12266
project:
- _id: 9B9290DE-BA93-11EA-9121-9846C619BF3A
grant_number: ' W1260-N35'
name: Vienna Graduate School on Computational Optimization
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '805223'
name: Elastic Coordination for Scalable Machine Learning
publication: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition
publication_identifier:
eissn:
- 2575-7075
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
related_material:
record:
- id: '13074'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: How well do sparse ImageNet models transfer?
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
year: '2022'
...
---
_id: '10802'
abstract:
- lang: eng
text: "Addressing fairness concerns about machine learning models is a crucial step
towards their long-term adoption in real-world automated systems. While many approaches
have been developed for training fair models from data, little is known about
the robustness of these methods to data corruption. In this work we consider fairness-aware
learning under worst-case data manipulations. We show that an adversary can in
some situations force any learner to return an overly biased classifier, regardless
of the sample size and with or without degrading\r\naccuracy, and that the strength
of the excess bias increases for learning problems with underrepresented protected
groups in the data. We also prove that our hardness results are tight up to constant
factors. To this end, we study two natural learning algorithms that optimize for
both accuracy and fairness and show that these algorithms enjoy guarantees that
are order-optimal in terms of the corruption ratio and the protected groups frequencies
in the large data\r\nlimit."
acknowledgement: The authors thank Eugenia Iofinova and Bernd Prach for providing
feedback on early versions of this paper. This publication was made possible by
an ETH AI Center postdoctoral fellowship to Nikola Konstantinov.
article_processing_charge: No
article_type: original
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0002-4561-241X
citation:
ama: Konstantinov NH, Lampert C. Fairness-aware PAC learning from corrupted data.
Journal of Machine Learning Research. 2022;23:1-60.
apa: Konstantinov, N. H., & Lampert, C. (2022). Fairness-aware PAC learning
from corrupted data. Journal of Machine Learning Research. ML Research
Press.
chicago: Konstantinov, Nikola H, and Christoph Lampert. “Fairness-Aware PAC Learning
from Corrupted Data.” Journal of Machine Learning Research. ML Research
Press, 2022.
ieee: N. H. Konstantinov and C. Lampert, “Fairness-aware PAC learning from corrupted
data,” Journal of Machine Learning Research, vol. 23. ML Research Press,
pp. 1–60, 2022.
ista: Konstantinov NH, Lampert C. 2022. Fairness-aware PAC learning from corrupted
data. Journal of Machine Learning Research. 23, 1–60.
mla: Konstantinov, Nikola H., and Christoph Lampert. “Fairness-Aware PAC Learning
from Corrupted Data.” Journal of Machine Learning Research, vol. 23, ML
Research Press, 2022, pp. 1–60.
short: N.H. Konstantinov, C. Lampert, Journal of Machine Learning Research 23 (2022)
1–60.
date_created: 2022-02-28T14:05:42Z
date_published: 2022-05-01T00:00:00Z
date_updated: 2023-09-26T10:44:37Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
external_id:
arxiv:
- '2102.06004'
file:
- access_level: open_access
checksum: 9cac897b54a0ddf3a553a2c33e88cfda
content_type: application/pdf
creator: kschuh
date_created: 2022-07-12T15:08:28Z
date_updated: 2022-07-12T15:08:28Z
file_id: '11570'
file_name: 2022_JournalMachineLearningResearch_Konstantinov.pdf
file_size: 551862
relation: main_file
success: 1
file_date_updated: 2022-07-12T15:08:28Z
has_accepted_license: '1'
intvolume: ' 23'
keyword:
- Fairness
- robustness
- data poisoning
- trustworthy machine learning
- PAC learning
language:
- iso: eng
month: '05'
oa: 1
oa_version: Published Version
page: 1-60
publication: Journal of Machine Learning Research
publication_identifier:
eissn:
- 1533-7928
issn:
- 1532-4435
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
record:
- id: '10799'
relation: dissertation_contains
status: public
- id: '13241'
relation: shorter_version
status: public
scopus_import: '1'
status: public
title: Fairness-aware PAC learning from corrupted data
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 23
year: '2022'
...
---
_id: '13241'
abstract:
- lang: eng
text: Addressing fairness concerns about machine learning models is a crucial step
towards their long-term adoption in real-world automated systems. Many approaches
for training fair models from data have been developed and an implicit assumption
about such algorithms is that they are able to recover a fair model, despite potential
historical biases in the data. In this work we show a number of impossibility
results that indicate that there is no learning algorithm that can recover a fair
model when a proportion of the dataset is subject to arbitrary manipulations.
Specifically, we prove that there are situations in which an adversary can force
any learner to return a biased classifier, with or without degrading accuracy,
and that the strength of this bias increases for learning problems with underrepresented
protected groups in the data. Our results emphasize the importance of studying
further data corruption models of various strength and of establishing stricter
data collection practices for fairness-aware learning.
acknowledgement: "This paper is a shortened, workshop version of Konstantinov and
Lampert (2021),\r\nhttps://arxiv.org/abs/2102.06004. For further results, including
an analysis of algorithms achieving the lower bounds from this paper, we refer to
the full version."
article_processing_charge: No
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Konstantinov NH, Lampert C. On the impossibility of fairness-aware learning
from corrupted data. In: Proceedings of Machine Learning Research. Vol
171. ML Research Press; 2022:59-83.'
apa: Konstantinov, N. H., & Lampert, C. (2022). On the impossibility of fairness-aware
learning from corrupted data. In Proceedings of Machine Learning Research
(Vol. 171, pp. 59–83). ML Research Press.
chicago: Konstantinov, Nikola H, and Christoph Lampert. “On the Impossibility of
Fairness-Aware Learning from Corrupted Data.” In Proceedings of Machine Learning
Research, 171:59–83. ML Research Press, 2022.
ieee: N. H. Konstantinov and C. Lampert, “On the impossibility of fairness-aware
learning from corrupted data,” in Proceedings of Machine Learning Research,
2022, vol. 171, pp. 59–83.
ista: Konstantinov NH, Lampert C. 2022. On the impossibility of fairness-aware learning
from corrupted data. Proceedings of Machine Learning Research. vol. 171, 59–83.
mla: Konstantinov, Nikola H., and Christoph Lampert. “On the Impossibility of Fairness-Aware
Learning from Corrupted Data.” Proceedings of Machine Learning Research,
vol. 171, ML Research Press, 2022, pp. 59–83.
short: N.H. Konstantinov, C. Lampert, in:, Proceedings of Machine Learning Research,
ML Research Press, 2022, pp. 59–83.
date_created: 2023-07-16T22:01:13Z
date_published: 2022-12-01T00:00:00Z
date_updated: 2023-09-26T10:44:37Z
day: '01'
department:
- _id: ChLa
external_id:
arxiv:
- '2102.06004'
intvolume: ' 171'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2102.06004
month: '12'
oa: 1
oa_version: Preprint
page: 59-83
publication: Proceedings of Machine Learning Research
publication_identifier:
eissn:
- 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
record:
- id: '10802'
relation: extended_version
status: public
scopus_import: '1'
status: public
title: On the impossibility of fairness-aware learning from corrupted data
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 171
year: '2022'
...
---
_id: '10799'
abstract:
- lang: eng
text: "Because of the increasing popularity of machine learning methods, it is becoming
important to understand the impact of learned components on automated decision-making
systems and to guarantee that their consequences are beneficial to society. In
other words, it is necessary to ensure that machine learning is sufficiently trustworthy
to be used in real-world applications. This thesis studies two properties of machine
learning models that are highly desirable for the\r\nsake of reliability: robustness
and fairness. In the first part of the thesis we study the robustness of learning
algorithms to training data corruption. Previous work has shown that machine learning
models are vulnerable to a range\r\nof training set issues, varying from label
noise through systematic biases to worst-case data manipulations. This is an especially
relevant problem from a present perspective, since modern machine learning methods
are particularly data hungry and therefore practitioners often have to rely on
data collected from various external sources, e.g. from the Internet, from app
users or via crowdsourcing. Naturally, such sources vary greatly in the quality
and reliability of the\r\ndata they provide. With these considerations in mind,
we study the problem of designing machine learning algorithms that are robust
to corruptions in data coming from multiple sources. We show that, in contrast
to the case of a single dataset with outliers, successful learning within this
model is possible both theoretically and practically, even under worst-case data
corruptions. The second part of this thesis deals with fairness-aware machine
learning. There are multiple areas where machine learning models have shown promising
results, but where careful considerations are required, in order to avoid discriminative
decisions taken by such learned components. Ensuring fairness can be particularly
challenging, because real-world training datasets are expected to contain various
forms of historical bias that may affect the learning process. In this thesis
we show that data corruption can indeed render the problem of achieving fairness
impossible, by tightly characterizing the theoretical limits of fair learning
under worst-case data manipulations. However, assuming access to clean data, we
also show how fairness-aware learning can be made practical in contexts beyond
binary classification, in particular in the challenging learning to rank setting."
alternative_title:
- ISTA Thesis
article_processing_charge: No
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
citation:
ama: Konstantinov NH. Robustness and fairness in machine learning. 2022. doi:10.15479/at:ista:10799
apa: Konstantinov, N. H. (2022). Robustness and fairness in machine learning.
Institute of Science and Technology Austria. https://doi.org/10.15479/at:ista:10799
chicago: Konstantinov, Nikola H. “Robustness and Fairness in Machine Learning.”
Institute of Science and Technology Austria, 2022. https://doi.org/10.15479/at:ista:10799.
ieee: N. H. Konstantinov, “Robustness and fairness in machine learning,” Institute
of Science and Technology Austria, 2022.
ista: Konstantinov NH. 2022. Robustness and fairness in machine learning. Institute
of Science and Technology Austria.
mla: Konstantinov, Nikola H. Robustness and Fairness in Machine Learning.
Institute of Science and Technology Austria, 2022, doi:10.15479/at:ista:10799.
short: N.H. Konstantinov, Robustness and Fairness in Machine Learning, Institute
of Science and Technology Austria, 2022.
date_created: 2022-02-28T13:03:49Z
date_published: 2022-03-08T00:00:00Z
date_updated: 2023-10-17T12:31:54Z
day: '08'
ddc:
- '000'
degree_awarded: PhD
department:
- _id: GradSch
- _id: ChLa
doi: 10.15479/at:ista:10799
ec_funded: 1
file:
- access_level: open_access
checksum: 626bc523ae8822d20e635d0e2d95182e
content_type: application/pdf
creator: nkonstan
date_created: 2022-03-06T11:42:54Z
date_updated: 2022-03-06T11:42:54Z
file_id: '10823'
file_name: thesis.pdf
file_size: 4204905
relation: main_file
success: 1
- access_level: closed
checksum: e2ca2b88350ac8ea1515b948885cbcb1
content_type: application/x-zip-compressed
creator: nkonstan
date_created: 2022-03-06T11:42:57Z
date_updated: 2022-03-10T12:11:48Z
file_id: '10824'
file_name: thesis.zip
file_size: 22841103
relation: source_file
file_date_updated: 2022-03-10T12:11:48Z
has_accepted_license: '1'
keyword:
- robustness
- fairness
- machine learning
- PAC learning
- adversarial learning
language:
- iso: eng
month: '03'
oa: 1
oa_version: Published Version
page: '176'
project:
- _id: 2564DBCA-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '665385'
name: International IST Doctoral Program
publication_identifier:
isbn:
- 978-3-99078-015-2
issn:
- 2663-337X
publication_status: published
publisher: Institute of Science and Technology Austria
related_material:
record:
- id: '8724'
relation: part_of_dissertation
status: public
- id: '10803'
relation: part_of_dissertation
status: public
- id: '10802'
relation: part_of_dissertation
status: public
- id: '6590'
relation: part_of_dissertation
status: public
status: public
supervisor:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
title: Robustness and fairness in machine learning
type: dissertation
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2022'
...
---
_id: '9210'
abstract:
- lang: eng
text: "Modern neural networks can easily fit their training set perfectly. Surprisingly,
despite being “overfit” in this way, they tend to generalize well to future data,
thereby defying the classic bias–variance trade-off of machine learning theory.
Of the many possible explanations, a prevalent one is that training by stochastic
gradient descent (SGD) imposes an implicit bias that leads it to learn simple
functions, and these simple functions generalize well. However, the specifics
of this implicit bias are not well understood.\r\nIn this work, we explore the
smoothness conjecture which states that SGD is implicitly biased towards learning
functions that are smooth. We propose several measures to formalize the intuitive
notion of smoothness, and we conduct experiments to determine whether SGD indeed
implicitly optimizes for these measures. Our findings rule out the possibility
that smoothness measures based on first-order derivatives are being implicitly
enforced. They are supportive, though, of the smoothness conjecture for measures
based on second-order derivatives."
article_processing_charge: No
author:
- first_name: Vaclav
full_name: Volhejn, Vaclav
id: d5235fb4-7a6d-11eb-b254-f25d12d631a8
last_name: Volhejn
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Volhejn V, Lampert C. Does SGD implicitly optimize for smoothness? In: 42nd
German Conference on Pattern Recognition. Vol 12544. LNCS. Springer; 2021:246-259.
doi:10.1007/978-3-030-71278-5_18'
apa: 'Volhejn, V., & Lampert, C. (2021). Does SGD implicitly optimize for smoothness?
In 42nd German Conference on Pattern Recognition (Vol. 12544, pp. 246–259).
Tübingen, Germany: Springer. https://doi.org/10.1007/978-3-030-71278-5_18'
chicago: Volhejn, Vaclav, and Christoph Lampert. “Does SGD Implicitly Optimize for
Smoothness?” In 42nd German Conference on Pattern Recognition, 12544:246–59.
LNCS. Springer, 2021. https://doi.org/10.1007/978-3-030-71278-5_18.
ieee: V. Volhejn and C. Lampert, “Does SGD implicitly optimize for smoothness?,”
in 42nd German Conference on Pattern Recognition, Tübingen, Germany, 2021,
vol. 12544, pp. 246–259.
ista: 'Volhejn V, Lampert C. 2021. Does SGD implicitly optimize for smoothness?
42nd German Conference on Pattern Recognition. DAGM GCPR: German Conference on
Pattern Recognition LNCS vol. 12544, 246–259.'
mla: Volhejn, Vaclav, and Christoph Lampert. “Does SGD Implicitly Optimize for Smoothness?”
42nd German Conference on Pattern Recognition, vol. 12544, Springer, 2021,
pp. 246–59, doi:10.1007/978-3-030-71278-5_18.
short: V. Volhejn, C. Lampert, in:, 42nd German Conference on Pattern Recognition,
Springer, 2021, pp. 246–259.
conference:
end_date: 2020-10-01
location: Tübingen, Germany
name: 'DAGM GCPR: German Conference on Pattern Recognition'
start_date: 2020-09-28
date_created: 2021-03-01T09:01:16Z
date_published: 2021-03-17T00:00:00Z
date_updated: 2022-08-12T07:28:47Z
day: '17'
ddc:
- '510'
department:
- _id: ChLa
doi: 10.1007/978-3-030-71278-5_18
file:
- access_level: open_access
checksum: 3e3628ab1cf658d82524963f808004ea
content_type: application/pdf
creator: dernst
date_created: 2022-08-12T07:27:58Z
date_updated: 2022-08-12T07:27:58Z
file_id: '11820'
file_name: 2020_GCPR_submitted_Volhejn.pdf
file_size: 420234
relation: main_file
success: 1
file_date_updated: 2022-08-12T07:27:58Z
has_accepted_license: '1'
intvolume: ' 12544'
language:
- iso: eng
month: '03'
oa: 1
oa_version: Submitted Version
page: 246-259
publication: 42nd German Conference on Pattern Recognition
publication_identifier:
eissn:
- 1611-3349
isbn:
- '9783030712778'
issn:
- 0302-9743
publication_status: published
publisher: Springer
quality_controlled: '1'
scopus_import: '1'
series_title: LNCS
status: public
title: Does SGD implicitly optimize for smoothness?
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 12544
year: '2021'
...
---
_id: '9416'
abstract:
- lang: eng
text: 'We study the inductive bias of two-layer ReLU networks trained by gradient
flow. We identify a class of easy-to-learn (''orthogonally separable'') datasets,
and characterise the solution that ReLU networks trained on such datasets converge
to. Irrespective of network width, the solution turns out to be a combination
of two max-margin classifiers: one corresponding to the positive data subset and
one corresponding to the negative data subset. The proof is based on the recently
introduced concept of extremal sectors, for which we prove a number of properties
in the context of orthogonal separability. In particular, we prove stationarity
of activation patterns from some time onwards, which enables a reduction of the
ReLU network to an ensemble of linear subnetworks.'
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Phuong M, Lampert C. The inductive bias of ReLU networks on orthogonally separable
data. In: 9th International Conference on Learning Representations. ; 2021.'
apa: Phuong, M., & Lampert, C. (2021). The inductive bias of ReLU networks on
orthogonally separable data. In 9th International Conference on Learning Representations.
Virtual.
chicago: Phuong, Mary, and Christoph Lampert. “The Inductive Bias of ReLU Networks
on Orthogonally Separable Data.” In 9th International Conference on Learning
Representations, 2021.
ieee: M. Phuong and C. Lampert, “The inductive bias of ReLU networks on orthogonally
separable data,” in 9th International Conference on Learning Representations,
Virtual, 2021.
ista: 'Phuong M, Lampert C. 2021. The inductive bias of ReLU networks on orthogonally
separable data. 9th International Conference on Learning Representations. ICLR:
International Conference on Learning Representations.'
mla: Phuong, Mary, and Christoph Lampert. “The Inductive Bias of ReLU Networks on
Orthogonally Separable Data.” 9th International Conference on Learning Representations,
2021.
short: M. Phuong, C. Lampert, in:, 9th International Conference on Learning Representations,
2021.
conference:
end_date: 2021-05-07
location: Virtual
name: 'ICLR: International Conference on Learning Representations'
start_date: 2021-05-03
date_created: 2021-05-24T11:16:46Z
date_published: 2021-05-01T00:00:00Z
date_updated: 2023-09-07T13:29:50Z
day: '01'
ddc:
- '000'
department:
- _id: GradSch
- _id: ChLa
file:
- access_level: open_access
checksum: f34ff17017527db5ba6927f817bdd125
content_type: application/pdf
creator: bphuong
date_created: 2021-05-24T11:15:57Z
date_updated: 2021-05-24T11:15:57Z
file_id: '9417'
file_name: iclr2021_conference.pdf
file_size: 502356
relation: main_file
file_date_updated: 2021-05-24T11:15:57Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://openreview.net/pdf?id=krz7T0xU9Z_
month: '05'
oa: 1
oa_version: Published Version
publication: 9th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
related_material:
record:
- id: '9418'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: The inductive bias of ReLU networks on orthogonally separable data
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2021'
...
---
_id: '10803'
abstract:
- lang: eng
text: Given the abundance of applications of ranking in recent years, addressing
fairness concerns around automated ranking systems becomes necessary for increasing
the trust among end-users. Previous work on fair ranking has mostly focused on
application-specific fairness notions, often tailored to online advertising, and
it rarely considers learning as part of the process. In this work, we show how
to transfer numerous fairness notions from binary classification to a learning
to rank setting. Our formalism allows us to design methods for incorporating fairness
objectives with provable generalization guarantees. An extensive experimental
evaluation shows that our method can improve ranking fairness substantially with
no or only little loss of model quality.
article_number: '2102.05996'
article_processing_charge: No
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0002-4561-241X
citation:
ama: Konstantinov NH, Lampert C. Fairness through regularization for learning to
rank. arXiv. doi:10.48550/arXiv.2102.05996
apa: Konstantinov, N. H., & Lampert, C. (n.d.). Fairness through regularization
for learning to rank. arXiv. https://doi.org/10.48550/arXiv.2102.05996
chicago: Konstantinov, Nikola H, and Christoph Lampert. “Fairness through Regularization
for Learning to Rank.” ArXiv, n.d. https://doi.org/10.48550/arXiv.2102.05996.
ieee: N. H. Konstantinov and C. Lampert, “Fairness through regularization for learning
to rank,” arXiv.
ista: Konstantinov NH, Lampert C. Fairness through regularization for learning to
rank. arXiv, 2102.05996.
mla: Konstantinov, Nikola H., and Christoph Lampert. “Fairness through Regularization
for Learning to Rank.” ArXiv, 2102.05996, doi:10.48550/arXiv.2102.05996.
short: N.H. Konstantinov, C. Lampert, ArXiv (n.d.).
date_created: 2022-02-28T14:13:59Z
date_published: 2021-06-07T00:00:00Z
date_updated: 2023-09-07T13:42:08Z
day: '07'
department:
- _id: ChLa
doi: 10.48550/arXiv.2102.05996
external_id:
arxiv:
- '2102.05996'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2102.05996
month: '06'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
related_material:
record:
- id: '10799'
relation: dissertation_contains
status: public
status: public
title: Fairness through regularization for learning to rank
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2021'
...
---
_id: '9418'
abstract:
- lang: eng
text: "Deep learning is best known for its empirical success across a wide range
of applications\r\nspanning computer vision, natural language processing and speech.
Of equal significance,\r\nthough perhaps less known, are its ramifications for
learning theory: deep networks have\r\nbeen observed to perform surprisingly well
in the high-capacity regime, aka the overfitting\r\nor underspecified regime.
Classically, this regime on the far right of the bias-variance curve\r\nis associated
with poor generalisation; however, recent experiments with deep networks\r\nchallenge
this view.\r\n\r\nThis thesis is devoted to investigating various aspects of underspecification
in deep learning.\r\nFirst, we argue that deep learning models are underspecified
on two levels: a) any given\r\ntraining dataset can be fit by many different functions,
and b) any given function can be\r\nexpressed by many different parameter configurations.
We refer to the second kind of\r\nunderspecification as parameterisation redundancy
and we precisely characterise its extent.\r\nSecond, we characterise the implicit
criteria (the inductive bias) that guide learning in the\r\nunderspecified regime.
Specifically, we consider a nonlinear but tractable classification\r\nsetting,
and show that given the choice, neural networks learn classifiers with a large
margin.\r\nThird, we consider learning scenarios where the inductive bias is not
by itself sufficient to\r\ndeal with underspecification. We then study different
ways of ‘tightening the specification’: i)\r\nIn the setting of representation
learning with variational autoencoders, we propose a hand-crafted regulariser
based on mutual information. ii) In the setting of binary classification, we\r\nconsider
soft-label (real-valued) supervision. We derive a generalisation bound for linear\r\nnetworks
supervised in this way and verify that soft labels facilitate fast learning. Finally,
we\r\nexplore an application of soft-label supervision to the training of multi-exit
models."
acknowledged_ssus:
- _id: ScienComp
- _id: CampIT
- _id: E-Lib
alternative_title:
- ISTA Thesis
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
citation:
ama: Phuong M. Underspecification in deep learning. 2021. doi:10.15479/AT:ISTA:9418
apa: Phuong, M. (2021). Underspecification in deep learning. Institute of
Science and Technology Austria. https://doi.org/10.15479/AT:ISTA:9418
chicago: Phuong, Mary. “Underspecification in Deep Learning.” Institute of Science
and Technology Austria, 2021. https://doi.org/10.15479/AT:ISTA:9418.
ieee: M. Phuong, “Underspecification in deep learning,” Institute of Science and
Technology Austria, 2021.
ista: Phuong M. 2021. Underspecification in deep learning. Institute of Science
and Technology Austria.
mla: Phuong, Mary. Underspecification in Deep Learning. Institute of Science
and Technology Austria, 2021, doi:10.15479/AT:ISTA:9418.
short: M. Phuong, Underspecification in Deep Learning, Institute of Science and
Technology Austria, 2021.
date_created: 2021-05-24T13:06:23Z
date_published: 2021-05-30T00:00:00Z
date_updated: 2023-09-08T11:11:12Z
day: '30'
ddc:
- '000'
degree_awarded: PhD
department:
- _id: GradSch
- _id: ChLa
doi: 10.15479/AT:ISTA:9418
file:
- access_level: open_access
checksum: 4f0abe64114cfed264f9d36e8d1197e3
content_type: application/pdf
creator: bphuong
date_created: 2021-05-24T11:22:29Z
date_updated: 2021-05-24T11:22:29Z
file_id: '9419'
file_name: mph-thesis-v519-pdfimages.pdf
file_size: 2673905
relation: main_file
success: 1
- access_level: closed
checksum: f5699e876bc770a9b0df8345a77720a2
content_type: application/zip
creator: bphuong
date_created: 2021-05-24T11:56:02Z
date_updated: 2021-05-24T11:56:02Z
file_id: '9420'
file_name: thesis.zip
file_size: 92995100
relation: source_file
file_date_updated: 2021-05-24T11:56:02Z
has_accepted_license: '1'
language:
- iso: eng
month: '05'
oa: 1
oa_version: Published Version
page: '125'
publication_identifier:
issn:
- 2663-337X
publication_status: published
publisher: Institute of Science and Technology Austria
related_material:
record:
- id: '7435'
relation: part_of_dissertation
status: deleted
- id: '7481'
relation: part_of_dissertation
status: public
- id: '9416'
relation: part_of_dissertation
status: public
- id: '7479'
relation: part_of_dissertation
status: public
status: public
supervisor:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
title: Underspecification in deep learning
type: dissertation
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2021'
...
---
_id: '14987'
abstract:
- lang: eng
text: "The goal of zero-shot learning is to construct a classifier that can identify
object classes for which no training examples are available. When training data
for some of the object classes is available but not for others, the name generalized
zero-shot learning is commonly used.\r\nIn a wider sense, the phrase zero-shot
is also used to describe other machine learning-based approaches that require
no training data from the problem of interest, such as zero-shot action recognition
or zero-shot machine translation."
article_processing_charge: No
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Lampert C. Zero-Shot Learning. In: Ikeuchi K, ed. Computer Vision.
2nd ed. Cham: Springer; 2021:1395-1397. doi:10.1007/978-3-030-63416-2_874'
apa: 'Lampert, C. (2021). Zero-Shot Learning. In K. Ikeuchi (Ed.), Computer Vision
(2nd ed., pp. 1395–1397). Cham: Springer. https://doi.org/10.1007/978-3-030-63416-2_874'
chicago: 'Lampert, Christoph. “Zero-Shot Learning.” In Computer Vision, edited
by Katsushi Ikeuchi, 2nd ed., 1395–97. Cham: Springer, 2021. https://doi.org/10.1007/978-3-030-63416-2_874.'
ieee: 'C. Lampert, “Zero-Shot Learning,” in Computer Vision, 2nd ed., K.
Ikeuchi, Ed. Cham: Springer, 2021, pp. 1395–1397.'
ista: 'Lampert C. 2021. Zero-Shot Learning. In: Computer Vision, 1395–1397.'
mla: Lampert, Christoph. “Zero-Shot Learning.” Computer Vision, edited by
Katsushi Ikeuchi, 2nd ed., Springer, 2021, pp. 1395–97, doi:10.1007/978-3-030-63416-2_874.
short: C. Lampert, in:, K. Ikeuchi (Ed.), Computer Vision, 2nd ed., Springer, Cham,
2021, pp. 1395–1397.
date_created: 2024-02-14T14:05:32Z
date_published: 2021-10-13T00:00:00Z
date_updated: 2024-02-19T10:59:04Z
day: '13'
department:
- _id: ChLa
doi: 10.1007/978-3-030-63416-2_874
edition: '2'
editor:
- first_name: Katsushi
full_name: Ikeuchi, Katsushi
last_name: Ikeuchi
language:
- iso: eng
month: '10'
oa_version: None
page: 1395-1397
place: Cham
publication: Computer Vision
publication_identifier:
eisbn:
- '9783030634162'
isbn:
- '9783030634155'
publication_status: published
publisher: Springer
quality_controlled: '1'
status: public
title: Zero-Shot Learning
type: book_chapter
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2021'
...
---
_id: '8063'
abstract:
- lang: eng
text: "We present a generative model of images that explicitly reasons over the
set\r\nof objects they show. Our model learns a structured latent representation
that\r\nseparates objects from each other and from the background; unlike prior
works,\r\nit explicitly represents the 2D position and depth of each object, as
well as\r\nan embedding of its segmentation mask and appearance. The model can
be trained\r\nfrom images alone in a purely unsupervised fashion without the need
for object\r\nmasks or depth information. Moreover, it always generates complete
objects,\r\neven though a significant fraction of training images contain occlusions.\r\nFinally,
we show that our model can infer decompositions of novel images into\r\ntheir
constituent objects, including accurate prediction of depth ordering and\r\nsegmentation
of occluded parts."
article_number: '2004.00642'
article_processing_charge: No
author:
- first_name: Titas
full_name: Anciukevicius, Titas
last_name: Anciukevicius
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Paul M
full_name: Henderson, Paul M
id: 13C09E74-18D9-11E9-8878-32CFE5697425
last_name: Henderson
orcid: 0000-0002-5198-7445
citation:
ama: Anciukevicius T, Lampert C, Henderson PM. Object-centric image generation with
factored depths, locations, and appearances. arXiv.
apa: Anciukevicius, T., Lampert, C., & Henderson, P. M. (n.d.). Object-centric
image generation with factored depths, locations, and appearances. arXiv.
chicago: Anciukevicius, Titas, Christoph Lampert, and Paul M Henderson. “Object-Centric
Image Generation with Factored Depths, Locations, and Appearances.” ArXiv,
n.d.
ieee: T. Anciukevicius, C. Lampert, and P. M. Henderson, “Object-centric image generation
with factored depths, locations, and appearances,” arXiv.
ista: Anciukevicius T, Lampert C, Henderson PM. Object-centric image generation
with factored depths, locations, and appearances. arXiv, 2004.00642.
mla: Anciukevicius, Titas, et al. “Object-Centric Image Generation with Factored
Depths, Locations, and Appearances.” ArXiv, 2004.00642.
short: T. Anciukevicius, C. Lampert, P.M. Henderson, ArXiv (n.d.).
date_created: 2020-06-29T23:55:23Z
date_published: 2020-04-01T00:00:00Z
date_updated: 2021-01-12T08:16:44Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
external_id:
arxiv:
- '2004.00642'
language:
- iso: eng
license: https://creativecommons.org/licenses/by-sa/4.0/
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2004.00642
month: '04'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Object-centric image generation with factored depths, locations, and appearances
tmp:
image: /images/cc_by_sa.png
legal_code_url: https://creativecommons.org/licenses/by-sa/4.0/legalcode
name: Creative Commons Attribution-ShareAlike 4.0 International Public License (CC
BY-SA 4.0)
short: CC BY-SA (4.0)
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '8188'
abstract:
- lang: eng
text: "A natural approach to generative modeling of videos is to represent them
as a composition of moving objects. Recent works model a set of 2D sprites over
a slowly-varying background, but without considering the underlying 3D scene that\r\ngives
rise to them. We instead propose to model a video as the view seen while moving
through a scene with multiple 3D objects and a 3D background. Our model is trained
from monocular videos without any supervision, yet learns to\r\ngenerate coherent
3D scenes containing several moving objects. We conduct detailed experiments on
two datasets, going beyond the visual complexity supported by state-of-the-art
generative approaches. We evaluate our method on\r\ndepth-prediction and 3D object
detection---tasks which cannot be addressed by those earlier works---and show
it out-performs them even on 2D instance segmentation and tracking."
acknowledged_ssus:
- _id: ScienComp
acknowledgement: "This research was supported by the Scientific Service Units (SSU)
of IST Austria through resources\r\nprovided by Scientific Computing (SciComp).
PH is employed part-time by Blackford Analysis, but\r\nthey did not support this
project in any way."
article_processing_charge: No
author:
- first_name: Paul M
full_name: Henderson, Paul M
id: 13C09E74-18D9-11E9-8878-32CFE5697425
last_name: Henderson
orcid: 0000-0002-5198-7445
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Henderson PM, Lampert C. Unsupervised object-centric video generation and
decomposition in 3D. In: 34th Conference on Neural Information Processing Systems.
Vol 33. Curran Associates; 2020:3106–3117.'
apa: 'Henderson, P. M., & Lampert, C. (2020). Unsupervised object-centric video
generation and decomposition in 3D. In 34th Conference on Neural Information
Processing Systems (Vol. 33, pp. 3106–3117). Vancouver, Canada: Curran Associates.'
chicago: Henderson, Paul M, and Christoph Lampert. “Unsupervised Object-Centric
Video Generation and Decomposition in 3D.” In 34th Conference on Neural Information
Processing Systems, 33:3106–3117. Curran Associates, 2020.
ieee: P. M. Henderson and C. Lampert, “Unsupervised object-centric video generation
and decomposition in 3D,” in 34th Conference on Neural Information Processing
Systems, Vancouver, Canada, 2020, vol. 33, pp. 3106–3117.
ista: 'Henderson PM, Lampert C. 2020. Unsupervised object-centric video generation
and decomposition in 3D. 34th Conference on Neural Information Processing Systems.
NeurIPS: Neural Information Processing Systems vol. 33, 3106–3117.'
mla: Henderson, Paul M., and Christoph Lampert. “Unsupervised Object-Centric Video
Generation and Decomposition in 3D.” 34th Conference on Neural Information
Processing Systems, vol. 33, Curran Associates, 2020, pp. 3106–3117.
short: P.M. Henderson, C. Lampert, in:, 34th Conference on Neural Information Processing
Systems, Curran Associates, 2020, pp. 3106–3117.
conference:
end_date: 2020-12-12
location: Vancouver, Canada
name: 'NeurIPS: Neural Information Processing Systems'
start_date: 2020-12-06
date_created: 2020-07-31T16:59:19Z
date_published: 2020-07-07T00:00:00Z
date_updated: 2023-04-25T09:49:58Z
day: '07'
department:
- _id: ChLa
external_id:
arxiv:
- '2007.06705'
intvolume: ' 33'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2007.06705
month: '07'
oa: 1
oa_version: Preprint
page: 3106–3117
publication: 34th Conference on Neural Information Processing Systems
publication_identifier:
isbn:
- '9781713829546'
publication_status: published
publisher: Curran Associates
quality_controlled: '1'
status: public
title: Unsupervised object-centric video generation and decomposition in 3D
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 33
year: '2020'
...
---
_id: '6952'
abstract:
- lang: eng
text: 'We present a unified framework tackling two problems: class-specific 3D reconstruction
from a single image, and generation of new 3D shape samples. These tasks have
received considerable attention recently; however, most existing approaches rely
on 3D supervision, annotation of 2D images with keypoints or poses, and/or training
with multiple views of each object instance. Our framework is very general: it
can be trained in similar settings to existing approaches, while also supporting
weaker supervision. Importantly, it can be trained purely from 2D images, without
pose annotations, and with only a single view per instance. We employ meshes as
an output representation, instead of voxels used in most prior work. This allows
us to reason over lighting parameters and exploit shading information during training,
which previous 2D-supervised methods cannot. Thus, our method can learn to generate
and reconstruct concave object classes. We evaluate our approach in various settings,
showing that: (i) it learns to disentangle shape from pose and lighting; (ii)
using shading in the loss improves performance compared to just silhouettes; (iii)
when using a standard single white light, our model outperforms state-of-the-art
2D-supervised methods, both with and without pose supervision, thanks to exploiting
shading cues; (iv) performance improves further when using multiple coloured lights,
even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced
by our model capture smooth surfaces and fine details better than voxel-based
approaches; and (vi) our approach supports concave classes such as bathtubs and
sofas, which methods based on silhouettes cannot learn.'
acknowledgement: Open access funding provided by Institute of Science and Technology
(IST Austria).
article_processing_charge: Yes (via OA deal)
article_type: original
author:
- first_name: Paul M
full_name: Henderson, Paul M
id: 13C09E74-18D9-11E9-8878-32CFE5697425
last_name: Henderson
orcid: 0000-0002-5198-7445
- first_name: Vittorio
full_name: Ferrari, Vittorio
last_name: Ferrari
citation:
ama: Henderson PM, Ferrari V. Learning single-image 3D reconstruction by generative
modelling of shape, pose and shading. International Journal of Computer Vision.
2020;128:835-854. doi:10.1007/s11263-019-01219-8
apa: Henderson, P. M., & Ferrari, V. (2020). Learning single-image 3D reconstruction
by generative modelling of shape, pose and shading. International Journal of
Computer Vision. Springer Nature. https://doi.org/10.1007/s11263-019-01219-8
chicago: Henderson, Paul M, and Vittorio Ferrari. “Learning Single-Image 3D Reconstruction
by Generative Modelling of Shape, Pose and Shading.” International Journal
of Computer Vision. Springer Nature, 2020. https://doi.org/10.1007/s11263-019-01219-8.
ieee: P. M. Henderson and V. Ferrari, “Learning single-image 3D reconstruction by
generative modelling of shape, pose and shading,” International Journal of
Computer Vision, vol. 128. Springer Nature, pp. 835–854, 2020.
ista: Henderson PM, Ferrari V. 2020. Learning single-image 3D reconstruction by
generative modelling of shape, pose and shading. International Journal of Computer
Vision. 128, 835–854.
mla: Henderson, Paul M., and Vittorio Ferrari. “Learning Single-Image 3D Reconstruction
by Generative Modelling of Shape, Pose and Shading.” International Journal
of Computer Vision, vol. 128, Springer Nature, 2020, pp. 835–54, doi:10.1007/s11263-019-01219-8.
short: P.M. Henderson, V. Ferrari, International Journal of Computer Vision 128
(2020) 835–854.
date_created: 2019-10-17T13:38:20Z
date_published: 2020-04-01T00:00:00Z
date_updated: 2023-08-17T14:01:16Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.1007/s11263-019-01219-8
external_id:
arxiv:
- '1901.06447'
isi:
- '000491042100002'
file:
- access_level: open_access
checksum: a0f05dd4f5f64e4f713d8d9d4b5b1e3f
content_type: application/pdf
creator: dernst
date_created: 2019-10-25T10:28:29Z
date_updated: 2020-07-14T12:47:46Z
file_id: '6973'
file_name: 2019_CompVision_Henderson.pdf
file_size: 2243134
relation: main_file
file_date_updated: 2020-07-14T12:47:46Z
has_accepted_license: '1'
intvolume: ' 128'
isi: 1
language:
- iso: eng
month: '04'
oa: 1
oa_version: Published Version
page: 835-854
project:
- _id: B67AFEDC-15C9-11EA-A837-991A96BB2854
name: IST Austria Open Access Fund
publication: International Journal of Computer Vision
publication_identifier:
eissn:
- 1573-1405
issn:
- 0920-5691
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Learning single-image 3D reconstruction by generative modelling of shape, pose
and shading
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 128
year: '2020'
...
---
_id: '7936'
abstract:
- lang: eng
text: 'State-of-the-art detection systems are generally evaluated on their ability
to exhaustively retrieve objects densely distributed in the image, across a wide
variety of appearances and semantic categories. Orthogonal to this, many real-life
object detection applications, for example in remote sensing, instead require
dealing with large images that contain only a few small objects of a single class,
scattered heterogeneously across the space. In addition, they are often subject
to strict computational constraints, such as limited battery capacity and computing
power. To tackle these more practical scenarios, we propose a novel flexible detection
scheme that efficiently adapts to variable object sizes and densities: We rely
on a sequence of detection stages, each of which has the ability to predict groups
of objects as well as individuals. Similar to a detection cascade, this multi-stage
architecture spares computational effort by discarding large irrelevant regions
of the image early during the detection process. The ability to group objects
provides further computational and memory savings, as it allows working with lower
image resolutions in early stages, where groups are more easily detected than
individuals, as they are more salient. We report experimental results on two aerial
image datasets, and show that the proposed method is as accurate yet computationally
more efficient than standard single-shot detectors, consistently across three
different backbone architectures.'
article_number: 1716-1725
article_processing_charge: No
author:
- first_name: Amélie
full_name: Royer, Amélie
id: 3811D890-F248-11E8-B48F-1D18A9856A87
last_name: Royer
orcid: 0000-0002-8407-0705
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Royer A, Lampert C. Localizing grouped instances for efficient detection in
low-resource scenarios. In: IEEE Winter Conference on Applications of Computer
Vision. IEEE; 2020. doi:10.1109/WACV45572.2020.9093288'
apa: 'Royer, A., & Lampert, C. (2020). Localizing grouped instances for efficient
detection in low-resource scenarios. In IEEE Winter Conference on Applications
of Computer Vision. Snowmass Village, CO, United States: IEEE. https://doi.org/10.1109/WACV45572.2020.9093288'
chicago: Royer, Amélie, and Christoph Lampert. “Localizing Grouped Instances for
Efficient Detection in Low-Resource Scenarios.” In IEEE Winter Conference on
Applications of Computer Vision. IEEE, 2020. https://doi.org/10.1109/WACV45572.2020.9093288.
ieee: A. Royer and C. Lampert, “Localizing grouped instances for efficient detection
in low-resource scenarios,” in IEEE Winter Conference on Applications of Computer
Vision, Snowmass Village, CO, United States, 2020.
ista: 'Royer A, Lampert C. 2020. Localizing grouped instances for efficient detection
in low-resource scenarios. IEEE Winter Conference on Applications of Computer
Vision. WACV: Winter Conference on Applications of Computer Vision, 1716–1725.'
mla: Royer, Amélie, and Christoph Lampert. “Localizing Grouped Instances for Efficient
Detection in Low-Resource Scenarios.” IEEE Winter Conference on Applications
of Computer Vision, 1716–1725, IEEE, 2020, doi:10.1109/WACV45572.2020.9093288.
short: A. Royer, C. Lampert, in:, IEEE Winter Conference on Applications of Computer
Vision, IEEE, 2020.
conference:
end_date: 2020-03-05
location: Snowmass Village, CO, United States
name: 'WACV: Winter Conference on Applications of Computer Vision'
start_date: 2020-03-01
date_created: 2020-06-07T22:00:53Z
date_published: 2020-03-01T00:00:00Z
date_updated: 2023-09-07T13:16:17Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/WACV45572.2020.9093288
external_id:
arxiv:
- '2004.12623'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2004.12623
month: '03'
oa: 1
oa_version: Preprint
publication: IEEE Winter Conference on Applications of Computer Vision
publication_identifier:
isbn:
- '9781728165530'
publication_status: published
publisher: IEEE
quality_controlled: '1'
related_material:
record:
- id: '8331'
relation: dissertation_contains
status: deleted
- id: '8390'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: Localizing grouped instances for efficient detection in low-resource scenarios
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '7937'
abstract:
- lang: eng
text: 'Fine-tuning is a popular way of exploiting knowledge contained in a pre-trained
convolutional network for a new visual recognition task. However, the orthogonal
setting of transferring knowledge from a pretrained network to a visually different
yet semantically close source is rarely considered: This commonly happens with
real-life data, which is not necessarily as clean as the training source (noise,
geometric transformations, different modalities, etc.). To tackle such scenarios,
we introduce a new, generalized form of fine-tuning, called flex-tuning, in which
any individual unit (e.g. layer) of a network can be tuned, and the most promising
one is chosen automatically. In order to make the method appealing for practical
use, we propose two lightweight and faster selection procedures that prove to
be good approximations in practice. We study these selection criteria empirically
across a variety of domain shifts and data scarcity scenarios, and show that fine-tuning
individual units, despite its simplicity, yields very good results as an adaptation
technique. As it turns out, in contrast to common practice, rather than the last
fully-connected unit it is best to tune an intermediate or early one in many
domain-shift scenarios, which is accurately detected by flex-tuning.'
article_number: 2180-2189
article_processing_charge: No
author:
- first_name: Amélie
full_name: Royer, Amélie
id: 3811D890-F248-11E8-B48F-1D18A9856A87
last_name: Royer
orcid: 0000-0002-8407-0705
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Royer A, Lampert C. A flexible selection scheme for minimum-effort transfer
learning. In: 2020 IEEE Winter Conference on Applications of Computer Vision.
IEEE; 2020. doi:10.1109/WACV45572.2020.9093635'
apa: 'Royer, A., & Lampert, C. (2020). A flexible selection scheme for minimum-effort
transfer learning. In 2020 IEEE Winter Conference on Applications of Computer
Vision. Snowmass Village, CO, United States: IEEE. https://doi.org/10.1109/WACV45572.2020.9093635'
chicago: Royer, Amélie, and Christoph Lampert. “A Flexible Selection Scheme for
Minimum-Effort Transfer Learning.” In 2020 IEEE Winter Conference on Applications
of Computer Vision. IEEE, 2020. https://doi.org/10.1109/WACV45572.2020.9093635.
ieee: A. Royer and C. Lampert, “A flexible selection scheme for minimum-effort transfer
learning,” in 2020 IEEE Winter Conference on Applications of Computer Vision,
Snowmass Village, CO, United States, 2020.
ista: 'Royer A, Lampert C. 2020. A flexible selection scheme for minimum-effort
transfer learning. 2020 IEEE Winter Conference on Applications of Computer Vision.
WACV: Winter Conference on Applications of Computer Vision, 2180–2189.'
mla: Royer, Amélie, and Christoph Lampert. “A Flexible Selection Scheme for Minimum-Effort
Transfer Learning.” 2020 IEEE Winter Conference on Applications of Computer
Vision, 2180–2189, IEEE, 2020, doi:10.1109/WACV45572.2020.9093635.
short: A. Royer, C. Lampert, in:, 2020 IEEE Winter Conference on Applications of
Computer Vision, IEEE, 2020.
conference:
end_date: 2020-03-05
location: Snowmass Village, CO, United States
name: 'WACV: Winter Conference on Applications of Computer Vision'
start_date: 2020-03-01
date_created: 2020-06-07T22:00:53Z
date_published: 2020-03-01T00:00:00Z
date_updated: 2023-09-07T13:16:17Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/WACV45572.2020.9093635
external_id:
arxiv:
- '2008.11995'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://arxiv.org/abs/2008.11995
month: '03'
oa: 1
oa_version: Preprint
publication: 2020 IEEE Winter Conference on Applications of Computer Vision
publication_identifier:
isbn:
- '9781728165530'
publication_status: published
publisher: IEEE
quality_controlled: '1'
related_material:
record:
- id: '8331'
relation: dissertation_contains
status: deleted
- id: '8390'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: A flexible selection scheme for minimum-effort transfer learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '8092'
abstract:
- lang: eng
text: Image translation refers to the task of mapping images from a visual domain
to another. Given two unpaired collections of images, we aim to learn a mapping
between the corpus-level style of each collection, while preserving semantic content
shared across the two domains. We introduce xgan, a dual adversarial auto-encoder,
which captures a shared representation of the common domain semantic content in
an unsupervised way, while jointly learning the domain-to-domain image translations
in both directions. We exploit ideas from the domain adaptation literature and
define a semantic consistency loss which encourages the learned embedding to preserve
semantics shared across domains. We report promising qualitative results for the
task of face-to-cartoon translation. The cartoon dataset we collected for this
purpose, “CartoonSet”, is also publicly available as a new benchmark for semantic
style transfer at https://google.github.io/cartoonset/index.html.
article_processing_charge: No
author:
- first_name: Amélie
full_name: Royer, Amélie
id: 3811D890-F248-11E8-B48F-1D18A9856A87
last_name: Royer
orcid: 0000-0002-8407-0705
- first_name: Konstantinos
full_name: Bousmalis, Konstantinos
last_name: Bousmalis
- first_name: Stephan
full_name: Gouws, Stephan
last_name: Gouws
- first_name: Fred
full_name: Bertsch, Fred
last_name: Bertsch
- first_name: Inbar
full_name: Mosseri, Inbar
last_name: Mosseri
- first_name: Forrester
full_name: Cole, Forrester
last_name: Cole
- first_name: Kevin
full_name: Murphy, Kevin
last_name: Murphy
citation:
ama: 'Royer A, Bousmalis K, Gouws S, et al. XGAN: Unsupervised image-to-image translation
for many-to-many mappings. In: Singh R, Vatsa M, Patel VM, Ratha N, eds. Domain
Adaptation for Visual Understanding. Springer Nature; 2020:33-49. doi:10.1007/978-3-030-30671-7_3'
apa: 'Royer, A., Bousmalis, K., Gouws, S., Bertsch, F., Mosseri, I., Cole, F., &
Murphy, K. (2020). XGAN: Unsupervised image-to-image translation for many-to-many
mappings. In R. Singh, M. Vatsa, V. M. Patel, & N. Ratha (Eds.), Domain
Adaptation for Visual Understanding (pp. 33–49). Springer Nature. https://doi.org/10.1007/978-3-030-30671-7_3'
chicago: 'Royer, Amélie, Konstantinos Bousmalis, Stephan Gouws, Fred Bertsch, Inbar
Mosseri, Forrester Cole, and Kevin Murphy. “XGAN: Unsupervised Image-to-Image
Translation for Many-to-Many Mappings.” In Domain Adaptation for Visual Understanding,
edited by Richa Singh, Mayank Vatsa, Vishal M. Patel, and Nalini Ratha, 33–49.
Springer Nature, 2020. https://doi.org/10.1007/978-3-030-30671-7_3.'
ieee: 'A. Royer et al., “XGAN: Unsupervised image-to-image translation for
many-to-many mappings,” in Domain Adaptation for Visual Understanding,
R. Singh, M. Vatsa, V. M. Patel, and N. Ratha, Eds. Springer Nature, 2020, pp.
33–49.'
ista: 'Royer A, Bousmalis K, Gouws S, Bertsch F, Mosseri I, Cole F, Murphy K. 2020.XGAN:
Unsupervised image-to-image translation for many-to-many mappings. In: Domain
Adaptation for Visual Understanding. , 33–49.'
mla: 'Royer, Amélie, et al. “XGAN: Unsupervised Image-to-Image Translation for Many-to-Many
Mappings.” Domain Adaptation for Visual Understanding, edited by Richa
Singh et al., Springer Nature, 2020, pp. 33–49, doi:10.1007/978-3-030-30671-7_3.'
short: A. Royer, K. Bousmalis, S. Gouws, F. Bertsch, I. Mosseri, F. Cole, K. Murphy,
in:, R. Singh, M. Vatsa, V.M. Patel, N. Ratha (Eds.), Domain Adaptation for Visual
Understanding, Springer Nature, 2020, pp. 33–49.
date_created: 2020-07-05T22:00:46Z
date_published: 2020-01-08T00:00:00Z
date_updated: 2023-09-07T13:16:18Z
day: '08'
department:
- _id: ChLa
doi: 10.1007/978-3-030-30671-7_3
editor:
- first_name: Richa
full_name: Singh, Richa
last_name: Singh
- first_name: Mayank
full_name: Vatsa, Mayank
last_name: Vatsa
- first_name: Vishal M.
full_name: Patel, Vishal M.
last_name: Patel
- first_name: Nalini
full_name: Ratha, Nalini
last_name: Ratha
external_id:
arxiv:
- '1711.05139'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1711.05139
month: '01'
oa: 1
oa_version: Preprint
page: 33-49
publication: Domain Adaptation for Visual Understanding
publication_identifier:
isbn:
- '9783030306717'
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
record:
- id: '8331'
relation: dissertation_contains
status: deleted
- id: '8390'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: 'XGAN: Unsupervised image-to-image translation for many-to-many mappings'
type: book_chapter
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '7481'
abstract:
- lang: eng
text: 'We address the following question: How redundant is the parameterisation
of ReLU networks? Specifically, we consider transformations of the weight space
which leave the function implemented by the network intact. Two such transformations
are known for feed-forward architectures: permutation of neurons within a layer,
and positive scaling of all incoming weights of a neuron coupled with inverse
scaling of its outgoing weights. In this work, we show for architectures with
non-increasing widths that permutation and scaling are in fact the only function-preserving
weight transformations. For any eligible architecture we give an explicit construction
of a neural network such that any other network that implements the same function
can be obtained from the original one by the application of permutations and rescaling. The
proof relies on a geometric understanding of boundaries between linear regions
of ReLU networks, and we hope the developed mathematical tools are of independent
interest.'
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Phuong M, Lampert C. Functional vs. parametric equivalence of ReLU networks.
In: 8th International Conference on Learning Representations. ; 2020.'
apa: Phuong, M., & Lampert, C. (2020). Functional vs. parametric equivalence
of ReLU networks. In 8th International Conference on Learning Representations.
Online.
chicago: Phuong, Mary, and Christoph Lampert. “Functional vs. Parametric Equivalence
of ReLU Networks.” In 8th International Conference on Learning Representations,
2020.
ieee: M. Phuong and C. Lampert, “Functional vs. parametric equivalence of ReLU networks,”
in 8th International Conference on Learning Representations, Online, 2020.
ista: 'Phuong M, Lampert C. 2020. Functional vs. parametric equivalence of ReLU
networks. 8th International Conference on Learning Representations. ICLR: International
Conference on Learning Representations.'
mla: Phuong, Mary, and Christoph Lampert. “Functional vs. Parametric Equivalence
of ReLU Networks.” 8th International Conference on Learning Representations,
2020.
short: M. Phuong, C. Lampert, in:, 8th International Conference on Learning Representations,
2020.
conference:
end_date: 2020-04-30
location: Online
name: 'ICLR: International Conference on Learning Representations'
start_date: 2020-04-27
date_created: 2020-02-11T09:07:37Z
date_published: 2020-04-26T00:00:00Z
date_updated: 2023-09-07T13:29:50Z
day: '26'
ddc:
- '000'
department:
- _id: ChLa
file:
- access_level: open_access
checksum: 8d372ea5defd8cb8fdc430111ed754a9
content_type: application/pdf
creator: bphuong
date_created: 2020-02-11T09:07:27Z
date_updated: 2020-07-14T12:47:59Z
file_id: '7482'
file_name: main.pdf
file_size: 405469
relation: main_file
file_date_updated: 2020-07-14T12:47:59Z
has_accepted_license: '1'
language:
- iso: eng
month: '04'
oa: 1
oa_version: Published Version
publication: 8th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
related_material:
link:
- relation: supplementary_material
url: https://iclr.cc/virtual_2020/poster_Bylx-TNKvH.html
record:
- id: '9418'
relation: dissertation_contains
status: public
status: public
title: Functional vs. parametric equivalence of ReLU networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '8724'
abstract:
- lang: eng
text: "We study the problem of learning from multiple untrusted data sources, a
scenario of increasing practical relevance given the recent emergence of crowdsourcing
and collaborative learning paradigms. Specifically, we analyze the situation in
which a learning system obtains datasets from multiple sources, some of which
might be biased or even adversarially perturbed. It is\r\nknown that in the single-source
case, an adversary with the power to corrupt a fixed fraction of the training
data can prevent PAC-learnability, that is, even in the limit of infinitely much
training data, no learning system can approach the optimal test error. In this
work we show that, surprisingly, the same is not true in the multi-source setting,
where the adversary can arbitrarily\r\ncorrupt a fixed fraction of the data sources.
Our main results are a generalization bound that provides finite-sample guarantees
for this learning setting, as well as corresponding lower bounds. Besides establishing
PAC-learnability our results also show that in a cooperative learning setting
sharing data with other parties has provable benefits, even if some\r\nparticipants
are malicious. "
acknowledged_ssus:
- _id: ScienComp
acknowledgement: Dan Alistarh is supported in part by the European Research Council
(ERC) under the European Union’s Horizon 2020 research and innovation programme
(grant agreement No 805223 ScaleML). This research was supported by the Scientific
Service Units (SSU) of IST Austria through resources provided by Scientific Computing
(SciComp).
article_processing_charge: No
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Elias
full_name: Frantar, Elias
id: 09a8f98d-ec99-11ea-ae11-c063a7b7fe5f
last_name: Frantar
- first_name: Dan-Adrian
full_name: Alistarh, Dan-Adrian
id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
last_name: Alistarh
orcid: 0000-0003-3650-940X
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Konstantinov NH, Frantar E, Alistarh D-A, Lampert C. On the sample complexity
of adversarial multi-source PAC learning. In: Proceedings of the 37th International
Conference on Machine Learning. Vol 119. ML Research Press; 2020:5416-5425.'
apa: 'Konstantinov, N. H., Frantar, E., Alistarh, D.-A., & Lampert, C. (2020).
On the sample complexity of adversarial multi-source PAC learning. In Proceedings
of the 37th International Conference on Machine Learning (Vol. 119, pp. 5416–5425).
Online: ML Research Press.'
chicago: Konstantinov, Nikola H, Elias Frantar, Dan-Adrian Alistarh, and Christoph
Lampert. “On the Sample Complexity of Adversarial Multi-Source PAC Learning.”
In Proceedings of the 37th International Conference on Machine Learning,
119:5416–25. ML Research Press, 2020.
ieee: N. H. Konstantinov, E. Frantar, D.-A. Alistarh, and C. Lampert, “On the sample
complexity of adversarial multi-source PAC learning,” in Proceedings of the
37th International Conference on Machine Learning, Online, 2020, vol. 119,
pp. 5416–5425.
ista: 'Konstantinov NH, Frantar E, Alistarh D-A, Lampert C. 2020. On the sample
complexity of adversarial multi-source PAC learning. Proceedings of the 37th International
Conference on Machine Learning. ICML: International Conference on Machine Learning
vol. 119, 5416–5425.'
mla: Konstantinov, Nikola H., et al. “On the Sample Complexity of Adversarial Multi-Source
PAC Learning.” Proceedings of the 37th International Conference on Machine
Learning, vol. 119, ML Research Press, 2020, pp. 5416–25.
short: N.H. Konstantinov, E. Frantar, D.-A. Alistarh, C. Lampert, in:, Proceedings
of the 37th International Conference on Machine Learning, ML Research Press, 2020,
pp. 5416–5425.
conference:
end_date: 2020-07-18
location: Online
name: 'ICML: International Conference on Machine Learning'
start_date: 2020-07-12
date_created: 2020-11-05T15:25:58Z
date_published: 2020-07-12T00:00:00Z
date_updated: 2023-09-07T13:42:08Z
day: '12'
ddc:
- '000'
department:
- _id: DaAl
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '2002.10384'
file:
- access_level: open_access
checksum: cc755d0054bc4b2be778ea7aa7884d2f
content_type: application/pdf
creator: dernst
date_created: 2021-02-15T09:00:01Z
date_updated: 2021-02-15T09:00:01Z
file_id: '9120'
file_name: 2020_PMLR_Konstantinov.pdf
file_size: 281286
relation: main_file
success: 1
file_date_updated: 2021-02-15T09:00:01Z
has_accepted_license: '1'
intvolume: ' 119'
language:
- iso: eng
month: '07'
oa: 1
oa_version: Published Version
page: 5416-5425
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '805223'
name: Elastic Coordination for Scalable Machine Learning
publication: Proceedings of the 37th International Conference on Machine Learning
publication_identifier:
issn:
- 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
link:
- relation: supplementary_material
url: http://proceedings.mlr.press/v119/konstantinov20a/konstantinov20a-supp.pdf
record:
- id: '10799'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: On the sample complexity of adversarial multi-source PAC learning
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 119
year: '2020'
...
---
_id: '8390'
abstract:
- lang: eng
text: "Deep neural networks have established a new standard for data-dependent feature
extraction pipelines in the Computer Vision literature. Despite their remarkable
performance in the standard supervised learning scenario, i.e. when models are
trained with labeled data and tested on samples that follow a similar distribution,
neural networks have been shown to struggle with more advanced generalization
abilities, such as transferring knowledge across visually different domains, or
generalizing to new unseen combinations of known concepts. In this thesis we argue
that, in contrast to the usual black-box behavior of neural networks, leveraging
more structured internal representations is a promising direction\r\nfor tackling
such problems. In particular, we focus on two forms of structure. First, we tackle
modularity: We show that (i) compositional architectures are a natural tool for
modeling reasoning tasks, in that they efficiently capture their combinatorial
nature, which is key for generalizing beyond the compositions seen during training.
We investigate how to learn such models, both formally and experimentally,
for the task of abstract visual reasoning. Then, we show that (ii) in some settings,
modularity allows us to efficiently break down complex tasks into smaller, easier,
modules, thereby improving computational efficiency; We study this behavior in
the context of generative models for colorization, as well as for small object
detection. Second, we investigate the inherently layered structure of representations
learned by neural networks, and analyze its role in the context of transfer learning
and domain adaptation across visually\r\ndissimilar domains. "
acknowledged_ssus:
- _id: CampIT
- _id: ScienComp
acknowledgement: Last but not least, I would like to acknowledge the support of the
IST IT and scientific computing team for helping provide a great work environment.
alternative_title:
- ISTA Thesis
article_processing_charge: No
author:
- first_name: Amélie
full_name: Royer, Amélie
id: 3811D890-F248-11E8-B48F-1D18A9856A87
last_name: Royer
orcid: 0000-0002-8407-0705
citation:
ama: Royer A. Leveraging structure in Computer Vision tasks for flexible Deep Learning
models. 2020. doi:10.15479/AT:ISTA:8390
apa: Royer, A. (2020). Leveraging structure in Computer Vision tasks for flexible
Deep Learning models. Institute of Science and Technology Austria. https://doi.org/10.15479/AT:ISTA:8390
chicago: Royer, Amélie. “Leveraging Structure in Computer Vision Tasks for Flexible
Deep Learning Models.” Institute of Science and Technology Austria, 2020. https://doi.org/10.15479/AT:ISTA:8390.
ieee: A. Royer, “Leveraging structure in Computer Vision tasks for flexible Deep
Learning models,” Institute of Science and Technology Austria, 2020.
ista: Royer A. 2020. Leveraging structure in Computer Vision tasks for flexible
Deep Learning models. Institute of Science and Technology Austria.
mla: Royer, Amélie. Leveraging Structure in Computer Vision Tasks for Flexible
Deep Learning Models. Institute of Science and Technology Austria, 2020, doi:10.15479/AT:ISTA:8390.
short: A. Royer, Leveraging Structure in Computer Vision Tasks for Flexible Deep
Learning Models, Institute of Science and Technology Austria, 2020.
date_created: 2020-09-14T13:42:09Z
date_published: 2020-09-14T00:00:00Z
date_updated: 2023-10-16T10:04:02Z
day: '14'
ddc:
- '000'
degree_awarded: PhD
department:
- _id: ChLa
doi: 10.15479/AT:ISTA:8390
file:
- access_level: open_access
checksum: c914d2f88846032f3d8507734861b6ee
content_type: application/pdf
creator: dernst
date_created: 2020-09-14T13:39:14Z
date_updated: 2020-09-14T13:39:14Z
file_id: '8391'
file_name: 2020_Thesis_Royer.pdf
file_size: 30224591
relation: main_file
success: 1
- access_level: closed
checksum: ae98fb35d912cff84a89035ae5794d3c
content_type: application/x-zip-compressed
creator: dernst
date_created: 2020-09-14T13:39:17Z
date_updated: 2020-09-14T13:39:17Z
file_id: '8392'
file_name: thesis_sources.zip
file_size: 74227627
relation: main_file
file_date_updated: 2020-09-14T13:39:17Z
has_accepted_license: '1'
language:
- iso: eng
month: '09'
oa: 1
oa_version: Published Version
page: '197'
publication_identifier:
isbn:
- 978-3-99078-007-7
issn:
- 2663-337X
publication_status: published
publisher: Institute of Science and Technology Austria
related_material:
record:
- id: '7936'
relation: part_of_dissertation
status: public
- id: '7937'
relation: part_of_dissertation
status: public
- id: '8193'
relation: part_of_dissertation
status: public
- id: '8092'
relation: part_of_dissertation
status: public
- id: '911'
relation: part_of_dissertation
status: public
status: public
supervisor:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
title: Leveraging structure in Computer Vision tasks for flexible Deep Learning models
tmp:
image: /images/cc_by_nc_sa.png
legal_code_url: https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode
name: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC
BY-NC-SA 4.0)
short: CC BY-NC-SA (4.0)
type: dissertation
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2020'
...
---
_id: '8186'
abstract:
- lang: eng
text: "Numerous methods have been proposed for probabilistic generative modelling
of\r\n3D objects. However, none of these is able to produce textured objects,
which\r\nrenders them of limited use for practical tasks. In this work, we present
the\r\nfirst generative model of textured 3D meshes. Training such a model would\r\ntraditionally
require a large dataset of textured meshes, but unfortunately,\r\nexisting datasets
of meshes lack detailed textures. We instead propose a new\r\ntraining methodology
that allows learning from collections of 2D images without\r\nany 3D information.
To do so, we train our model to explain a distribution of\r\nimages by modelling
each image as a 3D foreground object placed in front of a\r\n2D background. Thus,
it learns to generate meshes that, when rendered, produce\r\nimages similar to
those in its training set.\r\n A well-known problem when generating meshes with
deep networks is the\r\nemergence of self-intersections, which are problematic
for many use-cases. As a\r\nsecond contribution we therefore introduce a new generation
process for 3D\r\nmeshes that guarantees no self-intersections arise, based on
the physical\r\nintuition that faces should push one another out of the way as
they move.\r\n We conduct extensive experiments on our approach, reporting quantitative
and\r\nqualitative results on both synthetic data and natural images. These show
our\r\nmethod successfully learns to generate plausible and diverse textured 3D\r\nsamples
for five challenging object classes."
article_processing_charge: No
author:
- first_name: Paul M
full_name: Henderson, Paul M
id: 13C09E74-18D9-11E9-8878-32CFE5697425
last_name: Henderson
orcid: 0000-0002-5198-7445
- first_name: Vagia
full_name: Tsiminaki, Vagia
last_name: Tsiminaki
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Henderson PM, Tsiminaki V, Lampert C. Leveraging 2D data to learn textured
3D mesh generation. In: Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition. IEEE; 2020:7498-7507. doi:10.1109/CVPR42600.2020.00752'
apa: 'Henderson, P. M., Tsiminaki, V., & Lampert, C. (2020). Leveraging 2D data
to learn textured 3D mesh generation. In Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition (pp. 7498–7507). Virtual: IEEE.
https://doi.org/10.1109/CVPR42600.2020.00752'
chicago: Henderson, Paul M, Vagia Tsiminaki, and Christoph Lampert. “Leveraging
2D Data to Learn Textured 3D Mesh Generation.” In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, 7498–7507. IEEE, 2020.
https://doi.org/10.1109/CVPR42600.2020.00752.
ieee: P. M. Henderson, V. Tsiminaki, and C. Lampert, “Leveraging 2D data to learn
textured 3D mesh generation,” in Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, Virtual, 2020, pp. 7498–7507.
ista: 'Henderson PM, Tsiminaki V, Lampert C. 2020. Leveraging 2D data to learn textured
3D mesh generation. Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition. CVPR: Conference on Computer Vision and Pattern Recognition,
7498–7507.'
mla: Henderson, Paul M., et al. “Leveraging 2D Data to Learn Textured 3D Mesh Generation.”
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,
IEEE, 2020, pp. 7498–507, doi:10.1109/CVPR42600.2020.00752.
short: P.M. Henderson, V. Tsiminaki, C. Lampert, in:, Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, IEEE, 2020, pp. 7498–7507.
conference:
end_date: 2020-06-19
location: Virtual
name: 'CVPR: Conference on Computer Vision and Pattern Recognition'
start_date: 2020-06-14
date_created: 2020-07-31T16:53:49Z
date_published: 2020-07-01T00:00:00Z
date_updated: 2023-10-17T07:37:11Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.1109/CVPR42600.2020.00752
external_id:
arxiv:
- '2004.04180'
file:
- access_level: open_access
content_type: application/pdf
creator: phenders
date_created: 2020-07-31T16:57:12Z
date_updated: 2020-07-31T16:57:12Z
file_id: '8187'
file_name: paper.pdf
file_size: 10262773
relation: main_file
success: 1
file_date_updated: 2020-07-31T16:57:12Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://openaccess.thecvf.com/content_CVPR_2020/papers/Henderson_Leveraging_2D_Data_to_Learn_Textured_3D_Mesh_Generation_CVPR_2020_paper.pdf
month: '07'
oa: 1
oa_version: Submitted Version
page: 7498-7507
publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
Recognition
publication_identifier:
eisbn:
- '9781728171685'
eissn:
- 2575-7075
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: Leveraging 2D data to learn textured 3D mesh generation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '6944'
abstract:
- lang: eng
text: 'We study the problem of automatically detecting if a given multi-class classifier
operates outside of its specifications (out-of-specs), i.e. on input data from
a different distribution than what it was trained for. This is an important problem
to solve on the road towards creating reliable computer vision systems for real-world
applications, because the quality of a classifier’s predictions cannot be guaranteed
if it operates out-of-specs. Previously proposed methods for out-of-specs detection
make decisions on the level of single inputs. This, however, is insufficient to
achieve low false positive and false negative rates at the same time.
In this work, we describe a new procedure named KS(conf), based on statistical
reasoning. Its main component is a classical Kolmogorov–Smirnov test that is applied
to the set of predicted confidence values for batches of samples. Working with
batches instead of single samples allows increasing the true positive rate without
negatively affecting the false positive rate, thereby overcoming a crucial limitation
of single sample tests. We show by extensive experiments using a variety of convolutional
network architectures and datasets that KS(conf) reliably detects out-of-specs
situations even under conditions where other tests fail. It furthermore has a
number of properties that make it an excellent candidate for practical deployment:
it is easy to implement, adds almost no overhead to the system, works with any
classifier that outputs confidence scores, and requires no a priori knowledge
about how the data distribution could change.'
article_processing_charge: Yes (via OA deal)
article_type: original
author:
- first_name: Rémy
full_name: Sun, Rémy
last_name: Sun
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Sun R, Lampert C. KS(conf): A light-weight test if a multiclass classifier
operates outside of its specifications. International Journal of Computer Vision.
2020;128(4):970-995. doi:10.1007/s11263-019-01232-x'
apa: 'Sun, R., & Lampert, C. (2020). KS(conf): A light-weight test if a multiclass
classifier operates outside of its specifications. International Journal of
Computer Vision. Springer Nature. https://doi.org/10.1007/s11263-019-01232-x'
chicago: 'Sun, Rémy, and Christoph Lampert. “KS(Conf): A Light-Weight Test If a
Multiclass Classifier Operates Outside of Its Specifications.” International
Journal of Computer Vision. Springer Nature, 2020. https://doi.org/10.1007/s11263-019-01232-x.'
ieee: 'R. Sun and C. Lampert, “KS(conf): A light-weight test if a multiclass classifier
operates outside of its specifications,” International Journal of Computer
Vision, vol. 128, no. 4. Springer Nature, pp. 970–995, 2020.'
ista: 'Sun R, Lampert C. 2020. KS(conf): A light-weight test if a multiclass classifier
operates outside of its specifications. International Journal of Computer Vision.
128(4), 970–995.'
mla: 'Sun, Rémy, and Christoph Lampert. “KS(Conf): A Light-Weight Test If a Multiclass
Classifier Operates Outside of Its Specifications.” International Journal of
Computer Vision, vol. 128, no. 4, Springer Nature, 2020, pp. 970–95, doi:10.1007/s11263-019-01232-x.'
short: R. Sun, C. Lampert, International Journal of Computer Vision 128 (2020) 970–995.
date_created: 2019-10-14T09:14:28Z
date_published: 2020-04-01T00:00:00Z
date_updated: 2024-02-22T14:57:30Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.1007/s11263-019-01232-x
ec_funded: 1
external_id:
isi:
- '000494406800001'
file:
- access_level: open_access
checksum: 155e63edf664dcacb3bdc1c2223e606f
content_type: application/pdf
creator: dernst
date_created: 2019-11-26T10:30:02Z
date_updated: 2020-07-14T12:47:45Z
file_id: '7110'
file_name: 2019_IJCV_Sun.pdf
file_size: 1715072
relation: main_file
file_date_updated: 2020-07-14T12:47:45Z
has_accepted_license: '1'
intvolume: ' 128'
isi: 1
issue: '4'
language:
- iso: eng
month: '04'
oa: 1
oa_version: Published Version
page: 970-995
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
- _id: B67AFEDC-15C9-11EA-A837-991A96BB2854
name: IST Austria Open Access Fund
publication: International Journal of Computer Vision
publication_identifier:
eissn:
- 1573-1405
issn:
- 0920-5691
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
link:
- relation: erratum
url: https://doi.org/10.1007/s11263-019-01262-5
record:
- id: '6482'
relation: earlier_version
status: public
scopus_import: '1'
status: public
title: 'KS(conf): A light-weight test if a multiclass classifier operates outside
of its specifications'
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: journal_article
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 128
year: '2020'
...
---
_id: '7171'
abstract:
- lang: ger
text: "Wissen Sie, was sich hinter künstlicher Intelligenz und maschinellem Lernen
verbirgt? \r\nDieses Sachbuch erklärt Ihnen leicht verständlich und ohne komplizierte
Formeln die grundlegenden Methoden und Vorgehensweisen des maschinellen Lernens.
Mathematisches Vorwissen ist dafür nicht nötig. Kurzweilig und informativ illustriert
Lisa, die Protagonistin des Buches, diese anhand von Alltagssituationen. \r\nEin
Buch für alle, die in Diskussionen über Chancen und Risiken der aktuellen Entwicklung
der künstlichen Intelligenz und des maschinellen Lernens mit Faktenwissen punkten
möchten. Auch für Schülerinnen und Schüler geeignet!"
article_processing_charge: No
citation:
ama: 'Kersting K, Lampert C, Rothkopf C, eds. Wie Maschinen Lernen: Künstliche
Intelligenz Verständlich Erklärt. 1st ed. Wiesbaden: Springer Nature; 2019.
doi:10.1007/978-3-658-26763-6'
apa: 'Kersting, K., Lampert, C., & Rothkopf, C. (Eds.). (2019). Wie Maschinen
Lernen: Künstliche Intelligenz Verständlich Erklärt (1st ed.). Wiesbaden:
Springer Nature. https://doi.org/10.1007/978-3-658-26763-6'
chicago: 'Kersting, Kristian, Christoph Lampert, and Constantin Rothkopf, eds. Wie
Maschinen Lernen: Künstliche Intelligenz Verständlich Erklärt. 1st ed. Wiesbaden:
Springer Nature, 2019. https://doi.org/10.1007/978-3-658-26763-6.'
ieee: 'K. Kersting, C. Lampert, and C. Rothkopf, Eds., Wie Maschinen Lernen:
Künstliche Intelligenz Verständlich Erklärt, 1st ed. Wiesbaden: Springer Nature,
2019.'
ista: 'Kersting K, Lampert C, Rothkopf C eds. 2019. Wie Maschinen Lernen: Künstliche
Intelligenz Verständlich Erklärt 1st ed., Wiesbaden: Springer Nature, XIV, 245p.'
mla: 'Kersting, Kristian, et al., editors. Wie Maschinen Lernen: Künstliche Intelligenz
Verständlich Erklärt. 1st ed., Springer Nature, 2019, doi:10.1007/978-3-658-26763-6.'
short: 'K. Kersting, C. Lampert, C. Rothkopf, eds., Wie Maschinen Lernen: Künstliche
Intelligenz Verständlich Erklärt, 1st ed., Springer Nature, Wiesbaden, 2019.'
date_created: 2019-12-11T14:15:56Z
date_published: 2019-10-30T00:00:00Z
date_updated: 2021-12-22T14:40:58Z
day: '30'
department:
- _id: ChLa
doi: 10.1007/978-3-658-26763-6
edition: '1'
editor:
- first_name: Kristian
full_name: Kersting, Kristian
last_name: Kersting
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Constantin
full_name: Rothkopf, Constantin
last_name: Rothkopf
language:
- iso: ger
month: '10'
oa_version: None
page: XIV, 245
place: Wiesbaden
publication_identifier:
eisbn:
- 978-3-658-26763-6
isbn:
- 978-3-658-26762-9
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
link:
- description: News on IST Website
relation: press_release
url: https://ist.ac.at/en/news/book-release-how-machines-learn/
status: public
title: 'Wie Maschinen Lernen: Künstliche Intelligenz Verständlich Erklärt'
type: book_editor
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
year: '2019'
...
---
_id: '6942'
abstract:
- lang: eng
text: "Graph games and Markov decision processes (MDPs) are standard models in reactive
synthesis and verification of probabilistic systems with nondeterminism. The class
of \U0001D714-regular winning conditions; e.g., safety, reachability, liveness,
parity conditions; provides a robust and expressive specification formalism for
properties that arise in analysis of reactive systems. The resolutions of nondeterminism
in games and MDPs are represented as strategies, and we consider succinct representation
of such strategies. The decision-tree data structure from machine learning retains
the flavor of decisions of strategies and allows entropy-based minimization to
obtain succinct trees. However, in contrast to traditional machine-learning problems
where small errors are allowed, for winning strategies in graph games and MDPs
no error is allowed, and the decision tree must represent the entire strategy.
In this work we propose decision trees with linear classifiers for representation
of strategies in graph games and MDPs. We have implemented strategy representation
using this data structure and we present experimental results for problems on
graph games and MDPs, which show that this new data structure presents a much
more efficient strategy representation as compared to standard decision trees."
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Pranav
full_name: Ashok, Pranav
last_name: Ashok
- first_name: Tomáš
full_name: Brázdil, Tomáš
last_name: Brázdil
- first_name: Krishnendu
full_name: Chatterjee, Krishnendu
id: 2E5DCA20-F248-11E8-B48F-1D18A9856A87
last_name: Chatterjee
orcid: 0000-0002-4561-241X
- first_name: Jan
full_name: Křetínský, Jan
last_name: Křetínský
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Viktor
full_name: Toman, Viktor
id: 3AF3DA7C-F248-11E8-B48F-1D18A9856A87
last_name: Toman
orcid: 0000-0001-9036-063X
citation:
ama: 'Ashok P, Brázdil T, Chatterjee K, Křetínský J, Lampert C, Toman V. Strategy
representation by decision trees with linear classifiers. In: 16th International
Conference on Quantitative Evaluation of Systems. Vol 11785. Springer Nature;
2019:109-128. doi:10.1007/978-3-030-30281-8_7'
apa: 'Ashok, P., Brázdil, T., Chatterjee, K., Křetínský, J., Lampert, C., &
Toman, V. (2019). Strategy representation by decision trees with linear classifiers.
In 16th International Conference on Quantitative Evaluation of Systems
(Vol. 11785, pp. 109–128). Glasgow, United Kingdom: Springer Nature. https://doi.org/10.1007/978-3-030-30281-8_7'
chicago: Ashok, Pranav, Tomáš Brázdil, Krishnendu Chatterjee, Jan Křetínský, Christoph
Lampert, and Viktor Toman. “Strategy Representation by Decision Trees with Linear
Classifiers.” In 16th International Conference on Quantitative Evaluation of
Systems, 11785:109–28. Springer Nature, 2019. https://doi.org/10.1007/978-3-030-30281-8_7.
ieee: P. Ashok, T. Brázdil, K. Chatterjee, J. Křetínský, C. Lampert, and V. Toman,
“Strategy representation by decision trees with linear classifiers,” in 16th
International Conference on Quantitative Evaluation of Systems, Glasgow, United
Kingdom, 2019, vol. 11785, pp. 109–128.
ista: 'Ashok P, Brázdil T, Chatterjee K, Křetínský J, Lampert C, Toman V. 2019.
Strategy representation by decision trees with linear classifiers. 16th International
Conference on Quantitative Evaluation of Systems. QEST: Quantitative Evaluation
of Systems, LNCS, vol. 11785, 109–128.'
mla: Ashok, Pranav, et al. “Strategy Representation by Decision Trees with Linear
Classifiers.” 16th International Conference on Quantitative Evaluation of Systems,
vol. 11785, Springer Nature, 2019, pp. 109–28, doi:10.1007/978-3-030-30281-8_7.
short: P. Ashok, T. Brázdil, K. Chatterjee, J. Křetínský, C. Lampert, V. Toman,
in:, 16th International Conference on Quantitative Evaluation of Systems, Springer
Nature, 2019, pp. 109–128.
conference:
end_date: 2019-09-12
location: Glasgow, United Kingdom
name: 'QEST: Quantitative Evaluation of Systems'
start_date: 2019-09-10
date_created: 2019-10-14T06:57:49Z
date_published: 2019-09-04T00:00:00Z
date_updated: 2023-08-30T06:59:36Z
day: '04'
department:
- _id: KrCh
- _id: ChLa
doi: 10.1007/978-3-030-30281-8_7
external_id:
arxiv:
- '1906.08178'
isi:
- '000679281300007'
intvolume: ' 11785'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1906.08178
month: '09'
oa: 1
oa_version: Preprint
page: 109-128
project:
- _id: 25863FF4-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: S11407
name: Game Theory
- _id: 25F2ACDE-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: S11402-N23
name: Rigorous Systems Engineering
- _id: 25892FC0-B435-11E9-9278-68D0E5697425
grant_number: ICT15-003
name: Efficient Algorithms for Computer Aided Verification
publication: 16th International Conference on Quantitative Evaluation of Systems
publication_identifier:
eisbn:
- '9783030302818'
isbn:
- '9783030302801'
issn:
- 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Strategy representation by decision trees with linear classifiers
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 11785
year: '2019'
...
---
_id: '6554'
abstract:
- lang: eng
text: Due to the importance of zero-shot learning, i.e. classifying images where
there is a lack of labeled training data, the number of proposed approaches has
recently increased steadily. We argue that it is time to take a step back and
to analyze the status quo of the area. The purpose of this paper is three-fold.
First, given that there is no agreed-upon zero-shot learning benchmark, we define
a new benchmark by unifying both the evaluation protocols and
data splits of publicly available datasets used for this task. This is an important
contribution as published results are often not comparable and sometimes even
flawed due to, e.g. pre-training on zero-shot test classes. Moreover, we propose
a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset
which we make publicly available both in terms of image features and the images
themselves. Second, we compare and analyze a significant number of the state-of-the-art
methods in depth, both in the classic zero-shot setting and in the more realistic
generalized zero-shot setting. Finally, we discuss in detail the limitations of
the current status of the area which can be taken as a basis for advancing it.
article_processing_charge: No
article_type: original
author:
- first_name: Yongqin
full_name: Xian, Yongqin
last_name: Xian
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Bernt
full_name: Schiele, Bernt
last_name: Schiele
- first_name: Zeynep
full_name: Akata, Zeynep
last_name: Akata
citation:
ama: Xian Y, Lampert C, Schiele B, Akata Z. Zero-shot learning - A comprehensive
evaluation of the good, the bad and the ugly. IEEE Transactions on Pattern
Analysis and Machine Intelligence. 2019;41(9):2251-2265. doi:10.1109/tpami.2018.2857768
apa: Xian, Y., Lampert, C., Schiele, B., & Akata, Z. (2019). Zero-shot learning
- A comprehensive evaluation of the good, the bad and the ugly. IEEE Transactions
on Pattern Analysis and Machine Intelligence. Institute of Electrical and
Electronics Engineers (IEEE). https://doi.org/10.1109/tpami.2018.2857768
chicago: Xian, Yongqin, Christoph Lampert, Bernt Schiele, and Zeynep Akata. “Zero-Shot
Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly.” IEEE
Transactions on Pattern Analysis and Machine Intelligence. Institute of Electrical
and Electronics Engineers (IEEE), 2019. https://doi.org/10.1109/tpami.2018.2857768.
ieee: Y. Xian, C. Lampert, B. Schiele, and Z. Akata, “Zero-shot learning - A comprehensive
evaluation of the good, the bad and the ugly,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 41, no. 9. Institute of Electrical
and Electronics Engineers (IEEE), pp. 2251–2265, 2019.
ista: Xian Y, Lampert C, Schiele B, Akata Z. 2019. Zero-shot learning - A comprehensive
evaluation of the good, the bad and the ugly. IEEE Transactions on Pattern Analysis
and Machine Intelligence. 41(9), 2251–2265.
mla: Xian, Yongqin, et al. “Zero-Shot Learning - A Comprehensive Evaluation of the
Good, the Bad and the Ugly.” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 41, no. 9, Institute of Electrical and Electronics Engineers
(IEEE), 2019, pp. 2251–65, doi:10.1109/tpami.2018.2857768.
short: Y. Xian, C. Lampert, B. Schiele, Z. Akata, IEEE Transactions on Pattern Analysis
and Machine Intelligence 41 (2019) 2251–2265.
date_created: 2019-06-11T14:05:59Z
date_published: 2019-09-01T00:00:00Z
date_updated: 2023-09-05T13:18:09Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/tpami.2018.2857768
external_id:
arxiv:
- '1707.00600'
isi:
- '000480343900015'
intvolume: ' 41'
isi: 1
issue: '9'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1707.00600
month: '09'
oa: 1
oa_version: Preprint
page: 2251 - 2265
publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
publication_identifier:
eissn:
- 1939-3539
issn:
- 0162-8828
publication_status: published
publisher: Institute of Electrical and Electronics Engineers (IEEE)
quality_controlled: '1'
scopus_import: '1'
status: public
title: Zero-shot learning - A comprehensive evaluation of the good, the bad and the
ugly
type: journal_article
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 41
year: '2019'
...
---
_id: '7479'
abstract:
- lang: eng
text: "Multi-exit architectures, in which a stack of processing layers is interleaved
with early output layers, allow the processing of a test example to stop early
and thus save computation time and/or energy. In this work, we propose a new
training procedure for multi-exit architectures based on the principle of knowledge
distillation. The method encourages early exits to mimic later, more accurate
exits, by matching their output probabilities.\r\nExperiments on CIFAR100 and
ImageNet show that distillation-based training significantly improves the accuracy
of early exits while maintaining state-of-the-art accuracy for late ones. The
method is particularly beneficial when training data is limited, and it allows
a straightforward extension to semi-supervised learning, i.e. making use of
unlabeled data at training time. Moreover, it takes only a few lines to implement
and incurs almost no computational overhead at training time, and none at all
at test time."
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Phuong M, Lampert C. Distillation-based training for multi-exit architectures.
In: IEEE International Conference on Computer Vision. Vol 2019-October.
IEEE; 2019:1355-1364. doi:10.1109/ICCV.2019.00144'
apa: 'Phuong, M., & Lampert, C. (2019). Distillation-based training for multi-exit
architectures. In IEEE International Conference on Computer Vision (Vol.
2019–October, pp. 1355–1364). Seoul, Korea: IEEE. https://doi.org/10.1109/ICCV.2019.00144'
chicago: Phuong, Mary, and Christoph Lampert. “Distillation-Based Training for Multi-Exit
Architectures.” In IEEE International Conference on Computer Vision, 2019–October:1355–64.
IEEE, 2019. https://doi.org/10.1109/ICCV.2019.00144.
ieee: M. Phuong and C. Lampert, “Distillation-based training for multi-exit architectures,”
in IEEE International Conference on Computer Vision, Seoul, Korea, 2019,
vol. 2019–October, pp. 1355–1364.
ista: 'Phuong M, Lampert C. 2019. Distillation-based training for multi-exit architectures.
IEEE International Conference on Computer Vision. ICCV: International Conference
on Computer Vision vol. 2019–October, 1355–1364.'
mla: Phuong, Mary, and Christoph Lampert. “Distillation-Based Training for Multi-Exit
Architectures.” IEEE International Conference on Computer Vision, vol.
2019–October, IEEE, 2019, pp. 1355–64, doi:10.1109/ICCV.2019.00144.
short: M. Phuong, C. Lampert, in:, IEEE International Conference on Computer Vision,
IEEE, 2019, pp. 1355–1364.
conference:
end_date: 2019-11-02
location: Seoul, Korea
name: 'ICCV: International Conference on Computer Vision'
start_date: 2019-10-27
date_created: 2020-02-11T09:06:57Z
date_published: 2019-10-01T00:00:00Z
date_updated: 2023-09-08T11:11:12Z
day: '01'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1109/ICCV.2019.00144
ec_funded: 1
external_id:
isi:
- '000531438101047'
file:
- access_level: open_access
checksum: 7b77fb5c2d27c4c37a7612ba46a66117
content_type: application/pdf
creator: bphuong
date_created: 2020-02-11T09:06:39Z
date_updated: 2020-07-14T12:47:59Z
file_id: '7480'
file_name: main.pdf
file_size: 735768
relation: main_file
file_date_updated: 2020-07-14T12:47:59Z
has_accepted_license: '1'
isi: 1
language:
- iso: eng
month: '10'
oa: 1
oa_version: Submitted Version
page: 1355-1364
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: IEEE International Conference on Computer Vision
publication_identifier:
isbn:
- '9781728148038'
issn:
- '15505499'
publication_status: published
publisher: IEEE
quality_controlled: '1'
related_material:
record:
- id: '9418'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: Distillation-based training for multi-exit architectures
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 2019-October
year: '2019'
...
---
_id: '7640'
abstract:
- lang: eng
text: We propose a new model for detecting visual relationships, such as "person
riding motorcycle" or "bottle on table". This task is an important step towards
comprehensive structured image understanding, going beyond detecting individual
objects. Our main novelty is a Box Attention mechanism that allows modeling pairwise
interactions between objects using standard object detection pipelines. The resulting
model is conceptually clean, expressive and relies on well-justified training
and prediction procedures. Moreover, unlike previously proposed approaches, our
model does not introduce any additional complex components or hyperparameters
on top of those already required by the underlying detection model. We conduct
an experimental evaluation on two datasets, V-COCO and Open Images, demonstrating
strong quantitative and qualitative results.
article_number: 1749-1753
article_processing_charge: No
author:
- first_name: Alexander
full_name: Kolesnikov, Alexander
id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
last_name: Kolesnikov
- first_name: Alina
full_name: Kuznetsova, Alina
last_name: Kuznetsova
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Vittorio
full_name: Ferrari, Vittorio
last_name: Ferrari
citation:
ama: 'Kolesnikov A, Kuznetsova A, Lampert C, Ferrari V. Detecting visual relationships
using box attention. In: Proceedings of the 2019 International Conference on
Computer Vision Workshop. IEEE; 2019. doi:10.1109/ICCVW.2019.00217'
apa: 'Kolesnikov, A., Kuznetsova, A., Lampert, C., & Ferrari, V. (2019). Detecting
visual relationships using box attention. In Proceedings of the 2019 International
Conference on Computer Vision Workshop. Seoul, South Korea: IEEE. https://doi.org/10.1109/ICCVW.2019.00217'
chicago: Kolesnikov, Alexander, Alina Kuznetsova, Christoph Lampert, and Vittorio
Ferrari. “Detecting Visual Relationships Using Box Attention.” In Proceedings
of the 2019 International Conference on Computer Vision Workshop. IEEE, 2019.
https://doi.org/10.1109/ICCVW.2019.00217.
ieee: A. Kolesnikov, A. Kuznetsova, C. Lampert, and V. Ferrari, “Detecting visual
relationships using box attention,” in Proceedings of the 2019 International
Conference on Computer Vision Workshop, Seoul, South Korea, 2019.
ista: 'Kolesnikov A, Kuznetsova A, Lampert C, Ferrari V. 2019. Detecting visual
relationships using box attention. Proceedings of the 2019 International Conference
on Computer Vision Workshop. ICCVW: International Conference on Computer Vision
Workshop, 1749–1753.'
mla: Kolesnikov, Alexander, et al. “Detecting Visual Relationships Using Box Attention.”
Proceedings of the 2019 International Conference on Computer Vision Workshop,
1749–1753, IEEE, 2019, doi:10.1109/ICCVW.2019.00217.
short: A. Kolesnikov, A. Kuznetsova, C. Lampert, V. Ferrari, in:, Proceedings of
the 2019 International Conference on Computer Vision Workshop, IEEE, 2019.
conference:
end_date: 2019-10-28
location: Seoul, South Korea
name: 'ICCVW: International Conference on Computer Vision Workshop'
start_date: 2019-10-27
date_created: 2020-04-05T22:00:51Z
date_published: 2019-10-01T00:00:00Z
date_updated: 2023-09-08T11:18:37Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/ICCVW.2019.00217
ec_funded: 1
external_id:
arxiv:
- '1807.02136'
isi:
- '000554591601098'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1807.02136
month: '10'
oa: 1
oa_version: Preprint
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: Proceedings of the 2019 International Conference on Computer Vision Workshop
publication_identifier:
isbn:
- '9781728150239'
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: Detecting visual relationships using box attention
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2019'
...
---
_id: '6569'
abstract:
- lang: eng
text: 'Knowledge distillation, i.e. one classifier being trained on the outputs
of another classifier, is an empirically very successful technique for knowledge
transfer between classifiers. It has even been observed that classifiers learn
much faster and more reliably if trained with the outputs of another classifier
as soft labels, instead of from ground truth data. So far, however, there is no
satisfactory theoretical explanation of this phenomenon. In this work, we provide
the first insights into the working mechanisms of distillation by studying the
special case of linear and deep linear classifiers. Specifically, we prove a
generalization bound that establishes fast convergence of the expected risk of
a distillation-trained linear classifier. From the bound and its proof we extract
three key factors that determine the success of distillation: data geometry – geometric
properties of the data distribution, in particular class separation, have an immediate
influence on the convergence speed of the risk; optimization bias – gradient descent
optimization finds a very favorable minimum of the distillation objective; and strong
monotonicity – the expected risk of the student classifier always decreases when
the size of the training set grows.'
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Phuong M, Lampert C. Towards understanding knowledge distillation. In: Proceedings
of the 36th International Conference on Machine Learning. Vol 97. ML Research
Press; 2019:5142-5151.'
apa: 'Phuong, M., & Lampert, C. (2019). Towards understanding knowledge distillation.
In Proceedings of the 36th International Conference on Machine Learning
(Vol. 97, pp. 5142–5151). Long Beach, CA, United States: ML Research Press.'
chicago: Phuong, Mary, and Christoph Lampert. “Towards Understanding Knowledge Distillation.”
In Proceedings of the 36th International Conference on Machine Learning,
97:5142–51. ML Research Press, 2019.
ieee: M. Phuong and C. Lampert, “Towards understanding knowledge distillation,”
in Proceedings of the 36th International Conference on Machine Learning,
Long Beach, CA, United States, 2019, vol. 97, pp. 5142–5151.
ista: 'Phuong M, Lampert C. 2019. Towards understanding knowledge distillation.
Proceedings of the 36th International Conference on Machine Learning. ICML: International
Conference on Machine Learning vol. 97, 5142–5151.'
mla: Phuong, Mary, and Christoph Lampert. “Towards Understanding Knowledge Distillation.”
Proceedings of the 36th International Conference on Machine Learning, vol.
97, ML Research Press, 2019, pp. 5142–51.
short: M. Phuong, C. Lampert, in:, Proceedings of the 36th International Conference
on Machine Learning, ML Research Press, 2019, pp. 5142–5151.
conference:
end_date: 2019-06-15
location: Long Beach, CA, United States
name: 'ICML: International Conference on Machine Learning'
start_date: 2019-06-10
date_created: 2019-06-20T18:23:03Z
date_published: 2019-06-13T00:00:00Z
date_updated: 2023-10-17T12:31:38Z
day: '13'
ddc:
- '000'
department:
- _id: ChLa
file:
- access_level: open_access
checksum: a66d00e2694d749250f8507f301320ca
content_type: application/pdf
creator: bphuong
date_created: 2019-06-20T18:22:56Z
date_updated: 2020-07-14T12:47:33Z
file_id: '6570'
file_name: paper.pdf
file_size: 686432
relation: main_file
file_date_updated: 2020-07-14T12:47:33Z
has_accepted_license: '1'
intvolume: ' 97'
language:
- iso: eng
month: '06'
oa: 1
oa_version: Published Version
page: 5142-5151
publication: Proceedings of the 36th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
scopus_import: '1'
status: public
title: Towards understanding knowledge distillation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 97
year: '2019'
...
---
_id: '6590'
abstract:
- lang: eng
text: 'Modern machine learning methods often require more data for training than
a single expert can provide. Therefore, it has become a standard procedure to
collect data from external sources, e.g. via crowdsourcing. Unfortunately, the
quality of these sources is not always guaranteed. As additional complications,
the data might be stored in a distributed way, or might even have to remain private.
In this work, we address the question of how to learn robustly in such scenarios.
Studying the problem through the lens of statistical learning theory, we derive
a procedure that allows for learning from all available sources, yet automatically
suppresses irrelevant or corrupted data. We show by extensive experiments that
our method provides significant improvements over alternative approaches from
robust statistics and distributed optimization.'
article_processing_charge: No
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Konstantinov NH, Lampert C. Robust learning from untrusted sources. In: Proceedings
of the 36th International Conference on Machine Learning. Vol 97. ML Research
Press; 2019:3488-3498.'
apa: 'Konstantinov, N. H., & Lampert, C. (2019). Robust learning from untrusted
sources. In Proceedings of the 36th International Conference on Machine Learning
(Vol. 97, pp. 3488–3498). Long Beach, CA, USA: ML Research Press.'
chicago: Konstantinov, Nikola H, and Christoph Lampert. “Robust Learning from Untrusted
Sources.” In Proceedings of the 36th International Conference on Machine Learning,
97:3488–98. ML Research Press, 2019.
ieee: N. H. Konstantinov and C. Lampert, “Robust learning from untrusted sources,”
in Proceedings of the 36th International Conference on Machine Learning,
Long Beach, CA, USA, 2019, vol. 97, pp. 3488–3498.
ista: 'Konstantinov NH, Lampert C. 2019. Robust learning from untrusted sources.
Proceedings of the 36th International Conference on Machine Learning. ICML: International
Conference on Machine Learning vol. 97, 3488–3498.'
mla: Konstantinov, Nikola H., and Christoph Lampert. “Robust Learning from Untrusted
Sources.” Proceedings of the 36th International Conference on Machine Learning,
vol. 97, ML Research Press, 2019, pp. 3488–98.
short: N.H. Konstantinov, C. Lampert, in:, Proceedings of the 36th International
Conference on Machine Learning, ML Research Press, 2019, pp. 3488–3498.
conference:
end_date: 2019-06-15
location: Long Beach, CA, USA
name: 'ICML: International Conference on Machine Learning'
start_date: 2019-06-10
date_created: 2019-06-27T14:18:23Z
date_published: 2019-06-01T00:00:00Z
date_updated: 2023-10-17T12:31:55Z
day: '01'
department:
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '1901.10310'
intvolume: ' 97'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1901.10310
month: '06'
oa: 1
oa_version: Preprint
page: 3488-3498
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
- _id: 2564DBCA-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '665385'
name: International IST Doctoral Program
publication: Proceedings of the 36th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
record:
- id: '10799'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: Robust learning from untrusted sources
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 97
year: '2019'
...
---
_id: '6482'
abstract:
- lang: eng
text: 'Computer vision systems for automatic image categorization have become accurate
and reliable enough that they can run continuously for days or even years as components
of real-world commercial applications. A major open problem in this context, however,
is quality control. Good classification performance can only be expected if systems
run under the specific conditions, in particular data distributions, that they
were trained for. Surprisingly, none of the currently used deep network architectures
have a built-in functionality that could detect if a network operates on data
from a distribution it was not trained for, so that a warning to the human users
could potentially be triggered. In this work, we describe KS(conf), a procedure
for detecting such outside of specifications (out-of-specs) operation, based on
statistical testing of the network outputs. We show by extensive experiments using
the ImageNet, AwA2 and DAVIS datasets on a variety of ConvNet architectures that
KS(conf) reliably detects out-of-specs situations. It furthermore has a number
of properties that make it a promising candidate for practical deployment: it
is easy to implement, adds almost no overhead to the system, works with all networks,
including pretrained ones, and requires no a priori knowledge of how the data
distribution could change.'
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Rémy
full_name: Sun, Rémy
last_name: Sun
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Sun R, Lampert C. KS(conf): A light-weight test if a ConvNet operates outside
of its specifications. In: Vol 11269. Springer Nature; 2019:244-259. doi:10.1007/978-3-030-12939-2_18'
apa: 'Sun, R., & Lampert, C. (2019). KS(conf): A light-weight test if a ConvNet
operates outside of its specifications (Vol. 11269, pp. 244–259). Presented at
the GCPR: Conference on Pattern Recognition, Stuttgart, Germany: Springer Nature.
https://doi.org/10.1007/978-3-030-12939-2_18'
chicago: 'Sun, Rémy, and Christoph Lampert. “KS(Conf): A Light-Weight Test If a
ConvNet Operates Outside of Its Specifications,” 11269:244–59. Springer Nature,
2019. https://doi.org/10.1007/978-3-030-12939-2_18.'
ieee: 'R. Sun and C. Lampert, “KS(conf): A light-weight test if a ConvNet operates
outside of its specifications,” presented at the GCPR: Conference on Pattern Recognition,
Stuttgart, Germany, 2019, vol. 11269, pp. 244–259.'
ista: 'Sun R, Lampert C. 2019. KS(conf): A light-weight test if a ConvNet operates
outside of its specifications. GCPR: Conference on Pattern Recognition, LNCS,
vol. 11269, 244–259.'
mla: 'Sun, Rémy, and Christoph Lampert. KS(Conf): A Light-Weight Test If a ConvNet
Operates Outside of Its Specifications. Vol. 11269, Springer Nature, 2019,
pp. 244–59, doi:10.1007/978-3-030-12939-2_18.'
short: R. Sun, C. Lampert, in:, Springer Nature, 2019, pp. 244–259.
conference:
end_date: 2018-10-12
location: Stuttgart, Germany
name: 'GCPR: Conference on Pattern Recognition'
start_date: 2018-10-09
date_created: 2019-05-24T09:48:36Z
date_published: 2019-02-14T00:00:00Z
date_updated: 2024-02-22T14:57:29Z
day: '14'
department:
- _id: ChLa
doi: 10.1007/978-3-030-12939-2_18
ec_funded: 1
external_id:
arxiv:
- '1804.04171'
intvolume: ' 11269'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1804.04171
month: '02'
oa: 1
oa_version: Preprint
page: 244-259
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_identifier:
eissn:
- 1611-3349
isbn:
- '9783030129385'
- '9783030129392'
issn:
- 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
record:
- id: '6944'
relation: later_version
status: public
scopus_import: '1'
status: public
title: 'KS(conf): A light-weight test if a ConvNet operates outside of its specifications'
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 11269
year: '2019'
...
---
_id: '68'
abstract:
- lang: eng
text: The most common assumption made in statistical learning theory is the assumption
of independent and identically distributed (i.i.d.) data. While being very convenient
mathematically, it is often clearly violated in practice. This disparity between
machine learning theory and applications underlies a growing demand for the development
of algorithms that learn from dependent data and for theory that can provide generalization
guarantees similar to those in the independent situation. This thesis is dedicated
to two variants of dependencies that can arise in practice. One is dependence at
the level of samples within a single learning task. The other dependency type arises
in the multi-task setting, when the tasks are dependent on each other even though
the data for them can be i.i.d. In both cases we model the data (samples or tasks)
as stochastic processes and introduce new algorithms for both settings that take
into account and exploit the resulting dependencies. We prove theoretical guarantees
on the performance of the introduced algorithms under different evaluation criteria
and, in addition, we complement the theoretical study with an empirical one, where
we evaluate some of the algorithms on two real-world datasets to highlight their
practical applicability.
alternative_title:
- ISTA Thesis
article_processing_charge: No
author:
- first_name: Alexander
full_name: Zimin, Alexander
id: 37099E9C-F248-11E8-B48F-1D18A9856A87
last_name: Zimin
citation:
ama: Zimin A. Learning from dependent data. 2018. doi:10.15479/AT:ISTA:TH1048
apa: Zimin, A. (2018). Learning from dependent data. Institute of Science
and Technology Austria. https://doi.org/10.15479/AT:ISTA:TH1048
chicago: Zimin, Alexander. “Learning from Dependent Data.” Institute of Science
and Technology Austria, 2018. https://doi.org/10.15479/AT:ISTA:TH1048.
ieee: A. Zimin, “Learning from dependent data,” Institute of Science and Technology
Austria, 2018.
ista: Zimin A. 2018. Learning from dependent data. Institute of Science and Technology
Austria.
mla: Zimin, Alexander. Learning from Dependent Data. Institute of Science
and Technology Austria, 2018, doi:10.15479/AT:ISTA:TH1048.
short: A. Zimin, Learning from Dependent Data, Institute of Science and Technology
Austria, 2018.
date_created: 2018-12-11T11:44:27Z
date_published: 2018-09-01T00:00:00Z
date_updated: 2023-09-07T12:29:07Z
day: '01'
ddc:
- '004'
- '519'
degree_awarded: PhD
department:
- _id: ChLa
doi: 10.15479/AT:ISTA:TH1048
ec_funded: 1
file:
- access_level: open_access
checksum: e849dd40a915e4d6c5572b51b517f098
content_type: application/pdf
creator: dernst
date_created: 2019-04-09T07:32:47Z
date_updated: 2020-07-14T12:47:40Z
file_id: '6253'
file_name: 2018_Thesis_Zimin.pdf
file_size: 1036137
relation: main_file
- access_level: closed
checksum: da092153cec55c97461bd53c45c5d139
content_type: application/zip
creator: dernst
date_created: 2019-04-09T07:32:47Z
date_updated: 2020-07-14T12:47:40Z
file_id: '6254'
file_name: 2018_Thesis_Zimin_Source.zip
file_size: 637490
relation: source_file
file_date_updated: 2020-07-14T12:47:40Z
has_accepted_license: '1'
language:
- iso: eng
month: '09'
oa: 1
oa_version: Published Version
page: '92'
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_identifier:
issn:
- 2663-337X
publication_status: published
publisher: Institute of Science and Technology Austria
publist_id: '7986'
pubrep_id: '1048'
status: public
supervisor:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
title: Learning from dependent data
type: dissertation
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2018'
...
---
_id: '197'
abstract:
- lang: eng
text: Modern computer vision systems heavily rely on statistical machine learning
models, which typically require large amounts of labeled data to be learned reliably.
Moreover, computer vision research has recently widely adopted techniques for
representation learning, which further increase the demand for labeled data. However,
for many important practical problems only a relatively small amount of labeled
data is available, so it is problematic to leverage the full potential of representation
learning methods. One way to overcome this obstacle is to invest substantial resources
into producing large labeled datasets. Unfortunately, this can be prohibitively
expensive in practice. In this thesis we focus on an alternative way of tackling
the aforementioned issue. We concentrate on methods that make use of weakly-labeled
or even unlabeled data. Specifically, the first half of the thesis is dedicated
to the semantic image segmentation task. We develop a technique, which achieves
competitive segmentation performance and only requires annotations in a form of
global image-level labels instead of dense segmentation masks. Subsequently, we
present a new methodology, which further improves segmentation performance by
leveraging tiny additional feedback from a human annotator. By using our methods
practitioners can greatly reduce the data annotation effort that is
required to learn modern image segmentation models. In the second half of the
thesis we focus on methods for learning from unlabeled visual data. We study a
family of autoregressive models for modeling the structure of natural images and discuss
potential applications of these models. Moreover, we conduct an in-depth study of
one of these applications, where we develop the state-of-the-art model for the
probabilistic image colorization task.
acknowledgement: I also gratefully acknowledge the support of NVIDIA Corporation with
the donation of the GPUs used for this research.
alternative_title:
- ISTA Thesis
article_processing_charge: No
author:
- first_name: Alexander
full_name: Kolesnikov, Alexander
id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
last_name: Kolesnikov
citation:
ama: Kolesnikov A. Weakly-Supervised Segmentation and Unsupervised Modeling of Natural
Images. 2018. doi:10.15479/AT:ISTA:th_1021
apa: Kolesnikov, A. (2018). Weakly-Supervised Segmentation and Unsupervised Modeling
of Natural Images. Institute of Science and Technology Austria. https://doi.org/10.15479/AT:ISTA:th_1021
chicago: Kolesnikov, Alexander. “Weakly-Supervised Segmentation and Unsupervised
Modeling of Natural Images.” Institute of Science and Technology Austria, 2018.
https://doi.org/10.15479/AT:ISTA:th_1021.
ieee: A. Kolesnikov, “Weakly-Supervised Segmentation and Unsupervised Modeling of
Natural Images,” Institute of Science and Technology Austria, 2018.
ista: Kolesnikov A. 2018. Weakly-Supervised Segmentation and Unsupervised Modeling
of Natural Images. Institute of Science and Technology Austria.
mla: Kolesnikov, Alexander. Weakly-Supervised Segmentation and Unsupervised Modeling
of Natural Images. Institute of Science and Technology Austria, 2018, doi:10.15479/AT:ISTA:th_1021.
short: A. Kolesnikov, Weakly-Supervised Segmentation and Unsupervised Modeling of
Natural Images, Institute of Science and Technology Austria, 2018.
date_created: 2018-12-11T11:45:09Z
date_published: 2018-05-25T00:00:00Z
date_updated: 2023-09-07T12:51:46Z
day: '25'
ddc:
- '004'
degree_awarded: PhD
department:
- _id: ChLa
doi: 10.15479/AT:ISTA:th_1021
ec_funded: 1
file:
- access_level: open_access
checksum: bc678e02468d8ebc39dc7267dfb0a1c4
content_type: application/pdf
creator: system
date_created: 2018-12-12T10:14:57Z
date_updated: 2020-07-14T12:45:22Z
file_id: '5113'
file_name: IST-2018-1021-v1+1_thesis-unsigned-pdfa.pdf
file_size: 12918758
relation: main_file
- access_level: closed
checksum: bc66973b086da5a043f1162dcfb1fde4
content_type: application/zip
creator: dernst
date_created: 2019-04-05T09:34:49Z
date_updated: 2020-07-14T12:45:22Z
file_id: '6225'
file_name: 2018_Thesis_Kolesnikov_source.zip
file_size: 55973760
relation: source_file
file_date_updated: 2020-07-14T12:45:22Z
has_accepted_license: '1'
language:
- iso: eng
month: '05'
oa: 1
oa_version: Published Version
page: '113'
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_identifier:
issn:
- 2663-337X
publication_status: published
publisher: Institute of Science and Technology Austria
publist_id: '7718'
pubrep_id: '1021'
status: public
supervisor:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
title: Weakly-Supervised Segmentation and Unsupervised Modeling of Natural Images
type: dissertation
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2018'
...
---
_id: '563'
abstract:
- lang: eng
text: "In continuous populations with local migration, nearby pairs of individuals
have on average more similar genotypes\r\nthan geographically well separated pairs.
A barrier to gene flow distorts this classical pattern of isolation by distance.
Genetic similarity is decreased for sample pairs on different sides of the barrier
and increased for pairs on the same side near the barrier. Here, we introduce
an inference scheme that utilizes this signal to detect and estimate the strength
of a linear barrier to gene flow in two dimensions. We use a diffusion approximation
to model the effects of a barrier on the geographical spread of ancestry backwards
in time. This approach allows us to calculate the chance of recent coalescence
and probability of identity by descent. We introduce an inference scheme that
fits these theoretical results to the geographical covariance structure of biallelic
genetic markers. It can estimate the strength of the barrier as well as several
demographic parameters. We investigate the power of our inference scheme to detect
barriers by applying it to a wide range of simulated data. We also showcase an
example application to an Antirrhinum majus (snapdragon) flower color hybrid zone,
where we do not detect any signal of a strong genome-wide barrier to gene flow."
article_processing_charge: No
author:
- first_name: Harald
full_name: Ringbauer, Harald
id: 417FCFF4-F248-11E8-B48F-1D18A9856A87
last_name: Ringbauer
orcid: 0000-0002-4884-9682
- first_name: Alexander
full_name: Kolesnikov, Alexander
id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
last_name: Kolesnikov
- first_name: David
full_name: Field, David
last_name: Field
- first_name: Nicholas H
full_name: Barton, Nicholas H
id: 4880FE40-F248-11E8-B48F-1D18A9856A87
last_name: Barton
orcid: 0000-0002-8548-5240
citation:
ama: Ringbauer H, Kolesnikov A, Field D, Barton NH. Estimating barriers to gene
flow from distorted isolation-by-distance patterns. Genetics. 2018;208(3):1231-1245.
doi:10.1534/genetics.117.300638
apa: Ringbauer, H., Kolesnikov, A., Field, D., & Barton, N. H. (2018). Estimating
barriers to gene flow from distorted isolation-by-distance patterns. Genetics.
Genetics Society of America. https://doi.org/10.1534/genetics.117.300638
chicago: Ringbauer, Harald, Alexander Kolesnikov, David Field, and Nicholas H Barton.
“Estimating Barriers to Gene Flow from Distorted Isolation-by-Distance Patterns.”
Genetics. Genetics Society of America, 2018. https://doi.org/10.1534/genetics.117.300638.
ieee: H. Ringbauer, A. Kolesnikov, D. Field, and N. H. Barton, “Estimating barriers
to gene flow from distorted isolation-by-distance patterns,” Genetics,
vol. 208, no. 3. Genetics Society of America, pp. 1231–1245, 2018.
ista: Ringbauer H, Kolesnikov A, Field D, Barton NH. 2018. Estimating barriers to
gene flow from distorted isolation-by-distance patterns. Genetics. 208(3), 1231–1245.
mla: Ringbauer, Harald, et al. “Estimating Barriers to Gene Flow from Distorted
Isolation-by-Distance Patterns.” Genetics, vol. 208, no. 3, Genetics Society
of America, 2018, pp. 1231–45, doi:10.1534/genetics.117.300638.
short: H. Ringbauer, A. Kolesnikov, D. Field, N.H. Barton, Genetics 208 (2018) 1231–1245.
date_created: 2018-12-11T11:47:12Z
date_published: 2018-03-01T00:00:00Z
date_updated: 2023-09-11T13:42:38Z
day: '01'
department:
- _id: NiBa
- _id: ChLa
doi: 10.1534/genetics.117.300638
external_id:
isi:
- '000426219600025'
intvolume: ' 208'
isi: 1
issue: '3'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://www.biorxiv.org/content/10.1101/205484v1
month: '03'
oa: 1
oa_version: Preprint
page: 1231-1245
publication: Genetics
publication_status: published
publisher: Genetics Society of America
publist_id: '7251'
quality_controlled: '1'
related_material:
record:
- id: '200'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: Estimating barriers to gene flow from distorted isolation-by-distance patterns
type: journal_article
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 208
year: '2018'
...
---
_id: '321'
abstract:
- lang: eng
text: The twelve papers in this special section focus on learning systems with shared
information for computer vision and multimedia communication analysis. In the
real world, a realistic setting for computer vision or multimedia recognition
problems is that we have some classes containing lots of training data and many
classes containing a small amount of training data. Therefore, how to use frequent
classes to help learn rare classes for which it is harder to collect the training
data is an open question. Learning with shared information is an emerging topic
in machine learning, computer vision and multimedia analysis. There are different
levels of components that can be shared during concept modeling and machine learning
stages, such as sharing generic object parts, sharing attributes, sharing transformations,
sharing regularization parameters and sharing training examples, etc. Regarding
the specific methods, multi-task learning, transfer learning and deep learning
can be seen as using different strategies to share information. These learning
with shared information methods are very effective in solving real-world large-scale
problems.
article_processing_charge: No
article_type: original
author:
- first_name: Trevor
full_name: Darrell, Trevor
last_name: Darrell
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Nico
full_name: Sebe, Nico
last_name: Sebe
- first_name: Ying
full_name: Wu, Ying
last_name: Wu
- first_name: Yan
full_name: Yan, Yan
last_name: Yan
citation:
ama: Darrell T, Lampert C, Sebe N, Wu Y, Yan Y. Guest editors’ introduction to the
special section on learning with Shared information for computer vision and multimedia
analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
2018;40(5):1029-1031. doi:10.1109/TPAMI.2018.2804998
apa: Darrell, T., Lampert, C., Sebe, N., Wu, Y., & Yan, Y. (2018). Guest editors’
introduction to the special section on learning with Shared information for computer
vision and multimedia analysis. IEEE Transactions on Pattern Analysis and Machine
Intelligence. IEEE. https://doi.org/10.1109/TPAMI.2018.2804998
chicago: Darrell, Trevor, Christoph Lampert, Nico Sebe, Ying Wu, and Yan Yan. “Guest
Editors’ Introduction to the Special Section on Learning with Shared Information
for Computer Vision and Multimedia Analysis.” IEEE Transactions on Pattern
Analysis and Machine Intelligence. IEEE, 2018. https://doi.org/10.1109/TPAMI.2018.2804998.
ieee: T. Darrell, C. Lampert, N. Sebe, Y. Wu, and Y. Yan, “Guest editors’ introduction
to the special section on learning with Shared information for computer vision
and multimedia analysis,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 40, no. 5. IEEE, pp. 1029–1031, 2018.
ista: Darrell T, Lampert C, Sebe N, Wu Y, Yan Y. 2018. Guest editors’ introduction
to the special section on learning with Shared information for computer vision
and multimedia analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
40(5), 1029–1031.
mla: Darrell, Trevor, et al. “Guest Editors’ Introduction to the Special Section
on Learning with Shared Information for Computer Vision and Multimedia Analysis.”
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40,
no. 5, IEEE, 2018, pp. 1029–31, doi:10.1109/TPAMI.2018.2804998.
short: T. Darrell, C. Lampert, N. Sebe, Y. Wu, Y. Yan, IEEE Transactions on Pattern
Analysis and Machine Intelligence 40 (2018) 1029–1031.
date_created: 2018-12-11T11:45:48Z
date_published: 2018-05-01T00:00:00Z
date_updated: 2023-09-11T14:07:54Z
day: '01'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1109/TPAMI.2018.2804998
external_id:
isi:
- '000428901200001'
file:
- access_level: open_access
checksum: b19c75da06faf3291a3ca47dfa50ef63
content_type: application/pdf
creator: dernst
date_created: 2020-05-14T12:50:48Z
date_updated: 2020-07-14T12:46:03Z
file_id: '7835'
file_name: 2018_IEEE_Darrell.pdf
file_size: 141724
relation: main_file
file_date_updated: 2020-07-14T12:46:03Z
has_accepted_license: '1'
intvolume: ' 40'
isi: 1
issue: '5'
language:
- iso: eng
month: '05'
oa: 1
oa_version: Published Version
page: 1029 - 1031
publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
publication_status: published
publisher: IEEE
publist_id: '7544'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Guest editors' introduction to the special section on learning with Shared
information for computer vision and multimedia analysis
type: journal_article
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 40
year: '2018'
...
---
_id: '10882'
abstract:
- lang: eng
text: 'We introduce Intelligent Annotation Dialogs for bounding box annotation.
We train an agent to automatically choose a sequence of actions for a human annotator
to produce a bounding box in a minimal amount of time. Specifically, we consider
two actions: box verification [34], where the annotator verifies a box generated
by an object detector, and manual box drawing. We explore two kinds of agents,
one based on predicting the probability that a box will be positively verified,
and the other based on reinforcement learning. We demonstrate that (1) our agents
are able to learn efficient annotation strategies in several scenarios, automatically
adapting to the image difficulty, the desired quality of the boxes, and the detector
strength; (2) in all scenarios the resulting annotation dialogs speed up annotation
compared to manual box drawing alone and box verification alone, while also outperforming
any fixed combination of verification and drawing in most scenarios; (3) in a
realistic scenario where the detector is iteratively re-trained, our agents evolve
a series of strategies that reflect the shifting trade-off between verification
and drawing as the detector grows stronger.'
article_processing_charge: No
author:
- first_name: Jasper
full_name: Uijlings, Jasper
last_name: Uijlings
- first_name: Ksenia
full_name: Konyushkova, Ksenia
last_name: Konyushkova
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Vittorio
full_name: Ferrari, Vittorio
last_name: Ferrari
citation:
ama: 'Uijlings J, Konyushkova K, Lampert C, Ferrari V. Learning intelligent dialogs
for bounding box annotation. In: 2018 IEEE/CVF Conference on Computer Vision
and Pattern Recognition. IEEE; 2018:9175-9184. doi:10.1109/cvpr.2018.00956'
apa: 'Uijlings, J., Konyushkova, K., Lampert, C., & Ferrari, V. (2018). Learning
intelligent dialogs for bounding box annotation. In 2018 IEEE/CVF Conference
on Computer Vision and Pattern Recognition (pp. 9175–9184). Salt Lake City,
UT, United States: IEEE. https://doi.org/10.1109/cvpr.2018.00956'
chicago: Uijlings, Jasper, Ksenia Konyushkova, Christoph Lampert, and Vittorio Ferrari.
“Learning Intelligent Dialogs for Bounding Box Annotation.” In 2018 IEEE/CVF
Conference on Computer Vision and Pattern Recognition, 9175–84. IEEE, 2018.
https://doi.org/10.1109/cvpr.2018.00956.
ieee: J. Uijlings, K. Konyushkova, C. Lampert, and V. Ferrari, “Learning intelligent
dialogs for bounding box annotation,” in 2018 IEEE/CVF Conference on Computer
Vision and Pattern Recognition, Salt Lake City, UT, United States, 2018, pp.
9175–9184.
ista: 'Uijlings J, Konyushkova K, Lampert C, Ferrari V. 2018. Learning intelligent
dialogs for bounding box annotation. 2018 IEEE/CVF Conference on Computer Vision
and Pattern Recognition. CVF: Conference on Computer Vision and Pattern Recognition,
9175–9184.'
mla: Uijlings, Jasper, et al. “Learning Intelligent Dialogs for Bounding Box Annotation.”
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE,
2018, pp. 9175–84, doi:10.1109/cvpr.2018.00956.
short: J. Uijlings, K. Konyushkova, C. Lampert, V. Ferrari, in:, 2018 IEEE/CVF Conference
on Computer Vision and Pattern Recognition, IEEE, 2018, pp. 9175–9184.
conference:
end_date: 2018-06-23
location: Salt Lake City, UT, United States
name: 'CVF: Conference on Computer Vision and Pattern Recognition'
start_date: 2018-06-18
date_created: 2022-03-18T12:45:09Z
date_published: 2018-12-17T00:00:00Z
date_updated: 2023-09-19T15:11:49Z
day: '17'
department:
- _id: ChLa
doi: 10.1109/cvpr.2018.00956
external_id:
arxiv:
- '1712.08087'
isi:
- '000457843609036'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.1712.08087
month: '12'
oa: 1
oa_version: Preprint
page: 9175-9184
publication: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
publication_identifier:
eissn:
- 2575-7075
isbn:
- '9781538664209'
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: Learning intelligent dialogs for bounding box annotation
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2018'
...
---
_id: '6012'
abstract:
- lang: eng
text: We present an approach to identify concise equations from data using a shallow
neural network approach. In contrast to ordinary black-box regression, this approach
allows understanding functional relations and generalizing them from observed
data to unseen parts of the parameter space. We show how to extend the class of
learnable equations for a recently proposed equation learning network to include
divisions, and we improve the learning and model selection strategy to be useful
for challenging real-world data. For systems governed by analytical expressions,
our method can in many cases identify the true underlying equation and extrapolate
to unseen domains. We demonstrate its effectiveness by experiments on a cart-pendulum
system, where only 2 random rollouts are required to learn the forward dynamics
and successfully achieve the swing-up task.
article_processing_charge: No
author:
- first_name: Subham
full_name: Sahoo, Subham
last_name: Sahoo
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Georg S
full_name: Martius, Georg S
id: 3A276B68-F248-11E8-B48F-1D18A9856A87
last_name: Martius
citation:
ama: 'Sahoo S, Lampert C, Martius GS. Learning equations for extrapolation and control.
In: Proceedings of the 35th International Conference on Machine Learning.
Vol 80. ML Research Press; 2018:4442-4450.'
apa: 'Sahoo, S., Lampert, C., & Martius, G. S. (2018). Learning equations for
extrapolation and control. In Proceedings of the 35th International Conference
on Machine Learning (Vol. 80, pp. 4442–4450). Stockholm, Sweden: ML Research
Press.'
chicago: Sahoo, Subham, Christoph Lampert, and Georg S Martius. “Learning Equations
for Extrapolation and Control.” In Proceedings of the 35th International Conference
on Machine Learning, 80:4442–50. ML Research Press, 2018.
ieee: S. Sahoo, C. Lampert, and G. S. Martius, “Learning equations for extrapolation
and control,” in Proceedings of the 35th International Conference on Machine
Learning, Stockholm, Sweden, 2018, vol. 80, pp. 4442–4450.
ista: 'Sahoo S, Lampert C, Martius GS. 2018. Learning equations for extrapolation
and control. Proceedings of the 35th International Conference on Machine Learning.
ICML: International Conference on Machine Learning vol. 80, 4442–4450.'
mla: Sahoo, Subham, et al. “Learning Equations for Extrapolation and Control.” Proceedings
of the 35th International Conference on Machine Learning, vol. 80, ML Research
Press, 2018, pp. 4442–50.
short: S. Sahoo, C. Lampert, G.S. Martius, in:, Proceedings of the 35th International
Conference on Machine Learning, ML Research Press, 2018, pp. 4442–4450.
conference:
end_date: 2018-07-15
location: Stockholm, Sweden
name: 'ICML: International Conference on Machine Learning'
start_date: 2018-07-10
date_created: 2019-02-14T15:21:07Z
date_published: 2018-02-01T00:00:00Z
date_updated: 2023-10-17T09:50:53Z
day: '01'
department:
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '1806.07259'
isi:
- '000683379204058'
intvolume: ' 80'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1806.07259
month: '02'
oa: 1
oa_version: Preprint
page: 4442-4450
project:
- _id: 25681D80-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '291734'
name: International IST Postdoc Fellowship Programme
publication: Proceedings of the 35th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
link:
- description: News on IST Homepage
relation: press_release
url: https://ist.ac.at/en/news/first-machine-learning-method-capable-of-accurate-extrapolation/
scopus_import: '1'
status: public
title: Learning equations for extrapolation and control
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 80
year: '2018'
...
---
_id: '6011'
abstract:
- lang: eng
text: 'We establish a data-dependent notion of algorithmic stability for Stochastic
Gradient Descent (SGD), and employ it to develop novel generalization bounds.
This is in contrast to previous distribution-free algorithmic stability results
for SGD which depend on the worst-case constants. By virtue of the data-dependent
argument, our bounds provide new insights into learning with SGD on convex and
non-convex problems. In the convex case, we show that the bound on the generalization
error depends on the risk at the initialization point. In the non-convex case,
we prove that the expected curvature of the objective function around the initialization
point has crucial influence on the generalization error. In both cases, our results
suggest a simple data-driven strategy to stabilize SGD by pre-screening its initialization.
As a corollary, our results allow us to show optimistic generalization bounds
that exhibit fast convergence rates for SGD subject to a vanishing empirical risk
and low noise of the stochastic gradient.'
article_processing_charge: No
author:
- first_name: Ilja
full_name: Kuzborskij, Ilja
last_name: Kuzborskij
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Kuzborskij I, Lampert C. Data-dependent stability of stochastic gradient descent.
In: Proceedings of the 35th International Conference on Machine Learning.
Vol 80. ML Research Press; 2018:2815-2824.'
apa: 'Kuzborskij, I., & Lampert, C. (2018). Data-dependent stability of stochastic
gradient descent. In Proceedings of the 35th International Conference on Machine
Learning (Vol. 80, pp. 2815–2824). Stockholm, Sweden: ML Research Press.'
chicago: Kuzborskij, Ilja, and Christoph Lampert. “Data-Dependent Stability of Stochastic
Gradient Descent.” In Proceedings of the 35th International Conference on
Machine Learning, 80:2815–24. ML Research Press, 2018.
ieee: I. Kuzborskij and C. Lampert, “Data-dependent stability of stochastic gradient
descent,” in Proceedings of the 35th International Conference on Machine Learning,
Stockholm, Sweden, 2018, vol. 80, pp. 2815–2824.
ista: 'Kuzborskij I, Lampert C. 2018. Data-dependent stability of stochastic gradient
descent. Proceedings of the 35th International Conference on Machine Learning.
ICML: International Conference on Machine Learning vol. 80, 2815–2824.'
mla: Kuzborskij, Ilja, and Christoph Lampert. “Data-Dependent Stability of Stochastic
Gradient Descent.” Proceedings of the 35th International Conference on Machine
Learning, vol. 80, ML Research Press, 2018, pp. 2815–24.
short: I. Kuzborskij, C. Lampert, in:, Proceedings of the 35th International Conference
on Machine Learning, ML Research Press, 2018, pp. 2815–2824.
conference:
end_date: 2018-07-15
location: Stockholm, Sweden
name: 'ICML: International Conference on Machine Learning'
start_date: 2018-07-10
date_created: 2019-02-14T14:51:57Z
date_published: 2018-02-01T00:00:00Z
date_updated: 2023-10-17T09:51:13Z
day: '01'
department:
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '1703.01678'
isi:
- '000683379202095'
intvolume: ' 80'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1703.01678
month: '02'
oa: 1
oa_version: Preprint
page: 2815-2824
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: Proceedings of the 35th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
scopus_import: '1'
status: public
title: Data-dependent stability of stochastic gradient descent
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 80
year: '2018'
...
---
_id: '6589'
abstract:
- lang: eng
text: Distributed training of massive machine learning models, in particular deep
neural networks, via Stochastic Gradient Descent (SGD) is becoming commonplace.
Several families of communication-reduction methods, such as quantization, large-batch
methods, and gradient sparsification, have been proposed. To date, gradient sparsification
methods--where each node sorts gradients by magnitude, and only communicates a
subset of the components, accumulating the rest locally--are known to yield some
of the largest practical gains. Such methods can reduce the amount of communication
per step by up to three orders of magnitude, while preserving model accuracy.
Yet, this family of methods currently has no theoretical justification. This is
the question we address in this paper. We prove that, under analytic assumptions,
sparsifying gradients by magnitude with local error correction provides convergence
guarantees, for both convex and non-convex smooth objectives, for data-parallel
SGD. The main insight is that sparsification methods implicitly maintain bounds
on the maximum impact of stale updates, thanks to selection by magnitude. Our
analysis and empirical validation also reveal that these methods do require analytical
conditions to converge well, justifying existing heuristics.
article_processing_charge: No
author:
- first_name: Dan-Adrian
full_name: Alistarh, Dan-Adrian
id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
last_name: Alistarh
orcid: 0000-0003-3650-940X
- first_name: Torsten
full_name: Hoefler, Torsten
last_name: Hoefler
- first_name: Mikael
full_name: Johansson, Mikael
last_name: Johansson
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Sarit
full_name: Khirirat, Sarit
last_name: Khirirat
- first_name: Cedric
full_name: Renggli, Cedric
last_name: Renggli
citation:
ama: 'Alistarh D-A, Hoefler T, Johansson M, Konstantinov NH, Khirirat S, Renggli
C. The convergence of sparsified gradient methods. In: Advances in Neural Information
Processing Systems 31. Vol Volume 2018. Neural Information Processing Systems
Foundation; 2018:5973-5983.'
apa: 'Alistarh, D.-A., Hoefler, T., Johansson, M., Konstantinov, N. H., Khirirat,
S., & Renggli, C. (2018). The convergence of sparsified gradient methods.
In Advances in Neural Information Processing Systems 31 (Vol. Volume 2018,
pp. 5973–5983). Montreal, Canada: Neural Information Processing Systems Foundation.'
chicago: Alistarh, Dan-Adrian, Torsten Hoefler, Mikael Johansson, Nikola H Konstantinov,
Sarit Khirirat, and Cedric Renggli. “The Convergence of Sparsified Gradient Methods.”
In Advances in Neural Information Processing Systems 31, Volume 2018:5973–83.
Neural Information Processing Systems Foundation, 2018.
ieee: D.-A. Alistarh, T. Hoefler, M. Johansson, N. H. Konstantinov, S. Khirirat,
and C. Renggli, “The convergence of sparsified gradient methods,” in Advances
in Neural Information Processing Systems 31, Montreal, Canada, 2018, vol.
Volume 2018, pp. 5973–5983.
ista: 'Alistarh D-A, Hoefler T, Johansson M, Konstantinov NH, Khirirat S, Renggli
C. 2018. The convergence of sparsified gradient methods. Advances in Neural Information
Processing Systems 31. NeurIPS: Conference on Neural Information Processing Systems
vol. Volume 2018, 5973–5983.'
mla: Alistarh, Dan-Adrian, et al. “The Convergence of Sparsified Gradient Methods.”
Advances in Neural Information Processing Systems 31, vol. Volume 2018,
Neural Information Processing Systems Foundation, 2018, pp. 5973–83.
short: D.-A. Alistarh, T. Hoefler, M. Johansson, N.H. Konstantinov, S. Khirirat,
C. Renggli, in:, Advances in Neural Information Processing Systems 31, Neural
Information Processing Systems Foundation, 2018, pp. 5973–5983.
conference:
end_date: 2018-12-08
location: Montreal, Canada
name: 'NeurIPS: Conference on Neural Information Processing Systems'
start_date: 2018-12-02
date_created: 2019-06-27T09:32:55Z
date_published: 2018-12-01T00:00:00Z
date_updated: 2023-10-17T11:47:20Z
day: '01'
department:
- _id: DaAl
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '1809.10505'
isi:
- '000461852000047'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1809.10505
month: '12'
oa: 1
oa_version: Preprint
page: 5973-5983
project:
- _id: 2564DBCA-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '665385'
name: International IST Doctoral Program
publication: Advances in Neural Information Processing Systems 31
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
scopus_import: '1'
status: public
title: The convergence of sparsified gradient methods
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: Volume 2018
year: '2018'
...
---
_id: '5584'
abstract:
- lang: eng
text: "This package contains data for the publication \"Nonlinear decoding of a
complex movie from the mammalian retina\" by Deny S. et al, PLOS Comput Biol (2018).
\r\n\r\nThe data consists of\r\n(i) 91 spike sorted, isolated rat retinal ganglion
cells that pass stability and quality criteria, recorded on the multi-electrode
array, in response to the presentation of the complex movie with many randomly
moving dark discs. The responses are represented as 648000 x 91 binary matrix,
where the first index indicates the timebin of duration 12.5 ms, and the second
index the neural identity. The matrix entry is 0/1 if the neuron didn't/did spike
in the particular time bin.\r\n(ii) README file and a graphical illustration of
the structure of the experiment, specifying how the 648000 timebins are split
into epochs where 1, 2, 4, or 10 discs were displayed, and which stimulus segments
are exact repeats or unique ball trajectories.\r\n(iii) a 648000 x 400 matrix
of luminance traces for each of the 20 x 20 positions (\"sites\") in the movie
frame, with time that is locked to the recorded raster. The luminance traces are
produced as described in the manuscript by filtering the raw disc movie with a
small Gaussian spatial kernel."
article_processing_charge: No
author:
- first_name: Stephane
full_name: Deny, Stephane
last_name: Deny
- first_name: Olivier
full_name: Marre, Olivier
last_name: Marre
- first_name: Vicente
full_name: Botella-Soler, Vicente
last_name: Botella-Soler
- first_name: Georg S
full_name: Martius, Georg S
id: 3A276B68-F248-11E8-B48F-1D18A9856A87
last_name: Martius
- first_name: Gasper
full_name: Tkacik, Gasper
id: 3D494DCA-F248-11E8-B48F-1D18A9856A87
last_name: Tkacik
orcid: 0000-0002-6699-1455
citation:
ama: Deny S, Marre O, Botella-Soler V, Martius GS, Tkačik G. Nonlinear decoding
of a complex movie from the mammalian retina. 2018. doi:10.15479/AT:ISTA:98
apa: Deny, S., Marre, O., Botella-Soler, V., Martius, G. S., & Tkačik, G. (2018).
Nonlinear decoding of a complex movie from the mammalian retina. Institute of
Science and Technology Austria. https://doi.org/10.15479/AT:ISTA:98
chicago: Deny, Stephane, Olivier Marre, Vicente Botella-Soler, Georg S Martius,
and Gašper Tkačik. “Nonlinear Decoding of a Complex Movie from the Mammalian Retina.”
Institute of Science and Technology Austria, 2018. https://doi.org/10.15479/AT:ISTA:98.
ieee: S. Deny, O. Marre, V. Botella-Soler, G. S. Martius, and G. Tkačik, “Nonlinear
decoding of a complex movie from the mammalian retina.” Institute of Science and
Technology Austria, 2018.
ista: Deny S, Marre O, Botella-Soler V, Martius GS, Tkačik G. 2018. Nonlinear decoding
of a complex movie from the mammalian retina, Institute of Science and Technology
Austria, 10.15479/AT:ISTA:98.
mla: Deny, Stephane, et al. Nonlinear Decoding of a Complex Movie from the Mammalian
Retina. Institute of Science and Technology Austria, 2018, doi:10.15479/AT:ISTA:98.
short: S. Deny, O. Marre, V. Botella-Soler, G.S. Martius, G. Tkačik, (2018).
datarep_id: '98'
date_created: 2018-12-12T12:31:39Z
date_published: 2018-03-29T00:00:00Z
date_updated: 2024-02-21T13:45:26Z
day: '29'
ddc:
- '570'
department:
- _id: ChLa
- _id: GaTk
doi: 10.15479/AT:ISTA:98
file:
- access_level: open_access
checksum: 6808748837b9afbbbabc2a356ca2b88a
content_type: application/octet-stream
creator: system
date_created: 2018-12-12T13:02:24Z
date_updated: 2020-07-14T12:47:07Z
file_id: '5590'
file_name: IST-2018-98-v1+1_BBalls_area2_tile2_20x20.mat
file_size: 1142543971
relation: main_file
- access_level: open_access
checksum: d6d6cd07743038fe3a12352983fcf9dd
content_type: application/pdf
creator: system
date_created: 2018-12-12T13:02:25Z
date_updated: 2020-07-14T12:47:07Z
file_id: '5591'
file_name: IST-2018-98-v1+2_ExperimentStructure.pdf
file_size: 702336
relation: main_file
- access_level: open_access
checksum: 0c9cfb4dab35bb3dc25a04395600b1c8
content_type: application/octet-stream
creator: system
date_created: 2018-12-12T13:02:26Z
date_updated: 2020-07-14T12:47:07Z
file_id: '5592'
file_name: IST-2018-98-v1+3_GoodLocations_area2_20x20.mat
file_size: 432
relation: main_file
- access_level: open_access
checksum: 2a83b011012e21e934b4596285b1a183
content_type: text/plain
creator: system
date_created: 2018-12-12T13:02:26Z
date_updated: 2020-07-14T12:47:07Z
file_id: '5593'
file_name: IST-2018-98-v1+4_README.txt
file_size: 986
relation: main_file
file_date_updated: 2020-07-14T12:47:07Z
has_accepted_license: '1'
keyword:
- retina
- decoding
- regression
- neural networks
- complex stimulus
license: https://creativecommons.org/publicdomain/zero/1.0/
month: '03'
oa: 1
oa_version: Published Version
project:
- _id: 254D1A94-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: P 25651-N26
name: Sensitivity to higher-order statistics in natural scenes
publisher: Institute of Science and Technology Austria
related_material:
record:
- id: '292'
relation: used_in_publication
status: public
status: public
title: Nonlinear decoding of a complex movie from the mammalian retina
tmp:
image: /images/cc_0.png
legal_code_url: https://creativecommons.org/publicdomain/zero/1.0/legalcode
name: Creative Commons Public Domain Dedication (CC0 1.0)
short: CC0 (1.0)
type: research_data
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2018'
...
---
_id: '652'
abstract:
- lang: eng
text: 'We present an approach that enables robots to self-organize their sensorimotor
behavior from scratch without providing specific information about either the
robot or its environment. This is achieved by a simple neural control law that
increases the consistency between external sensor dynamics and internal neural
dynamics of the utterly simple controller. In this way, the embodiment and the
agent-environment coupling are the only source of individual development. We show
how an anthropomorphic tendon driven arm-shoulder system develops different behaviors
depending on that coupling. For instance: Given a bottle half-filled with water,
the arm starts to shake it, driven by the physical response of the water. When
attaching a brush, the arm can be manipulated into wiping a table, and when connected
to a revolvable wheel it finds out how to rotate it. Thus, the robot may be said
to discover the affordances of the world. When allowing two (simulated) humanoid
robots to interact physically, they engage in a joint behavior development leading
to, for instance, spontaneous cooperation. More social effects are observed if
the robots can visually perceive each other. Although, as an observer, it is tempting
to attribute an apparent intentionality, there is nothing of the kind put in.
As a conclusion, we argue that emergent behavior may be much less rooted in explicit
intentions, internal motivations, or specific reward systems than is commonly
believed.'
article_number: '7846789'
author:
- first_name: Ralf
full_name: Der, Ralf
last_name: Der
- first_name: Georg S
full_name: Martius, Georg S
id: 3A276B68-F248-11E8-B48F-1D18A9856A87
last_name: Martius
citation:
ama: 'Der R, Martius GS. Dynamical self consistency leads to behavioral development
and emergent social interactions in robots. In: IEEE; 2017. doi:10.1109/DEVLRN.2016.7846789'
apa: 'Der, R., & Martius, G. S. (2017). Dynamical self consistency leads to
behavioral development and emergent social interactions in robots. Presented at
the ICDL EpiRob: International Conference on Development and Learning and Epigenetic
Robotics , Cergy-Pontoise, France: IEEE. https://doi.org/10.1109/DEVLRN.2016.7846789'
chicago: Der, Ralf, and Georg S Martius. “Dynamical Self Consistency Leads to Behavioral
Development and Emergent Social Interactions in Robots.” IEEE, 2017. https://doi.org/10.1109/DEVLRN.2016.7846789.
ieee: 'R. Der and G. S. Martius, “Dynamical self consistency leads to behavioral
development and emergent social interactions in robots,” presented at the ICDL
EpiRob: International Conference on Development and Learning and Epigenetic Robotics
, Cergy-Pontoise, France, 2017.'
ista: 'Der R, Martius GS. 2017. Dynamical self consistency leads to behavioral development
and emergent social interactions in robots. ICDL EpiRob: International Conference
on Development and Learning and Epigenetic Robotics , 7846789.'
mla: Der, Ralf, and Georg S. Martius. Dynamical Self Consistency Leads to Behavioral
Development and Emergent Social Interactions in Robots. 7846789, IEEE, 2017,
doi:10.1109/DEVLRN.2016.7846789.
short: R. Der, G.S. Martius, in:, IEEE, 2017.
conference:
end_date: 2016-09-22
location: Cergy-Pontoise, France
name: 'ICDL EpiRob: International Conference on Development and Learning and Epigenetic
Robotics '
start_date: 2016-09-19
date_created: 2018-12-11T11:47:43Z
date_published: 2017-02-07T00:00:00Z
date_updated: 2021-01-12T08:07:51Z
day: '07'
department:
- _id: ChLa
- _id: GaTk
doi: 10.1109/DEVLRN.2016.7846789
language:
- iso: eng
month: '02'
oa_version: None
publication_identifier:
isbn:
- 978-150905069-7
publication_status: published
publisher: IEEE
publist_id: '7100'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Dynamical self consistency leads to behavioral development and emergent social
interactions in robots
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
year: '2017'
...
---
_id: '658'
abstract:
- lang: eng
text: 'With the accelerated development of robot technologies, control becomes one
of the central themes of research. In traditional approaches, the controller,
by its internal functionality, finds appropriate actions on the basis of specific
objectives for the task at hand. While traditional approaches are very successful
in many applications, self-organized control schemes seem to be favored in large,
complex systems with unknown dynamics or that are difficult to model. Reasons are
the expected scalability, robustness,
and resilience of self-organizing systems. The paper presents a self-learning
neurocontroller based on extrinsic differential plasticity introduced recently,
applying it to an anthropomorphic musculoskeletal robot arm with attached objects
of unknown physical dynamics. The central finding of the paper is the following
effect: by the mere feedback through the internal dynamics of the object, the
robot is learning to relate each of the objects with a very specific sensorimotor
pattern. Specifically, an attached pendulum pilots the arm into a circular motion,
a half-filled bottle produces axis oriented shaking behavior, a wheel is getting
rotated, and wiping patterns emerge automatically in a table-plus-brush setting.
By these object-specific dynamical patterns, the robot may be said to recognize
the object''s identity, or in other words, it discovers dynamical affordances
of objects. Furthermore, when including hand coordinates obtained from a camera,
a dedicated hand-eye coordination self-organizes spontaneously. These phenomena
are discussed from a specific dynamical system perspective. Central is the dedicated
working regime at the border to instability with its potentially infinite reservoir
of (limit cycle) attractors "waiting" to be excited. Besides converging
toward one of these attractors, varied behavior also arises from a self-induced
attractor morphing driven by the learning rule. We claim that experimental investigations
with this anthropomorphic, self-learning robot not only generate interesting and
potentially useful behaviors, but may also help to better understand what subjective
human muscle feelings are, how they can be rooted in sensorimotor patterns, and
how these concepts may feed back on robotics.'
article_number: '00008'
article_processing_charge: Yes
author:
- first_name: Ralf
full_name: Der, Ralf
last_name: Der
- first_name: Georg S
full_name: Martius, Georg S
id: 3A276B68-F248-11E8-B48F-1D18A9856A87
last_name: Martius
citation:
ama: Der R, Martius GS. Self organized behavior generation for musculoskeletal robots.
Frontiers in Neurorobotics. 2017;11(MAR). doi:10.3389/fnbot.2017.00008
apa: Der, R., & Martius, G. S. (2017). Self organized behavior generation for
musculoskeletal robots. Frontiers in Neurorobotics. Frontiers Research
Foundation. https://doi.org/10.3389/fnbot.2017.00008
chicago: Der, Ralf, and Georg S Martius. “Self Organized Behavior Generation for
Musculoskeletal Robots.” Frontiers in Neurorobotics. Frontiers Research
Foundation, 2017. https://doi.org/10.3389/fnbot.2017.00008.
ieee: R. Der and G. S. Martius, “Self organized behavior generation for musculoskeletal
robots,” Frontiers in Neurorobotics, vol. 11, no. MAR. Frontiers Research
Foundation, 2017.
ista: Der R, Martius GS. 2017. Self organized behavior generation for musculoskeletal
robots. Frontiers in Neurorobotics. 11(MAR), 00008.
mla: Der, Ralf, and Georg S. Martius. “Self Organized Behavior Generation for Musculoskeletal
Robots.” Frontiers in Neurorobotics, vol. 11, no. MAR, 00008, Frontiers
Research Foundation, 2017, doi:10.3389/fnbot.2017.00008.
short: R. Der, G.S. Martius, Frontiers in Neurorobotics 11 (2017).
date_created: 2018-12-11T11:47:45Z
date_published: 2017-03-16T00:00:00Z
date_updated: 2021-01-12T08:08:04Z
day: '16'
ddc:
- '006'
department:
- _id: ChLa
- _id: GaTk
doi: 10.3389/fnbot.2017.00008
ec_funded: 1
file:
- access_level: open_access
checksum: b1bc43f96d1df3313c03032c2a46388d
content_type: application/pdf
creator: system
date_created: 2018-12-12T10:18:49Z
date_updated: 2020-07-14T12:47:33Z
file_id: '5371'
file_name: IST-2017-903-v1+1_fnbot-11-00008.pdf
file_size: 8439566
relation: main_file
file_date_updated: 2020-07-14T12:47:33Z
has_accepted_license: '1'
intvolume: ' 11'
issue: MAR
language:
- iso: eng
month: '03'
oa: 1
oa_version: Published Version
project:
- _id: 25681D80-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '291734'
name: International IST Postdoc Fellowship Programme
publication: Frontiers in Neurorobotics
publication_identifier:
issn:
- '16625218'
publication_status: published
publisher: Frontiers Research Foundation
publist_id: '7078'
pubrep_id: '903'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Self organized behavior generation for musculoskeletal robots
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: journal_article
user_id: 2EBD1598-F248-11E8-B48F-1D18A9856A87
volume: 11
year: '2017'
...
---
_id: '6841'
abstract:
- lang: eng
text: In classical machine learning, regression is treated as a black box process
of identifying a suitable function from a hypothesis set without attempting to
gain insight into the mechanism connecting inputs and outputs. In the natural
sciences, however, finding an interpretable function for a phenomenon is the prime
goal, as it allows one to understand and generalize results. This paper proposes a
novel type of function learning network, called equation learner (EQL), that can
learn analytical expressions and is able to extrapolate to unseen domains. It
is implemented as an end-to-end differentiable feed-forward network and allows
for efficient gradient-based training. Due to sparsity regularization, concise
interpretable expressions can be obtained. Often the true underlying source expression
is identified.
author:
- first_name: Georg S
full_name: Martius, Georg S
id: 3A276B68-F248-11E8-B48F-1D18A9856A87
last_name: Martius
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Martius GS, Lampert C. Extrapolation and learning equations. In: 5th International
Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings.
International Conference on Learning Representations; 2017.'
apa: 'Martius, G. S., & Lampert, C. (2017). Extrapolation and learning equations.
In 5th International Conference on Learning Representations, ICLR 2017 - Workshop
Track Proceedings. Toulon, France: International Conference on Learning Representations.'
chicago: Martius, Georg S, and Christoph Lampert. “Extrapolation and Learning Equations.”
In 5th International Conference on Learning Representations, ICLR 2017 - Workshop
Track Proceedings. International Conference on Learning Representations, 2017.
ieee: G. S. Martius and C. Lampert, “Extrapolation and learning equations,” in 5th
International Conference on Learning Representations, ICLR 2017 - Workshop Track
Proceedings, Toulon, France, 2017.
ista: 'Martius GS, Lampert C. 2017. Extrapolation and learning equations. 5th International
Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings.
ICLR: International Conference on Learning Representations.'
mla: Martius, Georg S., and Christoph Lampert. “Extrapolation and Learning Equations.”
5th International Conference on Learning Representations, ICLR 2017 - Workshop
Track Proceedings, International Conference on Learning Representations, 2017.
short: G.S. Martius, C. Lampert, in:, 5th International Conference on Learning Representations,
ICLR 2017 - Workshop Track Proceedings, International Conference on Learning Representations,
2017.
conference:
end_date: 2017-04-26
location: Toulon, France
name: 'ICLR: International Conference on Learning Representations'
start_date: 2017-04-24
date_created: 2019-09-01T22:01:00Z
date_published: 2017-02-21T00:00:00Z
date_updated: 2021-01-12T08:09:17Z
day: '21'
department:
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '1610.02995'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1610.02995
month: '02'
oa: 1
oa_version: Preprint
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: 5th International Conference on Learning Representations, ICLR 2017 -
Workshop Track Proceedings
publication_status: published
publisher: International Conference on Learning Representations
quality_controlled: '1'
scopus_import: '1'
status: public
title: Extrapolation and learning equations
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
year: '2017'
...
---
_id: '750'
abstract:
- lang: eng
text: Modern communication technologies allow first responders to contact thousands
of potential volunteers simultaneously for support during a crisis or disaster
event. However, such volunteer efforts must be well coordinated and monitored,
in order to offer an effective relief to the professionals. In this paper we extend
earlier work on optimally assigning volunteers to selected landmark locations.
In particular, we emphasize the aspect that obtaining good assignments requires
not only advanced computational tools, but also a realistic measure of distance
between volunteers and landmarks. Specifically, we propose the use of the Open
Street Map (OSM) driving distance instead of the previously used flight distance.
We find the OSM driving distance to be better aligned with the interests of volunteers
and first responders. Furthermore, we show that relying on the flight distance
leads to a substantial underestimation of the number of required volunteers, causing
negative side effects in case of an actual crisis situation.
author:
- first_name: Jasmin
full_name: Pielorz, Jasmin
id: 49BC895A-F248-11E8-B48F-1D18A9856A87
last_name: Pielorz
- first_name: Matthias
full_name: Prandtstetter, Matthias
last_name: Prandtstetter
- first_name: Markus
full_name: Straub, Markus
last_name: Straub
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Pielorz J, Prandtstetter M, Straub M, Lampert C. Optimal geospatial volunteer
allocation needs realistic distances. In: 2017 IEEE International Conference
on Big Data. IEEE; 2017:3760-3763. doi:10.1109/BigData.2017.8258375'
apa: 'Pielorz, J., Prandtstetter, M., Straub, M., & Lampert, C. (2017). Optimal
geospatial volunteer allocation needs realistic distances. In 2017 IEEE International
Conference on Big Data (pp. 3760–3763). Boston, MA, United States: IEEE. https://doi.org/10.1109/BigData.2017.8258375'
chicago: Pielorz, Jasmin, Matthias Prandtstetter, Markus Straub, and Christoph Lampert.
“Optimal Geospatial Volunteer Allocation Needs Realistic Distances.” In 2017
IEEE International Conference on Big Data, 3760–63. IEEE, 2017. https://doi.org/10.1109/BigData.2017.8258375.
ieee: J. Pielorz, M. Prandtstetter, M. Straub, and C. Lampert, “Optimal geospatial
volunteer allocation needs realistic distances,” in 2017 IEEE International
Conference on Big Data, Boston, MA, United States, 2017, pp. 3760–3763.
ista: Pielorz J, Prandtstetter M, Straub M, Lampert C. 2017. Optimal geospatial
volunteer allocation needs realistic distances. 2017 IEEE International Conference
on Big Data. Big Data, 3760–3763.
mla: Pielorz, Jasmin, et al. “Optimal Geospatial Volunteer Allocation Needs Realistic
Distances.” 2017 IEEE International Conference on Big Data, IEEE, 2017,
pp. 3760–63, doi:10.1109/BigData.2017.8258375.
short: J. Pielorz, M. Prandtstetter, M. Straub, C. Lampert, in:, 2017 IEEE International
Conference on Big Data, IEEE, 2017, pp. 3760–3763.
conference:
end_date: 2017-12-14
location: Boston, MA, United States
name: Big Data
start_date: 2017-12-11
date_created: 2018-12-11T11:48:18Z
date_published: 2017-12-01T00:00:00Z
date_updated: 2021-01-12T08:13:55Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/BigData.2017.8258375
language:
- iso: eng
month: '12'
oa_version: None
page: 3760 - 3763
publication: 2017 IEEE International Conference on Big Data
publication_identifier:
isbn:
- 978-153862714-3
publication_status: published
publisher: IEEE
publist_id: '6906'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Optimal geospatial volunteer allocation needs realistic distances
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2017'
...