---
_id: '13053'
abstract:
- lang: eng
text: 'Deep neural networks (DNNs) often have to be compressed, via pruning and/or
quantization, before they can be deployed in practical settings. In this work
we propose a new compression-aware minimizer dubbed CrAM that modifies the optimization
step in a principled way, in order to produce models whose local loss behavior
is stable under compression operations such as pruning. Thus, dense models trained
via CrAM should be compressible post-training, in a single step, without significant
accuracy loss. Experimental results on standard benchmarks, such as residual networks
for ImageNet classification and BERT models for language modelling, show that
CrAM produces dense models that can be more accurate than the standard SGD/Adam-based
baselines, but which are stable under weight pruning: specifically, we can prune
models in one-shot to 70-80% sparsity with almost no accuracy loss, and to 90%
with reasonable (∼1%) accuracy loss, which is competitive with gradual compression
methods. Additionally, CrAM can produce sparse models which perform well for transfer
learning, and it also works for semi-structured 2:4 pruning patterns supported
by GPU hardware. The code for reproducing the results is available at this https
URL.'
acknowledged_ssus:
- _id: ScienComp
acknowledgement: "AP, EK, DA received funding from the European Research Council (ERC)
under the European\r\nUnion’s Horizon 2020 research and innovation programme (grant
agreement No 805223 ScaleML). AV acknowledges the support of the French Agence Nationale
de la Recherche (ANR), under grant ANR-21-CE48-0016 (project COMCOPT). We further
acknowledge the support from the Scientific Service Units (SSU) of ISTA through
resources provided by Scientific Computing (SciComp)."
article_processing_charge: No
author:
- first_name: Elena-Alexandra
full_name: Peste, Elena-Alexandra
id: 32D78294-F248-11E8-B48F-1D18A9856A87
last_name: Peste
- first_name: Adrian
full_name: Vladu, Adrian
last_name: Vladu
- first_name: Eldar
full_name: Kurtic, Eldar
id: 47beb3a5-07b5-11eb-9b87-b108ec578218
last_name: Kurtic
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Dan-Adrian
full_name: Alistarh, Dan-Adrian
id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
last_name: Alistarh
orcid: 0000-0003-3650-940X
citation:
ama: 'Peste E-A, Vladu A, Kurtic E, Lampert C, Alistarh D-A. CrAM: A Compression-Aware
Minimizer. In: 11th International Conference on Learning Representations.'
apa: 'Peste, E.-A., Vladu, A., Kurtic, E., Lampert, C., & Alistarh, D.-A. (n.d.).
CrAM: A Compression-Aware Minimizer. In 11th International Conference on Learning
Representations. Kigali, Rwanda.'
chicago: 'Peste, Elena-Alexandra, Adrian Vladu, Eldar Kurtic, Christoph Lampert,
and Dan-Adrian Alistarh. “CrAM: A Compression-Aware Minimizer.” In 11th International
Conference on Learning Representations, n.d.'
ieee: 'E.-A. Peste, A. Vladu, E. Kurtic, C. Lampert, and D.-A. Alistarh, “CrAM:
A Compression-Aware Minimizer,” in 11th International Conference on Learning
Representations, Kigali, Rwanda.'
ista: 'Peste E-A, Vladu A, Kurtic E, Lampert C, Alistarh D-A. CrAM: A Compression-Aware
Minimizer. 11th International Conference on Learning Representations. ICLR: International
Conference on Learning Representations.'
mla: 'Peste, Elena-Alexandra, et al. “CrAM: A Compression-Aware Minimizer.” 11th
International Conference on Learning Representations.'
short: E.-A. Peste, A. Vladu, E. Kurtic, C. Lampert, D.-A. Alistarh, in:, 11th International
Conference on Learning Representations, n.d.
conference:
end_date: 2023-05-05
location: 'Kigali, Rwanda'
name: 'ICLR: International Conference on Learning Representations'
start_date: 2023-05-01
date_created: 2023-05-23T11:36:18Z
date_published: 2023-05-01T00:00:00Z
date_updated: 2023-06-01T12:54:45Z
department:
- _id: GradSch
- _id: DaAl
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '2207.14200'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://openreview.net/pdf?id=_eTZBs-yedr
month: '05'
oa: 1
oa_version: Preprint
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '805223'
name: Elastic Coordination for Scalable Machine Learning
publication: '11th International Conference on Learning Representations'
publication_status: accepted
quality_controlled: '1'
related_material:
record:
- id: '13074'
relation: dissertation_contains
status: public
status: public
title: 'CrAM: A Compression-Aware Minimizer'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14410'
abstract:
- lang: eng
text: This paper focuses on the implementation details of the baseline methods and
a recent lightweight conditional model extrapolation algorithm LIMES [5] for streaming
data under class-prior shift. LIMES achieves superior performance over the baseline
methods, especially concerning the minimum-across-day accuracy, which is important
for the users of the system. In this work, the key measures to facilitate reproducibility
and enhance the credibility of the results are described.
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Paulina
full_name: Tomaszewska, Paulina
last_name: Tomaszewska
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Tomaszewska P, Lampert C. On the implementation of baselines and lightweight
conditional model extrapolation (LIMES) under class-prior shift. In: International
Workshop on Reproducible Research in Pattern Recognition. Vol 14068. Springer
Nature; 2023:67-73. doi:10.1007/978-3-031-40773-4_6'
apa: 'Tomaszewska, P., & Lampert, C. (2023). On the implementation of baselines
and lightweight conditional model extrapolation (LIMES) under class-prior shift.
In International Workshop on Reproducible Research in Pattern Recognition
(Vol. 14068, pp. 67–73). Montreal, Canada: Springer Nature. https://doi.org/10.1007/978-3-031-40773-4_6'
chicago: Tomaszewska, Paulina, and Christoph Lampert. “On the Implementation of Baselines
and Lightweight Conditional Model Extrapolation (LIMES) under Class-Prior Shift.”
In International Workshop on Reproducible Research in Pattern Recognition,
14068:67–73. Springer Nature, 2023. https://doi.org/10.1007/978-3-031-40773-4_6.
ieee: P. Tomaszewska and C. Lampert, “On the implementation of baselines and lightweight
conditional model extrapolation (LIMES) under class-prior shift,” in International
Workshop on Reproducible Research in Pattern Recognition, Montreal, Canada,
2023, vol. 14068, pp. 67–73.
ista: 'Tomaszewska P, Lampert C. 2023. On the implementation of baselines and lightweight
conditional model extrapolation (LIMES) under class-prior shift. International
Workshop on Reproducible Research in Pattern Recognition. RRPR: Reproducible Research
in Pattern Recognition, LNCS, vol. 14068, 67–73.'
mla: Tomaszewska, Paulina, and Christoph Lampert. “On the Implementation of Baselines
and Lightweight Conditional Model Extrapolation (LIMES) under Class-Prior Shift.”
International Workshop on Reproducible Research in Pattern Recognition,
vol. 14068, Springer Nature, 2023, pp. 67–73, doi:10.1007/978-3-031-40773-4_6.
short: P. Tomaszewska, C. Lampert, in:, International Workshop on Reproducible Research
in Pattern Recognition, Springer Nature, 2023, pp. 67–73.
conference:
end_date: 2022-08-21
location: Montreal, Canada
name: 'RRPR: Reproducible Research in Pattern Recognition'
start_date: 2022-08-21
date_created: 2023-10-08T22:01:18Z
date_published: 2023-08-20T00:00:00Z
date_updated: 2023-10-09T06:48:02Z
day: '20'
department:
- _id: ChLa
doi: 10.1007/978-3-031-40773-4_6
intvolume: ' 14068'
language:
- iso: eng
month: '08'
oa_version: None
page: 67-73
publication: International Workshop on Reproducible Research in Pattern Recognition
publication_identifier:
eissn:
- 1611-3349
isbn:
- '9783031407727'
issn:
- 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: On the implementation of baselines and lightweight conditional model extrapolation
(LIMES) under class-prior shift
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 14068
year: '2023'
...
---
_id: '14921'
abstract:
- lang: eng
text: Neural collapse (NC) refers to the surprising structure of the last layer
of deep neural networks in the terminal phase of gradient descent training. Recently,
an increasing amount of experimental evidence has pointed to the propagation of
NC to earlier layers of neural networks. However, while the NC in the last layer
is well studied theoretically, much less is known about its multi-layered counterpart
- deep neural collapse (DNC). In particular, existing work focuses either on linear
layers or only on the last two layers at the price of an extra assumption. Our
paper fills this gap by generalizing the established analytical framework for
NC - the unconstrained features model - to multiple non-linear layers. Our key
technical contribution is to show that, in a deep unconstrained features model,
the unique global optimum for binary classification exhibits all the properties
typical of DNC. This explains the existing experimental evidence of DNC. We also
empirically show that (i) by optimizing deep unconstrained features models via
gradient descent, the resulting solution agrees well with our theory, and (ii)
trained networks recover the unconstrained features suitable for the occurrence
of DNC, thus supporting the validity of this modeling principle.
acknowledgement: M. M. is partially supported by the 2019 Lopez-Loreta Prize. The
authors would like to thank Eugenia Iofinova, Bernd Prach and Simone Bombari for
valuable feedback on the manuscript.
alternative_title:
- NeurIPS
article_processing_charge: No
author:
- first_name: Peter
full_name: Súkeník, Peter
id: d64d6a8d-eb8e-11eb-b029-96fd216dec3c
last_name: Súkeník
- first_name: Marco
full_name: Mondelli, Marco
id: 27EB676C-8706-11E9-9510-7717E6697425
last_name: Mondelli
orcid: 0000-0002-3242-7020
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Súkeník P, Mondelli M, Lampert C. Deep neural collapse is provably optimal
for the deep unconstrained features model. In: 37th Annual Conference on Neural
Information Processing Systems.'
apa: Súkeník, P., Mondelli, M., & Lampert, C. (n.d.). Deep neural collapse is
provably optimal for the deep unconstrained features model. In 37th Annual
Conference on Neural Information Processing Systems. New Orleans, LA, United
States.
chicago: Súkeník, Peter, Marco Mondelli, and Christoph Lampert. “Deep Neural Collapse
Is Provably Optimal for the Deep Unconstrained Features Model.” In 37th Annual
Conference on Neural Information Processing Systems, n.d.
ieee: P. Súkeník, M. Mondelli, and C. Lampert, “Deep neural collapse is provably
optimal for the deep unconstrained features model,” in 37th Annual Conference
on Neural Information Processing Systems, New Orleans, LA, United States.
ista: 'Súkeník P, Mondelli M, Lampert C. Deep neural collapse is provably optimal
for the deep unconstrained features model. 37th Annual Conference on Neural Information
Processing Systems. NeurIPS: Neural Information Processing Systems, NeurIPS.'
mla: Súkeník, Peter, et al. “Deep Neural Collapse Is Provably Optimal for the Deep
Unconstrained Features Model.” 37th Annual Conference on Neural Information
Processing Systems.
short: P. Súkeník, M. Mondelli, C. Lampert, in:, 37th Annual Conference on Neural
Information Processing Systems, n.d.
conference:
end_date: 2023-12-16
location: New Orleans, LA, United States
name: 'NeurIPS: Neural Information Processing Systems'
start_date: 2023-12-10
date_created: 2024-02-02T11:17:41Z
date_published: 2023-12-15T00:00:00Z
date_updated: 2024-02-06T07:53:26Z
day: '15'
department:
- _id: MaMo
- _id: ChLa
external_id:
arxiv:
- '2305.13165'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2305.13165
month: '12'
oa: 1
oa_version: Preprint
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: 37th Annual Conference on Neural Information Processing Systems
publication_status: inpress
quality_controlled: '1'
status: public
title: Deep neural collapse is provably optimal for the deep unconstrained features
model
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '15039'
abstract:
- lang: eng
text: 'A crucial property for achieving secure, trustworthy and interpretable deep
learning systems is their robustness: small changes to a system''s inputs should
not result in large changes to its outputs. Mathematically, this means one strives
for networks with a small Lipschitz constant. Several recent works have focused
on how to construct such Lipschitz networks, typically by imposing constraints
on the weight matrices. In this work, we study an orthogonal aspect, namely the
role of the activation function. We show that commonly used activation functions,
such as MaxMin, as well as all piece-wise linear ones with two segments, unnecessarily
restrict the class of representable functions, even in the simplest one-dimensional
setting. We furthermore introduce the new N-activation function that is provably
more expressive than currently popular activation functions. We provide code at
this https URL.'
article_number: '2311.06103'
article_processing_charge: No
author:
- first_name: Bernd
full_name: Prach, Bernd
id: 2D561D42-C427-11E9-89B4-9C1AE6697425
last_name: Prach
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Prach B, Lampert C. 1-Lipschitz neural networks are more expressive with N-activations.
arXiv. doi:10.48550/ARXIV.2311.06103
apa: Prach, B., & Lampert, C. (n.d.). 1-Lipschitz neural networks are more expressive
with N-activations. arXiv. https://doi.org/10.48550/ARXIV.2311.06103
chicago: Prach, Bernd, and Christoph Lampert. “1-Lipschitz Neural Networks Are More
Expressive with N-Activations.” ArXiv, n.d. https://doi.org/10.48550/ARXIV.2311.06103.
ieee: B. Prach and C. Lampert, “1-Lipschitz neural networks are more expressive
with N-activations,” arXiv.
ista: Prach B, Lampert C. 1-Lipschitz neural networks are more expressive with N-activations.
arXiv, 2311.06103.
mla: Prach, Bernd, and Christoph Lampert. “1-Lipschitz Neural Networks Are More
Expressive with N-Activations.” ArXiv, 2311.06103, doi:10.48550/ARXIV.2311.06103.
short: B. Prach, C. Lampert, ArXiv (n.d.).
date_created: 2024-02-28T17:59:32Z
date_published: 2023-11-10T00:00:00Z
date_updated: 2024-03-04T07:02:39Z
day: '10'
department:
- _id: GradSch
- _id: ChLa
doi: 10.48550/ARXIV.2311.06103
external_id:
arxiv:
- '2311.06103'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2311.06103
month: '11'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: 1-Lipschitz neural networks are more expressive with N-activations
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '12660'
abstract:
- lang: eng
text: 'We present Cross-Client Label Propagation(XCLP), a new method for transductive
federated learning. XCLP estimates a data graph jointly from the data of multiple
clients and computes labels for the unlabeled data by propagating label information
across the graph. To avoid clients having to share their data with anyone, XCLP
employs two cryptographically secure protocols: secure Hamming distance computation
and secure summation. We demonstrate two distinct applications of XCLP within
federated learning. In the first, we use it in a one-shot way to predict labels
for unseen test points. In the second, we use it to repeatedly pseudo-label unlabeled
training data in a federated semi-supervised setting. Experiments on both real
federated and standard benchmark datasets show that in both applications XCLP
achieves higher classification accuracy than alternative approaches.'
article_number: '2210.06434'
article_processing_charge: No
author:
- first_name: Jonathan A
full_name: Scott, Jonathan A
id: e499926b-f6e0-11ea-865d-9c63db0031e8
last_name: Scott
- first_name: Michelle X
full_name: Yeo, Michelle X
id: 2D82B818-F248-11E8-B48F-1D18A9856A87
last_name: Yeo
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Scott JA, Yeo MX, Lampert C. Cross-client Label Propagation for transductive
federated learning. arXiv. doi:10.48550/arXiv.2210.06434
apa: Scott, J. A., Yeo, M. X., & Lampert, C. (n.d.). Cross-client Label Propagation
for transductive federated learning. arXiv. https://doi.org/10.48550/arXiv.2210.06434
chicago: Scott, Jonathan A, Michelle X Yeo, and Christoph Lampert. “Cross-Client
Label Propagation for Transductive Federated Learning.” ArXiv, n.d. https://doi.org/10.48550/arXiv.2210.06434.
ieee: J. A. Scott, M. X. Yeo, and C. Lampert, “Cross-client Label Propagation for
transductive federated learning,” arXiv.
ista: Scott JA, Yeo MX, Lampert C. Cross-client Label Propagation for transductive
federated learning. arXiv, 2210.06434.
mla: Scott, Jonathan A., et al. “Cross-Client Label Propagation for Transductive
Federated Learning.” ArXiv, 2210.06434, doi:10.48550/arXiv.2210.06434.
short: J.A. Scott, M.X. Yeo, C. Lampert, ArXiv (n.d.).
date_created: 2023-02-20T08:21:50Z
date_published: 2022-10-12T00:00:00Z
date_updated: 2023-02-21T08:20:18Z
day: '12'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.48550/arXiv.2210.06434
external_id:
arxiv:
- '2210.06434'
file:
- access_level: open_access
checksum: 7ab20543fd4393f14fb857ce2e4f03c6
content_type: application/pdf
creator: chl
date_created: 2023-02-20T08:21:35Z
date_updated: 2023-02-20T08:21:35Z
file_id: '12661'
file_name: 2210.06434.pdf
file_size: 291893
relation: main_file
success: 1
file_date_updated: 2023-02-20T08:21:35Z
has_accepted_license: '1'
language:
- iso: eng
month: '10'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Cross-client Label Propagation for transductive federated learning
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '12662'
abstract:
- lang: eng
text: 'Modern machine learning tasks often require considering not just one but
multiple objectives. For example, besides the prediction quality, this could be
the efficiency, robustness or fairness of the learned models, or any of their
combinations. Multi-objective learning offers a natural framework for handling
such problems without having to commit to early trade-offs. Surprisingly, statistical
learning theory so far offers almost no insight into the generalization properties
of multi-objective learning. In this work, we make first steps to fill this gap:
we establish foundational generalization bounds for the multi-objective setting
as well as generalization and excess bounds for learning with scalarizations.
We also provide the first theoretical analysis of the relation between the Pareto-optimal
sets of the true objectives and the Pareto-optimal sets of their empirical approximations
from training data. In particular, we show a surprising asymmetry: all Pareto-optimal
solutions can be approximated by empirically Pareto-optimal ones, but not vice
versa.'
article_number: '2208.13499'
article_processing_charge: No
author:
- first_name: Peter
full_name: Súkeník, Peter
id: d64d6a8d-eb8e-11eb-b029-96fd216dec3c
last_name: Súkeník
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Súkeník P, Lampert C. Generalization in Multi-objective machine learning. arXiv.
doi:10.48550/arXiv.2208.13499
apa: Súkeník, P., & Lampert, C. (n.d.). Generalization in Multi-objective machine
learning. arXiv. https://doi.org/10.48550/arXiv.2208.13499
chicago: Súkeník, Peter, and Christoph Lampert. “Generalization in Multi-Objective
Machine Learning.” ArXiv, n.d. https://doi.org/10.48550/arXiv.2208.13499.
ieee: P. Súkeník and C. Lampert, “Generalization in Multi-objective machine learning,”
arXiv.
ista: Súkeník P, Lampert C. Generalization in Multi-objective machine learning.
arXiv, 2208.13499.
mla: Súkeník, Peter, and Christoph Lampert. “Generalization in Multi-Objective Machine
Learning.” ArXiv, 2208.13499, doi:10.48550/arXiv.2208.13499.
short: P. Súkeník, C. Lampert, ArXiv (n.d.).
date_created: 2023-02-20T08:23:06Z
date_published: 2022-08-29T00:00:00Z
date_updated: 2023-02-21T08:24:55Z
day: '29'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.48550/arXiv.2208.13499
external_id:
arxiv:
- '2208.13499'
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2208.13499
month: '08'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Generalization in Multi-objective machine learning
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '12495'
abstract:
- lang: eng
text: "Fairness-aware learning aims at constructing classifiers that not only make
accurate predictions, but also do not discriminate against specific groups. It
is a fast-growing area of\r\nmachine learning with far-reaching societal impact.
However, existing fair learning methods\r\nare vulnerable to accidental or malicious
artifacts in the training data, which can cause\r\nthem to unknowingly produce
unfair classifiers. In this work we address the problem of\r\nfair learning from
unreliable training data in the robust multisource setting, where the\r\navailable
training data comes from multiple sources, a fraction of which might not be representative
of the true data distribution. We introduce FLEA, a filtering-based algorithm\r\nthat
identifies and suppresses those data sources that would have a negative impact
on\r\nfairness or accuracy if they were used for training. As such, FLEA is not
a replacement of\r\nprior fairness-aware learning methods but rather an augmentation
that makes any of them\r\nrobust against unreliable training data. We show the
effectiveness of our approach by a\r\ndiverse range of experiments on multiple
datasets. Additionally, we prove formally that\r\n–given enough data– FLEA protects
the learner against corruptions as long as the fraction of\r\naffected data sources
is less than half. Our source code and documentation are available at\r\nhttps://github.com/ISTAustria-CVML/FLEA."
acknowledged_ssus:
- _id: ScienComp
acknowledgement: 'The authors would like to thank Bernd Prach, Elias Frantar, Alexandra
Peste, Mahdi Nikdan, and Peter Súkeník for their helpful feedback. This research
was supported by the Scientific Service Units (SSU) of IST Austria through resources
provided by Scientific Computing (SciComp). This publication was made possible by
an ETH AI Center postdoctoral fellowship granted to Nikola Konstantinov. Eugenia
Iofinova was supported in part by the FWF DK VGSCO, grant agreement number W1260-N35.'
article_processing_charge: No
article_type: original
author:
- first_name: Eugenia B
full_name: Iofinova, Eugenia B
id: f9a17499-f6e0-11ea-865d-fdf9a3f77117
last_name: Iofinova
orcid: 0000-0002-7778-3221
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Iofinova EB, Konstantinov NH, Lampert C. FLEA: Provably robust fair multisource
learning from unreliable training data. Transactions on Machine Learning Research.
2022.'
apa: 'Iofinova, E. B., Konstantinov, N. H., & Lampert, C. (2022). FLEA: Provably
robust fair multisource learning from unreliable training data. Transactions
on Machine Learning Research. ML Research Press.'
chicago: 'Iofinova, Eugenia B, Nikola H Konstantinov, and Christoph Lampert. “FLEA:
Provably Robust Fair Multisource Learning from Unreliable Training Data.” Transactions
on Machine Learning Research. ML Research Press, 2022.'
ieee: 'E. B. Iofinova, N. H. Konstantinov, and C. Lampert, “FLEA: Provably robust
fair multisource learning from unreliable training data,” Transactions on Machine
Learning Research. ML Research Press, 2022.'
ista: 'Iofinova EB, Konstantinov NH, Lampert C. 2022. FLEA: Provably robust fair
multisource learning from unreliable training data. Transactions on Machine Learning
Research.'
mla: 'Iofinova, Eugenia B., et al. “FLEA: Provably Robust Fair Multisource Learning
from Unreliable Training Data.” Transactions on Machine Learning Research,
ML Research Press, 2022.'
short: E.B. Iofinova, N.H. Konstantinov, C. Lampert, Transactions on Machine Learning
Research (2022).
date_created: 2023-02-02T20:29:57Z
date_published: 2022-12-22T00:00:00Z
date_updated: 2023-02-23T10:30:54Z
day: '22'
ddc:
- '000'
department:
- _id: ChLa
external_id:
arxiv:
- '2106.11732'
file:
- access_level: open_access
checksum: 97c8a8470759cab597abb973ca137a3b
content_type: application/pdf
creator: dernst
date_created: 2023-02-23T10:30:04Z
date_updated: 2023-02-23T10:30:04Z
file_id: '12673'
file_name: 2022_TMLR_Iofinova.pdf
file_size: 1948063
relation: main_file
success: 1
file_date_updated: 2023-02-23T10:30:04Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://openreview.net/forum?id=XsPopigZXV
month: '12'
oa: 1
oa_version: Published Version
project:
- _id: 9B9290DE-BA93-11EA-9121-9846C619BF3A
grant_number: 'W1260-N35'
name: Vienna Graduate School on Computational Optimization
publication: Transactions on Machine Learning Research
publication_identifier:
issn:
- 2835-8856
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
link:
- description: source code
relation: software
url: https://github.com/ISTAustria-CVML/FLEA
status: public
title: 'FLEA: Provably robust fair multisource learning from unreliable training data'
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '11839'
abstract:
- lang: eng
text: "It is a highly desirable property for deep networks to be robust against\r\nsmall
input changes. One popular way to achieve this property is by designing\r\nnetworks
with a small Lipschitz constant. In this work, we propose a new\r\ntechnique for
constructing such Lipschitz networks that has a number of\r\ndesirable properties:
it can be applied to any linear network layer\r\n(fully-connected or convolutional),
it provides formal guarantees on the\r\nLipschitz constant, it is easy to implement
and efficient to run, and it can be\r\ncombined with any training objective and
optimization method. In fact, our\r\ntechnique is the first one in the literature
that achieves all of these\r\nproperties simultaneously. Our main contribution
is a rescaling-based weight\r\nmatrix parametrization that guarantees each network
layer to have a Lipschitz\r\nconstant of at most 1 and results in the learned
weight matrices to be close to\r\northogonal. Hence we call such layers almost-orthogonal
Lipschitz (AOL).\r\nExperiments and ablation studies in the context of image classification
with\r\ncertified robust accuracy confirm that AOL layers achieve results that
are on\r\npar with most existing methods. Yet, they are simpler to implement and
more\r\nbroadly applicable, because they do not require computationally expensive\r\nmatrix
orthogonalization or inversion steps as part of the network\r\narchitecture. We
provide code at https://github.com/berndprach/AOL."
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Bernd
full_name: Prach, Bernd
id: 2D561D42-C427-11E9-89B4-9C1AE6697425
last_name: Prach
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Prach B, Lampert C. Almost-orthogonal layers for efficient general-purpose
Lipschitz networks. In: Computer Vision – ECCV 2022. Vol 13681. Springer
Nature; 2022:350-365. doi:10.1007/978-3-031-19803-8_21'
apa: 'Prach, B., & Lampert, C. (2022). Almost-orthogonal layers for efficient
general-purpose Lipschitz networks. In Computer Vision – ECCV 2022 (Vol.
13681, pp. 350–365). Tel Aviv, Israel: Springer Nature. https://doi.org/10.1007/978-3-031-19803-8_21'
chicago: Prach, Bernd, and Christoph Lampert. “Almost-Orthogonal Layers for Efficient
General-Purpose Lipschitz Networks.” In Computer Vision – ECCV 2022, 13681:350–65.
Springer Nature, 2022. https://doi.org/10.1007/978-3-031-19803-8_21.
ieee: B. Prach and C. Lampert, “Almost-orthogonal layers for efficient general-purpose
Lipschitz networks,” in Computer Vision – ECCV 2022, Tel Aviv, Israel,
2022, vol. 13681, pp. 350–365.
ista: 'Prach B, Lampert C. 2022. Almost-orthogonal layers for efficient general-purpose
Lipschitz networks. Computer Vision – ECCV 2022. ECCV: European Conference on
Computer Vision, LNCS, vol. 13681, 350–365.'
mla: Prach, Bernd, and Christoph Lampert. “Almost-Orthogonal Layers for Efficient
General-Purpose Lipschitz Networks.” Computer Vision – ECCV 2022, vol.
13681, Springer Nature, 2022, pp. 350–65, doi:10.1007/978-3-031-19803-8_21.
short: B. Prach, C. Lampert, in:, Computer Vision – ECCV 2022, Springer Nature,
2022, pp. 350–365.
conference:
end_date: 2022-10-27
location: Tel Aviv, Israel
name: 'ECCV: European Conference on Computer Vision'
start_date: 2022-10-23
date_created: 2022-08-12T15:09:47Z
date_published: 2022-10-23T00:00:00Z
date_updated: 2023-05-03T08:00:46Z
day: '23'
department:
- _id: GradSch
- _id: ChLa
doi: 10.1007/978-3-031-19803-8_21
external_id:
arxiv:
- '2208.03160'
intvolume: ' 13681'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2208.03160
month: '10'
oa: 1
oa_version: Preprint
page: 350-365
publication: Computer Vision – ECCV 2022
publication_identifier:
eisbn:
- '9783031198038'
isbn:
- '9783031198021'
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Almost-orthogonal layers for efficient general-purpose Lipschitz networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 13681
year: '2022'
...
---
_id: '10752'
abstract:
- lang: eng
text: 'The digitalization of almost all aspects of our everyday lives has led to
unprecedented amounts of data being freely available on the Internet. In particular
social media platforms provide rich sources of user-generated data, though typically
in unstructured form, and with high diversity, such as written in many different
languages. Automatically identifying meaningful information in such big data resources
and extracting it efficiently is one of the ongoing challenges of our time. A
common step for this is sentiment analysis, which forms the foundation for tasks
such as opinion mining or trend prediction. Unfortunately, publicly available
tools for this task are almost exclusively available for English-language texts.
Consequently, a large fraction of the Internet users, who do not communicate in
English, are ignored in automatized studies, a phenomenon called rare-language
discrimination. In this work we propose a technique to overcome this problem by
a truly multi-lingual model, which can be trained automatically without linguistic
knowledge or even the ability to read the many target languages. The main step
is to combine self-annotation, specifically the use of emoticons as a proxy for
labels, with multi-lingual sentence representations. To evaluate our method we
curated several large datasets from data obtained via the free Twitter streaming
API. The results show that our proposed multi-lingual training is able to achieve
sentiment predictions at the same quality level for rare languages as for frequent
ones, and in particular clearly better than what mono-lingual training achieves
on the same data. '
article_processing_charge: No
author:
- first_name: Jasmin
full_name: Lampert, Jasmin
last_name: Lampert
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Lampert J, Lampert C. Overcoming rare-language discrimination in multi-lingual
sentiment analysis. In: 2021 IEEE International Conference on Big Data.
IEEE; 2022:5185-5192. doi:10.1109/bigdata52589.2021.9672003'
apa: 'Lampert, J., & Lampert, C. (2022). Overcoming rare-language discrimination
in multi-lingual sentiment analysis. In 2021 IEEE International Conference
on Big Data (pp. 5185–5192). Orlando, FL, United States: IEEE. https://doi.org/10.1109/bigdata52589.2021.9672003'
chicago: Lampert, Jasmin, and Christoph Lampert. “Overcoming Rare-Language Discrimination
in Multi-Lingual Sentiment Analysis.” In 2021 IEEE International Conference
on Big Data, 5185–92. IEEE, 2022. https://doi.org/10.1109/bigdata52589.2021.9672003.
ieee: J. Lampert and C. Lampert, “Overcoming rare-language discrimination in multi-lingual
sentiment analysis,” in 2021 IEEE International Conference on Big Data,
Orlando, FL, United States, 2022, pp. 5185–5192.
ista: 'Lampert J, Lampert C. 2022. Overcoming rare-language discrimination in multi-lingual
sentiment analysis. 2021 IEEE International Conference on Big Data. Big Data:
International Conference on Big Data, 5185–5192.'
mla: Lampert, Jasmin, and Christoph Lampert. “Overcoming Rare-Language Discrimination
in Multi-Lingual Sentiment Analysis.” 2021 IEEE International Conference on
Big Data, IEEE, 2022, pp. 5185–92, doi:10.1109/bigdata52589.2021.9672003.
short: J. Lampert, C. Lampert, in:, 2021 IEEE International Conference on Big Data,
IEEE, 2022, pp. 5185–5192.
conference:
end_date: 2021-12-18
location: Orlando, FL, United States
name: 'Big Data: International Conference on Big Data'
start_date: 2021-12-15
date_created: 2022-02-10T14:08:23Z
date_published: 2022-01-13T00:00:00Z
date_updated: 2023-08-02T14:27:50Z
day: '13'
department:
- _id: ChLa
doi: 10.1109/bigdata52589.2021.9672003
external_id:
isi:
- '000800559505036'
isi: 1
language:
- iso: eng
month: '01'
oa_version: None
page: 5185-5192
publication: 2021 IEEE International Conference on Big Data
publication_identifier:
isbn:
- '9781665439022'
publication_status: published
publisher: IEEE
quality_controlled: '1'
status: public
title: Overcoming rare-language discrimination in multi-lingual sentiment analysis
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
year: '2022'
...
---
_id: '12161'
abstract:
- lang: eng
text: 'We introduce LIMES, a new method for learning with non-stationary streaming
data, inspired by the recent success of meta-learning. The main idea is not to
attempt to learn a single classifier that would have to work well across all occurring
data distributions, nor many separate classifiers, but to exploit a hybrid strategy:
we learn a single set of model parameters from which a specific classifier for
any specific data distribution is derived via classifier adaptation. Assuming
a multiclass classification setting with class-prior shift, the adaptation step
can be performed analytically with only the classifier’s bias terms being affected.
Another contribution of our work is an extrapolation step that predicts suitable
adaptation parameters for future time steps based on the previous data. In combination,
we obtain a lightweight procedure for learning from streaming data with varying
class distribution that adds no trainable parameters and almost no memory or computational
overhead compared to training a single model. Experiments on a set of exemplary
tasks using Twitter data show that LIMES achieves higher accuracy than alternative
approaches, especially with respect to the relevant real-world metric of lowest
within-day accuracy.'
article_processing_charge: No
author:
- first_name: Paulina
full_name: Tomaszewska, Paulina
last_name: Tomaszewska
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Tomaszewska P, Lampert C. Lightweight conditional model extrapolation for
streaming data under class-prior shift. In: 26th International Conference on
Pattern Recognition. Vol 2022. Institute of Electrical and Electronics Engineers;
2022:2128-2134. doi:10.1109/icpr56361.2022.9956195'
apa: 'Tomaszewska, P., & Lampert, C. (2022). Lightweight conditional model extrapolation
for streaming data under class-prior shift. In 26th International Conference
on Pattern Recognition (Vol. 2022, pp. 2128–2134). Montreal, Canada: Institute
of Electrical and Electronics Engineers. https://doi.org/10.1109/icpr56361.2022.9956195'
chicago: Tomaszewska, Paulina, and Christoph Lampert. “Lightweight Conditional Model
Extrapolation for Streaming Data under Class-Prior Shift.” In 26th International
Conference on Pattern Recognition, 2022:2128–34. Institute of Electrical and
Electronics Engineers, 2022. https://doi.org/10.1109/icpr56361.2022.9956195.
ieee: P. Tomaszewska and C. Lampert, “Lightweight conditional model extrapolation
for streaming data under class-prior shift,” in 26th International Conference
on Pattern Recognition, Montreal, Canada, 2022, vol. 2022, pp. 2128–2134.
ista: 'Tomaszewska P, Lampert C. 2022. Lightweight conditional model extrapolation
for streaming data under class-prior shift. 26th International Conference on Pattern
Recognition. ICPR: International Conference on Pattern Recognition vol. 2022,
2128–2134.'
mla: Tomaszewska, Paulina, and Christoph Lampert. “Lightweight Conditional Model
Extrapolation for Streaming Data under Class-Prior Shift.” 26th International
Conference on Pattern Recognition, vol. 2022, Institute of Electrical and
Electronics Engineers, 2022, pp. 2128–34, doi:10.1109/icpr56361.2022.9956195.
short: P. Tomaszewska, C. Lampert, in:, 26th International Conference on Pattern
Recognition, Institute of Electrical and Electronics Engineers, 2022, pp. 2128–2134.
conference:
end_date: 2022-08-25
location: Montreal, Canada
name: 'ICPR: International Conference on Pattern Recognition'
start_date: 2022-08-21
date_created: 2023-01-12T12:09:38Z
date_published: 2022-11-29T00:00:00Z
date_updated: 2023-08-04T09:06:34Z
day: '29'
department:
- _id: ChLa
doi: 10.1109/icpr56361.2022.9956195
external_id:
arxiv:
- '2206.05181'
isi:
- '000897707602018'
intvolume: ' 2022'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.2206.05181
month: '11'
oa: 1
oa_version: Preprint
page: 2128-2134
publication: 26th International Conference on Pattern Recognition
publication_identifier:
eisbn:
- '9781665490627'
eissn:
- 2831-7475
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
scopus_import: '1'
status: public
title: Lightweight conditional model extrapolation for streaming data under class-prior
shift
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 2022
year: '2022'
...
---
_id: '10802'
abstract:
- lang: eng
text: "Addressing fairness concerns about machine learning models is a crucial step
towards their long-term adoption in real-world automated systems. While many approaches
have been developed for training fair models from data, little is known about
the robustness of these methods to data corruption. In this work we consider fairness-aware
learning under worst-case data manipulations. We show that an adversary can in
some situations force any learner to return an overly biased classifier, regardless
of the sample size and with or without degrading\r\naccuracy, and that the strength
of the excess bias increases for learning problems with underrepresented protected
groups in the data. We also prove that our hardness results are tight up to constant
factors. To this end, we study two natural learning algorithms that optimize for
both accuracy and fairness and show that these algorithms enjoy guarantees that
are order-optimal in terms of the corruption ratio and the protected groups frequencies
in the large data\r\nlimit."
acknowledgement: The authors thank Eugenia Iofinova and Bernd Prach for providing
feedback on early versions of this paper. This publication was made possible by
an ETH AI Center postdoctoral fellowship to Nikola Konstantinov.
article_processing_charge: No
article_type: original
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Konstantinov NH, Lampert C. Fairness-aware PAC learning from corrupted data.
Journal of Machine Learning Research. 2022;23:1-60.
apa: Konstantinov, N. H., & Lampert, C. (2022). Fairness-aware PAC learning
from corrupted data. Journal of Machine Learning Research. ML Research
Press.
chicago: Konstantinov, Nikola H, and Christoph Lampert. “Fairness-Aware PAC Learning
from Corrupted Data.” Journal of Machine Learning Research. ML Research
Press, 2022.
ieee: N. H. Konstantinov and C. Lampert, “Fairness-aware PAC learning from corrupted
data,” Journal of Machine Learning Research, vol. 23. ML Research Press,
pp. 1–60, 2022.
ista: Konstantinov NH, Lampert C. 2022. Fairness-aware PAC learning from corrupted
data. Journal of Machine Learning Research. 23, 1–60.
mla: Konstantinov, Nikola H., and Christoph Lampert. “Fairness-Aware PAC Learning
from Corrupted Data.” Journal of Machine Learning Research, vol. 23, ML
Research Press, 2022, pp. 1–60.
short: N.H. Konstantinov, C. Lampert, Journal of Machine Learning Research 23 (2022)
1–60.
date_created: 2022-02-28T14:05:42Z
date_published: 2022-05-01T00:00:00Z
date_updated: 2023-09-26T10:44:37Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
external_id:
arxiv:
- '2102.06004'
file:
- access_level: open_access
checksum: 9cac897b54a0ddf3a553a2c33e88cfda
content_type: application/pdf
creator: kschuh
date_created: 2022-07-12T15:08:28Z
date_updated: 2022-07-12T15:08:28Z
file_id: '11570'
file_name: 2022_JournalMachineLearningResearch_Konstantinov.pdf
file_size: 551862
relation: main_file
success: 1
file_date_updated: 2022-07-12T15:08:28Z
has_accepted_license: '1'
intvolume: ' 23'
keyword:
- Fairness
- robustness
- data poisoning
- trustworthy machine learning
- PAC learning
language:
- iso: eng
month: '05'
oa: 1
oa_version: Published Version
page: 1-60
publication: Journal of Machine Learning Research
publication_identifier:
eissn:
- 1533-7928
issn:
- 1532-4435
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
record:
- id: '10799'
relation: dissertation_contains
status: public
- id: '13241'
relation: shorter_version
status: public
scopus_import: '1'
status: public
title: Fairness-aware PAC learning from corrupted data
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 23
year: '2022'
...
---
_id: '13241'
abstract:
- lang: eng
text: Addressing fairness concerns about machine learning models is a crucial step
towards their long-term adoption in real-world automated systems. Many approaches
for training fair models from data have been developed and an implicit assumption
about such algorithms is that they are able to recover a fair model, despite potential
historical biases in the data. In this work we show a number of impossibility
results that indicate that there is no learning algorithm that can recover a fair
model when a proportion of the dataset is subject to arbitrary manipulations.
Specifically, we prove that there are situations in which an adversary can force
any learner to return a biased classifier, with or without degrading accuracy,
and that the strength of this bias increases for learning problems with underrepresented
protected groups in the data. Our results emphasize the importance of studying
further data corruption models of various strengths and of establishing stricter
data collection practices for fairness-aware learning.
acknowledgement: "This paper is a shortened, workshop version of Konstantinov and
Lampert (2021),\r\nhttps://arxiv.org/abs/2102.06004. For further results, including
an analysis of algorithms achieving the lower bounds from this paper, we refer to
the full version."
article_processing_charge: No
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Konstantinov NH, Lampert C. On the impossibility of fairness-aware learning
from corrupted data. In: Proceedings of Machine Learning Research. Vol
171. ML Research Press; 2022:59-83.'
apa: Konstantinov, N. H., & Lampert, C. (2022). On the impossibility of fairness-aware
learning from corrupted data. In Proceedings of Machine Learning Research
(Vol. 171, pp. 59–83). ML Research Press.
chicago: Konstantinov, Nikola H, and Christoph Lampert. “On the Impossibility of
Fairness-Aware Learning from Corrupted Data.” In Proceedings of Machine Learning
Research, 171:59–83. ML Research Press, 2022.
ieee: N. H. Konstantinov and C. Lampert, “On the impossibility of fairness-aware
learning from corrupted data,” in Proceedings of Machine Learning Research,
2022, vol. 171, pp. 59–83.
ista: Konstantinov NH, Lampert C. 2022. On the impossibility of fairness-aware learning
from corrupted data. Proceedings of Machine Learning Research. vol. 171, 59–83.
mla: Konstantinov, Nikola H., and Christoph Lampert. “On the Impossibility of Fairness-Aware
Learning from Corrupted Data.” Proceedings of Machine Learning Research,
vol. 171, ML Research Press, 2022, pp. 59–83.
short: N.H. Konstantinov, C. Lampert, in:, Proceedings of Machine Learning Research,
ML Research Press, 2022, pp. 59–83.
date_created: 2023-07-16T22:01:13Z
date_published: 2022-12-01T00:00:00Z
date_updated: 2023-09-26T10:44:37Z
day: '01'
department:
- _id: ChLa
external_id:
arxiv:
- '2102.06004'
intvolume: ' 171'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2102.06004
month: '12'
oa: 1
oa_version: Preprint
page: 59-83
publication: Proceedings of Machine Learning Research
publication_identifier:
eissn:
- 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
record:
- id: '10802'
relation: extended_version
status: public
scopus_import: '1'
status: public
title: On the impossibility of fairness-aware learning from corrupted data
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 171
year: '2022'
...
---
_id: '9210'
abstract:
- lang: eng
text: "Modern neural networks can easily fit their training set perfectly. Surprisingly,
despite being “overfit” in this way, they tend to generalize well to future data,
thereby defying the classic bias–variance trade-off of machine learning theory.
Of the many possible explanations, a prevalent one is that training by stochastic
gradient descent (SGD) imposes an implicit bias that leads it to learn simple
functions, and these simple functions generalize well. However, the specifics
of this implicit bias are not well understood.\r\nIn this work, we explore the
smoothness conjecture which states that SGD is implicitly biased towards learning
functions that are smooth. We propose several measures to formalize the intuitive
notion of smoothness, and we conduct experiments to determine whether SGD indeed
implicitly optimizes for these measures. Our findings rule out the possibility
that smoothness measures based on first-order derivatives are being implicitly
enforced. They are supportive, though, of the smoothness conjecture for measures
based on second-order derivatives."
article_processing_charge: No
author:
- first_name: Vaclav
full_name: Volhejn, Vaclav
id: d5235fb4-7a6d-11eb-b254-f25d12d631a8
last_name: Volhejn
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Volhejn V, Lampert C. Does SGD implicitly optimize for smoothness? In: 42nd
German Conference on Pattern Recognition. Vol 12544. LNCS. Springer; 2021:246-259.
doi:10.1007/978-3-030-71278-5_18'
apa: 'Volhejn, V., & Lampert, C. (2021). Does SGD implicitly optimize for smoothness?
In 42nd German Conference on Pattern Recognition (Vol. 12544, pp. 246–259).
Tübingen, Germany: Springer. https://doi.org/10.1007/978-3-030-71278-5_18'
chicago: Volhejn, Vaclav, and Christoph Lampert. “Does SGD Implicitly Optimize for
Smoothness?” In 42nd German Conference on Pattern Recognition, 12544:246–59.
LNCS. Springer, 2021. https://doi.org/10.1007/978-3-030-71278-5_18.
ieee: V. Volhejn and C. Lampert, “Does SGD implicitly optimize for smoothness?,”
in 42nd German Conference on Pattern Recognition, Tübingen, Germany, 2021,
vol. 12544, pp. 246–259.
ista: 'Volhejn V, Lampert C. 2021. Does SGD implicitly optimize for smoothness?
42nd German Conference on Pattern Recognition. DAGM GCPR: German Conference on
Pattern Recognition LNCS vol. 12544, 246–259.'
mla: Volhejn, Vaclav, and Christoph Lampert. “Does SGD Implicitly Optimize for Smoothness?”
42nd German Conference on Pattern Recognition, vol. 12544, Springer, 2021,
pp. 246–59, doi:10.1007/978-3-030-71278-5_18.
short: V. Volhejn, C. Lampert, in:, 42nd German Conference on Pattern Recognition,
Springer, 2021, pp. 246–259.
conference:
end_date: 2020-10-01
location: Tübingen, Germany
name: 'DAGM GCPR: German Conference on Pattern Recognition'
start_date: 2020-09-28
date_created: 2021-03-01T09:01:16Z
date_published: 2021-03-17T00:00:00Z
date_updated: 2022-08-12T07:28:47Z
day: '17'
ddc:
- '510'
department:
- _id: ChLa
doi: 10.1007/978-3-030-71278-5_18
file:
- access_level: open_access
checksum: 3e3628ab1cf658d82524963f808004ea
content_type: application/pdf
creator: dernst
date_created: 2022-08-12T07:27:58Z
date_updated: 2022-08-12T07:27:58Z
file_id: '11820'
file_name: 2020_GCPR_submitted_Volhejn.pdf
file_size: 420234
relation: main_file
success: 1
file_date_updated: 2022-08-12T07:27:58Z
has_accepted_license: '1'
intvolume: ' 12544'
language:
- iso: eng
month: '03'
oa: 1
oa_version: Submitted Version
page: 246-259
publication: 42nd German Conference on Pattern Recognition
publication_identifier:
eissn:
- 1611-3349
isbn:
- '9783030712778'
issn:
- 0302-9743
publication_status: published
publisher: Springer
quality_controlled: '1'
scopus_import: '1'
series_title: LNCS
status: public
title: Does SGD implicitly optimize for smoothness?
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 12544
year: '2021'
...
---
_id: '9416'
abstract:
- lang: eng
text: 'We study the inductive bias of two-layer ReLU networks trained by gradient
flow. We identify a class of easy-to-learn (''orthogonally separable'') datasets,
and characterise the solution that ReLU networks trained on such datasets converge
to. Irrespective of network width, the solution turns out to be a combination
of two max-margin classifiers: one corresponding to the positive data subset and
one corresponding to the negative data subset. The proof is based on the recently
introduced concept of extremal sectors, for which we prove a number of properties
in the context of orthogonal separability. In particular, we prove stationarity
of activation patterns from some time onwards, which enables a reduction of the
ReLU network to an ensemble of linear subnetworks.'
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Phuong M, Lampert C. The inductive bias of ReLU networks on orthogonally separable
data. In: 9th International Conference on Learning Representations. ; 2021.'
apa: Phuong, M., & Lampert, C. (2021). The inductive bias of ReLU networks on
orthogonally separable data. In 9th International Conference on Learning Representations.
Virtual.
chicago: Phuong, Mary, and Christoph Lampert. “The Inductive Bias of ReLU Networks
on Orthogonally Separable Data.” In 9th International Conference on Learning
Representations, 2021.
ieee: M. Phuong and C. Lampert, “The inductive bias of ReLU networks on orthogonally
separable data,” in 9th International Conference on Learning Representations,
Virtual, 2021.
ista: 'Phuong M, Lampert C. 2021. The inductive bias of ReLU networks on orthogonally
separable data. 9th International Conference on Learning Representations. ICLR:
International Conference on Learning Representations.'
mla: Phuong, Mary, and Christoph Lampert. “The Inductive Bias of ReLU Networks on
Orthogonally Separable Data.” 9th International Conference on Learning Representations,
2021.
short: M. Phuong, C. Lampert, in:, 9th International Conference on Learning Representations,
2021.
conference:
end_date: 2021-05-07
location: Virtual
name: 'ICLR: International Conference on Learning Representations'
start_date: 2021-05-03
date_created: 2021-05-24T11:16:46Z
date_published: 2021-05-01T00:00:00Z
date_updated: 2023-09-07T13:29:50Z
day: '01'
ddc:
- '000'
department:
- _id: GradSch
- _id: ChLa
file:
- access_level: open_access
checksum: f34ff17017527db5ba6927f817bdd125
content_type: application/pdf
creator: bphuong
date_created: 2021-05-24T11:15:57Z
date_updated: 2021-05-24T11:15:57Z
file_id: '9417'
file_name: iclr2021_conference.pdf
file_size: 502356
relation: main_file
file_date_updated: 2021-05-24T11:15:57Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://openreview.net/pdf?id=krz7T0xU9Z_
month: '05'
oa: 1
oa_version: Published Version
publication: 9th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
related_material:
record:
- id: '9418'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: The inductive bias of ReLU networks on orthogonally separable data
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2021'
...
---
_id: '10803'
abstract:
- lang: eng
text: Given the abundance of applications of ranking in recent years, addressing
fairness concerns around automated ranking systems becomes necessary for increasing
the trust among end-users. Previous work on fair ranking has mostly focused on
application-specific fairness notions, often tailored to online advertising, and
it rarely considers learning as part of the process. In this work, we show how
to transfer numerous fairness notions from binary classification to a learning
to rank setting. Our formalism allows us to design methods for incorporating fairness
objectives with provable generalization guarantees. An extensive experimental
evaluation shows that our method can improve ranking fairness substantially with
no or only little loss of model quality.
article_number: '2102.05996'
article_processing_charge: No
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Konstantinov NH, Lampert C. Fairness through regularization for learning to
rank. arXiv. doi:10.48550/arXiv.2102.05996
apa: Konstantinov, N. H., & Lampert, C. (n.d.). Fairness through regularization
for learning to rank. arXiv. https://doi.org/10.48550/arXiv.2102.05996
chicago: Konstantinov, Nikola H, and Christoph Lampert. “Fairness through Regularization
for Learning to Rank.” ArXiv, n.d. https://doi.org/10.48550/arXiv.2102.05996.
ieee: N. H. Konstantinov and C. Lampert, “Fairness through regularization for learning
to rank,” arXiv.
ista: Konstantinov NH, Lampert C. Fairness through regularization for learning to
rank. arXiv, 2102.05996.
mla: Konstantinov, Nikola H., and Christoph Lampert. “Fairness through Regularization
for Learning to Rank.” ArXiv, 2102.05996, doi:10.48550/arXiv.2102.05996.
short: N.H. Konstantinov, C. Lampert, ArXiv (n.d.).
date_created: 2022-02-28T14:13:59Z
date_published: 2021-06-07T00:00:00Z
date_updated: 2023-09-07T13:42:08Z
day: '07'
department:
- _id: ChLa
doi: 10.48550/arXiv.2102.05996
external_id:
arxiv:
- '2102.05996'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2102.05996
month: '06'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
related_material:
record:
- id: '10799'
relation: dissertation_contains
status: public
status: public
title: Fairness through regularization for learning to rank
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2021'
...
---
_id: '14987'
abstract:
- lang: eng
text: "The goal of zero-shot learning is to construct a classifier that can identify
object classes for which no training examples are available. When training data
for some of the object classes is available but not for others, the name generalized
zero-shot learning is commonly used.\r\nIn a wider sense, the phrase zero-shot
is also used to describe other machine learning-based approaches that require
no training data from the problem of interest, such as zero-shot action recognition
or zero-shot machine translation."
article_processing_charge: No
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Lampert C. Zero-Shot Learning. In: Ikeuchi K, ed. Computer Vision.
2nd ed. Cham: Springer; 2021:1395-1397. doi:10.1007/978-3-030-63416-2_874'
apa: 'Lampert, C. (2021). Zero-Shot Learning. In K. Ikeuchi (Ed.), Computer Vision
(2nd ed., pp. 1395–1397). Cham: Springer. https://doi.org/10.1007/978-3-030-63416-2_874'
chicago: 'Lampert, Christoph. “Zero-Shot Learning.” In Computer Vision, edited
by Katsushi Ikeuchi, 2nd ed., 1395–97. Cham: Springer, 2021. https://doi.org/10.1007/978-3-030-63416-2_874.'
ieee: 'C. Lampert, “Zero-Shot Learning,” in Computer Vision, 2nd ed., K.
Ikeuchi, Ed. Cham: Springer, 2021, pp. 1395–1397.'
ista: 'Lampert C. 2021. Zero-Shot Learning. In: Computer Vision, 1395–1397.'
mla: Lampert, Christoph. “Zero-Shot Learning.” Computer Vision, edited by
Katsushi Ikeuchi, 2nd ed., Springer, 2021, pp. 1395–97, doi:10.1007/978-3-030-63416-2_874.
short: C. Lampert, in:, K. Ikeuchi (Ed.), Computer Vision, 2nd ed., Springer, Cham,
2021, pp. 1395–1397.
date_created: 2024-02-14T14:05:32Z
date_published: 2021-10-13T00:00:00Z
date_updated: 2024-02-19T10:59:04Z
day: '13'
department:
- _id: ChLa
doi: 10.1007/978-3-030-63416-2_874
edition: '2'
editor:
- first_name: Katsushi
full_name: Ikeuchi, Katsushi
last_name: Ikeuchi
language:
- iso: eng
month: '10'
oa_version: None
page: 1395-1397
place: Cham
publication: Computer Vision
publication_identifier:
eisbn:
- '9783030634162'
isbn:
- '9783030634155'
publication_status: published
publisher: Springer
quality_controlled: '1'
status: public
title: Zero-Shot Learning
type: book_chapter
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2021'
...
---
_id: '8063'
abstract:
- lang: eng
text: "We present a generative model of images that explicitly reasons over the
set\r\nof objects they show. Our model learns a structured latent representation
that\r\nseparates objects from each other and from the background; unlike prior
works,\r\nit explicitly represents the 2D position and depth of each object, as
well as\r\nan embedding of its segmentation mask and appearance. The model can
be trained\r\nfrom images alone in a purely unsupervised fashion without the need
for object\r\nmasks or depth information. Moreover, it always generates complete
objects,\r\neven though a significant fraction of training images contain occlusions.\r\nFinally,
we show that our model can infer decompositions of novel images into\r\ntheir
constituent objects, including accurate prediction of depth ordering and\r\nsegmentation
of occluded parts."
article_number: '2004.00642'
article_processing_charge: No
author:
- first_name: Titas
full_name: Anciukevicius, Titas
last_name: Anciukevicius
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Paul M
full_name: Henderson, Paul M
id: 13C09E74-18D9-11E9-8878-32CFE5697425
last_name: Henderson
orcid: 0000-0002-5198-7445
citation:
ama: Anciukevicius T, Lampert C, Henderson PM. Object-centric image generation with
factored depths, locations, and appearances. arXiv.
apa: Anciukevicius, T., Lampert, C., & Henderson, P. M. (n.d.). Object-centric
image generation with factored depths, locations, and appearances. arXiv.
chicago: Anciukevicius, Titas, Christoph Lampert, and Paul M Henderson. “Object-Centric
Image Generation with Factored Depths, Locations, and Appearances.” ArXiv,
n.d.
ieee: T. Anciukevicius, C. Lampert, and P. M. Henderson, “Object-centric image generation
with factored depths, locations, and appearances,” arXiv.
ista: Anciukevicius T, Lampert C, Henderson PM. Object-centric image generation
with factored depths, locations, and appearances. arXiv, 2004.00642.
mla: Anciukevicius, Titas, et al. “Object-Centric Image Generation with Factored
Depths, Locations, and Appearances.” ArXiv, 2004.00642.
short: T. Anciukevicius, C. Lampert, P.M. Henderson, ArXiv (n.d.).
date_created: 2020-06-29T23:55:23Z
date_published: 2020-04-01T00:00:00Z
date_updated: 2021-01-12T08:16:44Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
external_id:
arxiv:
- '2004.00642'
language:
- iso: eng
license: https://creativecommons.org/licenses/by-sa/4.0/
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2004.00642
month: '04'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Object-centric image generation with factored depths, locations, and appearances
tmp:
image: /images/cc_by_sa.png
legal_code_url: https://creativecommons.org/licenses/by-sa/4.0/legalcode
name: Creative Commons Attribution-ShareAlike 4.0 International Public License (CC
BY-SA 4.0)
short: CC BY-SA (4.0)
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '8188'
abstract:
- lang: eng
text: "A natural approach to generative modeling of videos is to represent them
as a composition of moving objects. Recent works model a set of 2D sprites over
a slowly-varying background, but without considering the underlying 3D scene that\r\ngives
rise to them. We instead propose to model a video as the view seen while moving
through a scene with multiple 3D objects and a 3D background. Our model is trained
from monocular videos without any supervision, yet learns to\r\ngenerate coherent
3D scenes containing several moving objects. We conduct detailed experiments on
two datasets, going beyond the visual complexity supported by state-of-the-art
generative approaches. We evaluate our method on\r\ndepth-prediction and 3D object
detection---tasks which cannot be addressed by those earlier works---and show
it outperforms them even on 2D instance segmentation and tracking."
acknowledged_ssus:
- _id: ScienComp
acknowledgement: "This research was supported by the Scientific Service Units (SSU)
of IST Austria through resources\r\nprovided by Scientific Computing (SciComp).
PH is employed part-time by Blackford Analysis, but\r\nthey did not support this
project in any way."
article_processing_charge: No
author:
- first_name: Paul M
full_name: Henderson, Paul M
id: 13C09E74-18D9-11E9-8878-32CFE5697425
last_name: Henderson
orcid: 0000-0002-5198-7445
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Henderson PM, Lampert C. Unsupervised object-centric video generation and
decomposition in 3D. In: 34th Conference on Neural Information Processing Systems.
Vol 33. Curran Associates; 2020:3106–3117.'
apa: 'Henderson, P. M., & Lampert, C. (2020). Unsupervised object-centric video
generation and decomposition in 3D. In 34th Conference on Neural Information
Processing Systems (Vol. 33, pp. 3106–3117). Vancouver, Canada: Curran Associates.'
chicago: Henderson, Paul M, and Christoph Lampert. “Unsupervised Object-Centric
Video Generation and Decomposition in 3D.” In 34th Conference on Neural Information
Processing Systems, 33:3106–3117. Curran Associates, 2020.
ieee: P. M. Henderson and C. Lampert, “Unsupervised object-centric video generation
and decomposition in 3D,” in 34th Conference on Neural Information Processing
Systems, Vancouver, Canada, 2020, vol. 33, pp. 3106–3117.
ista: 'Henderson PM, Lampert C. 2020. Unsupervised object-centric video generation
and decomposition in 3D. 34th Conference on Neural Information Processing Systems.
NeurIPS: Neural Information Processing Systems vol. 33, 3106–3117.'
mla: Henderson, Paul M., and Christoph Lampert. “Unsupervised Object-Centric Video
Generation and Decomposition in 3D.” 34th Conference on Neural Information
Processing Systems, vol. 33, Curran Associates, 2020, pp. 3106–3117.
short: P.M. Henderson, C. Lampert, in:, 34th Conference on Neural Information Processing
Systems, Curran Associates, 2020, pp. 3106–3117.
conference:
end_date: 2020-12-12
location: Vancouver, Canada
name: 'NeurIPS: Neural Information Processing Systems'
start_date: 2020-12-06
date_created: 2020-07-31T16:59:19Z
date_published: 2020-07-07T00:00:00Z
date_updated: 2023-04-25T09:49:58Z
day: '07'
department:
- _id: ChLa
external_id:
arxiv:
- '2007.06705'
intvolume: ' 33'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2007.06705
month: '07'
oa: 1
oa_version: Preprint
page: 3106–3117
publication: 34th Conference on Neural Information Processing Systems
publication_identifier:
isbn:
- '9781713829546'
publication_status: published
publisher: Curran Associates
quality_controlled: '1'
status: public
title: Unsupervised object-centric video generation and decomposition in 3D
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 33
year: '2020'
...
---
_id: '7936'
abstract:
- lang: eng
text: 'State-of-the-art detection systems are generally evaluated on their ability
to exhaustively retrieve objects densely distributed in the image, across a wide
variety of appearances and semantic categories. Orthogonal to this, many real-life
object detection applications, for example in remote sensing, instead require
dealing with large images that contain only a few small objects of a single class,
scattered heterogeneously across the space. In addition, they are often subject
to strict computational constraints, such as limited battery capacity and computing
power. To tackle these more practical scenarios, we propose a novel flexible detection
scheme that efficiently adapts to variable object sizes and densities: We rely
on a sequence of detection stages, each of which has the ability to predict groups
of objects as well as individuals. Similar to a detection cascade, this multi-stage
architecture spares computational effort by discarding large irrelevant regions
of the image early during the detection process. The ability to group objects
provides further computational and memory savings, as it allows working with lower
image resolutions in early stages, where groups are more easily detected than
individuals, as they are more salient. We report experimental results on two aerial
image datasets, and show that the proposed method is as accurate as, yet computationally
more efficient than, standard single-shot detectors, consistently across three
different backbone architectures.'
article_number: 1716-1725
article_processing_charge: No
author:
- first_name: Amélie
full_name: Royer, Amélie
id: 3811D890-F248-11E8-B48F-1D18A9856A87
last_name: Royer
orcid: 0000-0002-8407-0705
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Royer A, Lampert C. Localizing grouped instances for efficient detection in
low-resource scenarios. In: IEEE Winter Conference on Applications of Computer
Vision. IEEE; 2020. doi:10.1109/WACV45572.2020.9093288'
apa: 'Royer, A., & Lampert, C. (2020). Localizing grouped instances for efficient
detection in low-resource scenarios. In IEEE Winter Conference on Applications
of Computer Vision. Snowmass Village, CO, United States: IEEE. https://doi.org/10.1109/WACV45572.2020.9093288'
chicago: Royer, Amélie, and Christoph Lampert. “Localizing Grouped Instances for
Efficient Detection in Low-Resource Scenarios.” In IEEE Winter Conference on
Applications of Computer Vision. IEEE, 2020. https://doi.org/10.1109/WACV45572.2020.9093288.
ieee: A. Royer and C. Lampert, “Localizing grouped instances for efficient detection
in low-resource scenarios,” in IEEE Winter Conference on Applications of Computer
Vision, Snowmass Village, CO, United States, 2020.
ista: 'Royer A, Lampert C. 2020. Localizing grouped instances for efficient detection
in low-resource scenarios. IEEE Winter Conference on Applications of Computer
Vision. WACV: Winter Conference on Applications of Computer Vision, 1716–1725.'
mla: Royer, Amélie, and Christoph Lampert. “Localizing Grouped Instances for Efficient
Detection in Low-Resource Scenarios.” IEEE Winter Conference on Applications
of Computer Vision, 1716–1725, IEEE, 2020, doi:10.1109/WACV45572.2020.9093288.
short: A. Royer, C. Lampert, in:, IEEE Winter Conference on Applications of Computer
Vision, IEEE, 2020.
conference:
end_date: 2020-03-05
location: 'Snowmass Village, CO, United States'
name: 'WACV: Winter Conference on Applications of Computer Vision'
start_date: 2020-03-01
date_created: 2020-06-07T22:00:53Z
date_published: 2020-03-01T00:00:00Z
date_updated: 2023-09-07T13:16:17Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/WACV45572.2020.9093288
external_id:
arxiv:
- '2004.12623'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/2004.12623
month: '03'
oa: 1
oa_version: Preprint
publication: IEEE Winter Conference on Applications of Computer Vision
publication_identifier:
isbn:
- '9781728165530'
publication_status: published
publisher: IEEE
quality_controlled: '1'
related_material:
record:
- id: '8331'
relation: dissertation_contains
status: deleted
- id: '8390'
relation: dissertation_contains
status: public
scopus_import: 1
status: public
title: Localizing grouped instances for efficient detection in low-resource scenarios
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '7937'
abstract:
- lang: eng
text: 'Fine-tuning is a popular way of exploiting knowledge contained in a pre-trained
convolutional network for a new visual recognition task. However, the orthogonal
setting of transferring knowledge from a pretrained network to a visually different
yet semantically close source is rarely considered: This commonly happens with
real-life data, which is not necessarily as clean as the training source (noise,
geometric transformations, different modalities, etc.). To tackle such scenarios,
we introduce a new, generalized form of fine-tuning, called flex-tuning, in which
any individual unit (e.g. layer) of a network can be tuned, and the most promising
one is chosen automatically. In order to make the method appealing for practical
use, we propose two lightweight and faster selection procedures that prove to
be good approximations in practice. We study these selection criteria empirically
across a variety of domain shifts and data scarcity scenarios, and show that fine-tuning
individual units, despite its simplicity, yields very good results as an adaptation
technique. As it turns out, in contrast to common practice, rather than the last
fully-connected unit, it is best to tune an intermediate or early one in many
domain-shift scenarios, which is accurately detected by flex-tuning.'
article_number: 2180-2189
article_processing_charge: No
author:
- first_name: Amélie
full_name: Royer, Amélie
id: 3811D890-F248-11E8-B48F-1D18A9856A87
last_name: Royer
orcid: 0000-0002-8407-0705
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Royer A, Lampert C. A flexible selection scheme for minimum-effort transfer
learning. In: 2020 IEEE Winter Conference on Applications of Computer Vision.
IEEE; 2020. doi:10.1109/WACV45572.2020.9093635'
apa: 'Royer, A., & Lampert, C. (2020). A flexible selection scheme for minimum-effort
transfer learning. In 2020 IEEE Winter Conference on Applications of Computer
Vision. Snowmass Village, CO, United States: IEEE. https://doi.org/10.1109/WACV45572.2020.9093635'
chicago: Royer, Amélie, and Christoph Lampert. “A Flexible Selection Scheme for
Minimum-Effort Transfer Learning.” In 2020 IEEE Winter Conference on Applications
of Computer Vision. IEEE, 2020. https://doi.org/10.1109/WACV45572.2020.9093635.
ieee: A. Royer and C. Lampert, “A flexible selection scheme for minimum-effort transfer
learning,” in 2020 IEEE Winter Conference on Applications of Computer Vision,
Snowmass Village, CO, United States, 2020.
ista: 'Royer A, Lampert C. 2020. A flexible selection scheme for minimum-effort
transfer learning. 2020 IEEE Winter Conference on Applications of Computer Vision.
WACV: Winter Conference on Applications of Computer Vision, 2180–2189.'
mla: Royer, Amélie, and Christoph Lampert. “A Flexible Selection Scheme for Minimum-Effort
Transfer Learning.” 2020 IEEE Winter Conference on Applications of Computer
Vision, 2180–2189, IEEE, 2020, doi:10.1109/WACV45572.2020.9093635.
short: A. Royer, C. Lampert, in:, 2020 IEEE Winter Conference on Applications of
Computer Vision, IEEE, 2020.
conference:
end_date: 2020-03-05
location: Snowmass Village, CO, United States
name: 'WACV: Winter Conference on Applications of Computer Vision'
start_date: 2020-03-01
date_created: 2020-06-07T22:00:53Z
date_published: 2020-03-01T00:00:00Z
date_updated: 2023-09-07T13:16:17Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/WACV45572.2020.9093635
external_id:
arxiv:
- '2008.11995'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://arxiv.org/abs/2008.11995
month: '03'
oa: 1
oa_version: Preprint
publication: 2020 IEEE Winter Conference on Applications of Computer Vision
publication_identifier:
isbn:
- '9781728165530'
publication_status: published
publisher: IEEE
quality_controlled: '1'
related_material:
record:
- id: '8331'
relation: dissertation_contains
status: deleted
- id: '8390'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: A flexible selection scheme for minimum-effort transfer learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '7481'
abstract:
- lang: eng
text: 'We address the following question: How redundant is the parameterisation
of ReLU networks? Specifically, we consider transformations of the weight space
which leave the function implemented by the network intact. Two such transformations
are known for feed-forward architectures: permutation of neurons within a layer,
and positive scaling of all incoming weights of a neuron coupled with inverse
scaling of its outgoing weights. In this work, we show for architectures with
non-increasing widths that permutation and scaling are in fact the only function-preserving
weight transformations. For any eligible architecture we give an explicit construction
of a neural network such that any other network that implements the same function
can be obtained from the original one by the application of permutations and rescaling. The
proof relies on a geometric understanding of boundaries between linear regions
of ReLU networks, and we hope the developed mathematical tools are of independent
interest.'
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Phuong M, Lampert C. Functional vs. parametric equivalence of ReLU networks.
In: 8th International Conference on Learning Representations. ; 2020.'
apa: Phuong, M., & Lampert, C. (2020). Functional vs. parametric equivalence
of ReLU networks. In 8th International Conference on Learning Representations.
Online.
chicago: Phuong, Mary, and Christoph Lampert. “Functional vs. Parametric Equivalence
of ReLU Networks.” In 8th International Conference on Learning Representations,
2020.
ieee: M. Phuong and C. Lampert, “Functional vs. parametric equivalence of ReLU networks,”
in 8th International Conference on Learning Representations, Online, 2020.
ista: 'Phuong M, Lampert C. 2020. Functional vs. parametric equivalence of ReLU
networks. 8th International Conference on Learning Representations. ICLR: International
Conference on Learning Representations.'
mla: Phuong, Mary, and Christoph Lampert. “Functional vs. Parametric Equivalence
of ReLU Networks.” 8th International Conference on Learning Representations,
2020.
short: M. Phuong, C. Lampert, in:, 8th International Conference on Learning Representations,
2020.
conference:
end_date: 2020-04-30
location: Online
name: 'ICLR: International Conference on Learning Representations'
start_date: 2020-04-27
date_created: 2020-02-11T09:07:37Z
date_published: 2020-04-26T00:00:00Z
date_updated: 2023-09-07T13:29:50Z
day: '26'
ddc:
- '000'
department:
- _id: ChLa
file:
- access_level: open_access
checksum: 8d372ea5defd8cb8fdc430111ed754a9
content_type: application/pdf
creator: bphuong
date_created: 2020-02-11T09:07:27Z
date_updated: 2020-07-14T12:47:59Z
file_id: '7482'
file_name: main.pdf
file_size: 405469
relation: main_file
file_date_updated: 2020-07-14T12:47:59Z
has_accepted_license: '1'
language:
- iso: eng
month: '04'
oa: 1
oa_version: Published Version
publication: 8th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
related_material:
link:
- relation: supplementary_material
url: https://iclr.cc/virtual_2020/poster_Bylx-TNKvH.html
record:
- id: '9418'
relation: dissertation_contains
status: public
status: public
title: Functional vs. parametric equivalence of ReLU networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '8724'
abstract:
- lang: eng
text: "We study the problem of learning from multiple untrusted data sources, a
scenario of increasing practical relevance given the recent emergence of crowdsourcing
and collaborative learning paradigms. Specifically, we analyze the situation in
which a learning system obtains datasets from multiple sources, some of which
might be biased or even adversarially perturbed. It is\r\nknown that in the single-source
case, an adversary with the power to corrupt a fixed fraction of the training
data can prevent PAC-learnability, that is, even in the limit of infinitely much
training data, no learning system can approach the optimal test error. In this
work we show that, surprisingly, the same is not true in the multi-source setting,
where the adversary can arbitrarily\r\ncorrupt a fixed fraction of the data sources.
Our main results are a generalization bound that provides finite-sample guarantees
for this learning setting, as well as corresponding lower bounds. Besides establishing
PAC-learnability our results also show that in a cooperative learning setting
sharing data with other parties has provable benefits, even if some\r\nparticipants
are malicious. "
acknowledged_ssus:
- _id: ScienComp
acknowledgement: Dan Alistarh is supported in part by the European Research Council
(ERC) under the European Union’s Horizon 2020 research and innovation programme
(grant agreement No 805223 ScaleML). This research was supported by the Scientific
Service Units (SSU) of IST Austria through resources provided by Scientific Computing
(SciComp).
article_processing_charge: No
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Elias
full_name: Frantar, Elias
id: 09a8f98d-ec99-11ea-ae11-c063a7b7fe5f
last_name: Frantar
- first_name: Dan-Adrian
full_name: Alistarh, Dan-Adrian
id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
last_name: Alistarh
orcid: 0000-0003-3650-940X
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Konstantinov NH, Frantar E, Alistarh D-A, Lampert C. On the sample complexity
of adversarial multi-source PAC learning. In: Proceedings of the 37th International
Conference on Machine Learning. Vol 119. ML Research Press; 2020:5416-5425.'
apa: 'Konstantinov, N. H., Frantar, E., Alistarh, D.-A., & Lampert, C. (2020).
On the sample complexity of adversarial multi-source PAC learning. In Proceedings
of the 37th International Conference on Machine Learning (Vol. 119, pp. 5416–5425).
Online: ML Research Press.'
chicago: Konstantinov, Nikola H, Elias Frantar, Dan-Adrian Alistarh, and Christoph
Lampert. “On the Sample Complexity of Adversarial Multi-Source PAC Learning.”
In Proceedings of the 37th International Conference on Machine Learning,
119:5416–25. ML Research Press, 2020.
ieee: N. H. Konstantinov, E. Frantar, D.-A. Alistarh, and C. Lampert, “On the sample
complexity of adversarial multi-source PAC learning,” in Proceedings of the
37th International Conference on Machine Learning, Online, 2020, vol. 119,
pp. 5416–5425.
ista: 'Konstantinov NH, Frantar E, Alistarh D-A, Lampert C. 2020. On the sample
complexity of adversarial multi-source PAC learning. Proceedings of the 37th International
Conference on Machine Learning. ICML: International Conference on Machine Learning
vol. 119, 5416–5425.'
mla: Konstantinov, Nikola H., et al. “On the Sample Complexity of Adversarial Multi-Source
PAC Learning.” Proceedings of the 37th International Conference on Machine
Learning, vol. 119, ML Research Press, 2020, pp. 5416–25.
short: N.H. Konstantinov, E. Frantar, D.-A. Alistarh, C. Lampert, in:, Proceedings
of the 37th International Conference on Machine Learning, ML Research Press, 2020,
pp. 5416–5425.
conference:
end_date: 2020-07-18
location: Online
name: 'ICML: International Conference on Machine Learning'
start_date: 2020-07-12
date_created: 2020-11-05T15:25:58Z
date_published: 2020-07-12T00:00:00Z
date_updated: 2023-09-07T13:42:08Z
day: '12'
ddc:
- '000'
department:
- _id: DaAl
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '2002.10384'
file:
- access_level: open_access
checksum: cc755d0054bc4b2be778ea7aa7884d2f
content_type: application/pdf
creator: dernst
date_created: 2021-02-15T09:00:01Z
date_updated: 2021-02-15T09:00:01Z
file_id: '9120'
file_name: 2020_PMLR_Konstantinov.pdf
file_size: 281286
relation: main_file
success: 1
file_date_updated: 2021-02-15T09:00:01Z
has_accepted_license: '1'
intvolume: ' 119'
language:
- iso: eng
month: '07'
oa: 1
oa_version: Published Version
page: 5416-5425
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '805223'
name: Elastic Coordination for Scalable Machine Learning
publication: Proceedings of the 37th International Conference on Machine Learning
publication_identifier:
issn:
- 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
link:
- relation: supplementary_material
url: http://proceedings.mlr.press/v119/konstantinov20a/konstantinov20a-supp.pdf
record:
- id: '10799'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: On the sample complexity of adversarial multi-source PAC learning
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 119
year: '2020'
...
---
_id: '8186'
abstract:
- lang: eng
text: "Numerous methods have been proposed for probabilistic generative modelling
of\r\n3D objects. However, none of these is able to produce textured objects,
which\r\nrenders them of limited use for practical tasks. In this work, we present
the\r\nfirst generative model of textured 3D meshes. Training such a model would\r\ntraditionally
require a large dataset of textured meshes, but unfortunately,\r\nexisting datasets
of meshes lack detailed textures. We instead propose a new\r\ntraining methodology
that allows learning from collections of 2D images without\r\nany 3D information.
To do so, we train our model to explain a distribution of\r\nimages by modelling
each image as a 3D foreground object placed in front of a\r\n2D background. Thus,
it learns to generate meshes that, when rendered, produce\r\nimages similar to
those in its training set.\r\n A well-known problem when generating meshes with
deep networks is the\r\nemergence of self-intersections, which are problematic
for many use-cases. As a\r\nsecond contribution we therefore introduce a new generation
process for 3D\r\nmeshes that guarantees no self-intersections arise, based on
the physical\r\nintuition that faces should push one another out of the way as
they move.\r\n We conduct extensive experiments on our approach, reporting quantitative
and\r\nqualitative results on both synthetic data and natural images. These show
our\r\nmethod successfully learns to generate plausible and diverse textured 3D\r\nsamples
for five challenging object classes."
article_processing_charge: No
author:
- first_name: Paul M
full_name: Henderson, Paul M
id: 13C09E74-18D9-11E9-8878-32CFE5697425
last_name: Henderson
orcid: 0000-0002-5198-7445
- first_name: Vagia
full_name: Tsiminaki, Vagia
last_name: Tsiminaki
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Henderson PM, Tsiminaki V, Lampert C. Leveraging 2D data to learn textured
3D mesh generation. In: Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition. IEEE; 2020:7498-7507. doi:10.1109/CVPR42600.2020.00752'
apa: 'Henderson, P. M., Tsiminaki, V., & Lampert, C. (2020). Leveraging 2D data
to learn textured 3D mesh generation. In Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition (pp. 7498–7507). Virtual: IEEE.
https://doi.org/10.1109/CVPR42600.2020.00752'
chicago: Henderson, Paul M, Vagia Tsiminaki, and Christoph Lampert. “Leveraging
2D Data to Learn Textured 3D Mesh Generation.” In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, 7498–7507. IEEE, 2020.
https://doi.org/10.1109/CVPR42600.2020.00752.
ieee: P. M. Henderson, V. Tsiminaki, and C. Lampert, “Leveraging 2D data to learn
textured 3D mesh generation,” in Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, Virtual, 2020, pp. 7498–7507.
ista: 'Henderson PM, Tsiminaki V, Lampert C. 2020. Leveraging 2D data to learn textured
3D mesh generation. Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition. CVPR: Conference on Computer Vision and Pattern Recognition,
7498–7507.'
mla: Henderson, Paul M., et al. “Leveraging 2D Data to Learn Textured 3D Mesh Generation.”
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,
IEEE, 2020, pp. 7498–507, doi:10.1109/CVPR42600.2020.00752.
short: P.M. Henderson, V. Tsiminaki, C. Lampert, in:, Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, IEEE, 2020, pp. 7498–7507.
conference:
end_date: 2020-06-19
location: Virtual
name: 'CVPR: Conference on Computer Vision and Pattern Recognition'
start_date: 2020-06-14
date_created: 2020-07-31T16:53:49Z
date_published: 2020-07-01T00:00:00Z
date_updated: 2023-10-17T07:37:11Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.1109/CVPR42600.2020.00752
external_id:
arxiv:
- '2004.04180'
file:
- access_level: open_access
content_type: application/pdf
creator: phenders
date_created: 2020-07-31T16:57:12Z
date_updated: 2020-07-31T16:57:12Z
file_id: '8187'
file_name: paper.pdf
file_size: 10262773
relation: main_file
success: 1
file_date_updated: 2020-07-31T16:57:12Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://openaccess.thecvf.com/content_CVPR_2020/papers/Henderson_Leveraging_2D_Data_to_Learn_Textured_3D_Mesh_Generation_CVPR_2020_paper.pdf
month: '07'
oa: 1
oa_version: Submitted Version
page: 7498-7507
publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
Recognition
publication_identifier:
eisbn:
- '9781728171685'
eissn:
- 2575-7075
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: Leveraging 2D data to learn textured 3D mesh generation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '6944'
abstract:
- lang: eng
text: 'We study the problem of automatically detecting if a given multi-class classifier
operates outside of its specifications (out-of-specs), i.e. on input data from
a different distribution than what it was trained for. This is an important problem
to solve on the road towards creating reliable computer vision systems for real-world
applications, because the quality of a classifier’s predictions cannot be guaranteed
if it operates out-of-specs. Previously proposed methods for out-of-specs detection
make decisions on the level of single inputs. This, however, is insufficient to
achieve low false positive rates and high true positive rates at the same time.
In this work, we describe a new procedure named KS(conf), based on statistical
reasoning. Its main component is a classical Kolmogorov–Smirnov test that is applied
to the set of predicted confidence values for batches of samples. Working with
batches instead of single samples allows increasing the true positive rate without
negatively affecting the false positive rate, thereby overcoming a crucial limitation
of single sample tests. We show by extensive experiments using a variety of convolutional
network architectures and datasets that KS(conf) reliably detects out-of-specs
situations even under conditions where other tests fail. It furthermore has a
number of properties that make it an excellent candidate for practical deployment:
it is easy to implement, adds almost no overhead to the system, works with any
classifier that outputs confidence scores, and requires no a priori knowledge
about how the data distribution could change.'
article_processing_charge: Yes (via OA deal)
article_type: original
author:
- first_name: Rémy
full_name: Sun, Rémy
last_name: Sun
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Sun R, Lampert C. KS(conf): A light-weight test if a multiclass classifier
operates outside of its specifications. International Journal of Computer Vision.
2020;128(4):970-995. doi:10.1007/s11263-019-01232-x'
apa: 'Sun, R., & Lampert, C. (2020). KS(conf): A light-weight test if a multiclass
classifier operates outside of its specifications. International Journal of
Computer Vision. Springer Nature. https://doi.org/10.1007/s11263-019-01232-x'
chicago: 'Sun, Rémy, and Christoph Lampert. “KS(Conf): A Light-Weight Test If a
Multiclass Classifier Operates Outside of Its Specifications.” International
Journal of Computer Vision. Springer Nature, 2020. https://doi.org/10.1007/s11263-019-01232-x.'
ieee: 'R. Sun and C. Lampert, “KS(conf): A light-weight test if a multiclass classifier
operates outside of its specifications,” International Journal of Computer
Vision, vol. 128, no. 4. Springer Nature, pp. 970–995, 2020.'
ista: 'Sun R, Lampert C. 2020. KS(conf): A light-weight test if a multiclass classifier
operates outside of its specifications. International Journal of Computer Vision.
128(4), 970–995.'
mla: 'Sun, Rémy, and Christoph Lampert. “KS(Conf): A Light-Weight Test If a Multiclass
Classifier Operates Outside of Its Specifications.” International Journal of
Computer Vision, vol. 128, no. 4, Springer Nature, 2020, pp. 970–95, doi:10.1007/s11263-019-01232-x.'
short: R. Sun, C. Lampert, International Journal of Computer Vision 128 (2020) 970–995.
date_created: 2019-10-14T09:14:28Z
date_published: 2020-04-01T00:00:00Z
date_updated: 2024-02-22T14:57:30Z
day: '01'
ddc:
- '004'
department:
- _id: ChLa
doi: 10.1007/s11263-019-01232-x
ec_funded: 1
external_id:
isi:
- '000494406800001'
file:
- access_level: open_access
checksum: 155e63edf664dcacb3bdc1c2223e606f
content_type: application/pdf
creator: dernst
date_created: 2019-11-26T10:30:02Z
date_updated: 2020-07-14T12:47:45Z
file_id: '7110'
file_name: 2019_IJCV_Sun.pdf
file_size: 1715072
relation: main_file
file_date_updated: 2020-07-14T12:47:45Z
has_accepted_license: '1'
intvolume: ' 128'
isi: 1
issue: '4'
language:
- iso: eng
month: '04'
oa: 1
oa_version: Published Version
page: 970-995
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
- _id: B67AFEDC-15C9-11EA-A837-991A96BB2854
name: IST Austria Open Access Fund
publication: International Journal of Computer Vision
publication_identifier:
eissn:
- 1573-1405
issn:
- 0920-5691
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
link:
- relation: erratum
url: https://doi.org/10.1007/s11263-019-01262-5
record:
- id: '6482'
relation: earlier_version
status: public
scopus_import: '1'
status: public
title: 'KS(conf): A light-weight test if a multiclass classifier operates outside
of its specifications'
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: journal_article
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 128
year: '2020'
...
---
_id: '7171'
abstract:
- lang: ger
text: "Wissen Sie, was sich hinter künstlicher Intelligenz und maschinellem Lernen
verbirgt? \r\nDieses Sachbuch erklärt Ihnen leicht verständlich und ohne komplizierte
Formeln die grundlegenden Methoden und Vorgehensweisen des maschinellen Lernens.
Mathematisches Vorwissen ist dafür nicht nötig. Kurzweilig und informativ illustriert
Lisa, die Protagonistin des Buches, diese anhand von Alltagssituationen. \r\nEin
Buch für alle, die in Diskussionen über Chancen und Risiken der aktuellen Entwicklung
der künstlichen Intelligenz und des maschinellen Lernens mit Faktenwissen punkten
möchten. Auch für Schülerinnen und Schüler geeignet!"
article_processing_charge: No
citation:
ama: 'Kersting K, Lampert C, Rothkopf C, eds. Wie Maschinen Lernen: Künstliche
Intelligenz Verständlich Erklärt. 1st ed. Wiesbaden: Springer Nature; 2019.
doi:10.1007/978-3-658-26763-6'
apa: 'Kersting, K., Lampert, C., & Rothkopf, C. (Eds.). (2019). Wie Maschinen
Lernen: Künstliche Intelligenz Verständlich Erklärt (1st ed.). Wiesbaden:
Springer Nature. https://doi.org/10.1007/978-3-658-26763-6'
chicago: 'Kersting, Kristian, Christoph Lampert, and Constantin Rothkopf, eds. Wie
Maschinen Lernen: Künstliche Intelligenz Verständlich Erklärt. 1st ed. Wiesbaden:
Springer Nature, 2019. https://doi.org/10.1007/978-3-658-26763-6.'
ieee: 'K. Kersting, C. Lampert, and C. Rothkopf, Eds., Wie Maschinen Lernen:
Künstliche Intelligenz Verständlich Erklärt, 1st ed. Wiesbaden: Springer Nature,
2019.'
ista: 'Kersting K, Lampert C, Rothkopf C eds. 2019. Wie Maschinen Lernen: Künstliche
Intelligenz Verständlich Erklärt 1st ed., Wiesbaden: Springer Nature, XIV, 245p.'
mla: 'Kersting, Kristian, et al., editors. Wie Maschinen Lernen: Künstliche Intelligenz
Verständlich Erklärt. 1st ed., Springer Nature, 2019, doi:10.1007/978-3-658-26763-6.'
short: 'K. Kersting, C. Lampert, C. Rothkopf, eds., Wie Maschinen Lernen: Künstliche
Intelligenz Verständlich Erklärt, 1st ed., Springer Nature, Wiesbaden, 2019.'
date_created: 2019-12-11T14:15:56Z
date_published: 2019-10-30T00:00:00Z
date_updated: 2021-12-22T14:40:58Z
day: '30'
department:
- _id: ChLa
doi: 10.1007/978-3-658-26763-6
edition: '1'
editor:
- first_name: Kristian
full_name: Kersting, Kristian
last_name: Kersting
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Constantin
full_name: Rothkopf, Constantin
last_name: Rothkopf
language:
- iso: ger
month: '10'
oa_version: None
page: XIV, 245
place: Wiesbaden
publication_identifier:
eisbn:
- 978-3-658-26763-6
isbn:
- 978-3-658-26762-9
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
link:
- description: News on IST Website
relation: press_release
url: https://ist.ac.at/en/news/book-release-how-machines-learn/
status: public
title: 'Wie Maschinen Lernen: Künstliche Intelligenz Verständlich Erklärt'
type: book_editor
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
year: '2019'
...
---
_id: '6942'
abstract:
- lang: eng
text: "Graph games and Markov decision processes (MDPs) are standard models in reactive
synthesis and verification of probabilistic systems with nondeterminism. The class
of \U0001D714-regular winning conditions (e.g., safety, reachability, liveness,
parity conditions) provides a robust and expressive specification formalism for
properties that arise in analysis of reactive systems. The resolutions of nondeterminism
in games and MDPs are represented as strategies, and we consider succinct representation
of such strategies. The decision-tree data structure from machine learning retains
the flavor of decisions of strategies and allows entropy-based minimization to
obtain succinct trees. However, in contrast to traditional machine-learning problems
where small errors are allowed, for winning strategies in graph games and MDPs
no error is allowed, and the decision tree must represent the entire strategy.
In this work we propose decision trees with linear classifiers for representation
of strategies in graph games and MDPs. We have implemented strategy representation
using this data structure and we present experimental results for problems on
graph games and MDPs, which show that this new data structure yields a much
more efficient strategy representation than standard decision trees."
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Pranav
full_name: Ashok, Pranav
last_name: Ashok
- first_name: Tomáš
full_name: Brázdil, Tomáš
last_name: Brázdil
- first_name: Krishnendu
full_name: Chatterjee, Krishnendu
id: 2E5DCA20-F248-11E8-B48F-1D18A9856A87
last_name: Chatterjee
orcid: 0000-0002-4561-241X
- first_name: Jan
full_name: Křetínský, Jan
last_name: Křetínský
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Viktor
full_name: Toman, Viktor
id: 3AF3DA7C-F248-11E8-B48F-1D18A9856A87
last_name: Toman
orcid: 0000-0001-9036-063X
citation:
ama: 'Ashok P, Brázdil T, Chatterjee K, Křetínský J, Lampert C, Toman V. Strategy
representation by decision trees with linear classifiers. In: 16th International
Conference on Quantitative Evaluation of Systems. Vol 11785. Springer Nature;
2019:109-128. doi:10.1007/978-3-030-30281-8_7'
apa: 'Ashok, P., Brázdil, T., Chatterjee, K., Křetínský, J., Lampert, C., &
Toman, V. (2019). Strategy representation by decision trees with linear classifiers.
In 16th International Conference on Quantitative Evaluation of Systems
(Vol. 11785, pp. 109–128). Glasgow, United Kingdom: Springer Nature. https://doi.org/10.1007/978-3-030-30281-8_7'
chicago: Ashok, Pranav, Tomáš Brázdil, Krishnendu Chatterjee, Jan Křetínský, Christoph
Lampert, and Viktor Toman. “Strategy Representation by Decision Trees with Linear
Classifiers.” In 16th International Conference on Quantitative Evaluation of
Systems, 11785:109–28. Springer Nature, 2019. https://doi.org/10.1007/978-3-030-30281-8_7.
ieee: P. Ashok, T. Brázdil, K. Chatterjee, J. Křetínský, C. Lampert, and V. Toman,
“Strategy representation by decision trees with linear classifiers,” in 16th
International Conference on Quantitative Evaluation of Systems, Glasgow, United
Kingdom, 2019, vol. 11785, pp. 109–128.
ista: 'Ashok P, Brázdil T, Chatterjee K, Křetínský J, Lampert C, Toman V. 2019.
Strategy representation by decision trees with linear classifiers. 16th International
Conference on Quantitative Evaluation of Systems. QEST: Quantitative Evaluation
of Systems, LNCS, vol. 11785, 109–128.'
mla: Ashok, Pranav, et al. “Strategy Representation by Decision Trees with Linear
Classifiers.” 16th International Conference on Quantitative Evaluation of Systems,
vol. 11785, Springer Nature, 2019, pp. 109–28, doi:10.1007/978-3-030-30281-8_7.
short: P. Ashok, T. Brázdil, K. Chatterjee, J. Křetínský, C. Lampert, V. Toman,
in:, 16th International Conference on Quantitative Evaluation of Systems, Springer
Nature, 2019, pp. 109–128.
conference:
end_date: 2019-09-12
location: Glasgow, United Kingdom
name: 'QEST: Quantitative Evaluation of Systems'
start_date: 2019-09-10
date_created: 2019-10-14T06:57:49Z
date_published: 2019-09-04T00:00:00Z
date_updated: 2023-08-30T06:59:36Z
day: '04'
department:
- _id: KrCh
- _id: ChLa
doi: 10.1007/978-3-030-30281-8_7
external_id:
arxiv:
- '1906.08178'
isi:
- '000679281300007'
intvolume: ' 11785'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1906.08178
month: '09'
oa: 1
oa_version: Preprint
page: 109-128
project:
- _id: 25863FF4-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: S11407
name: Game Theory
- _id: 25F2ACDE-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: S11402-N23
name: Rigorous Systems Engineering
- _id: 25892FC0-B435-11E9-9278-68D0E5697425
grant_number: ICT15-003
name: Efficient Algorithms for Computer Aided Verification
publication: 16th International Conference on Quantitative Evaluation of Systems
publication_identifier:
eisbn:
- '9783030302818'
isbn:
- '9783030302801'
issn:
- 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Strategy representation by decision trees with linear classifiers
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 11785
year: '2019'
...
---
_id: '6554'
abstract:
- lang: eng
text: Due to the importance of zero-shot learning, i.e. classifying images where
there is a lack of labeled training data, the number of proposed approaches has
recently increased steadily. We argue that it is time to take a step back and
to analyze the status quo of the area. The purpose of this paper is three-fold.
First, given that there is no agreed-upon zero-shot learning benchmark,
we define a new benchmark by unifying both the evaluation protocols and
data splits of publicly available datasets used for this task. This is an important
contribution as published results are often not comparable and sometimes even
flawed due to, e.g. pre-training on zero-shot test classes. Moreover, we propose
a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset
which we make publicly available both in terms of image features and the images
themselves. Second, we compare and analyze a significant number of the state-of-the-art
methods in depth, both in the classic zero-shot setting but also in the more realistic
generalized zero-shot setting. Finally, we discuss in detail the limitations of
the current status of the area which can be taken as a basis for advancing it.
article_processing_charge: No
article_type: original
author:
- first_name: Yongqin
full_name: Xian, Yongqin
last_name: Xian
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Bernt
full_name: Schiele, Bernt
last_name: Schiele
- first_name: Zeynep
full_name: Akata, Zeynep
last_name: Akata
citation:
ama: Xian Y, Lampert C, Schiele B, Akata Z. Zero-shot learning - A comprehensive
evaluation of the good, the bad and the ugly. IEEE Transactions on Pattern
Analysis and Machine Intelligence. 2019;41(9):2251-2265. doi:10.1109/tpami.2018.2857768
apa: Xian, Y., Lampert, C., Schiele, B., & Akata, Z. (2019). Zero-shot learning
- A comprehensive evaluation of the good, the bad and the ugly. IEEE Transactions
on Pattern Analysis and Machine Intelligence. Institute of Electrical and
Electronics Engineers (IEEE). https://doi.org/10.1109/tpami.2018.2857768
chicago: Xian, Yongqin, Christoph Lampert, Bernt Schiele, and Zeynep Akata. “Zero-Shot
Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly.” IEEE
Transactions on Pattern Analysis and Machine Intelligence. Institute of Electrical
and Electronics Engineers (IEEE), 2019. https://doi.org/10.1109/tpami.2018.2857768.
ieee: Y. Xian, C. Lampert, B. Schiele, and Z. Akata, “Zero-shot learning - A comprehensive
evaluation of the good, the bad and the ugly,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 41, no. 9. Institute of Electrical
and Electronics Engineers (IEEE), pp. 2251–2265, 2019.
ista: Xian Y, Lampert C, Schiele B, Akata Z. 2019. Zero-shot learning - A comprehensive
evaluation of the good, the bad and the ugly. IEEE Transactions on Pattern Analysis
and Machine Intelligence. 41(9), 2251–2265.
mla: Xian, Yongqin, et al. “Zero-Shot Learning - A Comprehensive Evaluation of the
Good, the Bad and the Ugly.” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 41, no. 9, Institute of Electrical and Electronics Engineers
(IEEE), 2019, pp. 2251–65, doi:10.1109/tpami.2018.2857768.
short: Y. Xian, C. Lampert, B. Schiele, Z. Akata, IEEE Transactions on Pattern Analysis
and Machine Intelligence 41 (2019) 2251–2265.
date_created: 2019-06-11T14:05:59Z
date_published: 2019-09-01T00:00:00Z
date_updated: 2023-09-05T13:18:09Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/tpami.2018.2857768
external_id:
arxiv:
- '1707.00600'
isi:
- '000480343900015'
intvolume: ' 41'
isi: 1
issue: '9'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1707.00600
month: '09'
oa: 1
oa_version: Preprint
page: 2251-2265
publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
publication_identifier:
eissn:
- 1939-3539
issn:
- 0162-8828
publication_status: published
publisher: Institute of Electrical and Electronics Engineers (IEEE)
quality_controlled: '1'
scopus_import: '1'
status: public
title: Zero-shot learning - A comprehensive evaluation of the good, the bad and the
ugly
type: journal_article
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 41
year: '2019'
...
---
_id: '7479'
abstract:
- lang: eng
text: "Multi-exit architectures, in which a stack of processing layers is interleaved
with early output layers, allow the processing of a test example to stop early
and thus save computation time and/or energy. In this work, we propose a new
training procedure for multi-exit architectures based on the principle of knowledge
distillation. The method encourages early exits to mimic later, more accurate
exits, by matching their output probabilities.\r\nExperiments on CIFAR100 and
ImageNet show that distillation-based training significantly improves the
accuracy of early exits while maintaining state-of-the-art accuracy for late
ones. The method is particularly beneficial when training data is limited,
and it allows a straightforward extension to semi-supervised learning, i.e.
making use of unlabeled data at training time. Moreover, it takes only a few
lines to implement and incurs almost no computational overhead at training time,
and none at all at test time."
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Phuong M, Lampert C. Distillation-based training for multi-exit architectures.
In: IEEE International Conference on Computer Vision. Vol 2019-October.
IEEE; 2019:1355-1364. doi:10.1109/ICCV.2019.00144'
apa: 'Phuong, M., & Lampert, C. (2019). Distillation-based training for multi-exit
architectures. In IEEE International Conference on Computer Vision (Vol.
2019–October, pp. 1355–1364). Seoul, Korea: IEEE. https://doi.org/10.1109/ICCV.2019.00144'
chicago: Phuong, Mary, and Christoph Lampert. “Distillation-Based Training for Multi-Exit
Architectures.” In IEEE International Conference on Computer Vision, 2019–October:1355–64.
IEEE, 2019. https://doi.org/10.1109/ICCV.2019.00144.
ieee: M. Phuong and C. Lampert, “Distillation-based training for multi-exit architectures,”
in IEEE International Conference on Computer Vision, Seoul, Korea, 2019,
vol. 2019–October, pp. 1355–1364.
ista: 'Phuong M, Lampert C. 2019. Distillation-based training for multi-exit architectures.
IEEE International Conference on Computer Vision. ICCV: International Conference
on Computer Vision vol. 2019–October, 1355–1364.'
mla: Phuong, Mary, and Christoph Lampert. “Distillation-Based Training for Multi-Exit
Architectures.” IEEE International Conference on Computer Vision, vol.
2019–October, IEEE, 2019, pp. 1355–64, doi:10.1109/ICCV.2019.00144.
short: M. Phuong, C. Lampert, in:, IEEE International Conference on Computer Vision,
IEEE, 2019, pp. 1355–1364.
conference:
end_date: 2019-11-02
location: Seoul, Korea
name: 'ICCV: International Conference on Computer Vision'
start_date: 2019-10-27
date_created: 2020-02-11T09:06:57Z
date_published: 2019-10-01T00:00:00Z
date_updated: 2023-09-08T11:11:12Z
day: '01'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1109/ICCV.2019.00144
ec_funded: 1
external_id:
isi:
- '000531438101047'
file:
- access_level: open_access
checksum: 7b77fb5c2d27c4c37a7612ba46a66117
content_type: application/pdf
creator: bphuong
date_created: 2020-02-11T09:06:39Z
date_updated: 2020-07-14T12:47:59Z
file_id: '7480'
file_name: main.pdf
file_size: 735768
relation: main_file
file_date_updated: 2020-07-14T12:47:59Z
has_accepted_license: '1'
isi: 1
language:
- iso: eng
month: '10'
oa: 1
oa_version: Submitted Version
page: 1355-1364
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: IEEE International Conference on Computer Vision
publication_identifier:
isbn:
- '9781728148038'
issn:
- '15505499'
publication_status: published
publisher: IEEE
quality_controlled: '1'
related_material:
record:
- id: '9418'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: Distillation-based training for multi-exit architectures
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 2019-October
year: '2019'
...
---
_id: '7640'
abstract:
- lang: eng
text: We propose a new model for detecting visual relationships, such as "person
riding motorcycle" or "bottle on table". This task is an important step towards
comprehensive structured image understanding, going beyond detecting individual
objects. Our main novelty is a Box Attention mechanism that allows modeling pairwise
interactions between objects using standard object detection pipelines. The resulting
model is conceptually clean, expressive and relies on well-justified training
and prediction procedures. Moreover, unlike previously proposed approaches, our
model does not introduce any additional complex components or hyperparameters
on top of those already required by the underlying detection model. We conduct
an experimental evaluation on two datasets, V-COCO and Open Images, demonstrating
strong quantitative and qualitative results.
article_number: 1749-1753
article_processing_charge: No
author:
- first_name: Alexander
full_name: Kolesnikov, Alexander
id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
last_name: Kolesnikov
- first_name: Alina
full_name: Kuznetsova, Alina
last_name: Kuznetsova
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Vittorio
full_name: Ferrari, Vittorio
last_name: Ferrari
citation:
ama: 'Kolesnikov A, Kuznetsova A, Lampert C, Ferrari V. Detecting visual relationships
using box attention. In: Proceedings of the 2019 International Conference on
Computer Vision Workshop. IEEE; 2019. doi:10.1109/ICCVW.2019.00217'
apa: 'Kolesnikov, A., Kuznetsova, A., Lampert, C., & Ferrari, V. (2019). Detecting
visual relationships using box attention. In Proceedings of the 2019 International
Conference on Computer Vision Workshop. Seoul, South Korea: IEEE. https://doi.org/10.1109/ICCVW.2019.00217'
chicago: Kolesnikov, Alexander, Alina Kuznetsova, Christoph Lampert, and Vittorio
Ferrari. “Detecting Visual Relationships Using Box Attention.” In Proceedings
of the 2019 International Conference on Computer Vision Workshop. IEEE, 2019.
https://doi.org/10.1109/ICCVW.2019.00217.
ieee: A. Kolesnikov, A. Kuznetsova, C. Lampert, and V. Ferrari, “Detecting visual
relationships using box attention,” in Proceedings of the 2019 International
Conference on Computer Vision Workshop, Seoul, South Korea, 2019.
ista: 'Kolesnikov A, Kuznetsova A, Lampert C, Ferrari V. 2019. Detecting visual
relationships using box attention. Proceedings of the 2019 International Conference
on Computer Vision Workshop. ICCVW: International Conference on Computer Vision
Workshop, 1749–1753.'
mla: Kolesnikov, Alexander, et al. “Detecting Visual Relationships Using Box Attention.”
Proceedings of the 2019 International Conference on Computer Vision Workshop,
1749–1753, IEEE, 2019, doi:10.1109/ICCVW.2019.00217.
short: A. Kolesnikov, A. Kuznetsova, C. Lampert, V. Ferrari, in:, Proceedings of
the 2019 International Conference on Computer Vision Workshop, IEEE, 2019.
conference:
end_date: 2019-10-28
location: Seoul, South Korea
name: 'ICCVW: International Conference on Computer Vision Workshop'
start_date: 2019-10-27
date_created: 2020-04-05T22:00:51Z
date_published: 2019-10-01T00:00:00Z
date_updated: 2023-09-08T11:18:37Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/ICCVW.2019.00217
ec_funded: 1
external_id:
arxiv:
- '1807.02136'
isi:
- '000554591601098'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1807.02136
month: '10'
oa: 1
oa_version: Preprint
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: Proceedings of the 2019 International Conference on Computer Vision Workshop
publication_identifier:
isbn:
- '9781728150239'
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: Detecting visual relationships using box attention
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2019'
...
---
_id: '6569'
abstract:
- lang: eng
text: 'Knowledge distillation, i.e. one classifier being trained on the outputs
of another classifier, is an empirically very successful technique for knowledge
transfer between classifiers. It has even been observed that classifiers learn
much faster and more reliably if trained with the outputs of another classifier
as soft labels, instead of from ground truth data. So far, however, there is no
satisfactory theoretical explanation of this phenomenon. In this work, we provide
the first insights into the working mechanisms of distillation by studying the
special case of linear and deep linear classifiers. Specifically, we prove a
generalization bound that establishes fast convergence of the expected risk of
a distillation-trained linear classifier. From the bound and its proof we extract
three key factors that determine the success of distillation: data geometry – geometric
properties of the data distribution, in particular class separation, have an immediate
influence on the convergence speed of the risk; optimization bias – gradient descent
optimization finds a very favorable minimum of the distillation objective; and strong
monotonicity – the expected risk of the student classifier always decreases when the
size of the training set grows.'
article_processing_charge: No
author:
- first_name: Phuong
full_name: Bui Thi Mai, Phuong
id: 3EC6EE64-F248-11E8-B48F-1D18A9856A87
last_name: Bui Thi Mai
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Phuong M, Lampert C. Towards understanding knowledge distillation. In: Proceedings
of the 36th International Conference on Machine Learning. Vol 97. ML Research
Press; 2019:5142-5151.'
apa: 'Phuong, M., & Lampert, C. (2019). Towards understanding knowledge distillation.
In Proceedings of the 36th International Conference on Machine Learning
(Vol. 97, pp. 5142–5151). Long Beach, CA, United States: ML Research Press.'
chicago: Phuong, Mary, and Christoph Lampert. “Towards Understanding Knowledge Distillation.”
In Proceedings of the 36th International Conference on Machine Learning,
97:5142–51. ML Research Press, 2019.
ieee: M. Phuong and C. Lampert, “Towards understanding knowledge distillation,”
in Proceedings of the 36th International Conference on Machine Learning,
Long Beach, CA, United States, 2019, vol. 97, pp. 5142–5151.
ista: 'Phuong M, Lampert C. 2019. Towards understanding knowledge distillation.
Proceedings of the 36th International Conference on Machine Learning. ICML: International
Conference on Machine Learning vol. 97, 5142–5151.'
mla: Phuong, Mary, and Christoph Lampert. “Towards Understanding Knowledge Distillation.”
Proceedings of the 36th International Conference on Machine Learning, vol.
97, ML Research Press, 2019, pp. 5142–51.
short: M. Phuong, C. Lampert, in:, Proceedings of the 36th International Conference
on Machine Learning, ML Research Press, 2019, pp. 5142–5151.
conference:
end_date: 2019-06-15
location: Long Beach, CA, United States
name: 'ICML: International Conference on Machine Learning'
start_date: 2019-06-10
date_created: 2019-06-20T18:23:03Z
date_published: 2019-06-13T00:00:00Z
date_updated: 2023-10-17T12:31:38Z
day: '13'
ddc:
- '000'
department:
- _id: ChLa
file:
- access_level: open_access
checksum: a66d00e2694d749250f8507f301320ca
content_type: application/pdf
creator: bphuong
date_created: 2019-06-20T18:22:56Z
date_updated: 2020-07-14T12:47:33Z
file_id: '6570'
file_name: paper.pdf
file_size: 686432
relation: main_file
file_date_updated: 2020-07-14T12:47:33Z
has_accepted_license: '1'
intvolume: ' 97'
language:
- iso: eng
month: '06'
oa: 1
oa_version: Published Version
page: 5142-5151
publication: Proceedings of the 36th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
scopus_import: '1'
status: public
title: Towards understanding knowledge distillation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 97
year: '2019'
...
---
_id: '6590'
abstract:
- lang: eng
text: 'Modern machine learning methods often require more data for training than
a single expert can provide. Therefore, it has become a standard procedure to
collect data from external sources, e.g. via crowdsourcing. Unfortunately, the
quality of these sources is not always guaranteed. As additional complications,
the data might be stored in a distributed way, or might even have to remain private.
In this work, we address the question of how to learn robustly in such scenarios.
Studying the problem through the lens of statistical learning theory, we derive
a procedure that allows for learning from all available sources, yet automatically
suppresses irrelevant or corrupted data. We show by extensive experiments that
our method provides significant improvements over alternative approaches from
robust statistics and distributed optimization. '
article_processing_charge: No
author:
- first_name: Nikola H
full_name: Konstantinov, Nikola H
id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
last_name: Konstantinov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Konstantinov NH, Lampert C. Robust learning from untrusted sources. In: Proceedings
of the 36th International Conference on Machine Learning. Vol 97. ML Research
Press; 2019:3488-3498.'
apa: 'Konstantinov, N. H., & Lampert, C. (2019). Robust learning from untrusted
sources. In Proceedings of the 36th International Conference on Machine Learning
(Vol. 97, pp. 3488–3498). Long Beach, CA, USA: ML Research Press.'
chicago: Konstantinov, Nikola H, and Christoph Lampert. “Robust Learning from Untrusted
Sources.” In Proceedings of the 36th International Conference on Machine Learning,
97:3488–98. ML Research Press, 2019.
ieee: N. H. Konstantinov and C. Lampert, “Robust learning from untrusted sources,”
in Proceedings of the 36th International Conference on Machine Learning,
Long Beach, CA, USA, 2019, vol. 97, pp. 3488–3498.
ista: 'Konstantinov NH, Lampert C. 2019. Robust learning from untrusted sources.
Proceedings of the 36th International Conference on Machine Learning. ICML: International
Conference on Machine Learning vol. 97, 3488–3498.'
mla: Konstantinov, Nikola H., and Christoph Lampert. “Robust Learning from Untrusted
Sources.” Proceedings of the 36th International Conference on Machine Learning,
vol. 97, ML Research Press, 2019, pp. 3488–98.
short: N.H. Konstantinov, C. Lampert, in:, Proceedings of the 36th International
Conference on Machine Learning, ML Research Press, 2019, pp. 3488–3498.
conference:
end_date: 2019-06-15
location: Long Beach, CA, USA
name: 'ICML: International Conference on Machine Learning'
start_date: 2019-06-10
date_created: 2019-06-27T14:18:23Z
date_published: 2019-06-01T00:00:00Z
date_updated: 2023-10-17T12:31:55Z
day: '01'
department:
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '1901.10310'
intvolume: ' 97'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1901.10310
month: '06'
oa: 1
oa_version: Preprint
page: 3488-3498
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
- _id: 2564DBCA-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '665385'
name: International IST Doctoral Program
publication: Proceedings of the 36th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
record:
- id: '10799'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: Robust learning from untrusted sources
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 97
year: '2019'
...
---
_id: '6482'
abstract:
- lang: eng
text: 'Computer vision systems for automatic image categorization have become accurate
and reliable enough that they can run continuously for days or even years as components
of real-world commercial applications. A major open problem in this context, however,
is quality control. Good classification performance can only be expected if systems
run under the specific conditions, in particular data distributions, that they
were trained for. Surprisingly, none of the currently used deep network architectures
have a built-in functionality that could detect if a network operates on data
from a distribution it was not trained for, such that potentially a warning to
the human users could be triggered. In this work, we describe KS(conf), a procedure
for detecting such outside of specifications (out-of-specs) operation, based on
statistical testing of the network outputs. We show by extensive experiments using
the ImageNet, AwA2 and DAVIS datasets on a variety of ConvNet architectures that
KS(conf) reliably detects out-of-specs situations. It furthermore has a number
of properties that make it a promising candidate for practical deployment: it
is easy to implement, adds almost no overhead to the system, works with all networks,
including pretrained ones, and requires no a priori knowledge of how the data
distribution could change. '
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Rémy
full_name: Sun, Rémy
last_name: Sun
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Sun R, Lampert C. KS(conf): A light-weight test if a ConvNet operates outside
of its specifications. In: Vol 11269. Springer Nature; 2019:244-259. doi:10.1007/978-3-030-12939-2_18'
apa: 'Sun, R., & Lampert, C. (2019). KS(conf): A light-weight test if a ConvNet
operates outside of its specifications (Vol. 11269, pp. 244–259). Presented at
the GCPR: Conference on Pattern Recognition, Stuttgart, Germany: Springer Nature.
https://doi.org/10.1007/978-3-030-12939-2_18'
chicago: 'Sun, Rémy, and Christoph Lampert. “KS(Conf): A Light-Weight Test If a
ConvNet Operates Outside of Its Specifications,” 11269:244–59. Springer Nature,
2019. https://doi.org/10.1007/978-3-030-12939-2_18.'
ieee: 'R. Sun and C. Lampert, “KS(conf): A light-weight test if a ConvNet operates
outside of its specifications,” presented at the GCPR: Conference on Pattern Recognition,
Stuttgart, Germany, 2019, vol. 11269, pp. 244–259.'
ista: 'Sun R, Lampert C. 2019. KS(conf): A light-weight test if a ConvNet operates
outside of its specifications. GCPR: Conference on Pattern Recognition, LNCS,
vol. 11269, 244–259.'
mla: 'Sun, Rémy, and Christoph Lampert. KS(Conf): A Light-Weight Test If a ConvNet
Operates Outside of Its Specifications. Vol. 11269, Springer Nature, 2019,
pp. 244–59, doi:10.1007/978-3-030-12939-2_18.'
short: R. Sun, C. Lampert, in:, Springer Nature, 2019, pp. 244–259.
conference:
end_date: 2018-10-12
location: Stuttgart, Germany
name: 'GCPR: Conference on Pattern Recognition'
start_date: 2018-10-09
date_created: 2019-05-24T09:48:36Z
date_published: 2019-02-14T00:00:00Z
date_updated: 2024-02-22T14:57:29Z
day: '14'
department:
- _id: ChLa
doi: 10.1007/978-3-030-12939-2_18
ec_funded: 1
external_id:
arxiv:
- '1804.04171'
intvolume: ' 11269'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1804.04171
month: '02'
oa: 1
oa_version: Preprint
page: 244-259
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_identifier:
eissn:
- 1611-3349
isbn:
- '9783030129385'
- '9783030129392'
issn:
- 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
record:
- id: '6944'
relation: later_version
status: public
scopus_import: '1'
status: public
title: 'KS(conf): A light-weight test if a ConvNet operates outside of its specifications'
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 11269
year: '2019'
...
---
_id: '321'
abstract:
- lang: eng
text: The twelve papers in this special section focus on learning systems with shared
information for computer vision and multimedia communication analysis. In the
real world, a realistic setting for computer vision or multimedia recognition
problems is that we have some classes containing lots of training data and many
classes containing a small amount of training data. Therefore, how to use frequent
classes to help learning rare classes for which it is harder to collect the training
data is an open question. Learning with shared information is an emerging topic
in machine learning, computer vision and multimedia analysis. There are different
levels of components that can be shared during concept modeling and machine learning
stages, such as sharing generic object parts, sharing attributes, sharing transformations,
sharing regularization parameters and sharing training examples, etc. Regarding
the specific methods, multi-task learning, transfer learning and deep learning
can be seen as using different strategies to share information. These learning
with shared information methods are very effective in solving real-world large-scale
problems.
article_processing_charge: No
article_type: original
author:
- first_name: Trevor
full_name: Darrell, Trevor
last_name: Darrell
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Nico
full_name: Sebe, Nico
last_name: Sebe
- first_name: Ying
full_name: Wu, Ying
last_name: Wu
- first_name: Yan
full_name: Yan, Yan
last_name: Yan
citation:
ama: Darrell T, Lampert C, Sebe N, Wu Y, Yan Y. Guest editors’ introduction to the
special section on learning with Shared information for computer vision and multimedia
analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
2018;40(5):1029-1031. doi:10.1109/TPAMI.2018.2804998
apa: Darrell, T., Lampert, C., Sebe, N., Wu, Y., & Yan, Y. (2018). Guest editors’
introduction to the special section on learning with Shared information for computer
vision and multimedia analysis. IEEE Transactions on Pattern Analysis and Machine
Intelligence. IEEE. https://doi.org/10.1109/TPAMI.2018.2804998
chicago: Darrell, Trevor, Christoph Lampert, Nico Sebe, Ying Wu, and Yan Yan. “Guest
Editors’ Introduction to the Special Section on Learning with Shared Information
for Computer Vision and Multimedia Analysis.” IEEE Transactions on Pattern
Analysis and Machine Intelligence. IEEE, 2018. https://doi.org/10.1109/TPAMI.2018.2804998.
ieee: T. Darrell, C. Lampert, N. Sebe, Y. Wu, and Y. Yan, “Guest editors’ introduction
to the special section on learning with Shared information for computer vision
and multimedia analysis,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 40, no. 5. IEEE, pp. 1029–1031, 2018.
ista: Darrell T, Lampert C, Sebe N, Wu Y, Yan Y. 2018. Guest editors’ introduction
to the special section on learning with Shared information for computer vision
and multimedia analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
40(5), 1029–1031.
mla: Darrell, Trevor, et al. “Guest Editors’ Introduction to the Special Section
on Learning with Shared Information for Computer Vision and Multimedia Analysis.”
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40,
no. 5, IEEE, 2018, pp. 1029–31, doi:10.1109/TPAMI.2018.2804998.
short: T. Darrell, C. Lampert, N. Sebe, Y. Wu, Y. Yan, IEEE Transactions on Pattern
Analysis and Machine Intelligence 40 (2018) 1029–1031.
date_created: 2018-12-11T11:45:48Z
date_published: 2018-05-01T00:00:00Z
date_updated: 2023-09-11T14:07:54Z
day: '01'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1109/TPAMI.2018.2804998
external_id:
isi:
- '000428901200001'
file:
- access_level: open_access
checksum: b19c75da06faf3291a3ca47dfa50ef63
content_type: application/pdf
creator: dernst
date_created: 2020-05-14T12:50:48Z
date_updated: 2020-07-14T12:46:03Z
file_id: '7835'
file_name: 2018_IEEE_Darrell.pdf
file_size: 141724
relation: main_file
file_date_updated: 2020-07-14T12:46:03Z
has_accepted_license: '1'
intvolume: ' 40'
isi: 1
issue: '5'
language:
- iso: eng
month: '05'
oa: 1
oa_version: Published Version
page: 1029 - 1031
publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
publication_status: published
publisher: IEEE
publist_id: '7544'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Guest editors' introduction to the special section on learning with Shared
information for computer vision and multimedia analysis
type: journal_article
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 40
year: '2018'
...
---
_id: '10882'
abstract:
- lang: eng
text: 'We introduce Intelligent Annotation Dialogs for bounding box annotation.
We train an agent to automatically choose a sequence of actions for a human annotator
to produce a bounding box in a minimal amount of time. Specifically, we consider
two actions: box verification [34], where the annotator verifies a box generated
by an object detector, and manual box drawing. We explore two kinds of agents,
one based on predicting the probability that a box will be positively verified,
and the other based on reinforcement learning. We demonstrate that (1) our agents
are able to learn efficient annotation strategies in several scenarios, automatically
adapting to the image difficulty, the desired quality of the boxes, and the detector
strength; (2) in all scenarios the resulting annotation dialogs speed up annotation
compared to manual box drawing alone and box verification alone, while also outperforming
any fixed combination of verification and drawing in most scenarios; (3) in a
realistic scenario where the detector is iteratively re-trained, our agents evolve
a series of strategies that reflect the shifting trade-off between verification
and drawing as the detector grows stronger.'
article_processing_charge: No
author:
- first_name: Jasper
full_name: Uijlings, Jasper
last_name: Uijlings
- first_name: Ksenia
full_name: Konyushkova, Ksenia
last_name: Konyushkova
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Vittorio
full_name: Ferrari, Vittorio
last_name: Ferrari
citation:
ama: 'Uijlings J, Konyushkova K, Lampert C, Ferrari V. Learning intelligent dialogs
for bounding box annotation. In: 2018 IEEE/CVF Conference on Computer Vision
and Pattern Recognition. IEEE; 2018:9175-9184. doi:10.1109/cvpr.2018.00956'
apa: 'Uijlings, J., Konyushkova, K., Lampert, C., & Ferrari, V. (2018). Learning
intelligent dialogs for bounding box annotation. In 2018 IEEE/CVF Conference
on Computer Vision and Pattern Recognition (pp. 9175–9184). Salt Lake City,
UT, United States: IEEE. https://doi.org/10.1109/cvpr.2018.00956'
chicago: Uijlings, Jasper, Ksenia Konyushkova, Christoph Lampert, and Vittorio Ferrari.
“Learning Intelligent Dialogs for Bounding Box Annotation.” In 2018 IEEE/CVF
Conference on Computer Vision and Pattern Recognition, 9175–84. IEEE, 2018.
https://doi.org/10.1109/cvpr.2018.00956.
ieee: J. Uijlings, K. Konyushkova, C. Lampert, and V. Ferrari, “Learning intelligent
dialogs for bounding box annotation,” in 2018 IEEE/CVF Conference on Computer
Vision and Pattern Recognition, Salt Lake City, UT, United States, 2018, pp.
9175–9184.
ista: 'Uijlings J, Konyushkova K, Lampert C, Ferrari V. 2018. Learning intelligent
dialogs for bounding box annotation. 2018 IEEE/CVF Conference on Computer Vision
and Pattern Recognition. CVF: Conference on Computer Vision and Pattern Recognition,
9175–9184.'
mla: Uijlings, Jasper, et al. “Learning Intelligent Dialogs for Bounding Box Annotation.”
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE,
2018, pp. 9175–84, doi:10.1109/cvpr.2018.00956.
short: J. Uijlings, K. Konyushkova, C. Lampert, V. Ferrari, in:, 2018 IEEE/CVF Conference
on Computer Vision and Pattern Recognition, IEEE, 2018, pp. 9175–9184.
conference:
end_date: 2018-06-23
location: Salt Lake City, UT, United States
name: 'CVF: Conference on Computer Vision and Pattern Recognition'
start_date: 2018-06-18
date_created: 2022-03-18T12:45:09Z
date_published: 2018-12-17T00:00:00Z
date_updated: 2023-09-19T15:11:49Z
day: '17'
department:
- _id: ChLa
doi: 10.1109/cvpr.2018.00956
external_id:
arxiv:
- '1712.08087'
isi:
- '000457843609036'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.48550/arXiv.1712.08087
month: '12'
oa: 1
oa_version: Preprint
page: 9175-9184
publication: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
publication_identifier:
eissn:
- 2575-7075
isbn:
- '9781538664209'
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: Learning intelligent dialogs for bounding box annotation
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
year: '2018'
...
---
_id: '6012'
abstract:
- lang: eng
text: We present an approach to identify concise equations from data using a shallow
neural network approach. In contrast to ordinary black-box regression, this approach
allows understanding functional relations and generalizing them from observed
data to unseen parts of the parameter space. We show how to extend the class of
learnable equations for a recently proposed equation learning network to include
divisions, and we improve the learning and model selection strategy to be useful
for challenging real-world data. For systems governed by analytical expressions,
our method can in many cases identify the true underlying equation and extrapolate
to unseen domains. We demonstrate its effectiveness by experiments on a cart-pendulum
system, where only 2 random rollouts are required to learn the forward dynamics
and successfully achieve the swing-up task.
article_processing_charge: No
author:
- first_name: Subham
full_name: Sahoo, Subham
last_name: Sahoo
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Georg S
full_name: Martius, Georg S
id: 3A276B68-F248-11E8-B48F-1D18A9856A87
last_name: Martius
citation:
ama: 'Sahoo S, Lampert C, Martius GS. Learning equations for extrapolation and control.
In: Proceedings of the 35th International Conference on Machine Learning.
Vol 80. ML Research Press; 2018:4442-4450.'
apa: 'Sahoo, S., Lampert, C., & Martius, G. S. (2018). Learning equations for
extrapolation and control. In Proceedings of the 35th International Conference
on Machine Learning (Vol. 80, pp. 4442–4450). Stockholm, Sweden: ML Research
Press.'
chicago: Sahoo, Subham, Christoph Lampert, and Georg S Martius. “Learning Equations
for Extrapolation and Control.” In Proceedings of the 35th International Conference
on Machine Learning, 80:4442–50. ML Research Press, 2018.
ieee: S. Sahoo, C. Lampert, and G. S. Martius, “Learning equations for extrapolation
and control,” in Proceedings of the 35th International Conference on Machine
Learning, Stockholm, Sweden, 2018, vol. 80, pp. 4442–4450.
ista: 'Sahoo S, Lampert C, Martius GS. 2018. Learning equations for extrapolation
and control. Proceedings of the 35th International Conference on Machine Learning.
ICML: International Conference on Machine Learning vol. 80, 4442–4450.'
mla: Sahoo, Subham, et al. “Learning Equations for Extrapolation and Control.” Proceedings
of the 35th International Conference on Machine Learning, vol. 80, ML Research
Press, 2018, pp. 4442–50.
short: S. Sahoo, C. Lampert, G.S. Martius, in:, Proceedings of the 35th International
Conference on Machine Learning, ML Research Press, 2018, pp. 4442–4450.
conference:
end_date: 2018-07-15
location: Stockholm, Sweden
name: 'ICML: International Conference on Machine Learning'
start_date: 2018-07-10
date_created: 2019-02-14T15:21:07Z
date_published: 2018-02-01T00:00:00Z
date_updated: 2023-10-17T09:50:53Z
day: '01'
department:
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '1806.07259'
isi:
- '000683379204058'
intvolume: ' 80'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1806.07259
month: '02'
oa: 1
oa_version: Preprint
page: 4442-4450
project:
- _id: 25681D80-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '291734'
name: International IST Postdoc Fellowship Programme
publication: Proceedings of the 35th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
link:
- description: News on IST Homepage
relation: press_release
url: https://ist.ac.at/en/news/first-machine-learning-method-capable-of-accurate-extrapolation/
scopus_import: '1'
status: public
title: Learning equations for extrapolation and control
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 80
year: '2018'
...
---
_id: '6011'
abstract:
- lang: eng
text: 'We establish a data-dependent notion of algorithmic stability for Stochastic
Gradient Descent (SGD), and employ it to develop novel generalization bounds.
This is in contrast to previous distribution-free algorithmic stability results
for SGD which depend on the worst-case constants. By virtue of the data-dependent
argument, our bounds provide new insights into learning with SGD on convex and
non-convex problems. In the convex case, we show that the bound on the generalization
error depends on the risk at the initialization point. In the non-convex case,
we prove that the expected curvature of the objective function around the initialization
point has crucial influence on the generalization error. In both cases, our results
suggest a simple data-driven strategy to stabilize SGD by pre-screening its initialization.
As a corollary, our results allow us to show optimistic generalization bounds
that exhibit fast convergence rates for SGD subject to a vanishing empirical risk
and low noise of stochastic gradient. '
article_processing_charge: No
author:
- first_name: Ilja
full_name: Kuzborskij, Ilja
last_name: Kuzborskij
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Kuzborskij I, Lampert C. Data-dependent stability of stochastic gradient descent.
In: Proceedings of the 35th International Conference on Machine Learning.
Vol 80. ML Research Press; 2018:2815-2824.'
apa: 'Kuzborskij, I., & Lampert, C. (2018). Data-dependent stability of stochastic
gradient descent. In Proceedings of the 35th International Conference on Machine
Learning (Vol. 80, pp. 2815–2824). Stockholm, Sweden: ML Research Press.'
chicago: Kuzborskij, Ilja, and Christoph Lampert. “Data-Dependent Stability of Stochastic
Gradient Descent.” In Proceedings of the 35th International Conference on
Machine Learning, 80:2815–24. ML Research Press, 2018.
ieee: I. Kuzborskij and C. Lampert, “Data-dependent stability of stochastic gradient
descent,” in Proceedings of the 35th International Conference on Machine Learning,
Stockholm, Sweden, 2018, vol. 80, pp. 2815–2824.
ista: 'Kuzborskij I, Lampert C. 2018. Data-dependent stability of stochastic gradient
descent. Proceedings of the 35th International Conference on Machine Learning.
ICML: International Conference on Machine Learning vol. 80, 2815–2824.'
mla: Kuzborskij, Ilja, and Christoph Lampert. “Data-Dependent Stability of Stochastic
Gradient Descent.” Proceedings of the 35th International Conference on Machine
Learning, vol. 80, ML Research Press, 2018, pp. 2815–24.
short: I. Kuzborskij, C. Lampert, in:, Proceedings of the 35th International Conference
on Machine Learning, ML Research Press, 2018, pp. 2815–2824.
conference:
end_date: 2018-07-15
location: Stockholm, Sweden
name: 'ICML: International Conference on Machine Learning'
start_date: 2018-07-10
date_created: 2019-02-14T14:51:57Z
date_published: 2018-02-01T00:00:00Z
date_updated: 2023-10-17T09:51:13Z
day: '01'
department:
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '1703.01678'
isi:
- '000683379202095'
intvolume: ' 80'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1703.01678
month: '02'
oa: 1
oa_version: Preprint
page: 2815-2824
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: Proceedings of the 35th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
scopus_import: '1'
status: public
title: Data-dependent stability of stochastic gradient descent
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 80
year: '2018'
...
---
_id: '6841'
abstract:
- lang: eng
text: In classical machine learning, regression is treated as a black box process
of identifying a suitable function from a hypothesis set without attempting to
gain insight into the mechanism connecting inputs and outputs. In the natural
sciences, however, finding an interpretable function for a phenomenon is the prime
goal as it allows to understand and generalize results. This paper proposes a
novel type of function learning network, called equation learner (EQL), that can
learn analytical expressions and is able to extrapolate to unseen domains. It
is implemented as an end-to-end differentiable feed-forward network and allows
for efficient gradient based training. Due to sparsity regularization concise
interpretable expressions can be obtained. Often the true underlying source expression
is identified.
author:
- first_name: Georg S
full_name: Martius, Georg S
id: 3A276B68-F248-11E8-B48F-1D18A9856A87
last_name: Martius
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Martius GS, Lampert C. Extrapolation and learning equations. In: 5th International
Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings.
International Conference on Learning Representations; 2017.'
apa: 'Martius, G. S., & Lampert, C. (2017). Extrapolation and learning equations.
In 5th International Conference on Learning Representations, ICLR 2017 - Workshop
Track Proceedings. Toulon, France: International Conference on Learning Representations.'
chicago: Martius, Georg S, and Christoph Lampert. “Extrapolation and Learning Equations.”
In 5th International Conference on Learning Representations, ICLR 2017 - Workshop
Track Proceedings. International Conference on Learning Representations, 2017.
ieee: G. S. Martius and C. Lampert, “Extrapolation and learning equations,” in 5th
International Conference on Learning Representations, ICLR 2017 - Workshop Track
Proceedings, Toulon, France, 2017.
ista: 'Martius GS, Lampert C. 2017. Extrapolation and learning equations. 5th International
Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings.
ICLR: International Conference on Learning Representations.'
mla: Martius, Georg S., and Christoph Lampert. “Extrapolation and Learning Equations.”
5th International Conference on Learning Representations, ICLR 2017 - Workshop
Track Proceedings, International Conference on Learning Representations, 2017.
short: G.S. Martius, C. Lampert, in:, 5th International Conference on Learning Representations,
ICLR 2017 - Workshop Track Proceedings, International Conference on Learning Representations,
2017.
conference:
end_date: 2017-04-26
location: Toulon, France
name: 'ICLR: International Conference on Learning Representations'
start_date: 2017-04-24
date_created: 2019-09-01T22:01:00Z
date_published: 2017-02-21T00:00:00Z
date_updated: 2021-01-12T08:09:17Z
day: '21'
department:
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '1610.02995'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1610.02995
month: '02'
oa: 1
oa_version: Preprint
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: 5th International Conference on Learning Representations, ICLR 2017 -
Workshop Track Proceedings
publication_status: published
publisher: International Conference on Learning Representations
quality_controlled: '1'
scopus_import: 1
status: public
title: Extrapolation and learning equations
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
year: '2017'
...
---
_id: '750'
abstract:
- lang: eng
text: Modern communication technologies allow first responders to contact thousands
of potential volunteers simultaneously for support during a crisis or disaster
event. However, such volunteer efforts must be well coordinated and monitored,
in order to offer an effective relief to the professionals. In this paper we extend
earlier work on optimally assigning volunteers to selected landmark locations.
In particular, we emphasize the aspect that obtaining good assignments requires
not only advanced computational tools, but also a realistic measure of distance
between volunteers and landmarks. Specifically, we propose the use of the Open
Street Map (OSM) driving distance instead of the previously used flight distance.
We find the OSM driving distance to be better aligned with the interests of volunteers
and first responders. Furthermore, we show that relying on the flying distance
leads to a substantial underestimation of the number of required volunteers, causing
negative side effects in case of an actual crisis situation.
author:
- first_name: Jasmin
full_name: Pielorz, Jasmin
id: 49BC895A-F248-11E8-B48F-1D18A9856A87
last_name: Pielorz
- first_name: Matthias
full_name: Prandtstetter, Matthias
last_name: Prandtstetter
- first_name: Markus
full_name: Straub, Markus
last_name: Straub
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Pielorz J, Prandtstetter M, Straub M, Lampert C. Optimal geospatial volunteer
allocation needs realistic distances. In: 2017 IEEE International Conference
on Big Data. IEEE; 2017:3760-3763. doi:10.1109/BigData.2017.8258375'
apa: 'Pielorz, J., Prandtstetter, M., Straub, M., & Lampert, C. (2017). Optimal
geospatial volunteer allocation needs realistic distances. In 2017 IEEE International
Conference on Big Data (pp. 3760–3763). Boston, MA, United States: IEEE. https://doi.org/10.1109/BigData.2017.8258375'
chicago: Pielorz, Jasmin, Matthias Prandtstetter, Markus Straub, and Christoph Lampert.
“Optimal Geospatial Volunteer Allocation Needs Realistic Distances.” In 2017
IEEE International Conference on Big Data, 3760–63. IEEE, 2017. https://doi.org/10.1109/BigData.2017.8258375.
ieee: J. Pielorz, M. Prandtstetter, M. Straub, and C. Lampert, “Optimal geospatial
volunteer allocation needs realistic distances,” in 2017 IEEE International
Conference on Big Data, Boston, MA, United States, 2017, pp. 3760–3763.
ista: Pielorz J, Prandtstetter M, Straub M, Lampert C. 2017. Optimal geospatial
volunteer allocation needs realistic distances. 2017 IEEE International Conference
on Big Data. Big Data, 3760–3763.
mla: Pielorz, Jasmin, et al. “Optimal Geospatial Volunteer Allocation Needs Realistic
Distances.” 2017 IEEE International Conference on Big Data, IEEE, 2017,
pp. 3760–63, doi:10.1109/BigData.2017.8258375.
short: J. Pielorz, M. Prandtstetter, M. Straub, C. Lampert, in:, 2017 IEEE International
Conference on Big Data, IEEE, 2017, pp. 3760–3763.
conference:
end_date: 2017-12-14
location: Boston, MA, United States
name: Big Data
start_date: 2017-12-11
date_created: 2018-12-11T11:48:18Z
date_published: 2017-12-01T00:00:00Z
date_updated: 2021-01-12T08:13:55Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/BigData.2017.8258375
language:
- iso: eng
month: '12'
oa_version: None
page: 3760 - 3763
publication: 2017 IEEE International Conference on Big Data
publication_identifier:
isbn:
- 978-153862714-3
publication_status: published
publisher: IEEE
publist_id: '6906'
quality_controlled: '1'
scopus_import: 1
status: public
title: Optimal geospatial volunteer allocation needs realistic distances
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2017'
...
---
_id: '1000'
abstract:
- lang: eng
text: 'We study probabilistic models of natural images and extend the autoregressive
family of PixelCNN models by incorporating latent variables. Subsequently, we
describe two new generative image models that exploit different image transformations
as latent variables: a quantized grayscale view of the image or a multi-resolution
image pyramid. The proposed models tackle two known shortcomings of existing PixelCNN
models: 1) their tendency to focus on low-level image details, while largely ignoring
high-level image information, such as object shapes, and 2) their computationally
costly procedure for image sampling. We experimentally demonstrate benefits of
our LatentPixelCNN models, in particular showing that they produce much more realistically
looking image samples than previous state-of-the-art probabilistic models. '
acknowledgement: We thank Tim Salimans for spotting a mistake in our preliminary arXiv
manuscript. This work was funded by the European Research Council under the European
Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no 308036.
article_processing_charge: No
author:
- first_name: Alexander
full_name: Kolesnikov, Alexander
id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
last_name: Kolesnikov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Kolesnikov A, Lampert C. PixelCNN models with auxiliary variables for natural
image modeling. In: 34th International Conference on Machine Learning.
Vol 70. JMLR; 2017:1905-1914.'
apa: 'Kolesnikov, A., & Lampert, C. (2017). PixelCNN models with auxiliary variables
for natural image modeling. In 34th International Conference on Machine Learning
(Vol. 70, pp. 1905–1914). Sydney, Australia: JMLR.'
chicago: Kolesnikov, Alexander, and Christoph Lampert. “PixelCNN Models with Auxiliary
Variables for Natural Image Modeling.” In 34th International Conference on
Machine Learning, 70:1905–14. JMLR, 2017.
ieee: A. Kolesnikov and C. Lampert, “PixelCNN models with auxiliary variables for
natural image modeling,” in 34th International Conference on Machine Learning,
Sydney, Australia, 2017, vol. 70, pp. 1905–1914.
ista: 'Kolesnikov A, Lampert C. 2017. PixelCNN models with auxiliary variables for
natural image modeling. 34th International Conference on Machine Learning. ICML:
International Conference on Machine Learning vol. 70, 1905–1914.'
mla: Kolesnikov, Alexander, and Christoph Lampert. “PixelCNN Models with Auxiliary
Variables for Natural Image Modeling.” 34th International Conference on Machine
Learning, vol. 70, JMLR, 2017, pp. 1905–14.
short: A. Kolesnikov, C. Lampert, in:, 34th International Conference on Machine
Learning, JMLR, 2017, pp. 1905–1914.
conference:
end_date: 2017-08-11
location: Sydney, Australia
name: 'ICML: International Conference on Machine Learning'
start_date: 2017-08-06
date_created: 2018-12-11T11:49:37Z
date_published: 2017-08-01T00:00:00Z
date_updated: 2023-09-22T09:50:41Z
day: '01'
department:
- _id: ChLa
ec_funded: 1
external_id:
arxiv:
- '1612.08185'
isi:
- '000683309501102'
has_accepted_license: '1'
intvolume: ' 70'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1612.08185
month: '08'
oa: 1
oa_version: Submitted Version
page: 1905 - 1914
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: 34th International Conference on Machine Learning
publication_identifier:
isbn:
- 978-151085514-4
publication_status: published
publisher: JMLR
publist_id: '6398'
quality_controlled: '1'
scopus_import: '1'
status: public
title: PixelCNN models with auxiliary variables for natural image modeling
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 70
year: '2017'
...
---
_id: '998'
abstract:
- lang: eng
text: 'A major open problem on the road to artificial intelligence is the development
of incrementally learning systems that learn about more and more concepts over
time from a stream of data. In this work, we introduce a new training strategy,
iCaRL, that allows learning in such a class-incremental way: only the training
data for a small number of classes has to be present at the same time and new
classes can be added progressively. iCaRL learns strong classifiers and a data
representation simultaneously. This distinguishes it from earlier works that were
fundamentally limited to fixed data representations and therefore incompatible
with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet
ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period
of time where other strategies quickly fail. '
article_processing_charge: No
author:
- first_name: Sylvestre Alvise
full_name: Rebuffi, Sylvestre Alvise
last_name: Rebuffi
- first_name: Alexander
full_name: Kolesnikov, Alexander
id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
last_name: Kolesnikov
- first_name: Georg
full_name: Sperl, Georg
id: 4DD40360-F248-11E8-B48F-1D18A9856A87
last_name: Sperl
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Rebuffi SA, Kolesnikov A, Sperl G, Lampert C. iCaRL: Incremental classifier
and representation learning. In: Vol 2017. IEEE; 2017:5533-5542. doi:10.1109/CVPR.2017.587'
apa: 'Rebuffi, S. A., Kolesnikov, A., Sperl, G., & Lampert, C. (2017). iCaRL:
Incremental classifier and representation learning (Vol. 2017, pp. 5533–5542).
Presented at the CVPR: Computer Vision and Pattern Recognition, Honolulu, HI,
United States: IEEE. https://doi.org/10.1109/CVPR.2017.587'
chicago: 'Rebuffi, Sylvestre Alvise, Alexander Kolesnikov, Georg Sperl, and Christoph
Lampert. “ICaRL: Incremental Classifier and Representation Learning,” 2017:5533–42.
IEEE, 2017. https://doi.org/10.1109/CVPR.2017.587.'
ieee: 'S. A. Rebuffi, A. Kolesnikov, G. Sperl, and C. Lampert, “iCaRL: Incremental
classifier and representation learning,” presented at the CVPR: Computer Vision
and Pattern Recognition, Honolulu, HI, United States, 2017, vol. 2017, pp. 5533–5542.'
ista: 'Rebuffi SA, Kolesnikov A, Sperl G, Lampert C. 2017. iCaRL: Incremental classifier
and representation learning. CVPR: Computer Vision and Pattern Recognition vol.
2017, 5533–5542.'
mla: 'Rebuffi, Sylvestre Alvise, et al. ICaRL: Incremental Classifier and Representation
Learning. Vol. 2017, IEEE, 2017, pp. 5533–42, doi:10.1109/CVPR.2017.587.'
short: S.A. Rebuffi, A. Kolesnikov, G. Sperl, C. Lampert, in:, IEEE, 2017, pp. 5533–5542.
conference:
end_date: 2017-07-26
location: Honolulu, HI, United States
name: 'CVPR: Computer Vision and Pattern Recognition'
start_date: 2017-07-21
date_created: 2018-12-11T11:49:37Z
date_published: 2017-04-14T00:00:00Z
date_updated: 2023-09-22T09:51:58Z
day: '14'
department:
- _id: ChLa
- _id: ChWo
doi: 10.1109/CVPR.2017.587
ec_funded: 1
external_id:
isi:
- '000418371405066'
intvolume: ' 2017'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1611.07725
month: '04'
oa: 1
oa_version: Submitted Version
page: 5533 - 5542
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_identifier:
isbn:
- 978-153860457-1
publication_status: published
publisher: IEEE
publist_id: '6400'
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'iCaRL: Incremental classifier and representation learning'
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 2017
year: '2017'
...
---
_id: '911'
abstract:
- lang: eng
text: We develop a probabilistic technique for colorizing grayscale natural images.
In light of the intrinsic uncertainty of this task, the proposed probabilistic
framework has numerous desirable properties. In particular, our model is able
to produce multiple plausible and vivid colorizations for a given grayscale image
and is one of the first colorization models to provide a proper stochastic sampling
scheme. Moreover, our training procedure is supported by a rigorous theoretical
framework that does not require any ad hoc heuristics and allows for efficient
modeling and learning of the joint pixel color distribution. We demonstrate strong
quantitative and qualitative experimental results on the CIFAR-10 dataset and
the challenging ILSVRC 2012 dataset.
article_processing_charge: No
author:
- first_name: Amélie
full_name: Royer, Amélie
id: 3811D890-F248-11E8-B48F-1D18A9856A87
last_name: Royer
orcid: 0000-0002-8407-0705
- first_name: Alexander
full_name: Kolesnikov, Alexander
id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
last_name: Kolesnikov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Royer A, Kolesnikov A, Lampert C. Probabilistic image colorization. In: BMVA
Press; 2017:85.1-85.12. doi:10.5244/c.31.85'
apa: 'Royer, A., Kolesnikov, A., & Lampert, C. (2017). Probabilistic image colorization
(p. 85.1-85.12). Presented at the BMVC: British Machine Vision Conference, London,
United Kingdom: BMVA Press. https://doi.org/10.5244/c.31.85'
chicago: Royer, Amélie, Alexander Kolesnikov, and Christoph Lampert. “Probabilistic
Image Colorization,” 85.1-85.12. BMVA Press, 2017. https://doi.org/10.5244/c.31.85.
ieee: 'A. Royer, A. Kolesnikov, and C. Lampert, “Probabilistic image colorization,”
presented at the BMVC: British Machine Vision Conference, London, United Kingdom,
2017, p. 85.1-85.12.'
ista: 'Royer A, Kolesnikov A, Lampert C. 2017. Probabilistic image colorization.
BMVC: British Machine Vision Conference, 85.1-85.12.'
mla: Royer, Amélie, et al. Probabilistic Image Colorization. BMVA Press,
2017, p. 85.1-85.12, doi:10.5244/c.31.85.
short: A. Royer, A. Kolesnikov, C. Lampert, in:, BMVA Press, 2017, p. 85.1-85.12.
conference:
end_date: 2017-09-07
location: London, United Kingdom
name: 'BMVC: British Machine Vision Conference'
start_date: 2017-09-04
date_created: 2018-12-11T11:49:09Z
date_published: 2017-09-01T00:00:00Z
date_updated: 2023-10-16T10:04:02Z
day: '01'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.5244/c.31.85
ec_funded: 1
external_id:
arxiv:
- '1705.04258'
file:
- access_level: open_access
content_type: application/pdf
creator: dernst
date_created: 2020-08-10T07:14:33Z
date_updated: 2020-08-10T07:14:33Z
file_id: '8224'
file_name: 2017_BMVC_Royer.pdf
file_size: 1625363
relation: main_file
success: 1
file_date_updated: 2020-08-10T07:14:33Z
has_accepted_license: '1'
language:
- iso: eng
month: '09'
oa: 1
oa_version: Published Version
page: 85.1-85.12
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_identifier:
eisbn:
- 190172560X
publication_status: published
publisher: BMVA Press
publist_id: '6532'
quality_controlled: '1'
related_material:
record:
- id: '8390'
relation: dissertation_contains
status: public
scopus_import: '1'
status: public
title: Probabilistic image colorization
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2017'
...
---
_id: '1108'
abstract:
- lang: eng
text: In this work we study the learnability of stochastic processes with respect
to the conditional risk, i.e. the existence of a learning algorithm that improves
its next-step performance with the amount of observed data. We introduce a notion
of pairwise discrepancy between conditional distributions at different time steps
and show how certain properties of these discrepancies can be used to construct
a successful learning algorithm. Our main results are two theorems that establish
criteria for learnability for many classes of stochastic processes, including
all special cases studied previously in the literature.
alternative_title:
- PMLR
article_processing_charge: No
author:
- first_name: Alexander
full_name: Zimin, Alexander
id: 37099E9C-F248-11E8-B48F-1D18A9856A87
last_name: Zimin
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Zimin A, Lampert C. Learning theory for conditional risk minimization. In:
Vol 54. ML Research Press; 2017:213-222.'
apa: 'Zimin, A., & Lampert, C. (2017). Learning theory for conditional risk
minimization (Vol. 54, pp. 213–222). Presented at the AISTATS: Artificial Intelligence
and Statistics, Fort Lauderdale, FL, United States: ML Research Press.'
chicago: Zimin, Alexander, and Christoph Lampert. “Learning Theory for Conditional
Risk Minimization,” 54:213–22. ML Research Press, 2017.
ieee: 'A. Zimin and C. Lampert, “Learning theory for conditional risk minimization,”
presented at the AISTATS: Artificial Intelligence and Statistics, Fort Lauderdale,
FL, United States, 2017, vol. 54, pp. 213–222.'
ista: 'Zimin A, Lampert C. 2017. Learning theory for conditional risk minimization.
AISTATS: Artificial Intelligence and Statistics, PMLR, vol. 54, 213–222.'
mla: Zimin, Alexander, and Christoph Lampert. Learning Theory for Conditional
Risk Minimization. Vol. 54, ML Research Press, 2017, pp. 213–22.
short: A. Zimin, C. Lampert, in:, ML Research Press, 2017, pp. 213–222.
conference:
end_date: 2017-04-22
location: Fort Lauderdale, FL, United States
name: 'AISTATS: Artificial Intelligence and Statistics'
start_date: 2017-04-20
date_created: 2018-12-11T11:50:11Z
date_published: 2017-04-01T00:00:00Z
date_updated: 2023-10-17T10:01:12Z
day: '01'
department:
- _id: ChLa
ec_funded: 1
external_id:
isi:
- '000509368500024'
intvolume: ' 54'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://proceedings.mlr.press/v54/zimin17a/zimin17a.pdf
month: '04'
oa: 1
oa_version: Submitted Version
page: 213 - 222
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_status: published
publisher: ML Research Press
publist_id: '6261'
quality_controlled: '1'
status: public
title: Learning theory for conditional risk minimization
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 54
year: '2017'
...
---
_id: '999'
abstract:
- lang: eng
text: 'In multi-task learning, a learner is given a collection of prediction tasks
and needs to solve all of them. In contrast to previous work, which required that
annotated training data must be available for all tasks, we consider a new setting,
in which for some tasks, potentially most of them, only unlabeled training data
is provided. Consequently, to solve all tasks, information must be transferred
between tasks with labels and tasks without labels. Focusing on an instance-based
transfer method we analyze two variants of this setting: when the set of labeled
tasks is fixed, and when it can be actively selected by the learner. We state
and prove a generalization bound that covers both scenarios and derive from it
an algorithm for making the choice of labeled tasks (in the active case) and for
transferring information between the tasks in a principled way. We also illustrate
the effectiveness of the algorithm on synthetic and real data. '
alternative_title:
- PMLR
article_processing_charge: No
author:
- first_name: Anastasia
full_name: Pentina, Anastasia
id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
last_name: Pentina
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Pentina A, Lampert C. Multi-task learning with labeled and unlabeled tasks.
In: Vol 70. ML Research Press; 2017:2807-2816.'
apa: 'Pentina, A., & Lampert, C. (2017). Multi-task learning with labeled and
unlabeled tasks (Vol. 70, pp. 2807–2816). Presented at the ICML: International
Conference on Machine Learning, Sydney, Australia: ML Research Press.'
chicago: Pentina, Anastasia, and Christoph Lampert. “Multi-Task Learning with Labeled
and Unlabeled Tasks,” 70:2807–16. ML Research Press, 2017.
ieee: 'A. Pentina and C. Lampert, “Multi-task learning with labeled and unlabeled
tasks,” presented at the ICML: International Conference on Machine Learning, Sydney,
Australia, 2017, vol. 70, pp. 2807–2816.'
ista: 'Pentina A, Lampert C. 2017. Multi-task learning with labeled and unlabeled
tasks. ICML: International Conference on Machine Learning, PMLR, vol. 70, 2807–2816.'
mla: Pentina, Anastasia, and Christoph Lampert. Multi-Task Learning with Labeled
and Unlabeled Tasks. Vol. 70, ML Research Press, 2017, pp. 2807–16.
short: A. Pentina, C. Lampert, in:, ML Research Press, 2017, pp. 2807–2816.
conference:
end_date: 2017-08-11
location: Sydney, Australia
name: 'ICML: International Conference on Machine Learning'
start_date: 2017-08-06
date_created: 2018-12-11T11:49:37Z
date_published: 2017-06-08T00:00:00Z
date_updated: 2023-10-17T11:53:32Z
day: '08'
department:
- _id: ChLa
ec_funded: 1
external_id:
isi:
- '000683309502093'
intvolume: ' 70'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1602.06518
month: '06'
oa: 1
oa_version: Submitted Version
page: 2807 - 2816
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_identifier:
isbn:
- '9781510855144'
publication_status: published
publisher: ML Research Press
publist_id: '6399'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Multi-task learning with labeled and unlabeled tasks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 70
year: '2017'
...
---
_id: '1102'
abstract:
- lang: eng
text: Weakly-supervised object localization methods tend to fail for object classes
that consistently co-occur with the same background elements, e.g. trains on tracks.
We propose a method to overcome these failures by adding a very small amount of
model-specific additional annotation. The main idea is to cluster a deep network's
mid-level representations and assign object or distractor labels to each cluster.
Experiments show substantially improved localization results on the challenging
ILSVRC2014 dataset for bounding box detection and the PASCAL VOC2012 dataset for
semantic segmentation.
acknowledgement: "This work was funded in parts by the European Research Council\r\nunder
the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC grant\r\nagreement
no 308036. We gratefully acknowledge the support of NVIDIA Corporation with\r\nthe
donation of the GPUs used for this research."
author:
- first_name: Alexander
full_name: Kolesnikov, Alexander
id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
last_name: Kolesnikov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Kolesnikov A, Lampert C. Improving weakly-supervised object localization by
micro-annotation. In: Proceedings of the British Machine Vision Conference
2016. Vol 2016-September. BMVA Press; 2016:92.1-92.12. doi:10.5244/C.30.92'
apa: 'Kolesnikov, A., & Lampert, C. (2016). Improving weakly-supervised object
localization by micro-annotation. In Proceedings of the British Machine Vision
Conference 2016 (Vol. 2016–September, p. 92.1-92.12). York, United Kingdom:
BMVA Press. https://doi.org/10.5244/C.30.92'
chicago: Kolesnikov, Alexander, and Christoph Lampert. “Improving Weakly-Supervised
Object Localization by Micro-Annotation.” In Proceedings of the British Machine
Vision Conference 2016, 2016–September:92.1-92.12. BMVA Press, 2016. https://doi.org/10.5244/C.30.92.
ieee: A. Kolesnikov and C. Lampert, “Improving weakly-supervised object localization
by micro-annotation,” in Proceedings of the British Machine Vision Conference
2016, York, United Kingdom, 2016, vol. 2016–September, p. 92.1-92.12.
ista: 'Kolesnikov A, Lampert C. 2016. Improving weakly-supervised object localization
by micro-annotation. Proceedings of the British Machine Vision Conference 2016.
BMVC: British Machine Vision Conference vol. 2016–September, 92.1-92.12.'
mla: Kolesnikov, Alexander, and Christoph Lampert. “Improving Weakly-Supervised
Object Localization by Micro-Annotation.” Proceedings of the British Machine
Vision Conference 2016, vol. 2016–September, BMVA Press, 2016, p. 92.1-92.12,
doi:10.5244/C.30.92.
short: A. Kolesnikov, C. Lampert, in:, Proceedings of the British Machine Vision
Conference 2016, BMVA Press, 2016, p. 92.1-92.12.
conference:
end_date: 2016-09-22
location: York, United Kingdom
name: 'BMVC: British Machine Vision Conference'
start_date: 2016-09-19
date_created: 2018-12-11T11:50:09Z
date_published: 2016-09-01T00:00:00Z
date_updated: 2021-01-12T06:48:18Z
day: '01'
department:
- _id: ChLa
doi: 10.5244/C.30.92
ec_funded: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://www.bmva.org/bmvc/2016/papers/paper092/paper092.pdf
month: '09'
oa: 1
oa_version: Published Version
page: 92.1-92.12
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: Proceedings of the British Machine Vision Conference 2016
publication_status: published
publisher: BMVA Press
publist_id: '6273'
quality_controlled: '1'
scopus_import: 1
status: public
title: Improving weakly-supervised object localization by micro-annotation
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 2016-September
year: '2016'
...
---
_id: '1369'
abstract:
- lang: eng
text: 'We introduce a new loss function for the weakly-supervised training of semantic
image segmentation models based on three guiding principles: to seed with weak
localization cues, to expand objects based on the information about which classes
can occur in an image, and to constrain the segmentations to coincide with object
boundaries. We show experimentally that training a deep convolutional neural network
using the proposed loss function leads to substantially better segmentations than
previous state-of-the-art methods on the challenging PASCAL VOC 2012 dataset.
We furthermore give insight into the working mechanism of our method by a detailed
experimental study that illustrates how the segmentation quality is affected by
each term of the proposed loss function as well as their combinations.'
alternative_title:
- LNCS
author:
- first_name: Alexander
full_name: Kolesnikov, Alexander
id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
last_name: Kolesnikov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Kolesnikov A, Lampert C. Seed, expand and constrain: Three principles for
weakly-supervised image segmentation. In: Vol 9908. Springer; 2016:695-711. doi:10.1007/978-3-319-46493-0_42'
apa: 'Kolesnikov, A., & Lampert, C. (2016). Seed, expand and constrain: Three
principles for weakly-supervised image segmentation (Vol. 9908, pp. 695–711).
Presented at the ECCV: European Conference on Computer Vision, Amsterdam, The
Netherlands: Springer. https://doi.org/10.1007/978-3-319-46493-0_42'
chicago: 'Kolesnikov, Alexander, and Christoph Lampert. “Seed, Expand and Constrain:
Three Principles for Weakly-Supervised Image Segmentation,” 9908:695–711. Springer,
2016. https://doi.org/10.1007/978-3-319-46493-0_42.'
ieee: 'A. Kolesnikov and C. Lampert, “Seed, expand and constrain: Three principles
for weakly-supervised image segmentation,” presented at the ECCV: European Conference
on Computer Vision, Amsterdam, The Netherlands, 2016, vol. 9908, pp. 695–711.'
ista: 'Kolesnikov A, Lampert C. 2016. Seed, expand and constrain: Three principles
for weakly-supervised image segmentation. ECCV: European Conference on Computer
Vision, LNCS, vol. 9908, 695–711.'
mla: 'Kolesnikov, Alexander, and Christoph Lampert. Seed, Expand and Constrain:
Three Principles for Weakly-Supervised Image Segmentation. Vol. 9908, Springer,
2016, pp. 695–711, doi:10.1007/978-3-319-46493-0_42.'
short: A. Kolesnikov, C. Lampert, in:, Springer, 2016, pp. 695–711.
conference:
end_date: 2016-10-14
location: Amsterdam, The Netherlands
name: 'ECCV: European Conference on Computer Vision'
start_date: 2016-10-11
date_created: 2018-12-11T11:51:37Z
date_published: 2016-09-15T00:00:00Z
date_updated: 2021-01-12T06:50:12Z
day: '15'
department:
- _id: ChLa
doi: 10.1007/978-3-319-46493-0_42
ec_funded: 1
intvolume: ' 9908'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1603.06098
month: '09'
oa: 1
oa_version: Preprint
page: 695 - 711
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_status: published
publisher: Springer
publist_id: '5842'
quality_controlled: '1'
scopus_import: 1
status: public
title: 'Seed, expand and constrain: Three principles for weakly-supervised image segmentation'
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 9908
year: '2016'
...
---
_id: '1707'
abstract:
- lang: eng
text: "Volunteer supporters play an important role in modern crisis and disaster
management. In the times of mobile Internet devices, help from thousands of volunteers
can be requested within a short time span, thus relieving professional helpers
from minor chores or geographically spread-out tasks. However, the simultaneous
availability of many volunteers also poses new problems. In particular, the volunteer
efforts must be well coordinated, or otherwise situations might emerge in which
too many idle volunteers at one location become more of a burden than a relief
to the professionals.\r\nIn this work, we study the task of optimally assigning
volunteers to selected locations, e.g. in order to perform regular measurements,
to report on damage, or to distribute information or resources to the population
in a crisis situation. We formulate the assignment tasks as an optimization problem
and propose an effective and efficient solution procedure. Experiments on real
data of the Team Österreich, consisting of over 36,000 Austrian volunteers, show
the effectiveness and efficiency of our approach."
acknowledgement: The DRIVER FP7 project has received funding from the European Unions
Seventh Framework Programme for research, technological development and demonstration
under grant agreement no 607798. RE-ACTA was funded within the framework of the
Austrian Security Research Programme KIRAS by the Federal Ministry for Transport,
Innovation and Technology.
article_number: '7402041'
author:
- first_name: Jasmin
full_name: Pielorz, Jasmin
id: 49BC895A-F248-11E8-B48F-1D18A9856A87
last_name: Pielorz
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Pielorz J, Lampert C. Optimal geospatial allocation of volunteers for crisis
management. In: IEEE; 2016. doi:10.1109/ICT-DM.2015.7402041'
apa: 'Pielorz, J., & Lampert, C. (2016). Optimal geospatial allocation of volunteers
for crisis management. Presented at the ICT-DM: Information and Communication
Technologies for Disaster Management, Rennes, France: IEEE. https://doi.org/10.1109/ICT-DM.2015.7402041'
chicago: Pielorz, Jasmin, and Christoph Lampert. “Optimal Geospatial Allocation
of Volunteers for Crisis Management.” IEEE, 2016. https://doi.org/10.1109/ICT-DM.2015.7402041.
ieee: 'J. Pielorz and C. Lampert, “Optimal geospatial allocation of volunteers for
crisis management,” presented at the ICT-DM: Information and Communication Technologies
for Disaster Management, Rennes, France, 2016.'
ista: 'Pielorz J, Lampert C. 2016. Optimal geospatial allocation of volunteers for
crisis management. ICT-DM: Information and Communication Technologies for Disaster
Management, 7402041.'
mla: Pielorz, Jasmin, and Christoph Lampert. Optimal Geospatial Allocation of
Volunteers for Crisis Management. 7402041, IEEE, 2016, doi:10.1109/ICT-DM.2015.7402041.
short: J. Pielorz, C. Lampert, in:, IEEE, 2016.
conference:
end_date: 2015-12-02
location: Rennes, France
name: 'ICT-DM: Information and Communication Technologies for Disaster Management'
start_date: 2015-11-30
date_created: 2018-12-11T11:53:35Z
date_published: 2016-02-11T00:00:00Z
date_updated: 2021-01-12T06:52:39Z
day: '11'
department:
- _id: ChLa
doi: 10.1109/ICT-DM.2015.7402041
language:
- iso: eng
month: '02'
oa_version: None
publication_status: published
publisher: IEEE
publist_id: '5429'
quality_controlled: '1'
scopus_import: 1
status: public
title: Optimal geospatial allocation of volunteers for crisis management
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
year: '2016'
...
---
_id: '1425'
abstract:
- lang: eng
text: 'In this work we aim at extending the theoretical foundations of lifelong
learning. Previous work analyzing this scenario is based on the assumption that
learning tasks are sampled i.i.d. from a task environment or limited to strongly
constrained data distributions. Instead, we study two scenarios when lifelong
learning is possible, even though the observed tasks do not form an i.i.d. sample:
first, when they are sampled from the same environment, but possibly with dependencies,
and second, when the task environment is allowed to change over time in a consistent
way. In the first case we prove a PAC-Bayesian theorem that can be seen as a direct
generalization of the analogous previous result for the i.i.d. case. For the second
scenario we propose to learn an inductive bias in the form of a transfer procedure.
We present a generalization bound and show on a toy example how it can be used
to identify a beneficial transfer algorithm.'
alternative_title:
- Advances in Neural Information Processing Systems
author:
- first_name: Anastasia
full_name: Pentina, Anastasia
id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
last_name: Pentina
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Pentina A, Lampert C. Lifelong learning with non-i.i.d. tasks. In: Vol 2015.
Neural Information Processing Systems; 2015:1540-1548.'
apa: 'Pentina, A., & Lampert, C. (2015). Lifelong learning with non-i.i.d. tasks
(Vol. 2015, pp. 1540–1548). Presented at the NIPS: Neural Information Processing
Systems, Montreal, Canada: Neural Information Processing Systems.'
chicago: Pentina, Anastasia, and Christoph Lampert. “Lifelong Learning with Non-i.i.d.
Tasks,” 2015:1540–48. Neural Information Processing Systems, 2015.
ieee: 'A. Pentina and C. Lampert, “Lifelong learning with non-i.i.d. tasks,” presented
at the NIPS: Neural Information Processing Systems, Montreal, Canada, 2015, vol.
2015, pp. 1540–1548.'
ista: 'Pentina A, Lampert C. 2015. Lifelong learning with non-i.i.d. tasks. NIPS:
Neural Information Processing Systems, Advances in Neural Information Processing
Systems, vol. 2015, 1540–1548.'
mla: Pentina, Anastasia, and Christoph Lampert. Lifelong Learning with Non-i.i.d.
Tasks. Vol. 2015, Neural Information Processing Systems, 2015, pp. 1540–48.
short: A. Pentina, C. Lampert, in:, Neural Information Processing Systems, 2015,
pp. 1540–1548.
conference:
end_date: 2015-12-12
location: Montreal, Canada
name: 'NIPS: Neural Information Processing Systems'
start_date: 2015-12-07
date_created: 2018-12-11T11:51:57Z
date_published: 2015-01-01T00:00:00Z
date_updated: 2021-01-12T06:50:39Z
day: '01'
department:
- _id: ChLa
ec_funded: 1
intvolume: ' 2015'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://papers.nips.cc/paper/6007-lifelong-learning-with-non-iid-tasks
month: '01'
oa: 1
oa_version: None
page: 1540 - 1548
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_status: published
publisher: Neural Information Processing Systems
publist_id: '5781'
quality_controlled: '1'
scopus_import: 1
status: public
title: Lifelong learning with non-i.i.d. tasks
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 2015
year: '2015'
...
---
_id: '1859'
abstract:
- lang: eng
text: "Structural support vector machines (SSVMs) are amongst the best performing
models for structured computer vision tasks, such as semantic image segmentation
or human pose estimation. Training SSVMs, however, is computationally costly,
because it requires repeated calls to a structured prediction subroutine (called
\\emph{max-oracle}), which has to solve an optimization problem itself, e.g. a
graph cut.\r\nIn this work, we introduce a new algorithm for SSVM training that
is more efficient than earlier techniques when the max-oracle is computationally
expensive, as is frequently the case in computer vision tasks. The main idea
is to (i) combine the recent stochastic Block-Coordinate Frank-Wolfe algorithm
with efficient hyperplane caching, and (ii) use an automatic selection rule for
deciding whether to call the exact max-oracle or to rely on an approximate one
based on the cached hyperplanes.\r\nWe show experimentally that this strategy
leads to faster convergence to the optimum with respect to the number of required
oracle calls, and that this translates into faster convergence with respect to
the total runtime when the max-oracle is slow compared to the other steps of the
algorithm. "
author:
- first_name: Neel
full_name: Shah, Neel
id: 31ABAF80-F248-11E8-B48F-1D18A9856A87
last_name: Shah
- first_name: Vladimir
full_name: Kolmogorov, Vladimir
id: 3D50B0BA-F248-11E8-B48F-1D18A9856A87
last_name: Kolmogorov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Shah N, Kolmogorov V, Lampert C. A multi-plane block-coordinate Frank-Wolfe
algorithm for training structural SVMs with a costly max-oracle. In: IEEE; 2015:2737-2745.
doi:10.1109/CVPR.2015.7298890'
apa: 'Shah, N., Kolmogorov, V., & Lampert, C. (2015). A multi-plane block-coordinate
Frank-Wolfe algorithm for training structural SVMs with a costly max-oracle (pp.
2737–2745). Presented at the CVPR: Computer Vision and Pattern Recognition, Boston,
MA, USA: IEEE. https://doi.org/10.1109/CVPR.2015.7298890'
chicago: Shah, Neel, Vladimir Kolmogorov, and Christoph Lampert. “A Multi-Plane
Block-Coordinate Frank-Wolfe Algorithm for Training Structural SVMs with a Costly
Max-Oracle,” 2737–45. IEEE, 2015. https://doi.org/10.1109/CVPR.2015.7298890.
ieee: 'N. Shah, V. Kolmogorov, and C. Lampert, “A multi-plane block-coordinate Frank-Wolfe
algorithm for training structural SVMs with a costly max-oracle,” presented at
the CVPR: Computer Vision and Pattern Recognition, Boston, MA, USA, 2015, pp.
2737–2745.'
ista: 'Shah N, Kolmogorov V, Lampert C. 2015. A multi-plane block-coordinate Frank-Wolfe
algorithm for training structural SVMs with a costly max-oracle. CVPR: Computer
Vision and Pattern Recognition, 2737–2745.'
mla: Shah, Neel, et al. A Multi-Plane Block-Coordinate Frank-Wolfe Algorithm
for Training Structural SVMs with a Costly Max-Oracle. IEEE, 2015, pp. 2737–45,
doi:10.1109/CVPR.2015.7298890.
short: N. Shah, V. Kolmogorov, C. Lampert, in:, IEEE, 2015, pp. 2737–2745.
conference:
end_date: 2015-06-12
location: Boston, MA, USA
name: 'CVPR: Computer Vision and Pattern Recognition'
start_date: 2015-06-07
date_created: 2018-12-11T11:54:24Z
date_published: 2015-06-01T00:00:00Z
date_updated: 2021-01-12T06:53:40Z
day: '01'
department:
- _id: VlKo
- _id: ChLa
doi: 10.1109/CVPR.2015.7298890
ec_funded: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://arxiv.org/abs/1408.6804
month: '06'
oa: 1
oa_version: Preprint
page: 2737 - 2745
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
- _id: 25FBA906-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '616160'
name: 'Discrete Optimization in Computer Vision: Theory and Practice'
publication_status: published
publisher: IEEE
publist_id: '5240'
quality_controlled: '1'
scopus_import: 1
status: public
title: A multi-plane block-coordinate Frank-Wolfe algorithm for training structural
SVMs with a costly max-oracle
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2015'
...
---
_id: '1860'
abstract:
- lang: eng
text: Classifiers for object categorization are usually evaluated by their accuracy
on a set of i.i.d. test examples. This provides us with an estimate of the expected
error when applying the classifiers to a single new image. In real application,
however, classifiers are rarely only used for a single image and then discarded.
Instead, they are applied sequentially to many images, and these are typically
not i.i.d. samples from a fixed data distribution, but they carry dependencies
and their class distribution varies over time. In this work, we argue that the
phenomenon of correlated data at prediction time is not a nuisance, but a blessing
in disguise. We describe a probabilistic method for adapting classifiers at prediction
time without having to retrain them. We also introduce a framework for creating
realistically distributed image sequences, which offers a way to benchmark classifier
adaptation methods, such as the one we propose. Experiments on the ILSVRC2010
and ILSVRC2012 datasets show that adapting object classification systems at prediction
time can significantly reduce their error rate, even with no additional human
feedback.
author:
- first_name: Amélie
full_name: Royer, Amélie
last_name: Royer
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Royer A, Lampert C. Classifier adaptation at prediction time. In: IEEE; 2015:1401-1409.
doi:10.1109/CVPR.2015.7298746'
apa: 'Royer, A., & Lampert, C. (2015). Classifier adaptation at prediction time
(pp. 1401–1409). Presented at the CVPR: Computer Vision and Pattern Recognition,
Boston, MA, United States: IEEE. https://doi.org/10.1109/CVPR.2015.7298746'
chicago: Royer, Amélie, and Christoph Lampert. “Classifier Adaptation at Prediction
Time,” 1401–9. IEEE, 2015. https://doi.org/10.1109/CVPR.2015.7298746.
ieee: 'A. Royer and C. Lampert, “Classifier adaptation at prediction time,” presented
at the CVPR: Computer Vision and Pattern Recognition, Boston, MA, United States,
2015, pp. 1401–1409.'
ista: 'Royer A, Lampert C. 2015. Classifier adaptation at prediction time. CVPR:
Computer Vision and Pattern Recognition, 1401–1409.'
mla: Royer, Amélie, and Christoph Lampert. Classifier Adaptation at Prediction
Time. IEEE, 2015, pp. 1401–09, doi:10.1109/CVPR.2015.7298746.
short: A. Royer, C. Lampert, in:, IEEE, 2015, pp. 1401–1409.
conference:
end_date: 2015-06-12
location: Boston, MA, United States
name: 'CVPR: Computer Vision and Pattern Recognition'
start_date: 2015-06-07
date_created: 2018-12-11T11:54:24Z
date_published: 2015-06-01T00:00:00Z
date_updated: 2021-01-12T06:53:41Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/CVPR.2015.7298746
ec_funded: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Royer_Classifier_Adaptation_at_2015_CVPR_paper.pdf
month: '06'
oa: 1
oa_version: Submitted Version
page: 1401 - 1409
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_status: published
publisher: IEEE
publist_id: '5239'
quality_controlled: '1'
scopus_import: 1
status: public
title: Classifier adaptation at prediction time
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2015'
...
---
_id: '1858'
abstract:
- lang: eng
text: 'We study the problem of predicting the future, though only in the probabilistic
sense of estimating a future state of a time-varying probability distribution.
This is not only an interesting academic problem, but solving this extrapolation
problem also has many practical applications, e.g. for training classifiers that
have to operate under time-varying conditions. Our main contribution is a method
for predicting the next step of the time-varying distribution from a given sequence
of sample sets from earlier time steps. For this we rely on two recent machine
learning techniques: embedding probability distributions into a reproducing kernel
Hilbert space, and learning operators by vector-valued regression. We illustrate
the working principles and the practical usefulness of our method by experiments
on synthetic and real data. We also highlight an exemplary application: training
a classifier in a domain adaptation setting without having access to examples
from the test time distribution at training time.'
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Lampert C. Predicting the future behavior of a time-varying probability distribution.
In: IEEE; 2015:942-950. doi:10.1109/CVPR.2015.7298696'
apa: 'Lampert, C. (2015). Predicting the future behavior of a time-varying probability
distribution (pp. 942–950). Presented at the CVPR: Computer Vision and Pattern
Recognition, Boston, MA, United States: IEEE. https://doi.org/10.1109/CVPR.2015.7298696'
chicago: Lampert, Christoph. “Predicting the Future Behavior of a Time-Varying Probability
Distribution,” 942–50. IEEE, 2015. https://doi.org/10.1109/CVPR.2015.7298696.
ieee: 'C. Lampert, “Predicting the future behavior of a time-varying probability
distribution,” presented at the CVPR: Computer Vision and Pattern Recognition,
Boston, MA, United States, 2015, pp. 942–950.'
ista: 'Lampert C. 2015. Predicting the future behavior of a time-varying probability
distribution. CVPR: Computer Vision and Pattern Recognition, 942–950.'
mla: Lampert, Christoph. Predicting the Future Behavior of a Time-Varying Probability
Distribution. IEEE, 2015, pp. 942–50, doi:10.1109/CVPR.2015.7298696.
short: C. Lampert, in:, IEEE, 2015, pp. 942–950.
conference:
end_date: 2015-06-12
location: Boston, MA, United States
name: 'CVPR: Computer Vision and Pattern Recognition'
start_date: 2015-06-07
date_created: 2018-12-11T11:54:24Z
date_published: 2015-10-15T00:00:00Z
date_updated: 2021-01-12T06:53:40Z
day: '15'
department:
- _id: ChLa
doi: 10.1109/CVPR.2015.7298696
external_id:
arxiv:
- '1406.5362'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1406.5362
month: '10'
oa: 1
oa_version: Preprint
page: 942 - 950
publication_status: published
publisher: IEEE
publist_id: '5241'
quality_controlled: '1'
scopus_import: 1
status: public
title: Predicting the future behavior of a time-varying probability distribution
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2015'
...
---
_id: '1857'
abstract:
- lang: eng
text: 'Sharing information between multiple tasks enables algorithms to achieve
good generalization performance even from small amounts of training data. However,
in a realistic scenario of multi-task learning not all tasks are equally related
to each other, hence it could be advantageous to transfer information only between
the most related tasks. In this work we propose an approach that processes multiple
tasks in a sequence with sharing between subsequent tasks instead of solving all
tasks jointly. Subsequently, we address the question of curriculum learning of
tasks, i.e. finding the best order of tasks to be learned. Our approach is based
on a generalization bound criterion for choosing the task order that optimizes
the average expected classification performance over all tasks. Our experimental
results show that learning multiple related tasks sequentially can be more effective
than learning them jointly, that the order in which tasks are solved affects
the overall performance, and that our model is able to automatically discover
the favourable order of tasks.'
author:
- first_name: Anastasia
full_name: Pentina, Anastasia
id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
last_name: Pentina
- first_name: Viktoriia
full_name: Sharmanska, Viktoriia
id: 2EA6D09E-F248-11E8-B48F-1D18A9856A87
last_name: Sharmanska
orcid: 0000-0003-0192-9308
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Pentina A, Sharmanska V, Lampert C. Curriculum learning of multiple tasks.
In: IEEE; 2015:5492-5500. doi:10.1109/CVPR.2015.7299188'
apa: 'Pentina, A., Sharmanska, V., & Lampert, C. (2015). Curriculum learning
of multiple tasks (pp. 5492–5500). Presented at the CVPR: Computer Vision and
Pattern Recognition, Boston, MA, United States: IEEE. https://doi.org/10.1109/CVPR.2015.7299188'
chicago: Pentina, Anastasia, Viktoriia Sharmanska, and Christoph Lampert. “Curriculum
Learning of Multiple Tasks,” 5492–5500. IEEE, 2015. https://doi.org/10.1109/CVPR.2015.7299188.
ieee: 'A. Pentina, V. Sharmanska, and C. Lampert, “Curriculum learning of multiple
tasks,” presented at the CVPR: Computer Vision and Pattern Recognition, Boston,
MA, United States, 2015, pp. 5492–5500.'
ista: 'Pentina A, Sharmanska V, Lampert C. 2015. Curriculum learning of multiple
tasks. CVPR: Computer Vision and Pattern Recognition, 5492–5500.'
mla: Pentina, Anastasia, et al. Curriculum Learning of Multiple Tasks. IEEE,
2015, pp. 5492–500, doi:10.1109/CVPR.2015.7299188.
short: A. Pentina, V. Sharmanska, C. Lampert, in:, IEEE, 2015, pp. 5492–5500.
conference:
end_date: 2015-06-12
location: Boston, MA, United States
name: 'CVPR: Computer Vision and Pattern Recognition'
start_date: 2015-06-07
date_created: 2018-12-11T11:54:23Z
date_published: 2015-06-01T00:00:00Z
date_updated: 2023-02-23T10:17:31Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/CVPR.2015.7299188
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://arxiv.org/abs/1412.1353
month: '06'
oa: 1
oa_version: Preprint
page: 5492 - 5500
publication_status: published
publisher: IEEE
publist_id: '5243'
quality_controlled: '1'
scopus_import: 1
status: public
title: Curriculum learning of multiple tasks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2015'
...
---
_id: '1829'
abstract:
- lang: eng
text: Hitting and batting tasks, such as tennis forehands, ping-pong strokes, or
baseball batting, depend on predictions of where the ball can be intercepted and
how it can properly be returned to the opponent. These predictions get more accurate
over time, hence the behaviors need to be continuously modified. As a result,
movement templates with a learned global shape need to be adapted during the execution
so that the racket reaches a target position and velocity that will return the
ball over to the other side of the net or court. It requires altering learned
movements to hit a varying target with the necessary velocity at a specific instant
in time. Such a task cannot be incorporated straightforwardly in most movement
representations suitable for learning. For example, the standard formulation of
the dynamical system based motor primitives (introduced by Ijspeert et al (2002b))
does not satisfy this property despite their flexibility which has allowed learning
tasks ranging from locomotion to kendama. In order to fulfill this requirement,
we reformulate the Ijspeert framework to incorporate the possibility of specifying
a desired hitting point and a desired hitting velocity while maintaining all advantages
of the original formulation. We show that the proposed movement template formulation
works well in two scenarios, i.e., for hitting a ball on a string with a table
tennis racket at a specified velocity and for returning balls launched by a ball
gun successfully over the net using forehand movements.
alternative_title:
- Springer Tracts in Advanced Robotics
author:
- first_name: Katharina
full_name: Muelling, Katharina
last_name: Muelling
- first_name: Oliver
full_name: Kroemer, Oliver
last_name: Kroemer
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Bernhard
full_name: Schölkopf, Bernhard
last_name: Schölkopf
citation:
ama: 'Muelling K, Kroemer O, Lampert C, Schölkopf B. Movement templates for learning
of hitting and batting. In: Kober J, Peters J, eds. Learning Motor Skills.
Vol 97. From Algorithms to Robot Experiments. Springer; 2014:69-82. doi:10.1007/978-3-319-03194-1_3'
apa: Muelling, K., Kroemer, O., Lampert, C., & Schölkopf, B. (2014). Movement
templates for learning of hitting and batting. In J. Kober & J. Peters (Eds.),
Learning Motor Skills (Vol. 97, pp. 69–82). Springer. https://doi.org/10.1007/978-3-319-03194-1_3
chicago: Muelling, Katharina, Oliver Kroemer, Christoph Lampert, and Bernhard Schölkopf.
“Movement Templates for Learning of Hitting and Batting.” In Learning Motor
Skills, edited by Jens Kober and Jan Peters, 97:69–82. From Algorithms to
Robot Experiments. Springer, 2014. https://doi.org/10.1007/978-3-319-03194-1_3.
ieee: K. Muelling, O. Kroemer, C. Lampert, and B. Schölkopf, “Movement templates
for learning of hitting and batting,” in Learning Motor Skills, vol. 97,
J. Kober and J. Peters, Eds. Springer, 2014, pp. 69–82.
ista: 'Muelling K, Kroemer O, Lampert C, Schölkopf B. 2014.Movement templates for
learning of hitting and batting. In: Learning Motor Skills. Springer Tracts in
Advanced Robotics, vol. 97, 69–82.'
mla: Muelling, Katharina, et al. “Movement Templates for Learning of Hitting and
Batting.” Learning Motor Skills, edited by Jens Kober and Jan Peters, vol.
97, Springer, 2014, pp. 69–82, doi:10.1007/978-3-319-03194-1_3.
short: K. Muelling, O. Kroemer, C. Lampert, B. Schölkopf, in:, J. Kober, J. Peters
(Eds.), Learning Motor Skills, Springer, 2014, pp. 69–82.
date_created: 2018-12-11T11:54:14Z
date_published: 2014-01-01T00:00:00Z
date_updated: 2021-01-12T06:53:28Z
day: '01'
department:
- _id: ChLa
doi: 10.1007/978-3-319-03194-1_3
editor:
- first_name: Jens
full_name: Kober, Jens
last_name: Kober
- first_name: Jan
full_name: Peters, Jan
last_name: Peters
intvolume: ' 97'
language:
- iso: eng
month: '01'
oa_version: None
page: 69 - 82
publication: Learning Motor Skills
publication_status: published
publisher: Springer
publist_id: '5274'
quality_controlled: '1'
scopus_import: 1
series_title: From Algorithms to Robot Experiments
status: public
title: Movement templates for learning of hitting and batting
type: book_chapter
user_id: 4435EBFC-F248-11E8-B48F-1D18A9856A87
volume: 97
year: '2014'
...
---
_id: '2033'
abstract:
- lang: eng
text: 'The learning with privileged information setting has recently attracted a
lot of attention within the machine learning community, as it allows the integration
of additional knowledge into the training process of a classifier, even when this
comes in the form of a data modality that is not available at test time. Here,
we show that privileged information can naturally be treated as noise in the latent
function of a Gaussian process classifier (GPC). That is, in contrast to the standard
GPC setting, the latent function is not just a nuisance but a feature: it becomes
a natural measure of confidence about the training data by modulating the slope
of the GPC probit likelihood function. Extensive experiments on public datasets
show that the proposed GPC method using privileged noise, called GPC+, improves
over a standard GPC without privileged knowledge, and also over the current state-of-the-art
SVM-based method, SVM+. Moreover, we show that advanced neural networks and deep
learning methods can be compressed as privileged information.'
author:
- first_name: Daniel
full_name: Hernandez Lobato, Daniel
last_name: Hernandez Lobato
- first_name: Viktoriia
full_name: Sharmanska, Viktoriia
id: 2EA6D09E-F248-11E8-B48F-1D18A9856A87
last_name: Sharmanska
orcid: 0000-0003-0192-9308
- first_name: Kristian
full_name: Kersting, Kristian
last_name: Kersting
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Novi
full_name: Quadrianto, Novi
last_name: Quadrianto
citation:
ama: 'Hernandez Lobato D, Sharmanska V, Kersting K, Lampert C, Quadrianto N. Mind
the nuisance: Gaussian process classification using privileged noise. In: Advances
in Neural Information Processing Systems. Vol 1. Neural Information Processing
Systems; 2014:837-845.'
apa: 'Hernandez Lobato, D., Sharmanska, V., Kersting, K., Lampert, C., & Quadrianto,
N. (2014). Mind the nuisance: Gaussian process classification using privileged
noise. In Advances in Neural Information Processing Systems (Vol. 1, pp.
837–845). Montreal, Canada: Neural Information Processing Systems.'
chicago: 'Hernandez Lobato, Daniel, Viktoriia Sharmanska, Kristian Kersting, Christoph
Lampert, and Novi Quadrianto. “Mind the Nuisance: Gaussian Process Classification
Using Privileged Noise.” In Advances in Neural Information Processing Systems,
1:837–45. Neural Information Processing Systems, 2014.'
ieee: 'D. Hernandez Lobato, V. Sharmanska, K. Kersting, C. Lampert, and N. Quadrianto,
“Mind the nuisance: Gaussian process classification using privileged noise,” in
Advances in Neural Information Processing Systems, Montreal, Canada, 2014,
vol. 1, no. January, pp. 837–845.'
ista: 'Hernandez Lobato D, Sharmanska V, Kersting K, Lampert C, Quadrianto N. 2014.
Mind the nuisance: Gaussian process classification using privileged noise. Advances
in Neural Information Processing Systems. NIPS: Neural Information Processing
Systems vol. 1, 837–845.'
mla: 'Hernandez Lobato, Daniel, et al. “Mind the Nuisance: Gaussian Process Classification
Using Privileged Noise.” Advances in Neural Information Processing Systems,
vol. 1, no. January, Neural Information Processing Systems, 2014, pp. 837–45.'
short: D. Hernandez Lobato, V. Sharmanska, K. Kersting, C. Lampert, N. Quadrianto,
in:, Advances in Neural Information Processing Systems, Neural Information Processing
Systems, 2014, pp. 837–845.
conference:
end_date: 2014-12-13
location: Montreal, Canada
name: 'NIPS: Neural Information Processing Systems'
start_date: 2014-12-08
date_created: 2018-12-11T11:55:20Z
date_published: 2014-12-08T00:00:00Z
date_updated: 2023-02-23T10:25:24Z
day: '08'
department:
- _id: ChLa
intvolume: ' 1'
issue: January
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://papers.nips.cc/paper/5373-mind-the-nuisance-gaussian-process-classification-using-privileged-noise
month: '12'
oa: 1
oa_version: Submitted Version
page: 837-845
publication: Advances in Neural Information Processing Systems
publication_status: published
publisher: Neural Information Processing Systems
publist_id: '5038'
quality_controlled: '1'
scopus_import: 1
status: public
title: 'Mind the nuisance: Gaussian process classification using privileged noise'
type: conference
user_id: 4435EBFC-F248-11E8-B48F-1D18A9856A87
volume: 1
year: '2014'
...
---
_id: '2171'
abstract:
- lang: eng
text: We present LS-CRF, a new method for training cyclic Conditional Random Fields
(CRFs) from large datasets that is inspired by classical closed-form expressions
for the maximum likelihood parameters of a generative graphical model with tree
topology. Training a CRF with LS-CRF requires only solving a set of independent
regression problems, each of which can be solved efficiently in closed form or
by an iterative solver. This makes LS-CRF orders of magnitude faster than classical
CRF training based on probabilistic inference, and at the same time more flexible
and easier to implement than other approximate techniques, such as pseudolikelihood
or piecewise training. We apply LS-CRF to the task of semantic image segmentation,
showing that it achieves accuracy on par with other training techniques at higher
speed, thereby allowing efficient CRF training from very large training sets.
For example, training a linearly parameterized pairwise CRF on 150,000 images
requires less than one hour on a modern workstation.
alternative_title:
- LNCS
author:
- first_name: Alexander
full_name: Kolesnikov, Alexander
id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
last_name: Kolesnikov
- first_name: Matthieu
full_name: Guillaumin, Matthieu
last_name: Guillaumin
- first_name: Vittorio
full_name: Ferrari, Vittorio
last_name: Ferrari
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Kolesnikov A, Guillaumin M, Ferrari V, Lampert C. Closed-form approximate
CRF training for scalable image segmentation. In: Fleet D, Pajdla T, Schiele B,
Tuytelaars T, eds. Lecture Notes in Computer Science (Including Subseries Lecture
Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol
8691. Springer; 2014:550-565. doi:10.1007/978-3-319-10578-9_36'
apa: 'Kolesnikov, A., Guillaumin, M., Ferrari, V., & Lampert, C. (2014). Closed-form
approximate CRF training for scalable image segmentation. In D. Fleet, T. Pajdla,
B. Schiele, & T. Tuytelaars (Eds.), Lecture Notes in Computer Science (including
subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
(Vol. 8691, pp. 550–565). Zurich, Switzerland: Springer. https://doi.org/10.1007/978-3-319-10578-9_36'
chicago: Kolesnikov, Alexander, Matthieu Guillaumin, Vittorio Ferrari, and Christoph
Lampert. “Closed-Form Approximate CRF Training for Scalable Image Segmentation.”
In Lecture Notes in Computer Science (Including Subseries Lecture Notes in
Artificial Intelligence and Lecture Notes in Bioinformatics), edited by David
Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, 8691:550–65. Springer,
2014. https://doi.org/10.1007/978-3-319-10578-9_36.
ieee: A. Kolesnikov, M. Guillaumin, V. Ferrari, and C. Lampert, “Closed-form approximate
CRF training for scalable image segmentation,” in Lecture Notes in Computer
Science (including subseries Lecture Notes in Artificial Intelligence and Lecture
Notes in Bioinformatics), Zurich, Switzerland, 2014, vol. 8691, no. PART 3,
pp. 550–565.
ista: 'Kolesnikov A, Guillaumin M, Ferrari V, Lampert C. 2014. Closed-form approximate
CRF training for scalable image segmentation. Lecture Notes in Computer Science
(including subseries Lecture Notes in Artificial Intelligence and Lecture Notes
in Bioinformatics). ECCV: European Conference on Computer Vision, LNCS, vol. 8691,
550–565.'
mla: Kolesnikov, Alexander, et al. “Closed-Form Approximate CRF Training for Scalable
Image Segmentation.” Lecture Notes in Computer Science (Including Subseries
Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),
edited by David Fleet et al., vol. 8691, no. PART 3, Springer, 2014, pp. 550–65,
doi:10.1007/978-3-319-10578-9_36.
short: A. Kolesnikov, M. Guillaumin, V. Ferrari, C. Lampert, in:, D. Fleet, T. Pajdla,
B. Schiele, T. Tuytelaars (Eds.), Lecture Notes in Computer Science (Including
Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),
Springer, 2014, pp. 550–565.
conference:
end_date: 2014-09-12
location: Zurich, Switzerland
name: 'ECCV: European Conference on Computer Vision'
start_date: 2014-09-06
date_created: 2018-12-11T11:56:07Z
date_published: 2014-09-01T00:00:00Z
date_updated: 2021-01-12T06:55:46Z
day: '01'
department:
- _id: ChLa
doi: 10.1007/978-3-319-10578-9_36
ec_funded: 1
editor:
- first_name: David
full_name: Fleet, David
last_name: Fleet
- first_name: Tomas
full_name: Pajdla, Tomas
last_name: Pajdla
- first_name: Bernt
full_name: Schiele, Bernt
last_name: Schiele
- first_name: Tinne
full_name: Tuytelaars, Tinne
last_name: Tuytelaars
intvolume: ' 8691'
issue: PART 3
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://arxiv.org/abs/1403.7057
month: '09'
oa: 1
oa_version: Submitted Version
page: 550 - 565
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: Lecture Notes in Computer Science (including subseries Lecture Notes
in Artificial Intelligence and Lecture Notes in Bioinformatics)
publication_status: published
publisher: Springer
publist_id: '4813'
quality_controlled: '1'
scopus_import: 1
status: public
title: Closed-form approximate CRF training for scalable image segmentation
type: conference
user_id: 4435EBFC-F248-11E8-B48F-1D18A9856A87
volume: 8691
year: '2014'
...
---
_id: '2173'
abstract:
- lang: eng
text: "In this work we introduce a new approach to co-classification, i.e. the task
of jointly classifying multiple, otherwise independent, data samples. The method
we present, named CoConut, is based on the idea of adding a regularizer in the
label space to encode certain priors on the resulting labelings. A regularizer
that encourages labelings that are smooth across the test set, for instance, can
be seen as a test-time variant of the cluster assumption, which has been proven
useful at training time in semi-supervised learning. A regularizer that introduces
a preference for certain class proportions can be regarded as a prior distribution
on the class labels. CoConut can build on existing classifiers without making
any assumptions on how they were obtained and without the need to re-train them.
The use of a regularizer adds a new level of flexibility. It allows the integration
of potentially new information at test time, even in other modalities than what
the classifiers were trained on. We evaluate our framework on six datasets, reporting
a clear performance gain in classification accuracy compared to the standard classification
setup that predicts labels for each test sample separately."
author:
- first_name: Sameh
full_name: Khamis, Sameh
last_name: Khamis
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Khamis S, Lampert C. CoConut: Co-classification with output space regularization.
In: Proceedings of the British Machine Vision Conference 2014. BMVA Press;
2014.'
apa: 'Khamis, S., & Lampert, C. (2014). CoConut: Co-classification with output
space regularization. In Proceedings of the British Machine Vision Conference
2014. Nottingham, UK: BMVA Press.'
chicago: 'Khamis, Sameh, and Christoph Lampert. “CoConut: Co-Classification with
Output Space Regularization.” In Proceedings of the British Machine Vision
Conference 2014. BMVA Press, 2014.'
ieee: 'S. Khamis and C. Lampert, “CoConut: Co-classification with output space regularization,”
in Proceedings of the British Machine Vision Conference 2014, Nottingham,
UK, 2014.'
ista: 'Khamis S, Lampert C. 2014. CoConut: Co-classification with output space regularization.
Proceedings of the British Machine Vision Conference 2014. BMVC: British Machine
Vision Conference.'
mla: 'Khamis, Sameh, and Christoph Lampert. “CoConut: Co-Classification with Output
Space Regularization.” Proceedings of the British Machine Vision Conference
2014, BMVA Press, 2014.'
short: S. Khamis, C. Lampert, in:, Proceedings of the British Machine Vision Conference
2014, BMVA Press, 2014.
conference:
end_date: 2014-09-05
location: Nottingham, UK
name: 'BMVC: British Machine Vision Conference'
start_date: 2014-09-01
date_created: 2018-12-11T11:56:08Z
date_published: 2014-09-01T00:00:00Z
date_updated: 2021-01-12T06:55:46Z
day: '01'
ddc:
- '000'
department:
- _id: ChLa
ec_funded: 1
file:
- access_level: open_access
checksum: c4c6d3efdb8ee648faf3e76849839ce2
content_type: application/pdf
creator: system
date_created: 2018-12-12T10:08:23Z
date_updated: 2020-07-14T12:45:31Z
file_id: '4683'
file_name: IST-2016-490-v1+1_khamis-bmvc2014.pdf
file_size: 408172
relation: main_file
file_date_updated: 2020-07-14T12:45:31Z
has_accepted_license: '1'
language:
- iso: eng
month: '09'
oa: 1
oa_version: Published Version
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: Proceedings of the British Machine Vision Conference 2014
publication_status: published
publisher: BMVA Press
publist_id: '4811'
pubrep_id: '490'
quality_controlled: '1'
scopus_import: 1
status: public
title: 'CoConut: Co-classification with output space regularization'
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
year: '2014'
...
---
_id: '2172'
abstract:
- lang: eng
text: Fisher Kernels and Deep Learning were two developments with significant impact
on large-scale object categorization in recent years. Both approaches were shown
to achieve state-of-the-art results on large-scale object categorization datasets,
such as ImageNet. Conceptually, however, they are perceived as very different
and it is not uncommon for heated debates to spring up when advocates of both
paradigms meet at conferences or workshops. In this work, we emphasize the similarities
between both architectures rather than their differences and we argue that such
a unified view allows us to transfer ideas from one domain to the other. As a
concrete example we introduce a method for learning a support vector machine classifier
with Fisher kernel at the same time as a task-specific data representation. We
reinterpret the setting as a multi-layer feed forward network. Its final layer
is the classifier, parameterized by a weight vector, and the two previous layers
compute Fisher vectors, parameterized by the coefficients of a Gaussian mixture
model. We introduce a gradient descent based learning algorithm that, in contrast
to other feature learning techniques, is not just derived from intuition or biological
analogy, but has a theoretical justification in the framework of statistical learning
theory. Our experiments show that the new training procedure leads to significant
improvements in classification accuracy while preserving the modularity and geometric
interpretability of a support vector machine setup.
author:
- first_name: Vladyslav
full_name: Sydorov, Vladyslav
last_name: Sydorov
- first_name: Mayu
full_name: Sakurada, Mayu
last_name: Sakurada
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Sydorov V, Sakurada M, Lampert C. Deep Fisher Kernels – End to end learning
of the Fisher Kernel GMM parameters. In: Proceedings of the IEEE Computer Society
Conference on Computer Vision and Pattern Recognition. IEEE; 2014:1402-1409.
doi:10.1109/CVPR.2014.182'
apa: 'Sydorov, V., Sakurada, M., & Lampert, C. (2014). Deep Fisher Kernels –
End to end learning of the Fisher Kernel GMM parameters. In Proceedings of
the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
(pp. 1402–1409). Columbus, USA: IEEE. https://doi.org/10.1109/CVPR.2014.182'
chicago: Sydorov, Vladyslav, Mayu Sakurada, and Christoph Lampert. “Deep Fisher
Kernels – End to End Learning of the Fisher Kernel GMM Parameters.” In Proceedings
of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,
1402–9. IEEE, 2014. https://doi.org/10.1109/CVPR.2014.182.
ieee: V. Sydorov, M. Sakurada, and C. Lampert, “Deep Fisher Kernels – End to end
learning of the Fisher Kernel GMM parameters,” in Proceedings of the IEEE Computer
Society Conference on Computer Vision and Pattern Recognition, Columbus, USA,
2014, pp. 1402–1409.
ista: 'Sydorov V, Sakurada M, Lampert C. 2014. Deep Fisher Kernels – End to end
learning of the Fisher Kernel GMM parameters. Proceedings of the IEEE Computer
Society Conference on Computer Vision and Pattern Recognition. CVPR: Computer
Vision and Pattern Recognition, 1402–1409.'
mla: Sydorov, Vladyslav, et al. “Deep Fisher Kernels – End to End Learning of the
Fisher Kernel GMM Parameters.” Proceedings of the IEEE Computer Society Conference
on Computer Vision and Pattern Recognition, IEEE, 2014, pp. 1402–09, doi:10.1109/CVPR.2014.182.
short: V. Sydorov, M. Sakurada, C. Lampert, in:, Proceedings of the IEEE Computer
Society Conference on Computer Vision and Pattern Recognition, IEEE, 2014, pp.
1402–1409.
conference:
end_date: 2014-06-28
location: Columbus, USA
name: 'CVPR: Computer Vision and Pattern Recognition'
start_date: 2014-06-23
date_created: 2018-12-11T11:56:08Z
date_published: 2014-09-24T00:00:00Z
date_updated: 2021-01-12T06:55:46Z
day: '24'
department:
- _id: ChLa
doi: 10.1109/CVPR.2014.182
ec_funded: 1
language:
- iso: eng
month: '09'
oa_version: None
page: 1402 - 1409
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication: Proceedings of the IEEE Computer Society Conference on Computer Vision
and Pattern Recognition
publication_status: published
publisher: IEEE
publist_id: '4812'
quality_controlled: '1'
scopus_import: 1
status: public
title: Deep Fisher Kernels – End to end learning of the Fisher Kernel GMM parameters
type: conference
user_id: 4435EBFC-F248-11E8-B48F-1D18A9856A87
year: '2014'
...
---
_id: '2160'
abstract:
- lang: eng
text: Transfer learning has received a lot of attention in the machine learning
community over the last years, and several effective algorithms have been developed.
However, relatively little is known about their theoretical properties, especially
in the setting of lifelong learning, where the goal is to transfer information
to tasks for which no data have been observed so far. In this work we study lifelong
learning from a theoretical perspective. Our main result is a PAC-Bayesian generalization
bound that offers a unified view on existing paradigms for transfer learning,
such as the transfer of parameters or the transfer of low-dimensional representations.
We also use the bound to derive two principled lifelong learning algorithms, and
we show that these yield results comparable with existing methods.
article_processing_charge: No
author:
- first_name: Anastasia
full_name: Pentina, Anastasia
id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
last_name: Pentina
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Pentina A, Lampert C. A PAC-Bayesian bound for Lifelong Learning. In: Vol
32. ML Research Press; 2014:991-999.'
apa: 'Pentina, A., & Lampert, C. (2014). A PAC-Bayesian bound for Lifelong Learning
(Vol. 32, pp. 991–999). Presented at the ICML: International Conference on Machine
Learning, Beijing, China: ML Research Press.'
chicago: Pentina, Anastasia, and Christoph Lampert. “A PAC-Bayesian Bound for Lifelong
Learning,” 32:991–99. ML Research Press, 2014.
ieee: 'A. Pentina and C. Lampert, “A PAC-Bayesian bound for Lifelong Learning,”
presented at the ICML: International Conference on Machine Learning, Beijing,
China, 2014, vol. 32, pp. 991–999.'
ista: 'Pentina A, Lampert C. 2014. A PAC-Bayesian bound for Lifelong Learning. ICML:
International Conference on Machine Learning vol. 32, 991–999.'
mla: Pentina, Anastasia, and Christoph Lampert. A PAC-Bayesian Bound for Lifelong
Learning. Vol. 32, ML Research Press, 2014, pp. 991–99.
short: A. Pentina, C. Lampert, in:, ML Research Press, 2014, pp. 991–999.
conference:
end_date: 2014-06-26
location: Beijing, China
name: 'ICML: International Conference on Machine Learning'
start_date: 2014-06-21
date_created: 2018-12-11T11:56:03Z
date_published: 2014-05-10T00:00:00Z
date_updated: 2023-10-17T11:54:24Z
day: '10'
department:
- _id: ChLa
intvolume: ' 32'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://dl.acm.org/citation.cfm?id=3045003
month: '05'
oa: 1
oa_version: Submitted Version
page: 991 - 999
publication_status: published
publisher: ML Research Press
publist_id: '4844'
quality_controlled: '1'
scopus_import: '1'
status: public
title: A PAC-Bayesian bound for Lifelong Learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 32
year: '2014'
...
---
_id: '2294'
abstract:
- lang: eng
text: "In this work we propose a system for automatic classification of Drosophila
embryos into developmental stages. While the system is designed to solve an actual
problem in biological research, we believe that the principle underlying it is interesting
not only for biologists, but also for researchers in computer vision. The main idea
is to combine two orthogonal sources of information: one is a classifier trained
on strongly invariant features, which makes it applicable to images of very different
conditions, but also leads to rather noisy predictions. The other is a label propagation
step based on a more powerful similarity measure that, however, is only consistent
within specific subsets of the data at a time. In our biological setup, the information
sources are the shape and the staining patterns of embryo images. We show experimentally
that while neither of the methods can be used by itself to achieve satisfactory results,
their combination achieves prediction quality comparable to human performance."
author:
- first_name: Tomas
full_name: Kazmar, Tomas
last_name: Kazmar
- first_name: Evgeny
full_name: Kvon, Evgeny
last_name: Kvon
- first_name: Alexander
full_name: Stark, Alexander
last_name: Stark
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Kazmar T, Kvon E, Stark A, Lampert C. Drosophila Embryo Stage Annotation using
Label Propagation. In: IEEE; 2013. doi:10.1109/ICCV.2013.139'
apa: 'Kazmar, T., Kvon, E., Stark, A., & Lampert, C. (2013). Drosophila Embryo
Stage Annotation using Label Propagation. Presented at the ICCV: International
Conference on Computer Vision, Sydney, Australia: IEEE. https://doi.org/10.1109/ICCV.2013.139'
chicago: Kazmar, Tomas, Evgeny Kvon, Alexander Stark, and Christoph Lampert. “Drosophila
Embryo Stage Annotation Using Label Propagation.” IEEE, 2013. https://doi.org/10.1109/ICCV.2013.139.
ieee: 'T. Kazmar, E. Kvon, A. Stark, and C. Lampert, “Drosophila Embryo Stage Annotation
using Label Propagation,” presented at the ICCV: International Conference on Computer
Vision, Sydney, Australia, 2013.'
ista: 'Kazmar T, Kvon E, Stark A, Lampert C. 2013. Drosophila Embryo Stage Annotation
using Label Propagation. ICCV: International Conference on Computer Vision.'
mla: Kazmar, Tomas, et al. Drosophila Embryo Stage Annotation Using Label Propagation.
IEEE, 2013, doi:10.1109/ICCV.2013.139.
short: T. Kazmar, E. Kvon, A. Stark, C. Lampert, in:, IEEE, 2013.
conference:
end_date: 2013-12-08
location: Sydney, Australia
name: 'ICCV: International Conference on Computer Vision'
start_date: 2013-12-01
date_created: 2018-12-11T11:56:49Z
date_published: 2013-12-01T00:00:00Z
date_updated: 2021-01-12T06:56:35Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/ICCV.2013.139
ec_funded: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://www.cv-foundation.org/openaccess/ICCV2013.py
month: '12'
oa: 1
oa_version: Submitted Version
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_status: published
publisher: IEEE
publist_id: '4634'
quality_controlled: '1'
scopus_import: 1
status: public
title: Drosophila Embryo Stage Annotation using Label Propagation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2013'
...
---
_id: '2293'
abstract:
- lang: eng
text: Many computer vision problems have an asymmetric distribution of information
between training and test time. In this work, we study the case where we are given
additional information about the training data, which however will not be available
at test time. This situation is called learning using privileged information (LUPI).
We introduce two maximum-margin techniques that are able to make use of this additional
source of information, and we show that the framework is applicable to several
scenarios that have been studied in computer vision before. Experiments with attributes,
bounding boxes, image tags and rationales as additional information in object
classification show promising results.
author:
- first_name: Viktoriia
full_name: Sharmanska, Viktoriia
id: 2EA6D09E-F248-11E8-B48F-1D18A9856A87
last_name: Sharmanska
orcid: 0000-0003-0192-9308
- first_name: Novi
full_name: Quadrianto, Novi
last_name: Quadrianto
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Sharmanska V, Quadrianto N, Lampert C. Learning to rank using privileged information.
In: IEEE; 2013:825-832. doi:10.1109/ICCV.2013.107'
apa: 'Sharmanska, V., Quadrianto, N., & Lampert, C. (2013). Learning to rank
using privileged information (pp. 825–832). Presented at the ICCV: International
Conference on Computer Vision, Sydney, Australia: IEEE. https://doi.org/10.1109/ICCV.2013.107'
chicago: Sharmanska, Viktoriia, Novi Quadrianto, and Christoph Lampert. “Learning
to Rank Using Privileged Information,” 825–32. IEEE, 2013. https://doi.org/10.1109/ICCV.2013.107.
ieee: 'V. Sharmanska, N. Quadrianto, and C. Lampert, “Learning to rank using privileged
information,” presented at the ICCV: International Conference on Computer Vision,
Sydney, Australia, 2013, pp. 825–832.'
ista: 'Sharmanska V, Quadrianto N, Lampert C. 2013. Learning to rank using privileged
information. ICCV: International Conference on Computer Vision, 825–832.'
mla: Sharmanska, Viktoriia, et al. Learning to Rank Using Privileged Information.
IEEE, 2013, pp. 825–32, doi:10.1109/ICCV.2013.107.
short: V. Sharmanska, N. Quadrianto, C. Lampert, in:, IEEE, 2013, pp. 825–832.
conference:
end_date: 2013-12-08
location: Sydney, Australia
name: 'ICCV: International Conference on Computer Vision'
start_date: 2013-12-01
date_created: 2018-12-11T11:56:49Z
date_published: 2013-12-01T00:00:00Z
date_updated: 2023-02-23T10:36:41Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/ICCV.2013.107
ec_funded: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Sharmanska_Learning_to_Rank_2013_ICCV_paper.pdf
month: '12'
oa: 1
oa_version: Submitted Version
page: 825 - 832
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '308036'
name: Lifelong Learning of Visual Scene Understanding
publication_status: published
publisher: IEEE
publist_id: '4635'
quality_controlled: '1'
scopus_import: 1
status: public
title: Learning to rank using privileged information
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2013'
...
---
_id: '2516'
abstract:
- lang: eng
text: 'We study the problem of object recognition for categories for which we have
no training examples, a task also called zero-data or zero-shot learning. This
situation has hardly been studied in computer vision research, even though it
occurs frequently: the world contains tens of thousands of different object classes
and for only a few of them image collections have been formed and suitably annotated.
To tackle the problem we introduce attribute-based classification: objects are
identified based on a high-level description that is phrased in terms of semantic
attributes, such as the object''s color or shape. Because the identification of
each such property transcends the specific learning task at hand, the attribute
classifiers can be pre-learned independently, e.g. from existing image datasets
unrelated to the current task. Afterwards, new classes can be detected based on
their attribute representation, without the need for a new training phase. In
this paper we also introduce a new dataset, Animals with Attributes, of over 30,000
images of 50 animal classes, annotated with 85 semantic attributes. Extensive
experiments on this and two more datasets show that attribute-based classification
indeed is able to categorize images without access to any training images of the
target classes.'
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Hannes
full_name: Nickisch, Hannes
last_name: Nickisch
- first_name: Stefan
full_name: Harmeling, Stefan
last_name: Harmeling
citation:
ama: Lampert C, Nickisch H, Harmeling S. Attribute-based classification for zero-shot
learning of object categories. IEEE Transactions on Pattern Analysis and Machine
Intelligence. 2013;36(3):453-465. doi:10.1109/TPAMI.2013.140
apa: Lampert, C., Nickisch, H., & Harmeling, S. (2013). Attribute-based classification
for zero-shot learning of object categories. IEEE Transactions on Pattern Analysis
and Machine Intelligence. IEEE. https://doi.org/10.1109/TPAMI.2013.140
chicago: Lampert, Christoph, Hannes Nickisch, and Stefan Harmeling. “Attribute-Based
Classification for Zero-Shot Learning of Object Categories.” IEEE Transactions
on Pattern Analysis and Machine Intelligence. IEEE, 2013. https://doi.org/10.1109/TPAMI.2013.140.
ieee: C. Lampert, H. Nickisch, and S. Harmeling, “Attribute-based classification
for zero-shot learning of object categories,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 36, no. 3. IEEE, pp. 453–465, 2013.
ista: Lampert C, Nickisch H, Harmeling S. 2013. Attribute-based classification for
zero-shot learning of object categories. IEEE Transactions on Pattern Analysis
and Machine Intelligence. 36(3), 453–465.
mla: Lampert, Christoph, et al. “Attribute-Based Classification for Zero-Shot Learning
of Object Categories.” IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 36, no. 3, IEEE, 2013, pp. 453–65, doi:10.1109/TPAMI.2013.140.
short: C. Lampert, H. Nickisch, S. Harmeling, IEEE Transactions on Pattern Analysis
and Machine Intelligence 36 (2013) 453–465.
date_created: 2018-12-11T11:58:08Z
date_published: 2013-07-30T00:00:00Z
date_updated: 2021-01-12T06:57:58Z
day: '30'
department:
- _id: ChLa
doi: 10.1109/TPAMI.2013.140
intvolume: ' 36'
issue: '3'
language:
- iso: eng
month: '07'
oa_version: None
page: 453 - 465
publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
publication_status: published
publisher: IEEE
publist_id: '4385'
quality_controlled: '1'
scopus_import: 1
status: public
title: Attribute-based classification for zero-shot learning of object categories
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 36
year: '2013'
...
---
_id: '2901'
abstract:
- lang: eng
text: ' We introduce the M-modes problem for graphical models: predicting the M
label configurations of highest probability that are at the same time local maxima
of the probability landscape. M-modes have multiple possible applications: because
they are intrinsically diverse, they provide a principled alternative to non-maximum
suppression techniques for structured prediction, they can act as codebook vectors
for quantizing the configuration space, or they can form component centers for
mixture model approximation. We present two algorithms for solving the M-modes
problem. The first algorithm solves the problem in polynomial time when the underlying
graphical model is a simple chain. The second algorithm solves the problem for
junction chains. On synthetic and real datasets, we demonstrate how M-modes can
improve prediction performance. We also use the generated modes as a tool
to understand the topography of the probability distribution of configurations,
for example with relation to the training set size and amount of noise in the
data. '
alternative_title:
- ' JMLR: W&CP'
author:
- first_name: Chao
full_name: Chen, Chao
id: 3E92416E-F248-11E8-B48F-1D18A9856A87
last_name: Chen
- first_name: Vladimir
full_name: Kolmogorov, Vladimir
id: 3D50B0BA-F248-11E8-B48F-1D18A9856A87
last_name: Kolmogorov
- first_name: Zhu
full_name: Yan, Zhu
last_name: Yan
- first_name: Dimitris
full_name: Metaxas, Dimitris
last_name: Metaxas
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Chen C, Kolmogorov V, Yan Z, Metaxas D, Lampert C. Computing the M most probable
modes of a graphical model. In: Vol 31. JMLR; 2013:161-169.'
apa: 'Chen, C., Kolmogorov, V., Yan, Z., Metaxas, D., & Lampert, C. (2013).
Computing the M most probable modes of a graphical model (Vol. 31, pp. 161–169).
Presented at the AISTATS: Conference on Uncertainty in Artificial Intelligence,
Scottsdale, AZ, United States: JMLR.'
chicago: Chen, Chao, Vladimir Kolmogorov, Zhu Yan, Dimitris Metaxas, and Christoph
Lampert. “Computing the M Most Probable Modes of a Graphical Model,” 31:161–69.
JMLR, 2013.
ieee: 'C. Chen, V. Kolmogorov, Z. Yan, D. Metaxas, and C. Lampert, “Computing the
M most probable modes of a graphical model,” presented at the AISTATS: Conference
on Uncertainty in Artificial Intelligence, Scottsdale, AZ, United States, 2013,
vol. 31, pp. 161–169.'
ista: 'Chen C, Kolmogorov V, Yan Z, Metaxas D, Lampert C. 2013. Computing the M
most probable modes of a graphical model. AISTATS: Conference on Uncertainty
in Artificial Intelligence, JMLR: W&CP, vol. 31, 161–169.'
mla: Chen, Chao, et al. Computing the M Most Probable Modes of a Graphical Model.
Vol. 31, JMLR, 2013, pp. 161–69.
short: C. Chen, V. Kolmogorov, Z. Yan, D. Metaxas, C. Lampert, in:, JMLR, 2013,
pp. 161–169.
conference:
end_date: 2013-05-01
location: Scottsdale, AZ, United States
name: ' AISTATS: Conference on Uncertainty in Artificial Intelligence'
start_date: 2013-04-29
date_created: 2018-12-11T12:00:14Z
date_published: 2013-01-01T00:00:00Z
date_updated: 2021-01-12T07:00:35Z
day: '01'
department:
- _id: HeEd
- _id: VlKo
- _id: ChLa
intvolume: ' 31'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://jmlr.org/proceedings/papers/v31/chen13a.html
month: '01'
oa: 1
oa_version: None
page: 161 - 169
publication_status: published
publisher: JMLR
publist_id: '3846'
quality_controlled: '1'
scopus_import: 1
status: public
title: Computing the M most probable modes of a graphical model
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 31
year: '2013'
...
---
_id: '2948'
abstract:
- lang: eng
text: 'Many visual datasets are traditionally used to analyze the performance of
different learning techniques. The evaluation is usually done within each dataset;
it is therefore questionable whether such results are a reliable indicator of true
generalization ability. We propose here an algorithm to exploit the existing data
resources when learning on a new multiclass problem. Our main idea is to identify
an image representation that decomposes orthogonally into two subspaces: a part
specific to each dataset, and a part generic to, and therefore shared between,
all the considered source sets. This allows us to use the generic representation
as un-biased reference knowledge for a novel classification task. By casting the
method in the multi-view setting, we also make it possible to use different features
for different databases. We call the algorithm MUST, Multitask Unaligned Shared
knowledge Transfer. Through extensive experiments on five public datasets, we
show that MUST consistently improves the cross-datasets generalization performance.'
acknowledgement: This work was supported by the PASCAL 2 Network of Excellence (TT)
and by the Newton International Fellowship (NQ)
alternative_title:
- LNCS
author:
- first_name: Tatiana
full_name: Tommasi, Tatiana
last_name: Tommasi
- first_name: Novi
full_name: Quadrianto, Novi
last_name: Quadrianto
- first_name: Barbara
full_name: Caputo, Barbara
last_name: Caputo
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Tommasi T, Quadrianto N, Caputo B, Lampert C. Beyond dataset bias: Multi-task
unaligned shared knowledge transfer. 2013;7724:1-15. doi:10.1007/978-3-642-37331-2_1'
apa: 'Tommasi, T., Quadrianto, N., Caputo, B., & Lampert, C. (2013). Beyond
dataset bias: Multi-task unaligned shared knowledge transfer. Presented at the
ACCV: Asian Conference on Computer Vision, Daejeon, Korea: Springer. https://doi.org/10.1007/978-3-642-37331-2_1'
chicago: 'Tommasi, Tatiana, Novi Quadrianto, Barbara Caputo, and Christoph Lampert.
“Beyond Dataset Bias: Multi-Task Unaligned Shared Knowledge Transfer.” Lecture
Notes in Computer Science. Springer, 2013. https://doi.org/10.1007/978-3-642-37331-2_1.'
ieee: 'T. Tommasi, N. Quadrianto, B. Caputo, and C. Lampert, “Beyond dataset bias:
Multi-task unaligned shared knowledge transfer,” vol. 7724. Springer, pp. 1–15,
2013.'
ista: 'Tommasi T, Quadrianto N, Caputo B, Lampert C. 2013. Beyond dataset bias:
Multi-task unaligned shared knowledge transfer. 7724, 1–15.'
mla: 'Tommasi, Tatiana, et al. Beyond Dataset Bias: Multi-Task Unaligned Shared
Knowledge Transfer. Vol. 7724, Springer, 2013, pp. 1–15, doi:10.1007/978-3-642-37331-2_1.'
short: T. Tommasi, N. Quadrianto, B. Caputo, C. Lampert, 7724 (2013) 1–15.
conference:
end_date: 2012-11-09
location: Daejeon, Korea
name: 'ACCV: Asian Conference on Computer Vision'
start_date: 2012-11-05
date_created: 2018-12-11T12:00:30Z
date_published: 2013-04-04T00:00:00Z
date_updated: 2020-08-11T10:09:54Z
day: '04'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1007/978-3-642-37331-2_1
file:
- access_level: open_access
checksum: a0a7234a89e2192af655b0d0ae3bf445
content_type: application/pdf
creator: dernst
date_created: 2019-01-22T14:03:11Z
date_updated: 2020-07-14T12:45:55Z
file_id: '5874'
file_name: 2012_ACCV_Tommasi.pdf
file_size: 1513620
relation: main_file
file_date_updated: 2020-07-14T12:45:55Z
has_accepted_license: '1'
intvolume: ' 7724'
language:
- iso: eng
month: '04'
oa: 1
oa_version: Submitted Version
page: 1 - 15
publication_status: published
publisher: Springer
publist_id: '3784'
quality_controlled: '1'
scopus_import: 1
series_title: Lecture Notes in Computer Science
status: public
title: 'Beyond dataset bias: Multi-task unaligned shared knowledge transfer'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 7724
year: '2013'
...
---
_id: '3321'
author:
- first_name: Novi
full_name: Quadrianto, Novi
last_name: Quadrianto
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Quadrianto N, Lampert C. Kernel based learning. In: Dubitzky W, Wolkenhauer
O, Cho K, Yokota H, eds. Encyclopedia of Systems Biology. Vol 3. Springer;
2013:1069-1069. doi:10.1007/978-1-4419-9863-7_604'
apa: Quadrianto, N., & Lampert, C. (2013). Kernel based learning. In W. Dubitzky,
O. Wolkenhauer, K. Cho, & H. Yokota (Eds.), Encyclopedia of Systems Biology
(Vol. 3, pp. 1069–1069). Springer. https://doi.org/10.1007/978-1-4419-9863-7_604
chicago: Quadrianto, Novi, and Christoph Lampert. “Kernel Based Learning.” In Encyclopedia
of Systems Biology, edited by Werner Dubitzky, Olaf Wolkenhauer, Kwang Cho,
and Hiroki Yokota, 3:1069–1069. Springer, 2013. https://doi.org/10.1007/978-1-4419-9863-7_604.
ieee: N. Quadrianto and C. Lampert, “Kernel based learning,” in Encyclopedia
of Systems Biology, vol. 3, W. Dubitzky, O. Wolkenhauer, K. Cho, and H. Yokota,
Eds. Springer, 2013, pp. 1069–1069.
ista: 'Quadrianto N, Lampert C. 2013.Kernel based learning. In: Encyclopedia of
Systems Biology. vol. 3, 1069–1069.'
mla: Quadrianto, Novi, and Christoph Lampert. “Kernel Based Learning.” Encyclopedia
of Systems Biology, edited by Werner Dubitzky et al., vol. 3, Springer, 2013,
pp. 1069–1069, doi:10.1007/978-1-4419-9863-7_604.
short: N. Quadrianto, C. Lampert, in:, W. Dubitzky, O. Wolkenhauer, K. Cho, H. Yokota
(Eds.), Encyclopedia of Systems Biology, Springer, 2013, pp. 1069–1069.
date_created: 2018-12-11T12:02:39Z
date_published: 2013-01-01T00:00:00Z
date_updated: 2021-01-12T07:42:38Z
day: '01'
department:
- _id: ChLa
doi: 10.1007/978-1-4419-9863-7_604
editor:
- first_name: Werner
full_name: Dubitzky, Werner
last_name: Dubitzky
- first_name: Olaf
full_name: Wolkenhauer, Olaf
last_name: Wolkenhauer
- first_name: Kwang
full_name: Cho, Kwang
last_name: Cho
- first_name: Hiroki
full_name: Yokota, Hiroki
last_name: Yokota
intvolume: ' 3'
language:
- iso: eng
month: '01'
oa_version: None
page: 1069 - 1069
publication: Encyclopedia of Systems Biology
publication_status: published
publisher: Springer
publist_id: '3314'
quality_controlled: '1'
status: public
title: Kernel based learning
type: encyclopedia_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 3
year: '2013'
...
---
_id: '2825'
abstract:
- lang: eng
text: 'We study the problem of maximum marginal prediction (MMP) in probabilistic
graphical models, a task that occurs, for example, as the Bayes optimal decision
rule under a Hamming loss. MMP is typically performed as a two-stage procedure:
one estimates each variable''s marginal probability and then forms a prediction
from the states of maximal probability. In this work we propose a simple yet effective
technique for accelerating MMP when inference is sampling-based: instead of the
above two-stage procedure we directly estimate the posterior probability of each
decision variable. This allows us to identify the point of time when we are sufficiently
certain about any individual decision. Whenever this is the case, we dynamically
prune the variables we are confident about from the underlying factor graph. Consequently,
at any time only samples of variables whose decision is still uncertain need to
be created. Experiments in two prototypical scenarios, multi-label classification
and image inpainting, show that adaptive sampling can drastically accelerate MMP
without sacrificing prediction accuracy.'
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Lampert C. Dynamic pruning of factor graphs for maximum marginal prediction.
In: Vol 1. Neural Information Processing Systems; 2012:82-90.'
apa: 'Lampert, C. (2012). Dynamic pruning of factor graphs for maximum marginal
prediction (Vol. 1, pp. 82–90). Presented at the NIPS: Neural Information Processing
Systems, Lake Tahoe, NV, United States: Neural Information Processing Systems.'
chicago: Lampert, Christoph. “Dynamic Pruning of Factor Graphs for Maximum Marginal
Prediction,” 1:82–90. Neural Information Processing Systems, 2012.
ieee: 'C. Lampert, “Dynamic pruning of factor graphs for maximum marginal prediction,”
presented at the NIPS: Neural Information Processing Systems, Lake Tahoe, NV,
United States, 2012, vol. 1, pp. 82–90.'
ista: 'Lampert C. 2012. Dynamic pruning of factor graphs for maximum marginal prediction.
NIPS: Neural Information Processing Systems vol. 1, 82–90.'
mla: Lampert, Christoph. Dynamic Pruning of Factor Graphs for Maximum Marginal
Prediction. Vol. 1, Neural Information Processing Systems, 2012, pp. 82–90.
short: C. Lampert, in:, Neural Information Processing Systems, 2012, pp. 82–90.
conference:
end_date: 2012-12-06
location: Lake Tahoe, NV, United States
name: 'NIPS: Neural Information Processing Systems'
start_date: 2012-12-03
date_created: 2018-12-11T11:59:48Z
date_published: 2012-12-01T00:00:00Z
date_updated: 2021-01-12T06:59:59Z
day: '01'
department:
- _id: ChLa
intvolume: ' 1'
language:
- iso: eng
month: '12'
oa_version: None
page: 82 - 90
publication_status: published
publisher: Neural Information Processing Systems
publist_id: '3975'
quality_controlled: '1'
scopus_import: 1
status: public
title: Dynamic pruning of factor graphs for maximum marginal prediction
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 1
year: '2012'
...
---
_id: '3164'
abstract:
- lang: eng
text: Overview of the Special Issue on structured prediction and inference.
author:
- first_name: Matthew
full_name: Blaschko, Matthew
last_name: Blaschko
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Blaschko M, Lampert C. Guest editorial: Special issue on structured prediction
and inference. International Journal of Computer Vision. 2012;99(3):257-258.
doi:10.1007/s11263-012-0530-y'
apa: 'Blaschko, M., & Lampert, C. (2012). Guest editorial: Special issue on
structured prediction and inference. International Journal of Computer Vision.
Springer. https://doi.org/10.1007/s11263-012-0530-y'
chicago: 'Blaschko, Matthew, and Christoph Lampert. “Guest Editorial: Special Issue
on Structured Prediction and Inference.” International Journal of Computer
Vision. Springer, 2012. https://doi.org/10.1007/s11263-012-0530-y.'
ieee: 'M. Blaschko and C. Lampert, “Guest editorial: Special issue on structured
prediction and inference,” International Journal of Computer Vision, vol.
99, no. 3. Springer, pp. 257–258, 2012.'
ista: 'Blaschko M, Lampert C. 2012. Guest editorial: Special issue on structured
prediction and inference. International Journal of Computer Vision. 99(3), 257–258.'
mla: 'Blaschko, Matthew, and Christoph Lampert. “Guest Editorial: Special Issue
on Structured Prediction and Inference.” International Journal of Computer
Vision, vol. 99, no. 3, Springer, 2012, pp. 257–58, doi:10.1007/s11263-012-0530-y.'
short: M. Blaschko, C. Lampert, International Journal of Computer Vision 99 (2012)
257–258.
date_created: 2018-12-11T12:01:46Z
date_published: 2012-09-01T00:00:00Z
date_updated: 2021-01-12T07:41:30Z
day: '01'
department:
- _id: ChLa
doi: 10.1007/s11263-012-0530-y
intvolume: ' 99'
issue: '3'
language:
- iso: eng
month: '09'
oa_version: None
page: 257 - 258
publication: International Journal of Computer Vision
publication_status: published
publisher: Springer
publist_id: '3521'
quality_controlled: '1'
scopus_import: 1
status: public
title: 'Guest editorial: Special issue on structured prediction and inference'
type: journal_article
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 99
year: '2012'
...
---
_id: '3125'
abstract:
- lang: eng
text: We propose a new learning method to infer a mid-level feature representation
that combines the advantage of semantic attribute representations with the higher
expressive power of non-semantic features. The idea lies in augmenting an existing
attribute-based representation with additional dimensions for which an autoencoder
model is coupled with a large-margin principle. This construction allows a smooth
transition between the zero-shot regime with no training example, the unsupervised
regime with training examples but without class labels, and the supervised regime
with training examples and with class labels. The resulting optimization problem
can be solved efficiently, because several of the necessary steps have closed-form
solutions. Through extensive experiments we show that the augmented representation
achieves better results in terms of object categorization accuracy than the semantic
representation alone.
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Viktoriia
full_name: Sharmanska, Viktoriia
id: 2EA6D09E-F248-11E8-B48F-1D18A9856A87
last_name: Sharmanska
orcid: 0000-0003-0192-9308
- first_name: Novi
full_name: Quadrianto, Novi
last_name: Quadrianto
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Sharmanska V, Quadrianto N, Lampert C. Augmented attribute representations.
In: Vol 7576. Springer; 2012:242-255. doi:10.1007/978-3-642-33715-4_18'
apa: 'Sharmanska, V., Quadrianto, N., & Lampert, C. (2012). Augmented attribute
representations (Vol. 7576, pp. 242–255). Presented at the ECCV: European Conference
on Computer Vision, Florence, Italy: Springer. https://doi.org/10.1007/978-3-642-33715-4_18'
chicago: Sharmanska, Viktoriia, Novi Quadrianto, and Christoph Lampert. “Augmented
Attribute Representations,” 7576:242–55. Springer, 2012. https://doi.org/10.1007/978-3-642-33715-4_18.
ieee: 'V. Sharmanska, N. Quadrianto, and C. Lampert, “Augmented attribute representations,”
presented at the ECCV: European Conference on Computer Vision, Florence, Italy,
2012, vol. 7576, no. PART 5, pp. 242–255.'
ista: 'Sharmanska V, Quadrianto N, Lampert C. 2012. Augmented attribute representations.
ECCV: European Conference on Computer Vision, LNCS, vol. 7576, 242–255.'
mla: Sharmanska, Viktoriia, et al. Augmented Attribute Representations. Vol.
7576, no. PART 5, Springer, 2012, pp. 242–55, doi:10.1007/978-3-642-33715-4_18.
short: V. Sharmanska, N. Quadrianto, C. Lampert, in:, Springer, 2012, pp. 242–255.
conference:
end_date: 2012-10-13
location: Florence, Italy
name: 'ECCV: European Conference on Computer Vision'
start_date: 2012-10-07
date_created: 2018-12-11T12:01:32Z
date_published: 2012-10-01T00:00:00Z
date_updated: 2023-02-23T11:13:25Z
day: '01'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1007/978-3-642-33715-4_18
file:
- access_level: open_access
checksum: bccdbe0663780d25a1e0524002b2d896
content_type: application/pdf
creator: dernst
date_created: 2020-05-15T12:29:04Z
date_updated: 2020-07-14T12:46:00Z
file_id: '7861'
file_name: 2012_ECCV_Sharmanska.pdf
file_size: 6073897
relation: main_file
file_date_updated: 2020-07-14T12:46:00Z
has_accepted_license: '1'
intvolume: ' 7576'
issue: PART 5
language:
- iso: eng
month: '10'
oa: 1
oa_version: Submitted Version
page: 242 - 255
publication_status: published
publisher: Springer
publist_id: '3574'
quality_controlled: '1'
scopus_import: 1
status: public
title: Augmented attribute representations
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 7576
year: '2012'
...
---
_id: '3126'
abstract:
- lang: eng
text: "In this work we propose a new information-theoretic clustering algorithm
that infers cluster memberships by direct optimization of a non-parametric mutual
information estimate between data distribution and cluster assignment. Although
the optimization objective has a solid theoretical foundation, it is hard to optimize.
We propose an approximate optimization formulation that leads to an efficient
algorithm with low runtime complexity. The algorithm has a single free parameter,
the number of clusters to find. We demonstrate superior performance on several
synthetic and real datasets."
alternative_title:
- LNCS
author:
- first_name: Andreas
full_name: Müller, Andreas
last_name: Müller
- first_name: Sebastian
full_name: Nowozin, Sebastian
last_name: Nowozin
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Müller A, Nowozin S, Lampert C. Information theoretic clustering using minimal
spanning trees. In: Vol 7476. Springer; 2012:205-215. doi:10.1007/978-3-642-32717-9_21'
apa: 'Müller, A., Nowozin, S., & Lampert, C. (2012). Information theoretic clustering
using minimal spanning trees (Vol. 7476, pp. 205–215). Presented at the DAGM:
German Association For Pattern Recognition, Graz, Austria: Springer. https://doi.org/10.1007/978-3-642-32717-9_21'
chicago: Müller, Andreas, Sebastian Nowozin, and Christoph Lampert. “Information
Theoretic Clustering Using Minimal Spanning Trees,” 7476:205–15. Springer, 2012.
https://doi.org/10.1007/978-3-642-32717-9_21.
ieee: 'A. Müller, S. Nowozin, and C. Lampert, “Information theoretic clustering
using minimal spanning trees,” presented at the DAGM: German Association For Pattern
Recognition, Graz, Austria, 2012, vol. 7476, pp. 205–215.'
ista: 'Müller A, Nowozin S, Lampert C. 2012. Information theoretic clustering using
minimal spanning trees. DAGM: German Association For Pattern Recognition, LNCS,
vol. 7476, 205–215.'
mla: Müller, Andreas, et al. Information Theoretic Clustering Using Minimal Spanning
Trees. Vol. 7476, Springer, 2012, pp. 205–15, doi:10.1007/978-3-642-32717-9_21.
short: A. Müller, S. Nowozin, C. Lampert, in:, Springer, 2012, pp. 205–215.
conference:
end_date: 2012-08-31
location: Graz, Austria
name: 'DAGM: German Association For Pattern Recognition'
start_date: 2012-08-28
date_created: 2018-12-11T12:01:32Z
date_published: 2012-08-14T00:00:00Z
date_updated: 2021-01-12T07:41:14Z
day: '14'
department:
- _id: ChLa
doi: 10.1007/978-3-642-32717-9_21
intvolume: ' 7476'
language:
- iso: eng
month: '08'
oa_version: None
page: 205 - 215
publication_status: published
publisher: Springer
publist_id: '3573'
quality_controlled: '1'
scopus_import: 1
status: public
title: Information theoretic clustering using minimal spanning trees
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 7476
year: '2012'
...
---
_id: '3248'
abstract:
- lang: eng
text: We describe RTblob, a high speed vision system that detects objects in cluttered
scenes based on their color and shape at a speed of over 800 frames/s. Because
the system is available as open-source software and relies only on off-the-shelf
PC hardware components, it can provide the basis for multiple application scenarios.
As an illustrative example, we show how RTblob can be used in a robotic table
tennis scenario to estimate ball trajectories through 3D space simultaneously
from four camera images at a speed of 200 Hz.
article_processing_charge: No
article_type: original
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Jan
full_name: Peters, Jan
last_name: Peters
citation:
ama: Lampert C, Peters J. Real-time detection of colored objects in multiple camera
streams with off-the-shelf hardware components. Journal of Real-Time Image
Processing. 2012;7(1):31-41. doi:10.1007/s11554-010-0168-3
apa: Lampert, C., & Peters, J. (2012). Real-time detection of colored objects
in multiple camera streams with off-the-shelf hardware components. Journal
of Real-Time Image Processing. Springer. https://doi.org/10.1007/s11554-010-0168-3
chicago: Lampert, Christoph, and Jan Peters. “Real-Time Detection of Colored Objects
in Multiple Camera Streams with off-the-Shelf Hardware Components.” Journal
of Real-Time Image Processing. Springer, 2012. https://doi.org/10.1007/s11554-010-0168-3.
ieee: C. Lampert and J. Peters, “Real-time detection of colored objects in multiple
camera streams with off-the-shelf hardware components,” Journal of Real-Time
Image Processing, vol. 7, no. 1. Springer, pp. 31–41, 2012.
ista: Lampert C, Peters J. 2012. Real-time detection of colored objects in multiple
camera streams with off-the-shelf hardware components. Journal of Real-Time Image
Processing. 7(1), 31–41.
mla: Lampert, Christoph, and Jan Peters. “Real-Time Detection of Colored Objects
in Multiple Camera Streams with off-the-Shelf Hardware Components.” Journal
of Real-Time Image Processing, vol. 7, no. 1, Springer, 2012, pp. 31–41, doi:10.1007/s11554-010-0168-3.
short: C. Lampert, J. Peters, Journal of Real-Time Image Processing 7 (2012) 31–41.
date_created: 2018-12-11T12:02:15Z
date_published: 2012-03-01T00:00:00Z
date_updated: 2022-05-24T08:05:40Z
day: '01'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1007/s11554-010-0168-3
file:
- access_level: open_access
checksum: 241be47ea50e81a283bcf4c45b07e8cc
content_type: application/pdf
creator: kschuh
date_created: 2019-02-12T10:52:25Z
date_updated: 2020-07-14T12:46:04Z
file_id: '5958'
file_name: 2012_Springer_Lampert.pdf
file_size: 2933187
relation: main_file
file_date_updated: 2020-07-14T12:46:04Z
has_accepted_license: '1'
intvolume: ' 7'
issue: '1'
language:
- iso: eng
month: '03'
oa: 1
oa_version: Submitted Version
page: 31 - 41
publication: Journal of Real-Time Image Processing
publication_identifier:
eissn:
- 1861-8219
issn:
- 1861-8200
publication_status: published
publisher: Springer
publist_id: '3417'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Real-time detection of colored objects in multiple camera streams with off-the-shelf
hardware components
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 7
year: '2012'
...
---
_id: '3124'
abstract:
- lang: eng
text: "We consider the problem of inference in a graphical model with binary variables.
While in theory it is arguably preferable to compute marginal probabilities, in
practice researchers often use MAP inference due to the availability of efficient
discrete optimization algorithms. We bridge the gap between the two approaches
by introducing the Discrete Marginals technique in which approximate marginals
are obtained by minimizing an objective function with unary and pairwise terms
over a discretized domain. This allows the use of techniques originally developed
for MAP-MRF inference and learning. We explore two ways to set up the objective
function - by discretizing the Bethe free energy and by learning it from training
data. Experimental results show that for certain types of graphs a learned function
can outperform the Bethe approximation. We also establish a link between the Bethe
free energy and submodular functions.\r\n"
alternative_title:
- Inferning 2012
author:
- first_name: Filip
full_name: Korc, Filip
id: 476A2FD6-F248-11E8-B48F-1D18A9856A87
last_name: Korc
- first_name: Vladimir
full_name: Kolmogorov, Vladimir
id: 3D50B0BA-F248-11E8-B48F-1D18A9856A87
last_name: Kolmogorov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Korc F, Kolmogorov V, Lampert C. Approximating marginals using discrete energy
minimization. In: ICML; 2012.'
apa: 'Korc, F., Kolmogorov, V., & Lampert, C. (2012). Approximating marginals
using discrete energy minimization. Presented at the ICML: International Conference
on Machine Learning, Edinburgh, Scotland: ICML.'
chicago: Korc, Filip, Vladimir Kolmogorov, and Christoph Lampert. “Approximating
Marginals Using Discrete Energy Minimization.” ICML, 2012.
ieee: 'F. Korc, V. Kolmogorov, and C. Lampert, “Approximating marginals using discrete
energy minimization,” presented at the ICML: International Conference on Machine
Learning, Edinburgh, Scotland, 2012.'
ista: 'Korc F, Kolmogorov V, Lampert C. 2012. Approximating marginals using discrete
energy minimization. ICML: International Conference on Machine Learning, Inferning
2012, .'
mla: Korc, Filip, et al. Approximating Marginals Using Discrete Energy Minimization.
ICML, 2012.
short: F. Korc, V. Kolmogorov, C. Lampert, in:, ICML, 2012.
conference:
end_date: 2012-07-01
location: Edinburgh, Scotland
name: 'ICML: International Conference on Machine Learning'
start_date: 2012-06-26
date_created: 2018-12-11T12:01:31Z
date_published: 2012-06-30T00:00:00Z
date_updated: 2023-02-23T12:24:24Z
day: '30'
ddc:
- '000'
department:
- _id: ChLa
- _id: VlKo
file:
- access_level: open_access
checksum: 3d0d4246548c736857302aadb2ff5d15
content_type: application/pdf
creator: system
date_created: 2018-12-12T10:11:34Z
date_updated: 2020-07-14T12:46:00Z
file_id: '4889'
file_name: IST-2016-565-v1+1_DM-inferning2012.pdf
file_size: 305836
relation: main_file
file_date_updated: 2020-07-14T12:46:00Z
has_accepted_license: '1'
language:
- iso: eng
month: '06'
oa: 1
oa_version: Submitted Version
publication_status: published
publisher: ICML
publist_id: '3575'
pubrep_id: '565'
quality_controlled: '1'
related_material:
record:
- id: '5396'
relation: later_version
status: public
status: public
title: Approximating marginals using discrete energy minimization
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
year: '2012'
...
---
_id: '5396'
abstract:
- lang: eng
text: We consider the problem of inference in a graphical model with binary variables.
While in theory it is arguably preferable to compute marginal probabilities, in
practice researchers often use MAP inference due to the availability of efficient
discrete optimization algorithms. We bridge the gap between the two approaches
by introducing the Discrete Marginals technique in which approximate marginals
are obtained by minimizing an objective function with unary and pairwise terms
over a discretized domain. This allows the use of techniques originally developed
for MAP-MRF inference and learning. We explore two ways to set up the objective
function - by discretizing the Bethe free energy and by learning it from training
data. Experimental results show that for certain types of graphs a learned function
can outperform the Bethe approximation. We also establish a link between the
Bethe free energy and submodular functions.
alternative_title:
- IST Austria Technical Report
author:
- first_name: Filip
full_name: Korc, Filip
id: 476A2FD6-F248-11E8-B48F-1D18A9856A87
last_name: Korc
- first_name: Vladimir
full_name: Kolmogorov, Vladimir
id: 3D50B0BA-F248-11E8-B48F-1D18A9856A87
last_name: Kolmogorov
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Korc F, Kolmogorov V, Lampert C. Approximating Marginals Using Discrete
Energy Minimization. IST Austria; 2012. doi:10.15479/AT:IST-2012-0003
apa: Korc, F., Kolmogorov, V., & Lampert, C. (2012). Approximating marginals
using discrete energy minimization. IST Austria. https://doi.org/10.15479/AT:IST-2012-0003
chicago: Korc, Filip, Vladimir Kolmogorov, and Christoph Lampert. Approximating
Marginals Using Discrete Energy Minimization. IST Austria, 2012. https://doi.org/10.15479/AT:IST-2012-0003.
ieee: F. Korc, V. Kolmogorov, and C. Lampert, Approximating marginals using discrete
energy minimization. IST Austria, 2012.
ista: Korc F, Kolmogorov V, Lampert C. 2012. Approximating marginals using discrete
energy minimization, IST Austria, 13p.
mla: Korc, Filip, et al. Approximating Marginals Using Discrete Energy Minimization.
IST Austria, 2012, doi:10.15479/AT:IST-2012-0003.
short: F. Korc, V. Kolmogorov, C. Lampert, Approximating Marginals Using Discrete
Energy Minimization, IST Austria, 2012.
date_created: 2018-12-12T11:39:06Z
date_published: 2012-07-23T00:00:00Z
date_updated: 2023-02-23T11:13:22Z
day: '23'
ddc:
- '000'
department:
- _id: VlKo
- _id: ChLa
doi: 10.15479/AT:IST-2012-0003
file:
- access_level: open_access
checksum: 7e0ba85ad123b13223aaf6cdde2d288c
content_type: application/pdf
creator: system
date_created: 2018-12-12T11:53:29Z
date_updated: 2020-07-14T12:46:44Z
file_id: '5490'
file_name: IST-2012-0003_IST-2012-0003.pdf
file_size: 618744
relation: main_file
file_date_updated: 2020-07-14T12:46:44Z
has_accepted_license: '1'
language:
- iso: eng
month: '07'
oa: 1
oa_version: Published Version
page: '13'
publication_identifier:
issn:
- 2664-1690
publication_status: published
publisher: IST Austria
pubrep_id: '36'
related_material:
record:
- id: '3124'
relation: earlier_version
status: public
status: public
title: Approximating marginals using discrete energy minimization
type: technical_report
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2012'
...
---
_id: '2915'
acknowledgement: "The project receives funding from the European Community’s Seventh
Framework Programme under grant agreement\r\nno. ICT- 248273 GeRT."
article_processing_charge: No
author:
- first_name: Oliver
full_name: Kroemer, Oliver
last_name: Kroemer
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Jan
full_name: Peters, Jan
last_name: Peters
citation:
ama: 'Kroemer O, Lampert C, Peters J. Multi-modal learning for dynamic tactile sensing.
In: Deutsches Zentrum für Luft und Raumfahrt; 2012.'
apa: Kroemer, O., Lampert, C., & Peters, J. (2012). Multi-modal learning for
dynamic tactile sensing. Deutsches Zentrum für Luft und Raumfahrt.
chicago: Kroemer, Oliver, Christoph Lampert, and Jan Peters. “Multi-Modal Learning
for Dynamic Tactile Sensing.” Deutsches Zentrum für Luft und Raumfahrt, 2012.
ieee: O. Kroemer, C. Lampert, and J. Peters, “Multi-modal learning for dynamic tactile
sensing,” 2012.
ista: Kroemer O, Lampert C, Peters J. 2012. Multi-modal learning for dynamic tactile
sensing
mla: Kroemer, Oliver, et al. Multi-Modal Learning for Dynamic Tactile Sensing.
Deutsches Zentrum für Luft und Raumfahrt, 2012.
short: O. Kroemer, C. Lampert, J. Peters, in:, Deutsches Zentrum für Luft und Raumfahrt,
2012.
date_created: 2018-12-11T12:00:19Z
date_published: 2012-10-11T00:00:00Z
date_updated: 2023-10-17T07:58:59Z
day: '11'
department:
- _id: ChLa
language:
- iso: eng
month: '10'
oa_version: None
publication_status: published
publisher: Deutsches Zentrum für Luft und Raumfahrt
publist_id: '3828'
quality_controlled: '1'
status: public
title: Multi-modal learning for dynamic tactile sensing
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2012'
...
---
_id: '3127'
abstract:
- lang: eng
text: "When searching for characteristic subpatterns in potentially noisy graph
data, it appears self-evident that having multiple observations would be better
than having just one. However, it turns out that the inconsistencies introduced
when different graph instances have different edge sets pose a serious challenge.
In this work we address this challenge for the problem of finding maximum weighted
cliques.\r\n We introduce the concept of the most persistent soft-clique. This
is a subset of vertices that 1) is almost fully or at least densely connected,
2) occurs in all or almost all graph instances, and 3) has the maximum weight.
We present a measure of clique-ness that essentially counts the number of edges
missing to make a subset of vertices into a clique. With this measure, we show
that the problem of finding the most persistent soft-clique can be cast
either as: a) a max-min two-person game optimization problem, or b) a min-min
soft-margin optimization problem. Both formulations lead to the same solution
when using a partial Lagrangian method to solve the optimization problems. By
experiments on synthetic data and on real social network data, we show that the
proposed method is able to reliably find soft cliques in graph data, even if the
data are distorted by random noise or unreliable observations.
article_processing_charge: No
author:
- first_name: Novi
full_name: Quadrianto, Novi
last_name: Quadrianto
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Chao
full_name: Chen, Chao
id: 3E92416E-F248-11E8-B48F-1D18A9856A87
last_name: Chen
citation:
ama: 'Quadrianto N, Lampert C, Chen C. The most persistent soft-clique in a set
of sampled graphs. In: Proceedings of the 29th International Conference on
Machine Learning. ML Research Press; 2012:211-218.'
apa: 'Quadrianto, N., Lampert, C., & Chen, C. (2012). The most persistent soft-clique
in a set of sampled graphs. In Proceedings of the 29th International Conference
on Machine Learning (pp. 211–218). Edinburgh, United Kingdom: ML Research
Press.'
chicago: Quadrianto, Novi, Christoph Lampert, and Chao Chen. “The Most Persistent
Soft-Clique in a Set of Sampled Graphs.” In Proceedings of the 29th International
Conference on Machine Learning, 211–18. ML Research Press, 2012.
ieee: N. Quadrianto, C. Lampert, and C. Chen, “The most persistent soft-clique in
a set of sampled graphs,” in Proceedings of the 29th International Conference
on Machine Learning, Edinburgh, United Kingdom, 2012, pp. 211–218.
ista: 'Quadrianto N, Lampert C, Chen C. 2012. The most persistent soft-clique in
a set of sampled graphs. Proceedings of the 29th International Conference on Machine
Learning. ICML: International Conference on Machine Learning, 211–218.'
mla: Quadrianto, Novi, et al. “The Most Persistent Soft-Clique in a Set of Sampled
Graphs.” Proceedings of the 29th International Conference on Machine Learning,
ML Research Press, 2012, pp. 211–18.
short: N. Quadrianto, C. Lampert, C. Chen, in:, Proceedings of the 29th International
Conference on Machine Learning, ML Research Press, 2012, pp. 211–218.
conference:
end_date: 2012-07-01
location: Edinburgh, United Kingdom
name: 'ICML: International Conference on Machine Learning'
start_date: 2012-06-26
date_created: 2018-12-11T12:01:33Z
date_published: 2012-06-01T00:00:00Z
date_updated: 2023-10-17T11:55:06Z
day: '01'
department:
- _id: ChLa
- _id: HeEd
language:
- iso: eng
main_file_link:
- open_access: '1'
url: http://arxiv.org/abs/1206.4652
month: '06'
oa: 1
oa_version: Preprint
page: 211-218
publication: Proceedings of the 29th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
publist_id: '3572'
quality_controlled: '1'
scopus_import: '1'
status: public
title: The most persistent soft-clique in a set of sampled graphs
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2012'
...
---
_id: '3337'
abstract:
- lang: eng
text: Playing table tennis is a difficult task for robots, especially due to their
limitations of acceleration. A key bottleneck is the amount of time needed to
reach the desired hitting position and velocity of the racket for returning the
incoming ball. Here, it often does not suffice to simply extrapolate the ball's
trajectory after the opponent returns it but more information is needed. Humans
are able to predict the ball's trajectory based on the opponent's moves and, thus,
have a considerable advantage. Hence, we propose to incorporate an anticipation
system into robot table tennis players, which enables the robot to react earlier
while the opponent is performing the striking movement. Based on visual observation
of the opponent's racket movement, the robot can predict the aim of the opponent
and adjust its movement generation accordingly. The policies for deciding how
and when to react are obtained by reinforcement learning. We conduct experiments
with an existing robot player to show that the learned reaction policy can significantly
improve the performance of the overall system.
author:
- first_name: Zhikun
full_name: Wang, Zhikun
last_name: Wang
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Katharina
full_name: Mülling, Katharina
last_name: Mülling
- first_name: Bernhard
full_name: Schölkopf, Bernhard
last_name: Schölkopf
- first_name: Jan
full_name: Peters, Jan
last_name: Peters
citation:
ama: 'Wang Z, Lampert C, Mülling K, Schölkopf B, Peters J. Learning anticipation
policies for robot table tennis. In: IEEE; 2011:332-337. doi:10.1109/IROS.2011.6094892'
apa: 'Wang, Z., Lampert, C., Mülling, K., Schölkopf, B., & Peters, J. (2011).
Learning anticipation policies for robot table tennis (pp. 332–337). Presented
at the IROS: RSJ International Conference on Intelligent Robots and Systems, San
Francisco, USA: IEEE. https://doi.org/10.1109/IROS.2011.6094892'
chicago: Wang, Zhikun, Christoph Lampert, Katharina Mülling, Bernhard Schölkopf,
and Jan Peters. “Learning Anticipation Policies for Robot Table Tennis,” 332–37.
IEEE, 2011. https://doi.org/10.1109/IROS.2011.6094892.
ieee: 'Z. Wang, C. Lampert, K. Mülling, B. Schölkopf, and J. Peters, “Learning anticipation
policies for robot table tennis,” presented at the IROS: RSJ International Conference
on Intelligent Robots and Systems, San Francisco, USA, 2011, pp. 332–337.'
ista: 'Wang Z, Lampert C, Mülling K, Schölkopf B, Peters J. 2011. Learning anticipation
policies for robot table tennis. IROS: RSJ International Conference on Intelligent
Robots and Systems, 332–337.'
mla: Wang, Zhikun, et al. Learning Anticipation Policies for Robot Table Tennis.
IEEE, 2011, pp. 332–37, doi:10.1109/IROS.2011.6094892.
short: Z. Wang, C. Lampert, K. Mülling, B. Schölkopf, J. Peters, in:, IEEE, 2011,
pp. 332–337.
conference:
end_date: 2011-09-30
location: San Francisco, USA
name: 'IROS: RSJ International Conference on Intelligent Robots and Systems'
start_date: 2011-09-25
date_created: 2018-12-11T12:02:45Z
date_published: 2011-01-01T00:00:00Z
date_updated: 2021-01-12T07:42:45Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/IROS.2011.6094892
language:
- iso: eng
month: '01'
oa_version: None
page: 332 - 337
publication_status: published
publisher: IEEE
publist_id: '3293'
quality_controlled: '1'
scopus_import: 1
status: public
title: Learning anticipation policies for robot table tennis
type: conference
user_id: 4435EBFC-F248-11E8-B48F-1D18A9856A87
year: '2011'
...
---
_id: '3389'
abstract:
- lang: eng
text: Kernel canonical correlation analysis (KCCA) is a general technique for subspace
learning that incorporates principal components analysis (PCA) and Fisher linear
discriminant analysis (LDA) as special cases. By finding directions that maximize
correlation, KCCA learns representations that are more closely tied to the underlying
process that generates the data and can ignore high-variance noise directions.
However, for data where acquisition in one or more modalities is expensive or
otherwise limited, KCCA may suffer from small sample effects. We propose to use
semi-supervised Laplacian regularization to utilize data that are present in only
one modality. This approach is able to find highly correlated directions that
also lie along the data manifold, resulting in a more robust estimate of correlated
subspaces. Functional magnetic resonance imaging (fMRI) acquired data are naturally
amenable to subspace techniques as data are well aligned. fMRI data of the human
brain are a particularly interesting candidate. In this study we implemented various
supervised and semi-supervised versions of KCCA on human fMRI data, with regression
to single and multi-variate labels (corresponding to video content subjects viewed
during the image acquisition). In each variate condition, the semi-supervised
variants of KCCA performed better than the supervised variants, including a supervised
variant with Laplacian regularization. We additionally analyze the weights learned
by the regression in order to infer brain regions that are important to different
types of visual processing.
acknowledgement: The research leading to these results has received funding from the
European Research Council under the European Community’s Seventh Framework Programme
(FP7/2007-2013)/ERC Grant Agreement No. 228180. This work was funded in part by
the EC project CLASS, IST 027978, and the PASCAL2 network of excellence, IST 2002-506778.
author:
- first_name: Matthew
full_name: Blaschko, Matthew
last_name: Blaschko
- first_name: Jacquelyn
full_name: Shelton, Jacquelyn
last_name: Shelton
- first_name: Andreas
full_name: Bartels, Andreas
last_name: Bartels
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Arthur
full_name: Gretton, Arthur
last_name: Gretton
citation:
ama: Blaschko M, Shelton J, Bartels A, Lampert C, Gretton A. Semi supervised kernel
canonical correlation analysis with application to human fMRI. Pattern Recognition
Letters. 2011;32(11):1572-1583. doi:10.1016/j.patrec.2011.02.011
apa: Blaschko, M., Shelton, J., Bartels, A., Lampert, C., & Gretton, A. (2011).
Semi supervised kernel canonical correlation analysis with application to human
fMRI. Pattern Recognition Letters. Elsevier. https://doi.org/10.1016/j.patrec.2011.02.011
chicago: Blaschko, Matthew, Jacquelyn Shelton, Andreas Bartels, Christoph Lampert,
and Arthur Gretton. “Semi Supervised Kernel Canonical Correlation Analysis with
Application to Human FMRI.” Pattern Recognition Letters. Elsevier, 2011.
https://doi.org/10.1016/j.patrec.2011.02.011.
ieee: M. Blaschko, J. Shelton, A. Bartels, C. Lampert, and A. Gretton, “Semi supervised
kernel canonical correlation analysis with application to human fMRI,” Pattern
Recognition Letters, vol. 32, no. 11. Elsevier, pp. 1572–1583, 2011.
ista: Blaschko M, Shelton J, Bartels A, Lampert C, Gretton A. 2011. Semi supervised
kernel canonical correlation analysis with application to human fMRI. Pattern
Recognition Letters. 32(11), 1572–1583.
mla: Blaschko, Matthew, et al. “Semi Supervised Kernel Canonical Correlation Analysis
with Application to Human FMRI.” Pattern Recognition Letters, vol. 32,
no. 11, Elsevier, 2011, pp. 1572–83, doi:10.1016/j.patrec.2011.02.011.
short: M. Blaschko, J. Shelton, A. Bartels, C. Lampert, A. Gretton, Pattern Recognition
Letters 32 (2011) 1572–1583.
date_created: 2018-12-11T12:03:03Z
date_published: 2011-08-01T00:00:00Z
date_updated: 2021-01-12T07:43:09Z
day: '01'
department:
- _id: ChLa
doi: 10.1016/j.patrec.2011.02.011
intvolume: ' 32'
issue: '11'
language:
- iso: eng
month: '08'
oa_version: None
page: 1572 - 1583
publication: Pattern Recognition Letters
publication_status: published
publisher: Elsevier
publist_id: '3218'
quality_controlled: '1'
scopus_import: 1
status: public
title: Semi supervised kernel canonical correlation analysis with application to human
fMRI
type: journal_article
user_id: 4435EBFC-F248-11E8-B48F-1D18A9856A87
volume: 32
year: '2011'
...
---
_id: '3382'
abstract:
- lang: eng
text: Dynamic tactile sensing is a fundamental ability to recognize materials and
objects. However, while humans are born with partially developed dynamic tactile
sensing and quickly master this skill, today's robots remain in their infancy.
The development of such a sense requires not only better sensors but the right
algorithms to deal with these sensors' data as well. For example, when classifying
a material based on touch, the data are noisy, high-dimensional, and contain irrelevant
signals as well as essential ones. Few classification methods from machine learning
can deal with such problems. In this paper, we propose an efficient approach to
infer suitable lower dimensional representations of the tactile data. In order
to classify materials based on only the sense of touch, these representations
are autonomously discovered using visual information of the surfaces during training.
However, accurately pairing vision and tactile samples in real-robot applications
is a difficult problem. The proposed approach, therefore, works with weak pairings
between the modalities. Experiments show that the resulting approach is very robust
and yields significantly higher classification performance based on only dynamic
tactile sensing.
author:
- first_name: Oliver
full_name: Kroemer, Oliver
last_name: Kroemer
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Jan
full_name: Peters, Jan
last_name: Peters
citation:
ama: Kroemer O, Lampert C, Peters J. Learning dynamic tactile sensing with robust
vision based training. IEEE Transactions on Robotics. 2011;27(3):545-557.
doi:10.1109/TRO.2011.2121130
apa: Kroemer, O., Lampert, C., & Peters, J. (2011). Learning dynamic tactile
sensing with robust vision based training. IEEE Transactions on Robotics.
IEEE. https://doi.org/10.1109/TRO.2011.2121130
chicago: Kroemer, Oliver, Christoph Lampert, and Jan Peters. “Learning Dynamic Tactile
Sensing with Robust Vision Based Training.” IEEE Transactions on Robotics.
IEEE, 2011. https://doi.org/10.1109/TRO.2011.2121130.
ieee: O. Kroemer, C. Lampert, and J. Peters, “Learning dynamic tactile sensing with
robust vision based training,” IEEE Transactions on Robotics, vol. 27,
no. 3. IEEE, pp. 545–557, 2011.
ista: Kroemer O, Lampert C, Peters J. 2011. Learning dynamic tactile sensing with
robust vision based training. IEEE Transactions on Robotics. 27(3), 545–557.
mla: Kroemer, Oliver, et al. “Learning Dynamic Tactile Sensing with Robust Vision
Based Training.” IEEE Transactions on Robotics, vol. 27, no. 3, IEEE, 2011,
pp. 545–57, doi:10.1109/TRO.2011.2121130.
short: O. Kroemer, C. Lampert, J. Peters, IEEE Transactions on Robotics 27 (2011)
545–557.
date_created: 2018-12-11T12:03:01Z
date_published: 2011-05-21T00:00:00Z
date_updated: 2021-01-12T07:43:06Z
day: '21'
department:
- _id: ChLa
doi: 10.1109/TRO.2011.2121130
intvolume: ' 27'
issue: '3'
language:
- iso: eng
month: '05'
oa_version: None
page: 545 - 557
publication: IEEE Transactions on Robotics
publication_status: published
publisher: IEEE
publist_id: '3225'
quality_controlled: '1'
scopus_import: 1
status: public
title: Learning dynamic tactile sensing with robust vision based training
type: journal_article
user_id: 4435EBFC-F248-11E8-B48F-1D18A9856A87
volume: 27
year: '2011'
...
---
_id: '5386'
abstract:
- lang: eng
text: 'We introduce TopoCut: a new way to integrate knowledge about topological
properties (TPs) into a random field image segmentation model. Instead of including
TPs as additional constraints during minimization of the energy function, we devise
an efficient algorithm for modifying the unary potentials such that the resulting
segmentation is guaranteed to have the desired properties. Our method is more flexible
in the sense that it handles more topology constraints than previous methods,
which were only able to enforce pairwise or global connectivity. In particular,
our method is very fast, making it possible for the first time to enforce global
topological properties in practical image segmentation tasks.'
alternative_title:
- IST Austria Technical Report
author:
- first_name: Chao
full_name: Chen, Chao
id: 3E92416E-F248-11E8-B48F-1D18A9856A87
last_name: Chen
- first_name: Daniel
full_name: Freedman, Daniel
last_name: Freedman
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Chen C, Freedman D, Lampert C. Enforcing Topological Constraints in Random
Field Image Segmentation. IST Austria; 2011. doi:10.15479/AT:IST-2011-0002
apa: Chen, C., Freedman, D., & Lampert, C. (2011). Enforcing topological
constraints in random field image segmentation. IST Austria. https://doi.org/10.15479/AT:IST-2011-0002
chicago: Chen, Chao, Daniel Freedman, and Christoph Lampert. Enforcing Topological
Constraints in Random Field Image Segmentation. IST Austria, 2011. https://doi.org/10.15479/AT:IST-2011-0002.
ieee: C. Chen, D. Freedman, and C. Lampert, Enforcing topological constraints
in random field image segmentation. IST Austria, 2011.
ista: Chen C, Freedman D, Lampert C. 2011. Enforcing topological constraints in
random field image segmentation, IST Austria, 69p.
mla: Chen, Chao, et al. Enforcing Topological Constraints in Random Field Image
Segmentation. IST Austria, 2011, doi:10.15479/AT:IST-2011-0002.
short: C. Chen, D. Freedman, C. Lampert, Enforcing Topological Constraints in Random
Field Image Segmentation, IST Austria, 2011.
date_created: 2018-12-12T11:39:02Z
date_published: 2011-03-28T00:00:00Z
date_updated: 2023-02-23T11:22:48Z
day: '28'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.15479/AT:IST-2011-0002
file:
- access_level: open_access
checksum: ad64c2add5fe2ad10e9d5c669f3f9526
content_type: application/pdf
creator: system
date_created: 2018-12-12T11:53:34Z
date_updated: 2020-07-14T12:46:41Z
file_id: '5495'
file_name: IST-2011-0002_IST-2011-0002.pdf
file_size: 26390601
relation: main_file
file_date_updated: 2020-07-14T12:46:41Z
has_accepted_license: '1'
language:
- iso: eng
month: '03'
oa: 1
oa_version: Published Version
page: '69'
publication_identifier:
issn:
- 2664-1690
publication_status: published
publisher: IST Austria
pubrep_id: '22'
related_material:
record:
- id: '3336'
relation: later_version
status: public
status: public
title: Enforcing topological constraints in random field image segmentation
type: technical_report
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2011'
...
---
_id: '3336'
abstract:
- lang: eng
text: 'We introduce TopoCut: a new way to integrate knowledge about topological
properties (TPs) into a random field image segmentation model. Instead of including
TPs as additional constraints during minimization of the energy function, we devise
an efficient algorithm for modifying the unary potentials such that the resulting
segmentation is guaranteed to have the desired properties. Our method is more flexible
in the sense that it handles more topology constraints than previous methods,
which were only able to enforce pairwise or global connectivity. In particular,
our method is very fast, making it possible for the first time to enforce global
topological properties in practical image segmentation tasks.'
acknowledgement: The first author is supported by the Austrian Science Fund (FWF)
grant No. P20134-N13. The authors would like to thank Sebastian Nowozin for helpful
discussions.
article_processing_charge: No
author:
- first_name: Chao
full_name: Chen, Chao
id: 3E92416E-F248-11E8-B48F-1D18A9856A87
last_name: Chen
- first_name: Daniel
full_name: Freedman, Daniel
last_name: Freedman
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Chen C, Freedman D, Lampert C. Enforcing topological constraints in random
field image segmentation. In: CVPR: Computer Vision and Pattern Recognition.
IEEE; 2011:2089-2096. doi:10.1109/CVPR.2011.5995503'
apa: 'Chen, C., Freedman, D., & Lampert, C. (2011). Enforcing topological constraints
in random field image segmentation. In CVPR: Computer Vision and Pattern Recognition
(pp. 2089–2096). Colorado Springs, CO, United States: IEEE. https://doi.org/10.1109/CVPR.2011.5995503'
chicago: 'Chen, Chao, Daniel Freedman, and Christoph Lampert. “Enforcing Topological
Constraints in Random Field Image Segmentation.” In CVPR: Computer Vision and
Pattern Recognition, 2089–96. IEEE, 2011. https://doi.org/10.1109/CVPR.2011.5995503.'
ieee: 'C. Chen, D. Freedman, and C. Lampert, “Enforcing topological constraints
in random field image segmentation,” in CVPR: Computer Vision and Pattern Recognition,
Colorado Springs, CO, United States, 2011, pp. 2089–2096.'
ista: 'Chen C, Freedman D, Lampert C. 2011. Enforcing topological constraints in
random field image segmentation. CVPR: Computer Vision and Pattern Recognition.
CVPR: Conference on Computer Vision and Pattern Recognition, 2089–2096.'
mla: 'Chen, Chao, et al. “Enforcing Topological Constraints in Random Field Image
Segmentation.” CVPR: Computer Vision and Pattern Recognition, IEEE, 2011,
pp. 2089–96, doi:10.1109/CVPR.2011.5995503.'
short: 'C. Chen, D. Freedman, C. Lampert, in:, CVPR: Computer Vision and Pattern
Recognition, IEEE, 2011, pp. 2089–2096.'
conference:
end_date: 2011-06-25
location: Colorado Springs, CO, United States
name: 'CVPR: Conference on Computer Vision and Pattern Recognition'
start_date: 2011-06-20
date_created: 2018-12-11T12:02:45Z
date_published: 2011-07-22T00:00:00Z
date_updated: 2023-02-23T12:23:56Z
day: '22'
department:
- _id: HeEd
- _id: ChLa
doi: 10.1109/CVPR.2011.5995503
language:
- iso: eng
month: '07'
oa_version: None
page: 2089 - 2096
publication: 'CVPR: Computer Vision and Pattern Recognition'
publication_identifier:
eisbn:
- 978-1-4577-0395-9
isbn:
- 978-1-4577-0394-2
publication_status: published
publisher: IEEE
publist_id: '3294'
quality_controlled: '1'
related_material:
record:
- id: '5386'
relation: earlier_version
status: public
scopus_import: '1'
status: public
title: Enforcing topological constraints in random field image segmentation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2011'
...
---
_id: '3163'
abstract:
- lang: eng
text: We study multi-label prediction for structured output sets, a problem that
occurs, for example, in object detection in images, secondary structure prediction
in computational biology, and graph matching with symmetries. Conventional multilabel
classification techniques are typically not applicable in this situation, because
they require explicit enumeration of the label set, which is infeasible in case
of structured outputs. Relying on techniques originally designed for single-label
structured prediction, in particular structured support vector machines, results
in reduced prediction accuracy, or leads to infeasible optimization problems.
In this work we derive a maximum-margin training formulation for multi-label structured
prediction that remains computationally tractable while achieving high prediction
accuracy. It also shares most beneficial properties with single-label maximum-margin
approaches, in particular formulation as a convex optimization problem, efficient
working set training, and PAC-Bayesian generalization bounds.
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Lampert C. Maximum margin multi-label structured prediction. In: Neural Information
Processing Systems; 2011.'
apa: 'Lampert, C. (2011). Maximum margin multi-label structured prediction. Presented
at the NIPS: Neural Information Processing Systems, Granada, Spain: Neural Information
Processing Systems.'
chicago: Lampert, Christoph. “Maximum Margin Multi-Label Structured Prediction.”
Neural Information Processing Systems, 2011.
ieee: 'C. Lampert, “Maximum margin multi-label structured prediction,” presented
at the NIPS: Neural Information Processing Systems, Granada, Spain, 2011.'
ista: 'Lampert C. 2011. Maximum margin multi-label structured prediction. NIPS:
Neural Information Processing Systems.'
mla: Lampert, Christoph. Maximum Margin Multi-Label Structured Prediction.
Neural Information Processing Systems, 2011.
short: C. Lampert, in:, Neural Information Processing Systems, 2011.
conference:
end_date: 2011-12-14
location: Granada, Spain
name: 'NIPS: Neural Information Processing Systems'
start_date: 2011-12-12
date_created: 2018-12-11T12:01:45Z
date_published: 2011-12-01T00:00:00Z
date_updated: 2023-10-17T11:47:35Z
day: '01'
department:
- _id: ChLa
language:
- iso: eng
month: '12'
oa_version: None
publication_status: published
publisher: Neural Information Processing Systems
publist_id: '3522'
quality_controlled: '1'
related_material:
record:
- id: '3322'
relation: later_version
status: public
scopus_import: 1
status: public
title: Maximum margin multi-label structured prediction
type: conference
user_id: 4435EBFC-F248-11E8-B48F-1D18A9856A87
year: '2011'
...
---
_id: '3322'
abstract:
- lang: eng
text: We study multi-label prediction for structured output spaces, a problem that
occurs, for example, in object detection in images, secondary structure prediction
in computational biology, and graph matching with symmetries. Conventional multi-label
classification techniques are typically not applicable in this situation, because
they require explicit enumeration of the label space, which is infeasible in case
of structured outputs. Relying on techniques originally designed for single-label
structured prediction, in particular structured support vector machines, results
in reduced prediction accuracy, or leads to infeasible optimization problems.
In this work we derive a maximum-margin training formulation for multi-label structured
prediction that remains computationally tractable while achieving high prediction
accuracy. It also shares most beneficial properties with single-label maximum-margin
approaches, in particular a formulation as a convex optimization problem, efficient
working set training, and PAC-Bayesian generalization bounds.
article_processing_charge: No
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Lampert C. Maximum Margin Multi Label Structured Prediction. Neural
Information Processing Systems Foundation; 2011.
apa: 'Lampert, C. (2011). Maximum margin multi label structured prediction.
NIPS: Neural Information Processing Systems. Neural Information Processing
Systems Foundation.'
chicago: 'Lampert, Christoph. Maximum Margin Multi Label Structured Prediction.
NIPS: Neural Information Processing Systems. Neural Information Processing
Systems Foundation, 2011.'
ieee: C. Lampert, Maximum margin multi label structured prediction. Neural
Information Processing Systems Foundation, 2011.
ista: Lampert C. 2011. Maximum margin multi label structured prediction, Neural
Information Processing Systems Foundation.
mla: 'Lampert, Christoph. “Maximum Margin Multi Label Structured Prediction.” NIPS:
Neural Information Processing Systems, Neural Information Processing Systems
Foundation, 2011.'
short: C. Lampert, Maximum Margin Multi Label Structured Prediction, Neural Information
Processing Systems Foundation, 2011.
date_created: 2018-12-11T12:02:40Z
date_published: 2011-12-13T00:00:00Z
date_updated: 2023-10-17T11:47:36Z
day: '13'
department:
- _id: ChLa
language:
- iso: eng
month: '12'
oa_version: None
publication: 'NIPS: Neural Information Processing Systems'
publication_status: published
publisher: Neural Information Processing Systems Foundation
publist_id: '3313'
related_material:
record:
- id: '3163'
relation: earlier_version
status: public
status: public
title: Maximum margin multi label structured prediction
type: conference_poster
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2011'
...
---
_id: '3320'
abstract:
- lang: eng
text: Powerful statistical models that can be learned efficiently from large amounts
of data are currently revolutionizing computer vision. These models possess a
rich internal structure reflecting task-specific relations and constraints. This
monograph introduces the reader to the most popular classes of structured models
in computer vision. Our focus is discrete undirected graphical models which we
cover in detail together with a description of algorithms for both probabilistic
inference and maximum a posteriori inference. We discuss separately recently successful
techniques for prediction in general structured models. In the second part of
this monograph we describe methods for parameter learning where we distinguish
the classic maximum likelihood based methods from the more recent prediction-based
parameter learning methods. We highlight developments to enhance current models
and discuss kernelized models and latent variable models. To make the monograph
more practical and to provide links to further study we provide examples of successful
application of many methods in the computer vision literature.
article_processing_charge: No
article_type: original
author:
- first_name: Sebastian
full_name: Nowozin, Sebastian
last_name: Nowozin
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Nowozin S, Lampert C. Structured learning and prediction in computer vision.
Foundations and Trends in Computer Graphics and Vision. 2011;6(3-4):185-365.
doi:10.1561/0600000033
apa: Nowozin, S., & Lampert, C. (2011). Structured learning and prediction in
computer vision. Foundations and Trends in Computer Graphics and Vision.
Now Publishers. https://doi.org/10.1561/0600000033
chicago: Nowozin, Sebastian, and Christoph Lampert. “Structured Learning and Prediction
in Computer Vision.” Foundations and Trends in Computer Graphics and Vision.
Now Publishers, 2011. https://doi.org/10.1561/0600000033.
ieee: S. Nowozin and C. Lampert, “Structured learning and prediction in computer
vision,” Foundations and Trends in Computer Graphics and Vision, vol. 6,
no. 3–4. Now Publishers, pp. 185–365, 2011.
ista: Nowozin S, Lampert C. 2011. Structured learning and prediction in computer
vision. Foundations and Trends in Computer Graphics and Vision. 6(3–4), 185–365.
mla: Nowozin, Sebastian, and Christoph Lampert. “Structured Learning and Prediction
in Computer Vision.” Foundations and Trends in Computer Graphics and Vision,
vol. 6, no. 3–4, Now Publishers, 2011, pp. 185–365, doi:10.1561/0600000033.
short: S. Nowozin, C. Lampert, Foundations and Trends in Computer Graphics and Vision
6 (2011) 185–365.
date_created: 2018-12-11T12:02:39Z
date_published: 2011-05-23T00:00:00Z
date_updated: 2023-10-17T11:52:46Z
day: '23'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1561/0600000033
file:
- access_level: open_access
checksum: f1043ef389f1558e2a226bb51568511f
content_type: application/pdf
creator: dernst
date_created: 2020-05-14T14:34:47Z
date_updated: 2020-07-14T12:46:07Z
file_id: '7837'
file_name: 2011_CompGraphicsVision_Nowozin.pdf
file_size: 3745064
relation: main_file
file_date_updated: 2020-07-14T12:46:07Z
has_accepted_license: '1'
intvolume: ' 6'
issue: 3-4
language:
- iso: eng
month: '05'
oa: 1
oa_version: Published Version
page: 185 - 365
publication: Foundations and Trends in Computer Graphics and Vision
publication_status: published
publisher: Now Publishers
publist_id: '3315'
quality_controlled: '1'
scopus_import: '1'
status: public
title: Structured learning and prediction in computer vision
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 6
year: '2011'
...
---
_id: '3319'
abstract:
- lang: eng
text: We address the problem of metric learning for multi-view data, namely the
construction of embedding projections from data in different representations into
a shared feature space, such that the Euclidean distance in this space provides
a meaningful within-view as well as between-view similarity. Our motivation stems
from the problem of cross-media retrieval tasks, where the availability of a joint
Euclidean distance function is a pre-requisite to allow fast, in particular hashing-based,
nearest neighbor queries. We formulate an objective function that expresses the
intuitive concept that matching samples are mapped closely together in the output
space, whereas non-matching samples are pushed apart, no matter in which view
they are available. The resulting optimization problem is not convex, but it can
be decomposed explicitly into a convex and a concave part, thereby allowing efficient
optimization using the convex-concave procedure. Experiments on an image retrieval
task show that nearest-neighbor based cross-view retrieval is indeed possible,
and the proposed technique improves the retrieval accuracy over baseline techniques.
article_processing_charge: No
author:
- first_name: Novi
full_name: Quadrianto, Novi
last_name: Quadrianto
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Quadrianto N, Lampert C. Learning multi-view neighborhood preserving projections.
In: ML Research Press; 2011:425-432.'
apa: 'Quadrianto, N., & Lampert, C. (2011). Learning multi-view neighborhood
preserving projections (pp. 425–432). Presented at the ICML: International Conference
on Machine Learning, Bellevue, United States: ML Research Press.'
chicago: Quadrianto, Novi, and Christoph Lampert. “Learning Multi-View Neighborhood
Preserving Projections,” 425–32. ML Research Press, 2011.
ieee: 'N. Quadrianto and C. Lampert, “Learning multi-view neighborhood preserving
projections,” presented at the ICML: International Conference on Machine Learning,
Bellevue, United States, 2011, pp. 425–432.'
ista: 'Quadrianto N, Lampert C. 2011. Learning multi-view neighborhood preserving
projections. ICML: International Conference on Machine Learning, 425–432.'
mla: Quadrianto, Novi, and Christoph Lampert. Learning Multi-View Neighborhood
Preserving Projections. ML Research Press, 2011, pp. 425–32.
short: N. Quadrianto, C. Lampert, in:, ML Research Press, 2011, pp. 425–432.
conference:
end_date: 2011-07-02
location: Bellevue, United States
name: 'ICML: International Conference on Machine Learning'
start_date: 2011-06-28
date_created: 2018-12-11T12:02:39Z
date_published: 2011-01-01T00:00:00Z
date_updated: 2023-10-17T11:59:50Z
day: '01'
department:
- _id: ChLa
language:
- iso: eng
month: '01'
oa_version: None
page: 425 - 432
publication_status: published
publisher: ML Research Press
publist_id: '3316'
scopus_import: '1'
status: public
title: Learning multi-view neighborhood preserving projections
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2011'
...
---
_id: '3686'
abstract:
- lang: eng
text: |+
Markov random field (MRF) models, including conditional random field models, are popular in computer vision. However, in order to be computationally tractable, they are limited to incorporating only local interactions and cannot model global properties such as connectedness, which is a potentially useful high-level prior for object segmentation. In this work, we overcome this limitation by deriving a potential function that forces the output labeling to be connected and that can naturally be used in the framework of recent maximum a posteriori (MAP)-MRF linear program (LP) relaxations. Using techniques from polyhedral combinatorics, we show that a provably strong approximation to the MAP solution of the resulting MRF can still be found efficiently by solving a sequence of max-flow problems. The efficiency of the inference procedure also allows us to learn the parameters of an MRF with global connectivity potentials by means of a cutting plane algorithm. We experimentally evaluate our algorithm on both synthetic data and on the challenging image segmentation task of the PASCAL Visual Object Classes 2008 data set. We show that in both cases the addition of a connectedness prior significantly reduces the segmentation error.
acknowledgement: This work was funded in part by the EU CLASS project, IST 027978.
This work was also supported in part by the IST Programme of the European Community
under the PASCAL Network of Excellence, IST-2002-506778.
author:
- first_name: Sebastian
full_name: Nowozin, Sebastian
last_name: Nowozin
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Nowozin S, Lampert C. Global interactions in random field models: A potential
function ensuring connectedness. SIAM Journal on Imaging Sciences. 2010;3(4
(Special Section on Optimization in Imaging Sciences)):1048-1074. doi:10.1137/090752614'
apa: 'Nowozin, S., & Lampert, C. (2010). Global interactions in random field
models: A potential function ensuring connectedness. SIAM Journal on Imaging
Sciences. Society for Industrial and Applied Mathematics. https://doi.org/10.1137/090752614'
chicago: 'Nowozin, Sebastian, and Christoph Lampert. “Global Interactions in Random
Field Models: A Potential Function Ensuring Connectedness.” SIAM Journal on
Imaging Sciences. Society for Industrial and Applied Mathematics, 2010. https://doi.org/10.1137/090752614.'
ieee: 'S. Nowozin and C. Lampert, “Global interactions in random field models: A
potential function ensuring connectedness,” SIAM Journal on Imaging Sciences,
vol. 3, no. 4 (Special Section on Optimization in Imaging Sciences). Society for
Industrial and Applied Mathematics, pp. 1048–1074, 2010.'
ista: 'Nowozin S, Lampert C. 2010. Global interactions in random field models: A
potential function ensuring connectedness. SIAM Journal on Imaging Sciences. 3(4
(Special Section on Optimization in Imaging Sciences)), 1048–1074.'
mla: 'Nowozin, Sebastian, and Christoph Lampert. “Global Interactions in Random
Field Models: A Potential Function Ensuring Connectedness.” SIAM Journal on
Imaging Sciences, vol. 3, no. 4 (Special Section on Optimization in Imaging
Sciences), Society for Industrial and Applied Mathematics, 2010, pp. 1048–74,
doi:10.1137/090752614.'
short: S. Nowozin, C. Lampert, SIAM Journal on Imaging Sciences 3 (2010) 1048–1074.
date_created: 2018-12-11T12:04:37Z
date_published: 2010-12-21T00:00:00Z
date_updated: 2021-01-12T07:48:57Z
day: '21'
doi: 10.1137/090752614
extern: 1
intvolume: ' 3'
issue: 4 (Special Section on Optimization in Imaging Sciences)
month: '12'
page: 1048 - 1074
publication: SIAM Journal on Imaging Sciences
publication_status: published
publisher: Society for Industrial and Applied Mathematics
publist_id: '2684'
quality_controlled: 0
status: public
title: 'Global interactions in random field models: A potential function ensuring
connectedness'
type: journal_article
volume: 3
year: '2010'
...
---
_id: '3682'
abstract:
- lang: eng
text: For object category recognition to scale beyond a small number of classes,
it is important that algorithms be able to learn from a small amount of labeled
data per additional class. One-shot recognition aims to apply the knowledge gained
from a set of categories with plentiful data to categories for which only a single
exemplar is available for each. As with earlier efforts motivated by transfer
learning, we seek an internal representation for the domain that generalizes across
classes. However, in contrast to existing work, we formulate the problem in a
fundamentally new manner by optimizing the internal representation for the one-shot
task using the notion of micro-sets. A micro-set is a sample of data that contains
only a single instance of each category, sampled from the pool of available data,
which serves as a mechanism to force the learned representation to explicitly
address the variability and noise inherent in the one-shot recognition task. We
optimize our learned domain features so that they minimize an expected loss over
micro-sets drawn from the training set and show that these features generalize
effectively to previously unseen categories. We detail a discriminative approach
for optimizing one-shot recognition using micro-sets and present experiments on
the Animals with Attributes and Caltech-101 datasets that demonstrate the benefits
of our formulation.
author:
- first_name: Kevin
full_name: Tang, Kevin D
last_name: Tang
- first_name: Marshall
full_name: Tappen, Marshall F
last_name: Tappen
- first_name: Rahul
full_name: Sukthankar, Rahul
last_name: Sukthankar
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Tang K, Tappen M, Sukthankar R, Lampert C. Optimizing one-shot recognition
with micro-set learning. In: IEEE; 2010:3027-3034. doi:10.1109/CVPR.2010.5540053'
apa: 'Tang, K., Tappen, M., Sukthankar, R., & Lampert, C. (2010). Optimizing
one-shot recognition with micro-set learning (pp. 3027–3034). Presented at the
CVPR: Computer Vision and Pattern Recognition, IEEE. https://doi.org/10.1109/CVPR.2010.5540053'
chicago: Tang, Kevin, Marshall Tappen, Rahul Sukthankar, and Christoph Lampert.
“Optimizing One-Shot Recognition with Micro-Set Learning,” 3027–34. IEEE, 2010.
https://doi.org/10.1109/CVPR.2010.5540053.
ieee: 'K. Tang, M. Tappen, R. Sukthankar, and C. Lampert, “Optimizing one-shot recognition
with micro-set learning,” presented at the CVPR: Computer Vision and Pattern Recognition,
2010, pp. 3027–3034.'
ista: 'Tang K, Tappen M, Sukthankar R, Lampert C. 2010. Optimizing one-shot recognition
with micro-set learning. CVPR: Computer Vision and Pattern Recognition, 3027–3034.'
mla: Tang, Kevin, et al. Optimizing One-Shot Recognition with Micro-Set Learning.
IEEE, 2010, pp. 3027–34, doi:10.1109/CVPR.2010.5540053.
short: K. Tang, M. Tappen, R. Sukthankar, C. Lampert, in:, IEEE, 2010, pp. 3027–3034.
conference:
name: 'CVPR: Computer Vision and Pattern Recognition'
date_created: 2018-12-11T12:04:36Z
date_published: 2010-06-18T00:00:00Z
date_updated: 2021-01-12T07:45:06Z
day: '18'
doi: 10.1109/CVPR.2010.5540053
extern: 1
month: '06'
page: 3027 - 3034
publication_status: published
publisher: IEEE
publist_id: '2696'
quality_controlled: 0
status: public
title: Optimizing one-shot recognition with micro-set learning
type: conference
year: '2010'
...
---
_id: '3702'
abstract:
- lang: eng
text: Hitting and batting tasks, such as tennis forehands, ping-pong strokes, or
baseball batting, depend on predictions where the ball can be intercepted and
how it can properly be returned to the opponent. These predictions get more accurate
over time, hence the behaviors need to be continuously modified. As a result,
movement templates with a learned global shape need to be adapted during the execution
so that the racket reaches a target position and velocity that will return the
ball over to the other side of the net or court. It requires altering learned
movements to hit a varying target with the necessary velocity at a specific instant
in time. Such a task cannot be incorporated straightforwardly in most movement
representations suitable for learning. For example, the standard formulation of
the dynamical system based motor primitives (introduced by Ijspeert et al. [1])
does not satisfy this property despite their flexibility which has allowed learning
tasks ranging from locomotion to kendama. In order to fulfill this requirement,
we reformulate the Ijspeert framework to incorporate the possibility of specifying
a desired hitting point and a desired hitting velocity while maintaining all advantages
of the original formulation. We show that the proposed movement template formulation
works well in two scenarios, i.e., for hitting a ball on a string with a table
tennis racket at a specified velocity and for returning balls launched by a ball
gun successfully over the net using forehand movements. All experiments were carried
out on a Barrett WAM using a four camera vision system.
author:
- first_name: Jens
full_name: Kober, Jens
last_name: Kober
- first_name: Katharina
full_name: Mülling, Katharina
last_name: Mülling
- first_name: Oliver
full_name: Krömer, Oliver
last_name: Krömer
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Bernhard
full_name: Schölkopf, Bernhard
last_name: Schölkopf
- first_name: Jan
full_name: Peters, Jan
last_name: Peters
citation:
ama: 'Kober J, Mülling K, Krömer O, Lampert C, Schölkopf B, Peters J. Movement templates
for learning of hitting and batting. In: IEEE; 2010:853-858. doi:10.1109/ROBOT.2010.5509672'
apa: 'Kober, J., Mülling, K., Krömer, O., Lampert, C., Schölkopf, B., & Peters,
J. (2010). Movement templates for learning of hitting and batting (pp. 853–858).
Presented at the ICRA: International Conference on Robotics and Automation, IEEE.
https://doi.org/10.1109/ROBOT.2010.5509672'
chicago: Kober, Jens, Katharina Mülling, Oliver Krömer, Christoph Lampert, Bernhard
Schölkopf, and Jan Peters. “Movement Templates for Learning of Hitting and Batting,”
853–58. IEEE, 2010. https://doi.org/10.1109/ROBOT.2010.5509672.
ieee: 'J. Kober, K. Mülling, O. Krömer, C. Lampert, B. Schölkopf, and J. Peters,
“Movement templates for learning of hitting and batting,” presented at the ICRA:
International Conference on Robotics and Automation, 2010, pp. 853–858.'
ista: 'Kober J, Mülling K, Krömer O, Lampert C, Schölkopf B, Peters J. 2010. Movement
templates for learning of hitting and batting. ICRA: International Conference
on Robotics and Automation, 853–858.'
mla: Kober, Jens, et al. Movement Templates for Learning of Hitting and Batting.
IEEE, 2010, pp. 853–58, doi:10.1109/ROBOT.2010.5509672.
short: J. Kober, K. Mülling, O. Krömer, C. Lampert, B. Schölkopf, J. Peters, in:,
IEEE, 2010, pp. 853–858.
conference:
name: 'ICRA: International Conference on Robotics and Automation'
date_created: 2018-12-11T12:04:42Z
date_published: 2010-05-07T00:00:00Z
date_updated: 2021-01-12T07:51:35Z
day: '07'
doi: 10.1109/ROBOT.2010.5509672
extern: 1
main_file_link:
- open_access: '0'
url: http://www.kyb.mpg.de/fileadmin/user_upload/files/publications/attachments/ICRA2010-Kober_6231%5b1%5d.pdf
month: '05'
page: 853 - 858
publication_status: published
publisher: IEEE
publist_id: '2654'
quality_controlled: 0
status: public
title: Movement templates for learning of hitting and batting
type: conference
year: '2010'
...
---
_id: '3794'
abstract:
- lang: eng
text: 'We study the problem of multimodal dimensionality reduction assuming that
data samples can be missing at training time, and not all data modalities may
be present at application time. Maximum covariance analysis, as a generalization
of PCA, has many desirable properties, but its application to practical problems
is limited by its need for perfectly paired data. We overcome this limitation
by a latent variable approach that allows working with weakly paired data and
is still able to efficiently process large datasets using standard numerical routines.
The resulting weakly paired maximum covariance analysis often finds better representations
than alternative methods, as we show in two exemplary tasks: texture discrimination
and transfer learning.'
alternative_title:
- LNCS
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Oliver
full_name: Krömer, Oliver
last_name: Krömer
citation:
ama: 'Lampert C, Krömer O. Weakly-paired maximum covariance analysis for multimodal
dimensionality reduction and transfer learning. In: Vol 6312. Springer; 2010:566-579.
doi:10.1007/978-3-642-15552-9_41'
apa: 'Lampert, C., & Krömer, O. (2010). Weakly-paired maximum covariance analysis
for multimodal dimensionality reduction and transfer learning (Vol. 6312, pp.
566–579). Presented at the ECCV: European Conference on Computer Vision, Heraklion,
Crete, Greece: Springer. https://doi.org/10.1007/978-3-642-15552-9_41'
chicago: Lampert, Christoph, and Oliver Krömer. “Weakly-Paired Maximum Covariance
Analysis for Multimodal Dimensionality Reduction and Transfer Learning,” 6312:566–79.
Springer, 2010. https://doi.org/10.1007/978-3-642-15552-9_41.
ieee: 'C. Lampert and O. Krömer, “Weakly-paired maximum covariance analysis for
multimodal dimensionality reduction and transfer learning,” presented at the ECCV:
European Conference on Computer Vision, Heraklion, Crete, Greece, 2010, vol. 6312,
pp. 566–579.'
ista: 'Lampert C, Krömer O. 2010. Weakly-paired maximum covariance analysis for
multimodal dimensionality reduction and transfer learning. ECCV: European Conference
on Computer Vision, LNCS, vol. 6312, 566–579.'
mla: Lampert, Christoph, and Oliver Krömer. Weakly-Paired Maximum Covariance
Analysis for Multimodal Dimensionality Reduction and Transfer Learning. Vol.
6312, Springer, 2010, pp. 566–79, doi:10.1007/978-3-642-15552-9_41.
short: C. Lampert, O. Krömer, in:, Springer, 2010, pp. 566–579.
conference:
end_date: 2010-09-11
location: Heraklion, Crete, Greece
name: 'ECCV: European Conference on Computer Vision'
start_date: 2010-09-05
date_created: 2018-12-11T12:05:12Z
date_published: 2010-11-10T00:00:00Z
date_updated: 2021-01-12T07:52:14Z
day: '10'
department:
- _id: ChLa
doi: 10.1007/978-3-642-15552-9_41
intvolume: ' 6312'
language:
- iso: eng
main_file_link:
- url: http://www.ics.forth.gr/eccv2010/intro.php
month: '11'
oa_version: None
page: 566 - 579
publication_status: published
publisher: Springer
publist_id: '2433'
quality_controlled: '1'
scopus_import: 1
status: public
title: Weakly-paired maximum covariance analysis for multimodal dimensionality reduction
and transfer learning
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 6312
year: '2010'
...
---
_id: '3676'
abstract:
- lang: eng
text: |-
Most state-of-the-art systems for content-based video understanding tasks require video content to be represented as collections of many low-level descriptors, e.g. as histograms of the color, texture or motion in local image regions.
In order to preserve as much of the information contained in the original video as possible, these representations are typically high-dimensional, which conflicts with the aim for compact descriptors that would allow better efficiency and lower storage requirements.
In this paper, we address the problem of semantic compression of video, i.e. the reduction of low-level descriptors to a small number of dimensions while preserving most of the semantic information. For this, we adapt topic models – which have previously been used as compact representations of still images – to take into account the temporal structure of a video, as well as multi-modal components such as motion information.
Experiments on a large-scale collection of YouTube videos show that we can achieve a compression ratio of 20 : 1 compared to ordinary histogram representations and at least 2 : 1 compared to other dimensionality reduction techniques without significant loss of prediction accuracy. Also, improvements are demonstrated for our video-specific extensions modeling temporal structure and multiple modalities.
author:
- first_name: Jörn
full_name: Wanke,Jörn
last_name: Wanke
- first_name: Adrian
full_name: Ulges, Adrian
last_name: Ulges
- first_name: Christoph
full_name: Christoph Lampert
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Thomas
full_name: Breuel,Thomas M
last_name: Breuel
citation:
ama: 'Wanke J, Ulges A, Lampert C, Breuel T. Topic models for semantic video compression.
In: ACM; 2010:275-284. doi:10.1145/1743384.1743433'
apa: 'Wanke, J., Ulges, A., Lampert, C., & Breuel, T. (2010). Topic models for
semantic video compression (pp. 275–284). Presented at the MIR: Multimedia Information
Retrieval, ACM. https://doi.org/10.1145/1743384.1743433'
chicago: Wanke, Jörn, Adrian Ulges, Christoph Lampert, and Thomas Breuel. “Topic
Models for Semantic Video Compression,” 275–84. ACM, 2010. https://doi.org/10.1145/1743384.1743433.
ieee: 'J. Wanke, A. Ulges, C. Lampert, and T. Breuel, “Topic models for semantic
video compression,” presented at the MIR: Multimedia Information Retrieval, 2010,
pp. 275–284.'
ista: 'Wanke J, Ulges A, Lampert C, Breuel T. 2010. Topic models for semantic video
compression. MIR: Multimedia Information Retrieval, 275–284.'
mla: Wanke, Jörn, et al. Topic Models for Semantic Video Compression. ACM,
2010, pp. 275–84, doi:10.1145/1743384.1743433.
short: J. Wanke, A. Ulges, C. Lampert, T. Breuel, in:, ACM, 2010, pp. 275–284.
conference:
name: 'MIR: Multimedia Information Retrieval'
date_created: 2018-12-11T12:04:34Z
date_published: 2010-03-31T00:00:00Z
date_updated: 2021-01-12T07:45:04Z
day: '31'
doi: 10.1145/1743384.1743433
extern: 1
main_file_link:
- open_access: '0'
url: http://pub.ist.ac.at/~chl/papers/wanke-mir2010.pdf
month: '03'
page: 275 - 284
publication_status: published
publisher: ACM
publist_id: '2705'
quality_controlled: 0
status: public
title: Topic models for semantic video compression
type: conference
year: '2010'
...
---
_id: '3697'
abstract:
- lang: eng
text: The goal of this paper is to evaluate and compare models and methods for learning
to recognize basic entities in images in an unsupervised setting. In other words,
we want to discover the objects present in the images by analyzing unlabeled data
and searching for re-occurring patterns. We experiment with various baseline methods,
methods based on latent variable models, as well as spectral clustering methods.
The results are presented and compared both on subsets of Caltech256 and MSRC2,
data sets that are larger and more challenging and that include more object classes
than what has previously been reported in the literature. A rigorous framework
for evaluating unsupervised object discovery methods is proposed.
acknowledgement: The authors acknowledge support from the EU projects CLASS (IST project
027978), PerAct (IST project 504321) and the EU Network of Excellence PASCAL2.
author:
- first_name: Tinne
full_name: Tuytelaars,Tinne
last_name: Tuytelaars
- first_name: Christoph
full_name: Christoph Lampert
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Matthew
full_name: Blaschko,Matthew B
last_name: Blaschko
- first_name: Wray
full_name: Buntine,Wray
last_name: Buntine
citation:
ama: 'Tuytelaars T, Lampert C, Blaschko M, Buntine W. Unsupervised object discovery:
A comparison. International Journal of Computer Vision. 2010;88(2):284-302.
doi:10.1007/s11263-009-0271-8'
apa: 'Tuytelaars, T., Lampert, C., Blaschko, M., & Buntine, W. (2010). Unsupervised
object discovery: A comparison. International Journal of Computer Vision.
Springer. https://doi.org/10.1007/s11263-009-0271-8'
chicago: 'Tuytelaars, Tinne, Christoph Lampert, Matthew Blaschko, and Wray Buntine.
“Unsupervised Object Discovery: A Comparison.” International Journal of Computer
Vision. Springer, 2010. https://doi.org/10.1007/s11263-009-0271-8.'
ieee: 'T. Tuytelaars, C. Lampert, M. Blaschko, and W. Buntine, “Unsupervised object
discovery: A comparison,” International Journal of Computer Vision, vol.
88, no. 2. Springer, pp. 284–302, 2010.'
ista: 'Tuytelaars T, Lampert C, Blaschko M, Buntine W. 2010. Unsupervised object
discovery: A comparison. International Journal of Computer Vision. 88(2), 284–302.'
mla: 'Tuytelaars, Tinne, et al. “Unsupervised Object Discovery: A Comparison.” International
Journal of Computer Vision, vol. 88, no. 2, Springer, 2010, pp. 284–302, doi:10.1007/s11263-009-0271-8.'
short: T. Tuytelaars, C. Lampert, M. Blaschko, W. Buntine, International Journal
of Computer Vision 88 (2010) 284–302.
date_created: 2018-12-11T12:04:40Z
date_published: 2010-06-01T00:00:00Z
date_updated: 2021-01-12T07:49:02Z
day: '01'
doi: 10.1007/s11263-009-0271-8
extern: 1
intvolume: ' 88'
issue: '2'
license: https://creativecommons.org/licenses/by-nc/4.0/
month: '06'
page: 284 - 302
publication: International Journal of Computer Vision
publication_status: published
publisher: Springer
publist_id: '2664'
quality_controlled: 0
status: public
title: 'Unsupervised object discovery: A comparison'
tmp:
image: /images/cc_by_nc.png
legal_code_url: https://creativecommons.org/licenses/by-nc/4.0/legalcode
name: Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
short: CC BY-NC (4.0)
type: journal_article
volume: 88
year: '2010'
...
---
_id: '3713'
abstract:
- lang: eng
text: We introduce a method to accelerate the evaluation of object detection cascades
with the help of a divide-and-conquer procedure in the space of candidate regions.
Compared to the exhaustive procedure that thus far is the state-of-the-art for
cascade evaluation, the proposed method requires fewer evaluations of the classifier
functions, thereby speeding up the search. Furthermore, we show how the recently
developed efficient subwindow search (ESS) procedure [11] can be integrated into
the last stage of our method. This allows us to use our method to act not only
as a faster procedure for cascade evaluation, but also as a tool to perform efficient
branch-and-bound object detection with nonlinear quality functions, in particular
kernelized support vector machines. Experiments on the PASCAL VOC 2006 dataset
show an acceleration of more than 50% by our method compared to standard cascade
evaluation.
acknowledgement: |-
Conference Information URL:
http://cvl.umiacs.umd.edu/conferences/cvpr2010/
author:
- first_name: Christoph
full_name: Christoph Lampert
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Lampert C. An efficient divide-and-conquer cascade for nonlinear object detection.
In: IEEE; 2010:1022-1029. doi:10.1109/CVPR.2010.5540107'
apa: 'Lampert, C. (2010). An efficient divide-and-conquer cascade for nonlinear
object detection (pp. 1022–1029). Presented at the CVPR: Computer Vision and Pattern
Recognition, IEEE. https://doi.org/10.1109/CVPR.2010.5540107'
chicago: Lampert, Christoph. “An Efficient Divide-and-Conquer Cascade for Nonlinear
Object Detection,” 1022–29. IEEE, 2010. https://doi.org/10.1109/CVPR.2010.5540107.
ieee: 'C. Lampert, “An efficient divide-and-conquer cascade for nonlinear object
detection,” presented at the CVPR: Computer Vision and Pattern Recognition, 2010,
pp. 1022–1029.'
ista: 'Lampert C. 2010. An efficient divide-and-conquer cascade for nonlinear object
detection. CVPR: Computer Vision and Pattern Recognition, 1022–1029.'
mla: Lampert, Christoph. An Efficient Divide-and-Conquer Cascade for Nonlinear
Object Detection. IEEE, 2010, pp. 1022–29, doi:10.1109/CVPR.2010.5540107.
short: C. Lampert, in:, IEEE, 2010, pp. 1022–1029.
conference:
name: 'CVPR: Computer Vision and Pattern Recognition'
date_created: 2018-12-11T12:04:46Z
date_published: 2010-06-18T00:00:00Z
date_updated: 2021-01-12T07:51:40Z
day: '18'
doi: 10.1109/CVPR.2010.5540107
extern: 1
month: '06'
page: 1022 - 1029
publication_status: published
publisher: IEEE
publist_id: '2643'
quality_controlled: 0
status: public
title: An efficient divide-and-conquer cascade for nonlinear object detection
type: conference
year: '2010'
...
---
_id: '3793'
abstract:
- lang: eng
text: "Recent progress in per-pixel object class labeling of natural images can
be attributed to the use of multiple types of image features and sound statistical
learning approaches. Within the latter, Conditional Random Fields (CRF) are prominently
used for their ability to represent interactions between random variables. Despite
their popularity in computer vision, parameter learning for CRFs has remained
difficult, popular approaches being cross-validation and piecewise training.\r\nIn
this work, we propose a simple yet expressive tree-structured CRF based on a recent
hierarchical image segmentation method. Our model combines and weights multiple
image features within a hierarchical representation and allows simple and efficient
globally-optimal learning of ≈ 10^5 parameters. The tractability of our model allows
us to pose and answer some of the open questions regarding parameter learning
applying to CRF-based approaches. The key findings for learning CRF models are,
from the obvious to the surprising, i) multiple image features always help, ii)
the limiting dimension with respect to current models is the amount of training
data, iii) piecewise training is competitive, iv) current methods for max-margin
training fail for models with many parameters.\r\n"
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Sebastian
full_name: Nowozin, Sebastian
last_name: Nowozin
- first_name: Peter
full_name: Gehler, Peter
last_name: Gehler
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Nowozin S, Gehler P, Lampert C. On parameter learning in CRF-based approaches
to object class image segmentation. In: Vol 6316. Springer; 2010:98-111. doi:10.1007/978-3-642-15567-3_8'
apa: 'Nowozin, S., Gehler, P., & Lampert, C. (2010). On parameter learning in
CRF-based approaches to object class image segmentation (Vol. 6316, pp. 98–111).
Presented at the ECCV: European Conference on Computer Vision, Heraklion, Crete,
Greece: Springer. https://doi.org/10.1007/978-3-642-15567-3_8'
chicago: Nowozin, Sebastian, Peter Gehler, and Christoph Lampert. “On Parameter
Learning in CRF-Based Approaches to Object Class Image Segmentation,” 6316:98–111.
Springer, 2010. https://doi.org/10.1007/978-3-642-15567-3_8.
ieee: 'S. Nowozin, P. Gehler, and C. Lampert, “On parameter learning in CRF-based
approaches to object class image segmentation,” presented at the ECCV: European
Conference on Computer Vision, Heraklion, Crete, Greece, 2010, vol. 6316, pp.
98–111.'
ista: 'Nowozin S, Gehler P, Lampert C. 2010. On parameter learning in CRF-based
approaches to object class image segmentation. ECCV: European Conference on Computer
Vision, LNCS, vol. 6316, 98–111.'
mla: Nowozin, Sebastian, et al. On Parameter Learning in CRF-Based Approaches
to Object Class Image Segmentation. Vol. 6316, Springer, 2010, pp. 98–111,
doi:10.1007/978-3-642-15567-3_8.
short: S. Nowozin, P. Gehler, C. Lampert, in:, Springer, 2010, pp. 98–111.
conference:
end_date: 2010-09-11
location: Heraklion, Crete, Greece
name: 'ECCV: European Conference on Computer Vision'
start_date: 2010-09-05
date_created: 2018-12-11T12:05:12Z
date_published: 2010-11-04T00:00:00Z
date_updated: 2021-01-12T07:52:14Z
day: '04'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1007/978-3-642-15567-3_8
file:
- access_level: open_access
checksum: 3716e10e161f7c714fd17ec193a223c3
content_type: application/pdf
creator: dernst
date_created: 2020-05-19T16:27:34Z
date_updated: 2020-07-14T12:46:16Z
file_id: '7871'
file_name: 2010_ECCV_Nowozin.pdf
file_size: 4087332
relation: main_file
file_date_updated: 2020-07-14T12:46:16Z
has_accepted_license: '1'
intvolume: ' 6316'
language:
- iso: eng
month: '11'
oa: 1
oa_version: Submitted Version
page: 98 - 111
publication_status: published
publisher: Springer
publist_id: '2431'
quality_controlled: '1'
scopus_import: 1
status: public
title: On parameter learning in CRF-based approaches to object class image segmentation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 6316
year: '2010'
...
---
_id: '3699'
abstract:
- lang: eng
text: Kernel Canonical Correlation Analysis (KCCA) is a general technique for subspace
learning that incorporates principal components analysis (PCA) and Fisher linear
discriminant analysis (LDA) as special cases. By finding directions that maximize
correlation, CCA learns representations tied more closely to the underlying process
generating the data and can ignore high-variance noise directions. However,
for data where acquisition in a given modality is expensive or otherwise limited,
CCA may suffer from small sample effects. We propose to use semisupervised Laplacian
regularization to utilize data that are present in only one modality. This approach
is able to find highly correlated directions that also lie along the data manifold,
resulting in a more robust estimate of correlated subspaces. Functional magnetic
resonance imaging (fMRI) acquired data are naturally amenable to subspace techniques
as data are well aligned. fMRI data of the human brain are a particularly interesting
candidate. In this study we implemented various supervised and semi-supervised
versions of CCA on human fMRI data, with regression to single and multivariate
labels (corresponding to video content subjects viewed during the image acquisition).
In each variate condition, the semi-supervised variants of CCA performed better
than the supervised variants, including a supervised variant with Laplacian regularization.
We additionally analyze the weights learned by the regression in order to infer
brain regions that are important to different types of visual processing.
author:
- first_name: Matthew
full_name: Blaschko,Matthew B
last_name: Blaschko
- first_name: Christoph
full_name: Christoph Lampert
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Andreas
full_name: Bartels, Andreas
last_name: Bartels
citation:
ama: Blaschko M, Lampert C, Bartels A. Semi-Supervised Analysis of Human FMRI
Data. Berlin Institute of Technology; 2009.
apa: 'Blaschko, M., Lampert, C., & Bartels, A. (2009). Semi-supervised analysis
of human fMRI data. BBCI: Berlin Brain-Computer Interface Workshop - Advances
in Neurotechnology. Berlin Institute of Technology.'
chicago: 'Blaschko, Matthew, Christoph Lampert, and Andreas Bartels. Semi-Supervised
Analysis of Human FMRI Data. BBCI: Berlin Brain-Computer Interface Workshop
- Advances in Neurotechnology. Berlin Institute of Technology, 2009.'
ieee: M. Blaschko, C. Lampert, and A. Bartels, Semi-supervised analysis of human
fMRI data. Berlin Institute of Technology, 2009.
ista: Blaschko M, Lampert C, Bartels A. 2009. Semi-supervised analysis of human
fMRI data, Berlin Institute of Technology.
mla: 'Blaschko, Matthew, et al. “Semi-Supervised Analysis of Human FMRI Data.” BBCI:
Berlin Brain-Computer Interface Workshop - Advances in Neurotechnology, Berlin
Institute of Technology, 2009.'
short: M. Blaschko, C. Lampert, A. Bartels, Semi-Supervised Analysis of Human FMRI
Data, Berlin Institute of Technology, 2009.
date_created: 2018-12-11T12:04:41Z
date_published: 2009-07-10T00:00:00Z
date_updated: 2019-04-26T07:22:33Z
day: '10'
extern: 1
main_file_link:
- open_access: '0'
url: http://pubman.mpdl.mpg.de/pubman/faces/viewItemOverviewPage.jsp?itemId=escidoc:1789281
month: '07'
publication: 'BBCI: Berlin Brain-Computer Interface Workshop - Advances in Neurotechnology'
publication_status: published
publisher: Berlin Institute of Technology
publist_id: '2661'
quality_controlled: 0
status: public
title: Semi-supervised analysis of human fMRI data
type: conference_poster
year: '2009'
...
---
_id: '3703'
abstract:
- lang: eng
text: Recent research has shown that the use of contextual cues significantly improves
performance in sliding window type localization systems. In this work, we propose
a method that incorporates both global and local context information through appropriately
defined kernel functions. In particular, we make use of a weighted combination
of kernels defined over local spatial regions, as well as a global context kernel.
The relative importance of the context contributions is learned automatically,
and the resulting discriminant function is of a form such that localization at
test time can be solved efficiently using a branch and bound optimization scheme.
By specifying context directly with a kernel learning approach, we achieve high
localization accuracy with a simple and efficient representation. This is in contrast
to other systems that incorporate context for which expensive inference needs
to be done at test time. We show experimentally on the PASCAL VOC datasets that
the inclusion of context can significantly improve localization performance, provided
the relative contributions of context cues are learned appropriately.
acknowledgement: The research leading to these results has received funding from the
European Research Council under the European Community’s Seventh Framework Programme
(FP7/2007- 2013) / ERC grant agreement no. 228180. This work was funded in part
by the EC project CLASS, IST 027978, and the PASCAL2 network of excellence. The
first author is supported by the Royal Academy of Engineering through a Newton International
Fellowship.
alternative_title:
- Proceedings of the BMVC
author:
- first_name: Matthew
full_name: Blaschko,Matthew B
last_name: Blaschko
- first_name: Christoph
full_name: Christoph Lampert
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Blaschko M, Lampert C. Object localization with global and local context kernels.
In: BMVA Press; 2009:1-11. doi:10.5244/C.23.63'
apa: 'Blaschko, M., & Lampert, C. (2009). Object localization with global and
local context kernels (pp. 1–11). Presented at the BMVC: British Machine Vision
Conference, BMVA Press. https://doi.org/10.5244/C.23.63'
chicago: Blaschko, Matthew, and Christoph Lampert. “Object Localization with Global
and Local Context Kernels,” 1–11. BMVA Press, 2009. https://doi.org/10.5244/C.23.63.
ieee: 'M. Blaschko and C. Lampert, “Object localization with global and local context
kernels,” presented at the BMVC: British Machine Vision Conference, 2009, pp.
1–11.'
ista: 'Blaschko M, Lampert C. 2009. Object localization with global and local context
kernels. BMVC: British Machine Vision Conference, Proceedings of the BMVC, , 1–11.'
mla: Blaschko, Matthew, and Christoph Lampert. Object Localization with Global
and Local Context Kernels. BMVA Press, 2009, pp. 1–11, doi:10.5244/C.23.63.
short: M. Blaschko, C. Lampert, in:, BMVA Press, 2009, pp. 1–11.
conference:
name: 'BMVC: British Machine Vision Conference'
date_created: 2018-12-11T12:04:42Z
date_published: 2009-09-10T00:00:00Z
date_updated: 2021-01-12T07:51:36Z
day: '10'
doi: 10.5244/C.23.63
extern: 1
main_file_link:
- open_access: '0'
url: http://www.bmva.org/bmvc/2009/Papers/Paper228/Paper228.pdf
month: '09'
page: 1 - 11
publication_status: published
publisher: BMVA Press
publist_id: '2655'
quality_controlled: 0
status: public
title: Object localization with global and local context kernels
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: conference
year: '2009'
...
---
_id: '3704'
abstract:
- lang: eng
text: We study the problem of object classification when training and test classes
are disjoint, i.e. no training examples of the target classes are available. This
setup has hardly been studied in computer vision research, but it is the rule
rather than the exception, because the world contains tens of thousands of different
object classes and for only a very few of them, image collections have been formed
and annotated with suitable class labels. In this paper, we tackle the problem
by introducing attribute-based classification. It performs object detection based
on a human-specified high-level description of the target objects instead of training
images. The description consists of arbitrary semantic attributes, like shape,
color or even geographic information. Because such properties transcend the specific
learning task at hand, they can be pre-learned, e.g. from image datasets unrelated
to the current task. Afterwards, new classes can be detected based on their attribute
representation, without the need for a new training phase. In order to evaluate
our method and to facilitate research in this area, we have assembled a new large-scale
dataset, “Animals with Attributes”, of over 30,000 animal images that
match the 50 classes in Osherson’s classic table of how strongly humans associate
85 semantic attributes with animal classes. Our experiments show that by using
an attribute layer it is indeed possible to build a learning object detection
system that does not require any training images of the target classes.
acknowledgement: This work was funded in part by the EC project CLASS, IST 027978,
and the PASCAL2 network of excellence.
author:
- first_name: Christoph
full_name: Christoph Lampert
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Hannes
full_name: Nickisch,Hannes
last_name: Nickisch
- first_name: Stefan
full_name: Harmeling,Stefan
last_name: Harmeling
citation:
ama: 'Lampert C, Nickisch H, Harmeling S. Learning to detect unseen object classes
by between-class attribute transfer. In: IEEE; 2009:951-958. doi:10.1109/CVPR.2009.5206594'
apa: 'Lampert, C., Nickisch, H., & Harmeling, S. (2009). Learning to detect
unseen object classes by between-class attribute transfer (pp. 951–958). Presented
at the CVPR: Computer Vision and Pattern Recognition, IEEE. https://doi.org/10.1109/CVPR.2009.5206594'
chicago: Lampert, Christoph, Hannes Nickisch, and Stefan Harmeling. “Learning to
Detect Unseen Object Classes by Between-Class Attribute Transfer,” 951–58. IEEE,
2009. https://doi.org/10.1109/CVPR.2009.5206594.
ieee: 'C. Lampert, H. Nickisch, and S. Harmeling, “Learning to detect unseen object
classes by between-class attribute transfer,” presented at the CVPR: Computer
Vision and Pattern Recognition, 2009, pp. 951–958.'
ista: 'Lampert C, Nickisch H, Harmeling S. 2009. Learning to detect unseen object
classes by between-class attribute transfer. CVPR: Computer Vision and Pattern
Recognition, 951–958.'
mla: Lampert, Christoph, et al. Learning to Detect Unseen Object Classes by Between-Class
Attribute Transfer. IEEE, 2009, pp. 951–58, doi:10.1109/CVPR.2009.5206594.
short: C. Lampert, H. Nickisch, S. Harmeling, in:, IEEE, 2009, pp. 951–958.
conference:
name: 'CVPR: Computer Vision and Pattern Recognition'
date_created: 2018-12-11T12:04:43Z
date_published: 2009-06-20T00:00:00Z
date_updated: 2021-01-12T07:51:36Z
day: '20'
doi: 10.1109/CVPR.2009.5206594
extern: 1
month: '06'
page: 951 - 958
publication_status: published
publisher: IEEE
publist_id: '2652'
quality_controlled: 0
status: public
title: Learning to detect unseen object classes by between-class attribute transfer
type: conference
year: '2009'
...
---
_id: '3715'
abstract:
- lang: eng
text: High-speed smooth and accurate visual tracking of objects in arbitrary, unstructured
environments is essential for robotics and human motion analysis. However, building
a system that can adapt to arbitrary objects and a wide range of lighting conditions
is a challenging problem, especially if hard real-time constraints apply like
in robotics scenarios. In this work, we introduce a method for learning a discriminative
object tracking system based on the recent structured regression framework for
object localization. Using a kernel function that allows fast evaluation on the
GPU, the resulting system can process video streams at speeds of 100 frames per
second or more. Consecutive frames in high speed video sequences are typically
very redundant, and for training an object detection system, it is sufficient
to have training labels from only a subset of all images. We propose an active
learning method that selects training examples in a data-driven way, thereby minimizing
the required number of training labels. Experiments on realistic data show that
the active learning is superior to previously used methods for dataset subsampling
for this task.
acknowledgement: |-
This work was funded in part by the EU project CLASS, IST 027978.
Conference Information URL: http://www.optecnet.de/veranstaltungen/2009/09/dagm-2009/
alternative_title:
- LNCS
author:
- first_name: Christoph
full_name: Christoph Lampert
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Jan
full_name: Peters, Jan
last_name: Peters
citation:
ama: 'Lampert C, Peters J. Active structured learning for high-speed object detection.
In: Vol 5748. Springer; 2009:221-231. doi:10.1007/978-3-642-03798-6_23'
apa: 'Lampert, C., & Peters, J. (2009). Active structured learning for high-speed
object detection (Vol. 5748, pp. 221–231). Presented at the DAGM: German Association
For Pattern Recognition, Springer. https://doi.org/10.1007/978-3-642-03798-6_23'
chicago: Lampert, Christoph, and Jan Peters. “Active Structured Learning for High-Speed
Object Detection,” 5748:221–31. Springer, 2009. https://doi.org/10.1007/978-3-642-03798-6_23.
ieee: 'C. Lampert and J. Peters, “Active structured learning for high-speed object
detection,” presented at the DAGM: German Association For Pattern Recognition,
2009, vol. 5748, pp. 221–231.'
ista: 'Lampert C, Peters J. 2009. Active structured learning for high-speed object
detection. DAGM: German Association For Pattern Recognition, LNCS, vol. 5748,
221–231.'
mla: Lampert, Christoph, and Jan Peters. Active Structured Learning for High-Speed
Object Detection. Vol. 5748, Springer, 2009, pp. 221–31, doi:10.1007/978-3-642-03798-6_23.
short: C. Lampert, J. Peters, in:, Springer, 2009, pp. 221–231.
conference:
name: 'DAGM: German Association For Pattern Recognition'
date_created: 2018-12-11T12:04:46Z
date_published: 2009-10-07T00:00:00Z
date_updated: 2021-01-12T07:51:41Z
day: '07'
doi: 10.1007/978-3-642-03798-6_23
extern: 1
intvolume: ' 5748'
month: '10'
page: 221 - 231
publication_status: published
publisher: Springer
publist_id: '2642'
quality_controlled: 0
status: public
title: Active structured learning for high-speed object detection
type: conference
volume: 5748
year: '2009'
...
---
_id: '3717'
abstract:
- lang: eng
text: We introduce RTblob, an open-source real-time vision system for 3D object
detection that achieves over 200 Hz tracking speed with only off-the-shelf hardware
components. It allows fast and accurate tracking of colored objects in 3D without
expensive and often custom-built hardware, instead making use of the PC graphics
cards for the necessary image processing operations.
acknowledgement: 'IEEE Workshop URL: http://humanoidscv.ime.cmc.osaka-u.ac.jp/'
author:
- first_name: Christoph
full_name: Christoph Lampert
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Jan
full_name: Peters, Jan
last_name: Peters
citation:
ama: Lampert C, Peters J. A High-Speed Object Tracker from off-the-Shelf Components.
IEEE; 2009.
apa: 'Lampert, C., & Peters, J. (2009). A high-speed object tracker from
off-the-shelf components. ICCV: International Conference on Computer Vision.
IEEE.'
chicago: 'Lampert, Christoph, and Jan Peters. A High-Speed Object Tracker from
off-the-Shelf Components. ICCV: International Conference on Computer Vision.
IEEE, 2009.'
ieee: C. Lampert and J. Peters, A high-speed object tracker from off-the-shelf
components. IEEE, 2009.
ista: Lampert C, Peters J. 2009. A high-speed object tracker from off-the-shelf
components, IEEE.
mla: 'Lampert, Christoph, and Jan Peters. “A High-Speed Object Tracker from off-the-Shelf
Components.” ICCV: International Conference on Computer Vision, IEEE, 2009.'
short: C. Lampert, J. Peters, A High-Speed Object Tracker from off-the-Shelf Components,
IEEE, 2009.
date_created: 2018-12-11T12:04:47Z
date_published: 2009-09-27T00:00:00Z
date_updated: 2020-07-14T12:46:14Z
day: '27'
extern: 1
main_file_link:
- open_access: '0'
url: http://pubman.mpdl.mpg.de/pubman/faces/viewItemOverviewPage.jsp?itemId=escidoc:1789154
month: '09'
publication: 'ICCV: International Conference on Computer Vision'
publication_status: published
publisher: IEEE
publist_id: '2640'
quality_controlled: 0
status: public
title: A high-speed object tracker from off-the-shelf components
tmp:
image: /images/cc_by.png
legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
short: CC BY (4.0)
type: conference_poster
year: '2009'
...
---
_id: '3696'
abstract:
- lang: eng
text: Discriminative techniques, such as conditional random fields (CRFs) or structure
aware maximum-margin techniques (maximum margin Markov networks (M3N), structured
output support vector machines (S-SVM)), are state-of-the-art in the prediction
of structured data. However, to achieve good results these techniques require
complete and reliable ground truth, which is not always available in realistic
problems. Furthermore, training either CRFs or margin-based techniques is computationally
costly, because the runtime of current training methods depends not only on the
size of the training set but also on properties of the output space to which the
training samples are assigned. We propose an alternative model for structured
output prediction, Joint Kernel Support Estimation (JKSE), which is rather generative
in nature as it relies on estimating the joint probability density of samples
and labels in the training set. This makes it tolerant against incomplete or incorrect
labels and also opens the possibility of learning in situations where more than
one output label can be considered correct. At the same time, we avoid typical
problems of generative models as we do not attempt to learn the full joint probability
distribution, but we model only its support in a joint reproducing kernel Hilbert
space. As a consequence, JKSE training is possible by an adaption of the classical
one-class SVM procedure. The resulting optimization problem is convex and efficiently
solvable even with tens of thousands of training examples. A particular advantage
of JKSE is that the training speed depends only on the size of the training set,
and not on the total size of the label space. No inference step during training
is required (as it is for M3N and S-SVM), nor do we have to calculate a partition
function (as CRFs do). Experiments on realistic data show that, for suitable kernel functions,
our method works efficiently and robustly in situations that discriminative techniques
have problems with or that are computationally infeasible for them.
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Matthew
full_name: Blaschko, Matthew B
last_name: Blaschko
citation:
ama: Lampert C, Blaschko M. Structured prediction by joint kernel support estimation.
Machine Learning. 2009;77(2-3):249-269. doi:10.1007/s10994-009-5111-0
apa: Lampert, C., & Blaschko, M. (2009). Structured prediction by joint kernel
support estimation. Machine Learning. Springer. https://doi.org/10.1007/s10994-009-5111-0
chicago: Lampert, Christoph, and Matthew Blaschko. “Structured Prediction by Joint
Kernel Support Estimation.” Machine Learning. Springer, 2009. https://doi.org/10.1007/s10994-009-5111-0.
ieee: C. Lampert and M. Blaschko, “Structured prediction by joint kernel support
estimation,” Machine Learning, vol. 77, no. 2–3. Springer, pp. 249–269,
2009.
ista: Lampert C, Blaschko M. 2009. Structured prediction by joint kernel support
estimation. Machine Learning. 77(2–3), 249–269.
mla: Lampert, Christoph, and Matthew Blaschko. “Structured Prediction by Joint Kernel
Support Estimation.” Machine Learning, vol. 77, no. 2–3, Springer, 2009,
pp. 249–69, doi:10.1007/s10994-009-5111-0.
short: C. Lampert, M. Blaschko, Machine Learning 77 (2009) 249–269.
date_created: 2018-12-11T12:04:40Z
date_published: 2009-04-07T00:00:00Z
date_updated: 2021-01-12T07:49:01Z
day: '07'
doi: 10.1007/s10994-009-5111-0
extern: 1
intvolume: ' 77'
issue: 2-3
month: '04'
page: 249 - 269
publication: Machine Learning
publication_status: published
publisher: Springer
publist_id: '2663'
quality_controlled: 0
status: public
title: Structured prediction by joint kernel support estimation
tmp:
image: /images/cc_by_nc.png
legal_code_url: https://creativecommons.org/licenses/by-nc/4.0/legalcode
name: Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
short: CC BY-NC (4.0)
type: journal_article
volume: 77
year: '2009'
...
---
_id: '3690'
abstract:
- lang: eng
text: An important cue to high level scene understanding is to analyze the objects
in the scene and their behavior and interactions. In this paper, we study the
problem of classification of activities in videos, as this is an integral component
of any scene understanding system, and present a novel approach for recognizing
human action categories in videos by combining information from appearance and
motion of human body parts. Our approach is based on tracking human body parts
by using mixture particle filters and then clustering the particles using local
non-parametric clustering, hence associating a local set of particles to each
cluster mode. The trajectory of these cluster modes provides the "motion"
information and the "appearance" information is provided by the statistical
information about the relative motion of these local sets of particles over a number
of frames. Later we use a "Bag of Words" model to build one histogram
per video sequence from the set of these robust appearance and motion descriptors.
These histograms provide us with characteristic information that helps us discriminate
among various human actions and ultimately leads to a better understanding
of the complete scene. We tested our approach on the standard KTH and Weizmann
human action datasets and the results were comparable to the state of the art
methods. Additionally our approach is able to distinguish between activities that
involve the motion of the complete body from those in which only certain body parts
move. In other words, our method discriminates well between activities with "global
body motion" like running, jogging etc. and "local motion" like
waving, boxing etc.
author:
- first_name: Paramveer
full_name: Dhillon, Paramveer S
last_name: Dhillon
- first_name: Sebastian
full_name: Nowozin, Sebastian
last_name: Nowozin
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Dhillon P, Nowozin S, Lampert C. Combining appearance and motion for human
action classification in videos. In: IEEE; 2009:22-29. doi:10.1109/CVPRW.2009.5204237'
apa: 'Dhillon, P., Nowozin, S., & Lampert, C. (2009). Combining appearance and
motion for human action classification in videos (pp. 22–29). Presented at the
CVPR: Computer Vision and Pattern Recognition, IEEE. https://doi.org/10.1109/CVPRW.2009.5204237'
chicago: Dhillon, Paramveer, Sebastian Nowozin, and Christoph Lampert. “Combining
Appearance and Motion for Human Action Classification in Videos,” 22–29. IEEE,
2009. https://doi.org/10.1109/CVPRW.2009.5204237.
ieee: 'P. Dhillon, S. Nowozin, and C. Lampert, “Combining appearance and motion
for human action classification in videos,” presented at the CVPR: Computer Vision
and Pattern Recognition, 2009, no. 174, pp. 22–29.'
ista: 'Dhillon P, Nowozin S, Lampert C. 2009. Combining appearance and motion for
human action classification in videos. CVPR: Computer Vision and Pattern Recognition,
22–29.'
mla: Dhillon, Paramveer, et al. Combining Appearance and Motion for Human Action
Classification in Videos. no. 174, IEEE, 2009, pp. 22–29, doi:10.1109/CVPRW.2009.5204237.
short: P. Dhillon, S. Nowozin, C. Lampert, in:, IEEE, 2009, pp. 22–29.
conference:
name: 'CVPR: Computer Vision and Pattern Recognition'
date_created: 2018-12-11T12:04:38Z
date_published: 2009-01-01T00:00:00Z
date_updated: 2021-01-12T07:48:59Z
day: '01'
doi: 10.1109/CVPRW.2009.5204237
extern: 1
issue: '174'
month: '01'
page: 22 - 29
publication_status: published
publisher: IEEE
publist_id: '2675'
quality_controlled: 0
status: public
title: Combining appearance and motion for human action classification in videos
type: conference
year: '2009'
...
---
_id: '3710'
abstract:
- lang: eng
text: Most successful object recognition systems rely on binary classification,
deciding only if an object is present or not, but not providing information on
the actual object location. To estimate the object’s location, one can take a
sliding window approach, but this strongly increases the computational cost because
the classifier or similarity function has to be evaluated over a large set of
candidate subwindows. In this paper, we propose a simple yet powerful branch and
bound scheme that allows efficient maximization of a large class of quality functions
over all possible subimages. It converges to a globally optimal solution typically
in linear or even sublinear time, in contrast to the quadratic scaling of exhaustive
or sliding window search. We show how our method is applicable to different object
detection and image retrieval scenarios. The achieved speedup allows the use of
classifiers for localization that formerly were considered too slow for this task,
such as SVMs with a spatial pyramid kernel or nearest-neighbor classifiers based
on the chi^2 distance. We demonstrate state-of-the-art localization performance
of the resulting systems on the UIUC Cars data set, the PASCAL VOC 2006 data set,
and in the PASCAL VOC 2007 competition.
acknowledgement: 'This work was funded in part by the EU projects CLASS, IST 027978,
and PerAct, EST 504321. '
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
- first_name: Matthew
full_name: Blaschko, Matthew B
last_name: Blaschko
- first_name: Thomas
full_name: Hofmann, Thomas
last_name: Hofmann
citation:
ama: 'Lampert C, Blaschko M, Hofmann T. Efficient subwindow search: A branch and
bound framework for object localization. IEEE Transactions on Pattern Analysis
and Machine Intelligence. 2009;31(12):2129-2142. doi:10.1109/TPAMI.2009.144'
apa: 'Lampert, C., Blaschko, M., & Hofmann, T. (2009). Efficient subwindow search:
A branch and bound framework for object localization. IEEE Transactions on
Pattern Analysis and Machine Intelligence. IEEE. https://doi.org/10.1109/TPAMI.2009.144'
chicago: 'Lampert, Christoph, Matthew Blaschko, and Thomas Hofmann. “Efficient Subwindow
Search: A Branch and Bound Framework for Object Localization.” IEEE Transactions
on Pattern Analysis and Machine Intelligence. IEEE, 2009. https://doi.org/10.1109/TPAMI.2009.144.'
ieee: 'C. Lampert, M. Blaschko, and T. Hofmann, “Efficient subwindow search: A branch
and bound framework for object localization,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 31, no. 12. IEEE, pp. 2129–2142, 2009.'
ista: 'Lampert C, Blaschko M, Hofmann T. 2009. Efficient subwindow search: A branch
and bound framework for object localization. IEEE Transactions on Pattern Analysis
and Machine Intelligence. 31(12), 2129–2142.'
mla: 'Lampert, Christoph, et al. “Efficient Subwindow Search: A Branch and Bound
Framework for Object Localization.” IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 31, no. 12, IEEE, 2009, pp. 2129–42, doi:10.1109/TPAMI.2009.144.'
short: C. Lampert, M. Blaschko, T. Hofmann, IEEE Transactions on Pattern Analysis
and Machine Intelligence 31 (2009) 2129–2142.
date_created: 2018-12-11T12:04:45Z
date_published: 2009-12-01T00:00:00Z
date_updated: 2021-01-12T07:51:39Z
day: '01'
doi: 10.1109/TPAMI.2009.144
extern: 1
intvolume: ' 31'
issue: '12'
main_file_link:
- open_access: '0'
url: http://www2.computer.org/portal/web/csdl/doi/10.1109/TPAMI.2009.144
month: '12'
page: 2129 - 2142
publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
publication_status: published
publisher: IEEE
publist_id: '2648'
quality_controlled: 0
status: public
title: 'Efficient subwindow search: A branch and bound framework for object localization'
type: journal_article
volume: 31
year: '2009'
...
---
_id: '3711'
abstract:
- lang: eng
text: An important cue to high level scene understanding is to analyze the objects
in the scene and their behavior and interactions. In this paper, we study the
problem of classification of activities in videos, as this is an integral component
of any scene understanding system, and present a novel approach for recognizing
human action categories in videos by combining information from appearance and
motion of human body parts. Our approach is based on tracking human body parts
by using mixture particle filters and then clustering the particles using local
non-parametric clustering, hence associating a local set of particles to each
cluster mode. The trajectory of these cluster modes provides the "motion"
information, and the "appearance" information is provided by the statistical
information about the relative motion of these local sets of particles over a number
of frames. Later we use a "Bag of Words" model to build one histogram
per video sequence from the set of these robust appearance and motion descriptors.
These histograms provide us with characteristic information that helps us discriminate
among various human actions and ultimately leads to a better understanding
of the complete scene. We tested our approach on the standard KTH and Weizmann
human action datasets and the results were comparable to the state of the art
methods. Additionally our approach is able to distinguish between activities that
involve the motion of the complete body from those in which only certain body parts
move. In other words, our method discriminates well between activities with "global
body motion" like running, jogging etc. and "local motion" like waving,
boxing etc.
author:
- first_name: Paramveer
full_name: Dhillon, Paramveer S
last_name: Dhillon
- first_name: Sebastian
full_name: Nowozin, Sebastian
last_name: Nowozin
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Dhillon P, Nowozin S, Lampert C. Combining appearance and motion for human
action classification in videos. In: IEEE; 2009:22-29. doi:10.1109/CVPRW.2009.5204237'
apa: 'Dhillon, P., Nowozin, S., & Lampert, C. (2009). Combining appearance and
motion for human action classification in videos (pp. 22–29). Presented at the
CVPR: Computer Vision and Pattern Recognition, IEEE. https://doi.org/10.1109/CVPRW.2009.5204237'
chicago: Dhillon, Paramveer, Sebastian Nowozin, and Christoph Lampert. “Combining
Appearance and Motion for Human Action Classification in Videos,” 22–29. IEEE,
2009. https://doi.org/10.1109/CVPRW.2009.5204237.
ieee: 'P. Dhillon, S. Nowozin, and C. Lampert, “Combining appearance and motion
for human action classification in videos,” presented at the CVPR: Computer Vision
and Pattern Recognition, 2009, pp. 22–29.'
ista: 'Dhillon P, Nowozin S, Lampert C. 2009. Combining appearance and motion for
human action classification in videos. CVPR: Computer Vision and Pattern Recognition,
22–29.'
mla: Dhillon, Paramveer, et al. Combining Appearance and Motion for Human Action
Classification in Videos. IEEE, 2009, pp. 22–29, doi:10.1109/CVPRW.2009.5204237.
short: P. Dhillon, S. Nowozin, C. Lampert, in:, IEEE, 2009, pp. 22–29.
conference:
name: 'CVPR: Computer Vision and Pattern Recognition'
date_created: 2018-12-11T12:04:45Z
date_published: 2009-08-18T00:00:00Z
date_updated: 2021-01-12T07:51:39Z
day: '18'
doi: 10.1109/CVPRW.2009.5204237
extern: 1
main_file_link:
- open_access: '0'
url: http://www.nowozin.net/sebastian/papers/dhillon2008actionclassification.pdf
month: '08'
page: 22 - 29
publication_status: published
publisher: IEEE
publist_id: '2645'
quality_controlled: 0
status: public
title: Combining appearance and motion for human action classification in videos
type: conference
year: '2009'
...
---
_id: '3707'
abstract:
- lang: eng
text: Over the last years, kernel methods have established themselves as powerful
tools for computer vision researchers as well as for practitioners. In this tutorial,
we give an introduction to kernel methods in computer vision from a geometric
perspective, introducing not only the ubiquitous support vector machines, but
also less known techniques for regression, dimensionality reduction, outlier detection
and clustering. Additionally, we give an outlook on very recent, non-classical
techniques for the prediction of structured data, for the estimation of statistical
dependency and for learning the kernel function itself. All methods are illustrated
with examples of successful application from the recent computer vision research
literature.
alternative_title:
- Foundations and Trends® in Computer Graphics and Vision
article_processing_charge: No
author:
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: Lampert C. Kernel Methods in Computer Vision. Vol 4. now publishers;
2009. doi:10.1561/0600000027
apa: Lampert, C. (2009). Kernel Methods in Computer Vision (Vol. 4). now
publishers. https://doi.org/10.1561/0600000027
chicago: Lampert, Christoph. Kernel Methods in Computer Vision. Vol. 4. now
publishers, 2009. https://doi.org/10.1561/0600000027.
ieee: C. Lampert, Kernel Methods in Computer Vision, vol. 4. now publishers,
2009.
ista: Lampert C. 2009. Kernel Methods in Computer Vision, now publishers, 112p.
mla: Lampert, Christoph. Kernel Methods in Computer Vision. Vol. 4, now publishers,
2009, doi:10.1561/0600000027.
short: C. Lampert, Kernel Methods in Computer Vision, now publishers, 2009.
date_created: 2018-12-11T12:04:44Z
date_published: 2009-09-03T00:00:00Z
date_updated: 2021-12-21T15:38:43Z
day: '03'
doi: 10.1561/0600000027
extern: '1'
intvolume: ' 4'
language:
- iso: eng
month: '09'
oa_version: None
page: '112'
publication_identifier:
eisbn:
- 978-1-60198-269-8
isbn:
- 978-1-60198-268-1
publication_status: published
publisher: now publishers
publist_id: '2651'
quality_controlled: '1'
status: public
title: Kernel Methods in Computer Vision
type: book
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
volume: 4
year: '2009'
...
---
_id: '3708'
abstract:
- lang: eng
text: Markov random field (MRF, CRF) models are popular in computer vision. However,
in order to be computationally tractable they are limited to incorporate only
local interactions and cannot model global properties, such as connectedness,
which is a potentially useful high-level prior for object segmentation. In this
work, we overcome this limitation by deriving a potential function that enforces
the output labeling to be connected and that can naturally be used in the framework
of recent MAP-MRF LP relaxations. Using techniques from polyhedral combinatorics,
we show that a provably tight approximation to the MAP solution of the resulting
MRF can still be found efficiently by solving a sequence of max-flow problems.
The efficiency of the inference procedure also allows us to learn the parameters
of an MRF with global connectivity potentials by means of a cutting plane algorithm.
We experimentally evaluate our algorithm on both synthetic data and on the challenging
segmentation task of the PASCAL VOC 2008 data set. We show that in both cases
the addition of a connectedness prior significantly reduces the segmentation error.
acknowledgement: |-
Conference Information URL:
http://www.cvpr2009.org/
author:
- first_name: Sebastian
full_name: Nowozin, Sebastian
last_name: Nowozin
- first_name: Christoph
full_name: Lampert, Christoph
id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
last_name: Lampert
orcid: 0000-0001-8622-7887
citation:
ama: 'Nowozin S, Lampert C. Global connectivity potentials for random field models.
In: IEEE; 2009:818-825. doi:10.1109/CVPR.2009.5206567'
apa: 'Nowozin, S., & Lampert, C. (2009). Global connectivity potentials for
random field models (pp. 818–825). Presented at the CVPR: Computer Vision and
Pattern Recognition, IEEE. https://doi.org/10.1109/CVPR.2009.5206567'
chicago: Nowozin, Sebastian, and Christoph Lampert. “Global Connectivity Potentials
for Random Field Models,” 818–25. IEEE, 2009. https://doi.org/10.1109/CVPR.2009.5206567.
ieee: 'S. Nowozin and C. Lampert, “Global connectivity potentials for random field
models,” presented at the CVPR: Computer Vision and Pattern Recognition, 2009,
pp. 818–825.'
ista: 'Nowozin S, Lampert C. 2009. Global connectivity potentials for random field
models. CVPR: Computer Vision and Pattern Recognition, 818–825.'
mla: Nowozin, Sebastian, and Christoph Lampert. Global Connectivity Potentials
for Random Field Models. IEEE, 2009, pp. 818–25, doi:10.1109/CVPR.2009.5206567.
short: S. Nowozin, C. Lampert, in:, IEEE, 2009, pp. 818–825.
conference:
name: 'CVPR: Computer Vision and Pattern Recognition'
date_created: 2018-12-11T12:04:44Z
date_published: 2009-06-20T00:00:00Z
date_updated: 2021-01-12T07:51:38Z
day: '20'
doi: 10.1109/CVPR.2009.5206567
extern: 1
month: '06'
page: 818 - 825
publication_status: published
publisher: IEEE
publist_id: '2649'
quality_controlled: 0
status: public
title: Global connectivity potentials for random field models
type: conference
year: '2009'
...