---
_id: '7411'
abstract:
- lang: eng
text: "Proofs of sequential work (PoSW) are proof systems where a prover, upon receiving
a statement χ and a time parameter T, computes a proof ϕ(χ,T) which is efficiently
and publicly verifiable. The proof can be computed in T sequential steps, but
not much less, even by a malicious party having large parallelism. A PoSW thus
serves as a proof that T units of time have passed since χ was received.\r\n\r\nPoSW
were introduced by Mahmoody, Moran and Vadhan [MMV11]; a simple and practical
construction was only recently proposed by Cohen and Pietrzak [CP18].\r\n\r\nIn
this work we construct a new simple PoSW in the random permutation model which
is almost as simple and efficient as [CP18] but conceptually very different. Whereas
the structure underlying [CP18] is a hash tree, our construction is based on skip
lists and has the interesting property that computing the PoSW is a reversible
computation.\r\nThe fact that the construction is reversible can potentially be
used for new applications like constructing proofs of replication. We also show
how to “embed” the sloth function of Lenstra and Wesolowski [LW17] into our PoSW
to get a PoSW where one additionally can verify correctness of the output much
more efficiently than recomputing it (though recent constructions of “verifiable
delay functions” subsume most of the applications this construction was aiming
at)."
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Hamza M
full_name: Abusalah, Hamza M
id: 40297222-F248-11E8-B48F-1D18A9856A87
last_name: Abusalah
- first_name: Chethan
full_name: Kamath Hosdurg, Chethan
id: 4BD3F30E-F248-11E8-B48F-1D18A9856A87
last_name: Kamath Hosdurg
- first_name: Karen
full_name: Klein, Karen
id: 3E83A2F8-F248-11E8-B48F-1D18A9856A87
last_name: Klein
- first_name: Krzysztof Z
full_name: Pietrzak, Krzysztof Z
id: 3E04A7AA-F248-11E8-B48F-1D18A9856A87
last_name: Pietrzak
orcid: 0000-0002-9139-1654
- first_name: Michael
full_name: Walter, Michael
id: 488F98B0-F248-11E8-B48F-1D18A9856A87
last_name: Walter
orcid: 0000-0003-3186-2482
citation:
ama: 'Abusalah HM, Kamath Hosdurg C, Klein K, Pietrzak KZ, Walter M. Reversible
proofs of sequential work. In: Advances in Cryptology – EUROCRYPT 2019.
Vol 11477. Springer International Publishing; 2019:277-291. doi:10.1007/978-3-030-17656-3_10'
apa: 'Abusalah, H. M., Kamath Hosdurg, C., Klein, K., Pietrzak, K. Z., & Walter,
M. (2019). Reversible proofs of sequential work. In Advances in Cryptology
– EUROCRYPT 2019 (Vol. 11477, pp. 277–291). Darmstadt, Germany: Springer International
Publishing. https://doi.org/10.1007/978-3-030-17656-3_10'
chicago: Abusalah, Hamza M, Chethan Kamath Hosdurg, Karen Klein, Krzysztof Z Pietrzak,
and Michael Walter. “Reversible Proofs of Sequential Work.” In Advances in
Cryptology – EUROCRYPT 2019, 11477:277–91. Springer International Publishing,
2019. https://doi.org/10.1007/978-3-030-17656-3_10.
ieee: H. M. Abusalah, C. Kamath Hosdurg, K. Klein, K. Z. Pietrzak, and M. Walter,
“Reversible proofs of sequential work,” in Advances in Cryptology – EUROCRYPT
2019, Darmstadt, Germany, 2019, vol. 11477, pp. 277–291.
ista: Abusalah HM, Kamath Hosdurg C, Klein K, Pietrzak KZ, Walter M. 2019. Reversible
proofs of sequential work. Advances in Cryptology – EUROCRYPT 2019. International
Conference on the Theory and Applications of Cryptographic Techniques, LNCS, vol.
11477, 277–291.
mla: Abusalah, Hamza M., et al. “Reversible Proofs of Sequential Work.” Advances
in Cryptology – EUROCRYPT 2019, vol. 11477, Springer International Publishing,
2019, pp. 277–91, doi:10.1007/978-3-030-17656-3_10.
short: H.M. Abusalah, C. Kamath Hosdurg, K. Klein, K.Z. Pietrzak, M. Walter, in:,
Advances in Cryptology – EUROCRYPT 2019, Springer International Publishing, 2019,
pp. 277–291.
conference:
end_date: 2019-05-23
location: Darmstadt, Germany
name: International Conference on the Theory and Applications of Cryptographic Techniques
start_date: 2019-05-19
date_created: 2020-01-30T09:26:14Z
date_published: 2019-04-24T00:00:00Z
date_updated: 2023-09-06T15:26:06Z
day: '24'
department:
- _id: KrPi
doi: 10.1007/978-3-030-17656-3_10
ec_funded: 1
external_id:
isi:
- '000483516200010'
intvolume: ' 11477'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://eprint.iacr.org/2019/252
month: '04'
oa: 1
oa_version: Submitted Version
page: 277-291
project:
- _id: 258AA5B2-B435-11E9-9278-68D0E5697425
call_identifier: H2020
grant_number: '682815'
name: Teaching Old Crypto New Tricks
publication: Advances in Cryptology – EUROCRYPT 2019
publication_identifier:
eissn:
- 1611-3349
isbn:
- '9783030176556'
- '9783030176563'
issn:
- 0302-9743
publication_status: published
publisher: Springer International Publishing
quality_controlled: '1'
scopus_import: '1'
status: public
title: Reversible proofs of sequential work
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 11477
year: '2019'
...
---
_id: '7406'
abstract:
- lang: eng
text: "Background\r\nSynaptic vesicles (SVs) are an integral part of the neurotransmission
machinery, and isolation of SVs from their host neuron is necessary to reveal
their most fundamental biochemical and functional properties in in vitro assays.
Isolated SVs from neurons that have been genetically engineered, e.g. to introduce
genetically encoded indicators, are not readily available but would permit new
insights into SV structure and function. Furthermore, it is unclear if cultured
neurons can provide sufficient starting material for SV isolation procedures.\r\n\r\nNew
method\r\nHere, we demonstrate an efficient ex vivo procedure to obtain functional
SVs from cultured rat cortical neurons after genetic engineering with a lentivirus.\r\n\r\nResults\r\nWe
show that ∼10^8 plated cortical neurons allow isolation of suitable SV amounts
for functional analysis and imaging. We found that SVs isolated from cultured
neurons have neurotransmitter uptake comparable to that of SVs isolated from intact
cortex. Using total internal reflection fluorescence (TIRF) microscopy, we visualized
an exogenous SV-targeted marker protein and demonstrated the high efficiency of
SV modification.\r\n\r\nComparison with existing methods\r\nObtaining SVs from
genetically engineered neurons currently generally requires the availability of
transgenic animals, which is constrained by technical (e.g. cost and time) and
biological (e.g. developmental defects and lethality) limitations.\r\n\r\nConclusions\r\nThese
results demonstrate the modification and isolation of functional SVs using cultured
neurons and viral transduction. The ability to readily obtain SVs from genetically
engineered neurons will permit linking in situ studies to in vitro experiments
in a variety of genetic contexts."
acknowledged_ssus:
- _id: Bio
- _id: EM-Fac
article_processing_charge: No
article_type: original
author:
- first_name: Catherine
full_name: Mckenzie, Catherine
id: 3EEDE19A-F248-11E8-B48F-1D18A9856A87
last_name: Mckenzie
- first_name: Miroslava
full_name: Spanova, Miroslava
id: 44A924DC-F248-11E8-B48F-1D18A9856A87
last_name: Spanova
- first_name: Alexander J
full_name: Johnson, Alexander J
id: 46A62C3A-F248-11E8-B48F-1D18A9856A87
last_name: Johnson
orcid: 0000-0002-2739-8843
- first_name: Stephanie
full_name: Kainrath, Stephanie
id: 32CFBA64-F248-11E8-B48F-1D18A9856A87
last_name: Kainrath
- first_name: Vanessa
full_name: Zheden, Vanessa
id: 39C5A68A-F248-11E8-B48F-1D18A9856A87
last_name: Zheden
orcid: 0000-0002-9438-4783
- first_name: Harald H.
full_name: Sitte, Harald H.
last_name: Sitte
- first_name: Harald L
full_name: Janovjak, Harald L
id: 33BA6C30-F248-11E8-B48F-1D18A9856A87
last_name: Janovjak
orcid: 0000-0002-8023-9315
citation:
ama: Mckenzie C, Spanova M, Johnson AJ, et al. Isolation of synaptic vesicles from
genetically engineered cultured neurons. Journal of Neuroscience Methods.
2019;312:114-121. doi:10.1016/j.jneumeth.2018.11.018
apa: Mckenzie, C., Spanova, M., Johnson, A. J., Kainrath, S., Zheden, V., Sitte,
H. H., & Janovjak, H. L. (2019). Isolation of synaptic vesicles from genetically
engineered cultured neurons. Journal of Neuroscience Methods. Elsevier.
https://doi.org/10.1016/j.jneumeth.2018.11.018
chicago: Mckenzie, Catherine, Miroslava Spanova, Alexander J Johnson, Stephanie
Kainrath, Vanessa Zheden, Harald H. Sitte, and Harald L Janovjak. “Isolation of
Synaptic Vesicles from Genetically Engineered Cultured Neurons.” Journal of
Neuroscience Methods. Elsevier, 2019. https://doi.org/10.1016/j.jneumeth.2018.11.018.
ieee: C. Mckenzie et al., “Isolation of synaptic vesicles from genetically
engineered cultured neurons,” Journal of Neuroscience Methods, vol. 312.
Elsevier, pp. 114–121, 2019.
ista: Mckenzie C, Spanova M, Johnson AJ, Kainrath S, Zheden V, Sitte HH, Janovjak
HL. 2019. Isolation of synaptic vesicles from genetically engineered cultured
neurons. Journal of Neuroscience Methods. 312, 114–121.
mla: Mckenzie, Catherine, et al. “Isolation of Synaptic Vesicles from Genetically
Engineered Cultured Neurons.” Journal of Neuroscience Methods, vol. 312,
Elsevier, 2019, pp. 114–21, doi:10.1016/j.jneumeth.2018.11.018.
short: C. Mckenzie, M. Spanova, A.J. Johnson, S. Kainrath, V. Zheden, H.H. Sitte,
H.L. Janovjak, Journal of Neuroscience Methods 312 (2019) 114–121.
date_created: 2020-01-30T09:12:19Z
date_published: 2019-01-15T00:00:00Z
date_updated: 2023-09-06T15:27:29Z
day: '15'
department:
- _id: HaJa
- _id: Bio
doi: 10.1016/j.jneumeth.2018.11.018
ec_funded: 1
external_id:
isi:
- '000456220900013'
pmid:
- '30496761'
intvolume: ' 312'
isi: 1
language:
- iso: eng
month: '01'
oa_version: None
page: 114-121
pmid: 1
project:
- _id: 25548C20-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '303564'
name: Microbial Ion Channels for Synthetic Neurobiology
- _id: 26538374-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: I03630
name: Molecular mechanisms of endocytic cargo recognition in plants
- _id: 2548AE96-B435-11E9-9278-68D0E5697425
call_identifier: FWF
grant_number: W1232-B24
name: Molecular Drug Targets
publication: Journal of Neuroscience Methods
publication_identifier:
issn:
- 0165-0270
publication_status: published
publisher: Elsevier
quality_controlled: '1'
scopus_import: '1'
status: public
title: Isolation of synaptic vesicles from genetically engineered cultured neurons
type: journal_article
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 312
year: '2019'
...
---
_id: '7437'
abstract:
- lang: eng
text: 'Most of today''s distributed machine learning systems assume reliable networks:
whenever two machines exchange information (e.g., gradients or models), the network
should guarantee the delivery of the message. At the same time, recent work exhibits
the impressive tolerance of machine learning algorithms to errors or noise arising
from relaxed communication or synchronization. In this paper, we connect these
two trends, and consider the following question: Can we design machine learning
systems that are tolerant to network unreliability during training? With this
motivation, we focus on a theoretical problem of independent interest: given a
standard distributed parameter server architecture, if every communication between
the worker and the server has a non-zero probability p of being dropped, does
there exist an algorithm that still converges, and at what speed? The technical
contribution of this paper is a novel theoretical analysis proving that distributed
learning over an unreliable network can achieve a convergence rate comparable to centralized
or distributed learning over reliable networks. Further, we prove that the influence
of the packet drop rate diminishes with the growth of the number of parameter
servers. We map this theoretical result onto a real-world scenario, training deep
neural networks over an unreliable network layer, and conduct network simulation
to validate the system improvement by allowing the networks to be unreliable.'
article_processing_charge: No
author:
- first_name: Chen
full_name: Yu, Chen
last_name: Yu
- first_name: Hanlin
full_name: Tang, Hanlin
last_name: Tang
- first_name: Cedric
full_name: Renggli, Cedric
last_name: Renggli
- first_name: Simon
full_name: Kassing, Simon
last_name: Kassing
- first_name: Ankit
full_name: Singla, Ankit
last_name: Singla
- first_name: Dan-Adrian
full_name: Alistarh, Dan-Adrian
id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
last_name: Alistarh
orcid: 0000-0003-3650-940X
- first_name: Ce
full_name: Zhang, Ce
last_name: Zhang
- first_name: Ji
full_name: Liu, Ji
last_name: Liu
citation:
ama: 'Yu C, Tang H, Renggli C, et al. Distributed learning over unreliable networks.
In: 36th International Conference on Machine Learning, ICML 2019. Vol 2019-June.
IMLS; 2019:12481-12512.'
apa: 'Yu, C., Tang, H., Renggli, C., Kassing, S., Singla, A., Alistarh, D.-A., …
Liu, J. (2019). Distributed learning over unreliable networks. In 36th International
Conference on Machine Learning, ICML 2019 (Vol. 2019–June, pp. 12481–12512).
Long Beach, CA, United States: IMLS.'
chicago: Yu, Chen, Hanlin Tang, Cedric Renggli, Simon Kassing, Ankit Singla, Dan-Adrian
Alistarh, Ce Zhang, and Ji Liu. “Distributed Learning over Unreliable Networks.”
In 36th International Conference on Machine Learning, ICML 2019, 2019–June:12481–512.
IMLS, 2019.
ieee: C. Yu et al., “Distributed learning over unreliable networks,” in 36th
International Conference on Machine Learning, ICML 2019, Long Beach, CA, United
States, 2019, vol. 2019–June, pp. 12481–12512.
ista: 'Yu C, Tang H, Renggli C, Kassing S, Singla A, Alistarh D-A, Zhang C, Liu
J. 2019. Distributed learning over unreliable networks. 36th International Conference
on Machine Learning, ICML 2019. ICML: International Conference on Machine Learning
vol. 2019–June, 12481–12512.'
mla: Yu, Chen, et al. “Distributed Learning over Unreliable Networks.” 36th International
Conference on Machine Learning, ICML 2019, vol. 2019–June, IMLS, 2019, pp.
12481–512.
short: C. Yu, H. Tang, C. Renggli, S. Kassing, A. Singla, D.-A. Alistarh, C. Zhang,
J. Liu, in:, 36th International Conference on Machine Learning, ICML 2019, IMLS,
2019, pp. 12481–12512.
conference:
end_date: 2019-06-15
location: Long Beach, CA, United States
name: 'ICML: International Conference on Machine Learning'
start_date: 2019-06-10
date_created: 2020-02-02T23:01:06Z
date_published: 2019-06-01T00:00:00Z
date_updated: 2023-09-06T15:21:48Z
day: '01'
department:
- _id: DaAl
external_id:
arxiv:
- '1810.07766'
isi:
- '000684034307036'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1810.07766
month: '06'
oa: 1
oa_version: Preprint
page: 12481-12512
publication: 36th International Conference on Machine Learning, ICML 2019
publication_identifier:
isbn:
- '9781510886988'
publication_status: published
publisher: IMLS
quality_controlled: '1'
scopus_import: '1'
status: public
title: Distributed learning over unreliable networks
type: conference
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 2019-June
year: '2019'
...
---
_id: '7412'
abstract:
- lang: eng
text: We develop a framework for the rigorous analysis of focused stochastic local
search algorithms. These algorithms search a state space by repeatedly selecting
some constraint that is violated in the current state and moving to a random nearby
state that addresses the violation, while (we hope) not introducing many new violations.
An important class of focused local search algorithms with provable performance
guarantees has recently arisen from algorithmizations of the Lovász local lemma
(LLL), a nonconstructive tool for proving the existence of satisfying states by
introducing a background measure on the state space. While powerful, the state
transitions of algorithms in this class must be, in a precise sense, perfectly
compatible with the background measure. In many applications this is a very restrictive
requirement, and one needs to step outside the class. Here we introduce the notion
of measure distortion and develop a framework for analyzing arbitrary focused
stochastic local search algorithms, recovering LLL algorithmizations as the special
case of no distortion. Our framework takes as input an arbitrary algorithm of
such type and an arbitrary probability measure and shows how to use the measure
as a yardstick of algorithmic progress, even for algorithms designed independently
of the measure.
article_processing_charge: No
article_type: original
author:
- first_name: Dimitris
full_name: Achlioptas, Dimitris
last_name: Achlioptas
- first_name: Fotis
full_name: Iliopoulos, Fotis
last_name: Iliopoulos
- first_name: Vladimir
full_name: Kolmogorov, Vladimir
id: 3D50B0BA-F248-11E8-B48F-1D18A9856A87
last_name: Kolmogorov
citation:
ama: Achlioptas D, Iliopoulos F, Kolmogorov V. A local lemma for focused stochastic
algorithms. SIAM Journal on Computing. 2019;48(5):1583-1602. doi:10.1137/16m109332x
apa: Achlioptas, D., Iliopoulos, F., & Kolmogorov, V. (2019). A local lemma
for focused stochastic algorithms. SIAM Journal on Computing. SIAM. https://doi.org/10.1137/16m109332x
chicago: Achlioptas, Dimitris, Fotis Iliopoulos, and Vladimir Kolmogorov. “A Local
Lemma for Focused Stochastic Algorithms.” SIAM Journal on Computing.
SIAM, 2019. https://doi.org/10.1137/16m109332x.
ieee: D. Achlioptas, F. Iliopoulos, and V. Kolmogorov, “A local lemma for focused
stochastic algorithms,” SIAM Journal on Computing, vol. 48, no. 5. SIAM,
pp. 1583–1602, 2019.
ista: Achlioptas D, Iliopoulos F, Kolmogorov V. 2019. A local lemma for focused
stochastic algorithms. SIAM Journal on Computing. 48(5), 1583–1602.
mla: Achlioptas, Dimitris, et al. “A Local Lemma for Focused Stochastic Algorithms.”
SIAM Journal on Computing, vol. 48, no. 5, SIAM, 2019, pp. 1583–602, doi:10.1137/16m109332x.
short: D. Achlioptas, F. Iliopoulos, V. Kolmogorov, SIAM Journal on Computing 48
(2019) 1583–1602.
date_created: 2020-01-30T09:27:32Z
date_published: 2019-10-31T00:00:00Z
date_updated: 2023-09-06T15:25:29Z
day: '31'
department:
- _id: VlKo
doi: 10.1137/16m109332x
ec_funded: 1
external_id:
arxiv:
- '1809.01537'
isi:
- '000493900200005'
intvolume: ' 48'
isi: 1
issue: '5'
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://arxiv.org/abs/1809.01537
month: '10'
oa: 1
oa_version: Preprint
page: 1583-1602
project:
- _id: 25FBA906-B435-11E9-9278-68D0E5697425
call_identifier: FP7
grant_number: '616160'
name: 'Discrete Optimization in Computer Vision: Theory and Practice'
publication: SIAM Journal on Computing
publication_identifier:
eissn:
- 1095-7111
issn:
- 0097-5397
publication_status: published
publisher: SIAM
quality_controlled: '1'
scopus_import: '1'
status: public
title: A local lemma for focused stochastic algorithms
type: journal_article
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 48
year: '2019'
...
---
_id: '7418'
abstract:
- lang: eng
text: Multiple importance sampling (MIS) has become an indispensable tool in Monte
Carlo rendering, widely accepted as a near-optimal solution for combining different
sampling techniques. But an MIS combination, using the common balance or power
heuristics, often results in an overly defensive estimator, leading to high variance.
We show that by generalizing the MIS framework, variance can be substantially
reduced. Specifically, we optimize one of the combined sampling techniques so
as to decrease the overall variance of the resulting MIS estimator. We apply the
approach to the computation of direct illumination due to an HDR environment map
and to the computation of global illumination using a path guiding algorithm.
The implementation can be as simple as subtracting a constant value from the tabulated
sampling density, done entirely in a preprocessing step. This produces a consistent
noise reduction in all our tests with no negative influence on run time, no artifacts
or bias, and no failure cases.
article_number: '151'
article_processing_charge: No
article_type: original
author:
- first_name: Ondřej
full_name: Karlík, Ondřej
last_name: Karlík
- first_name: Martin
full_name: Šik, Martin
last_name: Šik
- first_name: Petr
full_name: Vévoda, Petr
last_name: Vévoda
- first_name: Tomas
full_name: Skrivan, Tomas
id: 486A5A46-F248-11E8-B48F-1D18A9856A87
last_name: Skrivan
- first_name: Jaroslav
full_name: Křivánek, Jaroslav
last_name: Křivánek
citation:
ama: 'Karlík O, Šik M, Vévoda P, Skrivan T, Křivánek J. MIS compensation: Optimizing
sampling techniques in multiple importance sampling. ACM Transactions on Graphics.
2019;38(6). doi:10.1145/3355089.3356565'
apa: 'Karlík, O., Šik, M., Vévoda, P., Skrivan, T., & Křivánek, J. (2019). MIS
compensation: Optimizing sampling techniques in multiple importance sampling.
ACM Transactions on Graphics. ACM. https://doi.org/10.1145/3355089.3356565'
chicago: 'Karlík, Ondřej, Martin Šik, Petr Vévoda, Tomas Skrivan, and Jaroslav Křivánek.
“MIS Compensation: Optimizing Sampling Techniques in Multiple Importance Sampling.”
ACM Transactions on Graphics. ACM, 2019. https://doi.org/10.1145/3355089.3356565.'
ieee: 'O. Karlík, M. Šik, P. Vévoda, T. Skrivan, and J. Křivánek, “MIS compensation:
Optimizing sampling techniques in multiple importance sampling,” ACM Transactions
on Graphics, vol. 38, no. 6. ACM, 2019.'
ista: 'Karlík O, Šik M, Vévoda P, Skrivan T, Křivánek J. 2019. MIS compensation:
Optimizing sampling techniques in multiple importance sampling. ACM Transactions
on Graphics. 38(6), 151.'
mla: 'Karlík, Ondřej, et al. “MIS Compensation: Optimizing Sampling Techniques in
Multiple Importance Sampling.” ACM Transactions on Graphics, vol. 38, no.
6, 151, ACM, 2019, doi:10.1145/3355089.3356565.'
short: O. Karlík, M. Šik, P. Vévoda, T. Skrivan, J. Křivánek, ACM Transactions on
Graphics 38 (2019).
date_created: 2020-01-30T10:19:43Z
date_published: 2019-11-01T00:00:00Z
date_updated: 2023-09-06T15:22:23Z
day: '01'
department:
- _id: ChWo
doi: 10.1145/3355089.3356565
external_id:
isi:
- '000498397300001'
intvolume: ' 38'
isi: 1
issue: '6'
language:
- iso: eng
month: '11'
oa_version: None
publication: ACM Transactions on Graphics
publication_identifier:
eissn:
- 1557-7368
issn:
- 0730-0301
publication_status: published
publisher: ACM
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'MIS compensation: Optimizing sampling techniques in multiple importance sampling'
type: journal_article
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 38
year: '2019'
...