TY - CONF AB - Proofs of sequential work (PoSW) are proof systems where a prover, upon receiving a statement χ and a time parameter T, computes a proof ϕ(χ,T) which is efficiently and publicly verifiable. The proof can be computed in T sequential steps, but not much less, even by a malicious party having large parallelism. A PoSW thus serves as a proof that T units of time have passed since χ was received. PoSW were introduced by Mahmoody, Moran and Vadhan [MMV11]; a simple and practical construction was only recently proposed by Cohen and Pietrzak [CP18]. In this work we construct a new simple PoSW in the random permutation model which is almost as simple and efficient as [CP18] but conceptually very different. Whereas the structure underlying [CP18] is a hash tree, our construction is based on skip lists and has the interesting property that computing the PoSW is a reversible computation. The fact that the construction is reversible can potentially be used for new applications like constructing proofs of replication. We also show how to “embed” the sloth function of Lenstra and Wesolowski [LW17] into our PoSW to get a PoSW where one can additionally verify correctness of the output much more efficiently than by recomputing it (though recent constructions of “verifiable delay functions” subsume most of the applications this construction was aiming at). AU - Abusalah, Hamza M AU - Kamath Hosdurg, Chethan AU - Klein, Karen AU - Pietrzak, Krzysztof Z AU - Walter, Michael ID - 7411 SN - 0302-9743 T2 - Advances in Cryptology – EUROCRYPT 2019 TI - Reversible proofs of sequential work VL - 11477 ER - TY - CONF AB - The Fiat-Shamir heuristic transforms a public-coin interactive proof into a non-interactive argument, by replacing the verifier with a cryptographic hash function that is applied to the protocol’s transcript. Constructing hash functions for which this transformation is sound is a central and long-standing open question in cryptography. We show that solving the END-OF-METERED-LINE problem is no easier than breaking the soundness of the Fiat-Shamir transformation when applied to the sumcheck protocol. In particular, if the transformed protocol is sound, then any hard problem in #P gives rise to a hard distribution in the class CLS, which is contained in PPAD. Our result opens up the possibility of sampling moderately-sized games for which it is hard to find a Nash equilibrium, by reducing the inversion of appropriately chosen one-way functions to #SAT. Our main technical contribution is a stateful incrementally verifiable procedure that, given a SAT instance over n variables, counts the number of satisfying assignments. This is accomplished via an exponentially long sequence of small steps, each computable in time poly(n). Incremental verifiability means that each intermediate state includes a sumcheck-based proof of its correctness, and the proof can be updated and verified in time poly(n). AU - Choudhuri, Arka Rai AU - Hubáček, Pavel AU - Kamath Hosdurg, Chethan AU - Pietrzak, Krzysztof Z AU - Rosen, Alon AU - Rothblum, Guy N. ID - 6677 SN - 9781450367059 T2 - Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing - STOC 2019 TI - Finding a Nash equilibrium is no easier than breaking Fiat-Shamir ER - TY - CONF AB - A proxy re-encryption (PRE) scheme is a public-key encryption scheme that allows the holder of a key pk to derive a re-encryption key for any other key pk′.
This re-encryption key lets anyone transform ciphertexts under pk into ciphertexts under pk′ without having to know the underlying message, while transformations from pk′ to pk should not be possible (unidirectional). Security is defined in a multi-user setting against an adversary that gets the users’ public keys and can ask for re-encryption keys and corrupt users by requesting their secret keys. Any ciphertext that the adversary cannot trivially decrypt given the obtained secret and re-encryption keys should be secure. All existing security proofs for PRE only show selective security, where the adversary must first declare the users it wants to corrupt. This can be lifted to more meaningful adaptive security by guessing the set of corrupted users among the n users, which loses a factor exponential in n, rendering the result meaningless already for moderate n. Jafargholi et al. (CRYPTO’17) proposed a framework that in some cases allows one to give adaptive security proofs for schemes which were previously only known to be selectively secure, while avoiding the exponential loss that results from guessing the adaptive choices made by an adversary. We apply their framework to PREs that satisfy some natural additional properties. Concretely, we give a more fine-grained reduction for several unidirectional PREs, proving adaptive security at a much smaller loss. The loss depends on the graph of users whose edges represent the re-encryption keys queried by the adversary. For trees and chains the loss is quasi-polynomial in the size, and for general graphs it is exponential in their depth and indegree (instead of their size, as for previous reductions). Fortunately, trees and low-depth graphs cover many, if not most, interesting applications. Our results apply e.g. to the bilinear-map based PRE schemes by Ateniese et al. (NDSS’05 and CT-RSA’09), Gentry’s FHE-based scheme (STOC’09) and the LWE-based scheme by Chandran et al. (PKC’14). AU - Fuchsbauer, Georg AU - Kamath Hosdurg, Chethan AU - Klein, Karen AU - Pietrzak, Krzysztof Z ID - 6430 SN - 03029743 TI - Adaptively secure proxy re-encryption VL - 11443 ER - TY - JOUR AB - In this paper, we evaluate clock signals generated in ring oscillators and self-timed rings and the way their jitter can be transformed into random numbers. We show that counting the periods of the jittery clock signal produces random numbers of significantly better quality than the methods in which the jittery signal is simply sampled (as is the case in almost all current methods). Moreover, we use the counter values to characterize and continuously monitor the source of randomness. However, instead of using the widely used statistical variance, we propose to use the Allan variance to do so. There are two main advantages: the Allan variance is insensitive to low-frequency noises such as flicker noise, which are known to be autocorrelated, and significantly less circuitry is required for its computation than for the commonly used variance. We also show that it is essential to use a differential principle of randomness extraction from the jitter, based on the use of two identical oscillators, to avoid autocorrelations originating from external and internal global jitter sources, and that this holds for both kinds of rings. Last but not least, we propose a method of statistical testing based on a high-order Markov model to show the reduced dependencies when the proposed randomness extraction is applied.
AU - Allini, Elie Noumon AU - Skórski, Maciej AU - Petura, Oto AU - Bernard, Florent AU - Laban, Marek AU - Fischer, Viktor ID - 10286 IS - 3 JF - IACR Transactions on Cryptographic Hardware and Embedded Systems TI - Evaluation and monitoring of free running oscillators serving as source of randomness VL - 2018 ER - TY - CONF AB - Proofs of space (PoS) [Dziembowski et al., CRYPTO'15] are proof systems where a prover can convince a verifier that he "wastes" disk space. PoS were introduced as a more ecological and economical replacement for proofs of work, which are currently used to secure blockchains like Bitcoin. In this work we investigate extensions of PoS which allow the prover to embed useful data into the dedicated space, which can later be recovered. Our first contribution is a security proof for the original PoS from CRYPTO'15 in the random oracle model (the original proof only applied to a restricted class of adversaries which can store a subset of the data an honest prover would store). When this PoS is instantiated with recent constructions of maximally depth-robust graphs, our proof implies basically optimal security. As a second contribution we show three different extensions of this PoS where useful data can be embedded into the space required by the prover. Our security proof for the PoS extends (non-trivially) to these constructions. We discuss how some of these variants can be used as proofs of catalytic space (PoCS), a notion we put forward in this work, which basically is a PoS where most of the space required by the prover can be used to back up useful data. Finally we discuss how one of the extensions is a candidate construction for a proof of replication (PoR), a proof system recently suggested in the Filecoin whitepaper. AU - Pietrzak, Krzysztof Z ID - 7407 SN - 1868-8969 T2 - 10th Innovations in Theoretical Computer Science Conference (ITCS 2019) TI - Proofs of catalytic space VL - 124 ER - TY - THES AB - A proof system is a protocol between a prover and a verifier over a common input in which an honest prover convinces the verifier of the validity of true statements. Motivated by the success of decentralized cryptocurrencies, exemplified by Bitcoin, the focus of this thesis is on proof systems which have found applications in some sustainable alternatives to Bitcoin, such as the Spacemint and Chia cryptocurrencies. In particular, we focus on proofs of space and proofs of sequential work. Proofs of space (PoSpace) were suggested as a more ecological, economical, and egalitarian alternative to the energy-wasteful proof-of-work mining of Bitcoin. However, the state-of-the-art constructions of PoSpace are based on sophisticated graph pebbling lower bounds, and are therefore complex. Moreover, when these PoSpace are used in cryptocurrencies like Spacemint, miners can only start mining after ensuring that a commitment to their space has already been added to the blockchain in a special transaction. Proofs of sequential work (PoSW) are proof systems in which a prover, upon receiving a statement x and a time parameter T, computes a proof which convinces the verifier that T time units have passed since x was received. Whereas Spacemint assumes synchrony to retain some interesting Bitcoin dynamics, Chia requires PoSW with unique proofs, i.e., PoSW in which it is hard to come up with more than one accepting proof for any true statement. In this thesis we construct simple and practically efficient PoSpace and PoSW.
When using our PoSpace in cryptocurrencies, miners can start mining on the fly, just like in Bitcoin. Moreover, unlike current constructions of PoSW, which achieve either efficient verification of sequential work or faster-than-recomputing verification of the correctness of proofs, but not both at the same time, ours achieve the best of these two worlds. AU - Abusalah, Hamza M ID - 83 SN - 2663-337X TI - Proof systems for sustainable decentralized cryptocurrencies ER - TY - CONF AB - Universal hashing has found many applications in computer science. In cryptography the most important fact about universal families is the so-called Leftover Hash Lemma, proved by Impagliazzo, Levin and Luby. In the language of modern cryptography it states that almost-universal families are good extractors. In this work we provide a somewhat surprising characterization in the opposite direction. Namely, every extractor with sufficiently good parameters yields a universal family on a noticeable fraction of its inputs. Our proof technique is based on tools from extremal graph theory applied to the 'collision graph' induced by the extractor, and may be of independent interest. We discuss possible applications to the theory of randomness extractors and non-malleable codes. AU - Obremski, Maciej AU - Skorski, Maciej ID - 108 TI - Inverted leftover hash lemma VL - 2018 ER - TY - JOUR AB - We introduce the notion of “non-malleable codes”, which relaxes the notions of error correction and error detection. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message or a completely unrelated value. In contrast to error correction and error detection, non-malleability can be achieved for very rich classes of modifications. We construct an efficient code that is non-malleable with respect to modifications that affect each bit of the codeword arbitrarily (i.e., leave it untouched, flip it, or set it to either 0 or 1), but independently of the value of the other bits of the codeword. Using the probabilistic method, we also show a very strong and general statement: there exists a non-malleable code for every “small enough” family F of functions via which codewords can be modified. Although this probabilistic method argument does not directly yield efficient constructions, it gives us efficient non-malleable codes in the random-oracle model for very general classes of tampering functions, e.g., functions where every bit in the tampered codeword can depend arbitrarily on any 99% of the bits in the original codeword. As an application of non-malleable codes, we show that they provide an elegant algorithmic solution to the task of protecting functionalities implemented in hardware (e.g., signature cards) against “tampering attacks.” In such attacks, the secret state of a physical system is tampered with, in the hope that future interaction with the modified system will reveal some secret information. This problem was previously studied in the work of Gennaro et al. in 2004 under the name “algorithmic tamper proof security” (ATP). We show that non-malleable codes can be used to achieve important improvements over the prior work. In particular, we show that any functionality can be made secure against a large class of tampering attacks, simply by encoding the secret state with a non-malleable code while it is stored in memory.
AU - Dziembowski, Stefan AU - Pietrzak, Krzysztof Z AU - Wichs, Daniel ID - 107 IS - 4 JF - Journal of the ACM TI - Non-malleable codes VL - 65 ER - TY - CONF AB - We show attacks on five data-independent memory-hard functions (iMHF) that were submitted to the password hashing competition (PHC). Informally, an MHF is a function which cannot be evaluated on dedicated hardware, like ASICs, at significantly lower hardware and/or energy cost than evaluating a single instance on a standard single-core architecture. Data-independent means the memory access pattern of the function is independent of the input; this makes iMHFs harder to construct than data-dependent ones, but the latter can be attacked by various side-channel attacks. Following [Alwen-Blocki'16], we capture the evaluation of an iMHF as a directed acyclic graph (DAG). The cumulative parallel pebbling complexity of this DAG is a measure of the hardware cost of evaluating the iMHF on an ASIC. Ideally, one would like the complexity of a DAG underlying an iMHF to be as close to quadratic in the number of nodes of the graph as possible. Instead, we show that (the DAGs underlying) the following iMHFs are far from this bound: Rig.v2, TwoCats and Gambit each have an exponent of at most 1.75. Moreover, we show that the iMHF modes of the PHC finalists Pomelo and Lyra2 have complexity exponents of at most 1.83 and 1.67, respectively. To show this, we investigate a combinatorial property of each underlying DAG (called its depth-robustness). By establishing upper bounds on this property we are then able to apply the general technique of [Alwen-Blocki'16] for analyzing the hardware costs of an iMHF. AU - Alwen, Joel F AU - Gazi, Peter AU - Kamath Hosdurg, Chethan AU - Klein, Karen AU - Osang, Georg F AU - Pietrzak, Krzysztof Z AU - Reyzin, Leonid AU - Rolinek, Michal AU - Rybar, Michal ID - 193 T2 - Proceedings of the 2018 on Asia Conference on Computer and Communications Security TI - On the memory hardness of data independent password hashing functions ER - TY - CONF AB - We introduce a formal quantitative notion of “bit security” for a general type of cryptographic games (capturing both decision and search problems), aimed at capturing the intuition that a cryptographic primitive with k-bit security is as hard to break as an ideal cryptographic function requiring a brute-force attack on a k-bit key space. Our new definition matches the notion of bit security commonly used by cryptographers and cryptanalysts when studying search (e.g., key recovery) problems, where the use of the traditional definition is well established. However, it produces a quantitatively different metric in the case of decision (indistinguishability) problems, where the use of (a straightforward generalization of) the traditional definition is more problematic and leads to a number of paradoxical situations or mismatches between theoretical/provable security and practical/common-sense intuition. Key to our new definition is to consider adversaries that may explicitly declare failure of the attack. We support and justify the new definition by proving a number of technical results, including tight reductions between several standard cryptographic problems, a new hybrid theorem that preserves bit security, and an application to the security analysis of indistinguishability primitives making use of (approximate) floating point numbers.
This is the first result showing that (standard precision) 53-bit floating point numbers can be used to achieve 100-bit security in the context of cryptographic primitives with general indistinguishability-based security definitions. Previous results of this type applied only to search problems, or special types of decision problems. AU - Micciancio, Daniele AU - Walter, Michael ID - 300 TI - On the bit security of cryptographic primitives VL - 10820 ER -