TY - CONF AB - In this work we present a short and unified proof of the strong and weak regularity lemmas, based on the cryptographic technique of low-complexity approximations. In short, both problems reduce to the task of constructively finding an approximation of a certain target function under a class of distinguishers (test functions), where distinguishers are combinations of simple rectangle indicators. In our case these approximations can be learned by a simple iterative procedure, which yields a unified and simple proof with explicit bounds on the partition size for any graph with density d and any approximation parameter. The novelty in our proof is: (a) a simple approach which yields both the strong and the weak variant, and (b) improvements when d = o(1). At an abstract level, our proof can be seen as a refinement and simplification of the "analytic" proof given by Lovász and Szegedy. AU - Skórski, Maciej ED - Jäger, Gerhard ED - Steila, Silvia ID - 650 SN - 0302-9743 TI - A cryptographic view of regularity lemmas: Simpler unified proofs and refined bounds VL - 10185 ER - TY - CONF AB - A memory-hard function (MHF) f_n with parameter n can be computed in sequential time and space n. Simultaneously, a high amortized parallel area-time complexity (aAT) is incurred per evaluation. In practice, MHFs are used to limit the rate at which an adversary (using a custom computational device) can evaluate a security-sensitive function that still occasionally needs to be evaluated by honest users (using an off-the-shelf general-purpose device). The most prevalent examples of such sensitive functions are key derivation functions (KDFs) and password hashing algorithms, where rate limits help mitigate offline dictionary attacks. As the honest users' inputs to these functions are often (low-entropy) passwords, special attention is given to a class of side-channel resistant MHFs called iMHFs. Essentially all iMHFs can be viewed as some mode of operation (making n calls to some round function) given by a directed acyclic graph (DAG) with very low indegree. Recently, a combinatorial property of a DAG has been identified (called "depth-robustness") which results in good provable security for an iMHF based on that DAG. Depth-robust DAGs have also proven useful in other cryptographic applications. Unfortunately, until now, all known very depth-robust DAGs are impractically complicated, and little is known about their exact (i.e. non-asymptotic) depth-robustness, both in theory and in practice. In this work we build and analyze (both formally and empirically) several exceedingly simple and efficiently navigable practical DAGs for use in iMHFs and other applications. For each DAG we: (a) prove that its depth-robustness is asymptotically maximal; (b) prove bounds on its exact depth-robustness at least 3 orders of magnitude better than the known bounds for other practical iMHFs; (c) implement and empirically evaluate its depth-robustness and aAT against a variety of state-of-the-art (and several new) depth-reduction and low-aAT attacks. We find that, against all attacks, the new DAGs perform significantly better in practice than Argon2i, the most widely deployed iMHF. Along the way we also improve the best known empirical attacks on the aAT of Argon2i by implementing and testing several heuristic versions of a (hitherto purely theoretical) depth-reduction attack. Finally, we demonstrate the practicality of our constructions by modifying the Argon2i code base to use one of the new high-aAT DAGs.
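For intuition, here is a minimal sketch of the generic DAG mode of operation described above, in which the label of a node is the hash of the labels of its parents; the graph, hash function, and parameter names are illustrative assumptions of ours, not the modified Argon2i code base:

import hashlib

def evaluate_imhf(n, parents, password, salt):
    # Evaluate a DAG-based iMHF: label(v) = H(v, labels of parents(v)).
    # parents maps a node v in [0, n) to a list of nodes u < v (low indegree);
    # nodes without parents absorb the input. Output: label of the last node.
    labels = {}
    for v in range(n):
        h = hashlib.sha256()
        h.update(v.to_bytes(8, "big"))
        if not parents.get(v):
            h.update(password + salt)
        for u in parents.get(v, []):
            h.update(labels[u])
        labels[v] = h.digest()
    return labels[n - 1]

# Toy graph: a path with one extra long-distance edge per node.
n = 1024
parents = {v: [v - 1] + ([v // 2] if v > 1 else []) for v in range(1, n)}
tag = evaluate_imhf(n, parents, b"password", b"salt")

The memory-hardness of such a function is governed by combinatorial properties of the chosen graph, which is why the depth-robustness of the DAG is the central object of study here.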
Experimental benchmarks on a standard off-the-shelf CPU show that the new modifications do not adversely affect the impressive throughput of Argon2i (despite seemingly enjoying significantly higher aAT). AU - Alwen, Joel F AU - Blocki, Jeremiah AU - Harsha, Ben ID - 6527 SN - 9781450349468 T2 - Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security TI - Practical graphs for optimal side-channel resistant memory-hard functions ER - TY - CONF AB - This paper studies the complexity of estimating Rényi divergences between a discrete distribution p, observed from samples, and a baseline distribution q known a priori. Extending the results of Acharya et al. (SODA'15) on estimating Rényi entropy, we present improved estimation techniques together with upper and lower bounds on the sample complexity. We show that, contrary to estimating Rényi entropy, where a sublinear (in the alphabet size) number of samples suffices, the sample complexity depends heavily on events that are unlikely under q, and is unbounded in general (no matter what estimation technique is used). For any divergence of integer order greater than 1, we provide upper and lower bounds on the number of samples dependent on the probabilities of p and q (the lower bounds hold for non-integer orders as well). We conclude that the worst-case sample complexity is polynomial in the alphabet size if and only if the probabilities of q are non-negligible. This gives theoretical insight into heuristics used in the applied literature to handle numerical instability, which occurs for small probabilities of q. Our result shows that such small probabilities should be handled with care not only because of numerical issues, but also because of a blow-up in the sample complexity. AU - Skórski, Maciej ID - 6526 SN - 9781509040964 T2 - 2017 IEEE International Symposium on Information Theory (ISIT) TI - On the complexity of estimating Rényi divergences ER - TY - CONF AB - De, Trevisan and Tulsiani [CRYPTO 2010] show that every distribution over n-bit strings which has constant statistical distance to uniform (e.g., the output of a pseudorandom generator mapping (n-1)-bit to n-bit strings) can be distinguished from the uniform distribution with advantage ε by a circuit of size O(2^n ε^2). We generalize this result, showing that a distribution which has less than k bits of min-entropy can be distinguished from any distribution with k bits of δ-smooth min-entropy with advantage ε by a circuit of size O(2^k ε^2/δ^2). As a special case, this implies that any distribution with support of size at most 2^k (e.g., the output of a pseudoentropy generator mapping k-bit to n-bit strings) can be distinguished from any given distribution with min-entropy k+1 with advantage ε by a circuit of size O(2^k ε^2). Our result thus shows that pseudoentropy distributions face basically the same non-uniform attacks as pseudorandom distributions. AU - Pietrzak, Krzysztof Z AU - Skórski, Maciej ID - 697 SN - 1868-8969 TI - Non-uniform attacks against pseudoentropy VL - 80 ER - TY - CONF AB - We revisit the problem of estimating the entropy of discrete distributions from independent samples, studied recently by Acharya, Orlitsky, Suresh and Tyagi (SODA 2015), improving their upper and lower bounds on the necessary sample size n.
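Recall (standard definitions, added here for context): the Rényi entropy of order $\alpha \neq 1$ of a distribution $p$ over a $K$-element alphabet is
$$H_\alpha(p) = \frac{1}{1-\alpha}\log \sum_{x} p(x)^\alpha,$$
and the Rényi divergence of order $\alpha$ between $p$ and $q$, studied in the preceding record, is $D_\alpha(p\|q) = \frac{1}{\alpha-1}\log \sum_x p(x)^\alpha q(x)^{1-\alpha}$.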
For estimating Rényi entropy of order α, up to constant accuracy and error probability, we show the following: * Upper bounds n = O(1) · 2^{(1-1/α)H_α} for integer α > 1, as the worst case over distributions with Rényi entropy equal to H_α. * Lower bounds n = Ω(1) · K^{1-1/α} for any real α > 1, with the constant being an inverse polynomial in the accuracy, as the worst case over all distributions on K elements. Our upper bounds essentially replace the alphabet size by a factor exponential in the entropy, which offers improvements especially in low- or medium-entropy regimes (interesting, for example, in anomaly detection). As for the lower bounds, our proof explicitly shows how the complexity depends on both the alphabet and the accuracy, partially solving the open problem posed in previous works. The argument for the upper bounds derives a clean identity for the variance of falling-power sums of a multinomial distribution. Our approach for the lower bounds utilizes convex optimization to find a distribution with possibly worse estimation performance, and may be of independent interest as a tool to work with Le Cam's two-point method. AU - Obremski, Maciej AU - Skórski, Maciej ID - 710 SN - 1868-8969 TI - Rényi entropy estimation revisited VL - 81 ER - TY - THES AB - In this thesis we discuss the exact security of the message authentication codes HMAC, NMAC, and PMAC. NMAC is a mode of operation which turns a fixed input-length keyed hash function f into a variable input-length function. A practical single-key variant of NMAC called HMAC is a very popular and widely deployed message authentication code (MAC). PMAC is a block-cipher based mode of operation, which also happens to be the most famous fully parallel MAC. NMAC was introduced by Bellare, Canetti and Krawczyk [Crypto'96], who proved it to be a secure pseudorandom function (PRF), and thus also a MAC, under two assumptions. Unfortunately, for many instantiations of HMAC one of them has been found to be wrong. To restore the provable guarantees for NMAC, Bellare [Crypto'06] showed its security without this assumption. PMAC was introduced by Black and Rogaway at Eurocrypt 2002. If instantiated with a pseudorandom permutation over n-bit strings, PMAC constitutes a provably secure variable input-length PRF. For adversaries making q queries, each of length at most ℓ (in n-bit blocks), and of total length σ ≤ qℓ, the original paper proves an upper bound on the distinguishing advantage of O(σ^2/2^n), while the currently best bound is O(qσ/2^n). In this work we show that this bound is tight by giving an attack with advantage Ω(q^2 ℓ/2^n). In the PMAC construction one initially XORs a mask to every message block, where the mask for the i-th block is computed as τ_i := γ_i · L, where L is a (secret) random value, and γ_i is the i-th codeword of the Gray code. Our attack applies more generally to any sequence of γ_i's which contains a large coset of a subgroup of GF(2^n). As for NMAC, our first contribution is a simpler and uniform proof: if f is an ε-secure PRF (against q queries) and a δ-non-adaptively secure PRF (against q queries), then NMAC^f is an (ε + ℓqδ)-secure PRF against q queries of length at most ℓ blocks each. We also show that this ε + ℓqδ bound is basically tight by constructing an f for which an attack with advantage ℓqδ exists.
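For concreteness, a minimal sketch of the NMAC mode just described, with a toy stand-in for the fixed input-length keyed function f (our illustration, not code from the thesis):

import hashlib

def f(key: bytes, block: bytes) -> bytes:
    # Toy fixed input-length keyed function: 32-byte key and one message
    # block in, 32-byte output. A stand-in chosen only so the sketch runs.
    return hashlib.sha256(key + block).digest()

def cascade(k1: bytes, blocks: list) -> bytes:
    # The iterated cascade f*: chain f over the message blocks.
    state = k1
    for b in blocks:
        state = f(state, b)
    return state

def nmac(k1: bytes, k2: bytes, blocks: list) -> bytes:
    # NMAC: an outer call keyed with the independent key k2 turns the
    # fixed input-length f into a variable input-length PRF/MAC.
    return f(k2, cascade(k1, blocks))

tag = nmac(b"\x01" * 32, b"\x02" * 32,
           [b"block-1".ljust(32, b"\x00"), b"block-2".ljust(32, b"\x00")])

Intuitively, the ε term in the bound above corresponds to the outer call and the ℓqδ term to the ℓ cascade steps, each of which only needs non-adaptive security.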
Moreover, we analyze the PRF security of a modification of NMAC called NI by An and Bellare that avoids the constant rekeying on multi-block messages in NMAC and allows for an information-theoretic analysis. We carry out such an analysis, obtaining a tight ℓq^2/2^c bound for this step, improving over the trivial bound of ℓ^2 q^2/2^c. Finally, we investigate whether the security of PMAC can be further improved by using τ_i's that are k-wise independent, for k > 1 (the original has k = 1). We observe that the security of PMAC will not increase in general if k = 2, and then prove that the security increases to O(q^2/2^n) if k = 4. Due to simple extension attacks, this is the best bound one can hope for, using any distribution on the masks. Whether k = 3 is already sufficient to get this level of security is left as an open problem. Keywords: Message authentication codes, Pseudorandom functions, HMAC, PMAC. AU - Rybar, Michal ID - 838 SN - 2663-337X TI - (The exact security of) Message authentication codes ER - TY - JOUR AB - PMAC is a simple and parallel block-cipher mode of operation, which was introduced by Black and Rogaway at Eurocrypt 2002. If instantiated with a (pseudo)random permutation over n-bit strings, PMAC constitutes a provably secure variable input-length (pseudo)random function. For adversaries making q queries, each of length at most ℓ (in n-bit blocks), and of total length σ ≤ qℓ, the original paper proves an upper bound on the distinguishing advantage of O(σ^2/2^n), while the currently best bound is O(qσ/2^n). In this work we show that this bound is tight by giving an attack with advantage Ω(q^2 ℓ/2^n). In the PMAC construction one initially XORs a mask to every message block, where the mask for the i-th block is computed as τ_i := γ_i · L, where L is a (secret) random value, and γ_i is the i-th codeword of the Gray code. Our attack applies more generally to any sequence of γ_i's which contains a large coset of a subgroup of GF(2^n). We then investigate whether the security of PMAC can be further improved by using τ_i's that are k-wise independent, for k > 1 (the original distribution is only 1-wise independent). We observe that the security of PMAC will not increase in general, even if the masks are chosen from a 2-wise independent distribution, and then prove that the security increases to O(q^2/2^n) if the τ_i are 4-wise independent. Due to simple extension attacks, this is the best bound one can hope for, using any distribution on the masks. Whether 3-wise independence is already sufficient to get this level of security is left as an open problem. AU - Gazi, Peter AU - Pietrzak, Krzysztof Z AU - Rybar, Michal ID - 6196 IS - 2 JF - IACR Transactions on Symmetric Cryptology TI - The exact security of PMAC VL - 2016 ER - TY - CONF AB - Proofs of space (PoS) were suggested as a more ecological and economical alternative to proofs of work, which are currently used in blockchain designs like Bitcoin. The existing PoS are based on rather sophisticated graph pebbling lower bounds. Much simpler and in several respects more efficient schemes based on inverting random functions have been suggested, but they don't give meaningful security guarantees due to existing time-memory trade-offs. In particular, Hellman showed that any permutation over a domain of size N can be inverted in time T by an algorithm that is given S bits of auxiliary information whenever S · T ≈ N. For functions Hellman gives a weaker attack with S^2 · T ≈ N^2 (e.g., S = T ≈ N^{2/3}).
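As a quick sanity check on these trade-off points (our arithmetic, not part of the original abstract): for permutations $S = T \approx N^{1/2}$ gives $S \cdot T \approx N$; for functions $S = T \approx N^{2/3}$ gives
$$S^2 \cdot T \approx (N^{2/3})^2 \cdot N^{2/3} = N^2;$$
and, for the constructions below, $S = T \approx N^{k/(k+1)}$ gives $S^k \cdot T \approx N^{k^2/(k+1)} \cdot N^{k/(k+1)} = N^k$.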
To prove lower bounds, one considers an adversary who has access to an oracle f: [N] → [N] and can make T oracle queries. The best known lower bound is S · T ∈ Ω(N) and holds for random functions and permutations. We construct functions that provably require more time and/or space to invert. Specifically, for any constant k we construct a function [N] → [N] that cannot be inverted unless S^k · T ∈ Ω(N^k) (in particular, S = T ≈ N^{k/(k+1)}). Our construction does not contradict Hellman's time-memory trade-off, because it cannot be efficiently evaluated in the forward direction. However, its entire function table can be computed in time quasilinear in N, which is sufficient for the PoS application. Our simplest construction is built from a random function oracle g: [N] × [N] → [N] and a random permutation oracle f: [N] → [N] and is defined as h(x) = g(x, x′) where f(x) = π(f(x′)), with π being any involution without a fixed point, e.g. flipping all the bits. For this function we prove that any adversary who gets S bits of auxiliary information, makes at most T oracle queries, and inverts h on an ϵ fraction of outputs must satisfy S^2 · T ∈ Ω(ϵ^2 N^2). AU - Abusalah, Hamza M AU - Alwen, Joel F AU - Cohen, Bram AU - Khilko, Danylo AU - Pietrzak, Krzysztof Z AU - Reyzin, Leonid ID - 559 SN - 978-331970696-2 TI - Beyond Hellman's time-memory trade-offs with applications to proofs of space VL - 10625 ER - TY - CONF AB - For many cryptographic primitives, it is relatively easy to achieve selective security (where the adversary commits a priori to some of the choices to be made later in the attack) but appears difficult to achieve the more natural notion of adaptive security (where the adversary can make all choices on the go as the attack progresses). A series of several recent works shows how to cleverly achieve adaptive security in several such scenarios, including generalized selective decryption (Panjwani, TCC '07 and Fuchsbauer et al., CRYPTO '15), constrained PRFs (Fuchsbauer et al., ASIACRYPT '14), and Yao garbled circuits (Jafargholi and Wichs, TCC '16b). Although the above works expressed vague intuition that they share a common technique, the connection was never made precise. In this work we present a new framework that connects all of these works and allows us to present them in a unified and simplified fashion. Moreover, we use the framework to derive a new result for adaptively secure secret sharing over access structures defined via monotone circuits. We envision that further applications will follow in the future. Underlying our framework is the following simple idea. It is well known that selective security, where the adversary commits to n bits of information about his future choices, automatically implies adaptive security at the cost of amplifying the adversary's advantage by a factor of up to 2^n. However, in some cases the proof of selective security proceeds via a sequence of hybrids, where each pair of adjacent hybrids locally only requires some smaller partial information consisting of m ≪ n bits. The partial information needed might be completely different between different pairs of hybrids, and if we look across all the hybrids we might rely on the entire n-bit commitment. Nevertheless, the above is sufficient to prove adaptive security, at the cost of amplifying the adversary's advantage by a factor of only 2^m ≪ 2^n.
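The $2^m$ factor above comes from the standard guessing argument, sketched here for intuition (our rendering, not a verbatim statement from the paper): a reduction that guesses the $m$ bits of partial information relevant to one pair of adjacent hybrids guesses correctly with probability $2^{-m}$, so over a hybrid sequence $H_0, \dots, H_\ell$,
$$\mathrm{Adv}^{\mathrm{adaptive}} \;\le\; \sum_{i=1}^{\ell} 2^{m} \cdot \mathrm{Adv}^{\mathrm{selective}}_{H_{i-1},H_i},$$
in place of the $2^n$ factor paid for guessing the full commitment.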
In all of our examples using the above framework, the different hybrids are captured by some sort of graph pebbling game, and the amount of information that the adversary needs to commit to in each pair of hybrids is bounded by the maximum number of pebbles in play at any point in time. Therefore, coming up with better strategies for proving adaptive security translates to various pebbling strategies for different types of graphs. AU - Jafargholi, Zahra AU - Kamath Hosdurg, Chethan AU - Klein, Karen AU - Komargodski, Ilan AU - Pietrzak, Krzysztof Z AU - Wichs, Daniel ED - Katz, Jonathan ED - Shacham, Hovav ID - 637 SN - 978-331963687-0 TI - Be adaptive, avoid overcommitting VL - 10401 ER - TY - CONF AB - Security of cryptographic applications is typically defined by security games. The adversary, within certain resources, cannot win with probability much better than 0 (for unpredictability applications, like one-way functions) or much better than 1/2 (for indistinguishability applications, for instance encryption schemes). In so-called squared-friendly applications the winning probability of the adversary, for different values of the application's secret randomness, is not only close to 0 or 1/2 on average, but also concentrated in the sense that its second central moment is small. The class of squared-friendly applications, which contains all unpredictability applications and many indistinguishability applications, is particularly important for key derivation. Barak et al. observed that for squared-friendly applications one can beat the "RT-bound", extracting secure keys with significantly smaller entropy loss. In turn, Dodis and Yu showed that in squared-friendly applications one can directly use a "weak" key, which has only high entropy, as a secure key. In this paper we give sharp lower bounds on square security assuming security for "weak" keys. We show that any application which is either (a) secure with weak keys or (b) allows for entropy savings for keys derived by universal hashing must be squared-friendly. Quantitatively, our lower bounds match the positive results of Dodis and Yu and Barak et al. (TCC'13, CRYPTO'11). Hence, they can be understood as a general characterization of squared-friendly applications. While the positive results on squared-friendly applications were derived by one clever application of the Cauchy-Schwarz inequality, for tight lower bounds we need more machinery. In our approach we use convex optimization techniques and some theory of circulant matrices. AU - Skórski, Maciej ID - 1174 SN - 1868-8969 TI - Lower bounds on key derivation for square-friendly applications VL - 66 ER - TY - CONF AB - The algorithm Argon2i-B of Biryukov, Dinu and Khovratovich is currently being considered by the IRTF (Internet Research Task Force) as a new de facto standard for password hashing. An older version (Argon2i-A) of the same algorithm was chosen as the winner of the recent Password Hashing Competition. An important competitor to Argon2i-B is the recently introduced Balloon Hashing (BH) algorithm of Corrigan-Gibbs, Boneh and Schechter. A key security desideratum for any such algorithm is that evaluating it (even using a custom device) requires a large amount of memory amortized across multiple instances. Alwen and Blocki (CRYPTO 2016) introduced a class of theoretical attacks against Argon2i-A and BH.
While these attacks yield large asymptotic reductions in the amount of memory, it was not, a priori, clear if (1) they can be extended to the newer Argon2i-B, (2) the attacks are effective on any algorithm for practical parameter ranges (e.g., 1 GB of memory), and (3) they can be effectively instantiated against any algorithm under realistic hardware constraints. In this work we answer all three of these questions in the affirmative for all three algorithms. This is also the first work to analyze the security of Argon2i-B. In more detail, we extend the theoretical attacks of Alwen and Blocki (CRYPTO 2016) to the recent Argon2i-B proposal, demonstrating severe asymptotic deficiencies in its security. Next we introduce several novel heuristics for improving the attack's concrete memory efficiency, even when on-chip memory bandwidth is bounded. We then simulate our attacks on randomly sampled Argon2i-A, Argon2i-B and BH instances and measure the resulting memory consumption for various practical parameter ranges and for a variety of upper bounds on the amount of parallelism available to the attacker. Finally we describe, implement, and test a new heuristic for applying the Alwen-Blocki attack to functions employing a technique developed by Corrigan-Gibbs et al. for improving the concrete security of memory-hard functions. We analyze the collected data and show the effects various parameters have on the memory consumption of the attack. In particular, we can draw several interesting conclusions about the level of security provided by these functions. · For the Alwen-Blocki attack to fail against practical memory parameters, Argon2i-B must be instantiated with more than 10 passes on memory - beyond the "paranoid" parameter setting in the current IRTF proposal. · The technique of Corrigan-Gibbs for improving security can also be overcome by the Alwen-Blocki attack under realistic hardware constraints. · On a positive note, both the asymptotic and concrete security of Argon2i-B seem to improve on that of Argon2i-A. AU - Alwen, Joel F AU - Blocki, Jeremiah ID - 1176 SN - 978-150905761-0 TI - Towards practical attacks on Argon2i and balloon hashing ER - TY - JOUR AB - We construct efficient authentication protocols and message authentication codes (MACs) whose security can be reduced to the learning parity with noise (LPN) problem. Despite a large body of work—starting with the HB protocol of Hopper and Blum in 2001—until now it was not even known how to construct an efficient authentication protocol from LPN which is secure against man-in-the-middle attacks. A MAC implies such a (two-round) protocol. AU - Kiltz, Eike AU - Pietrzak, Krzysztof Z AU - Venturi, Daniele AU - Cash, David AU - Jain, Abhishek ID - 1187 IS - 4 JF - Journal of Cryptology TI - Efficient authentication from hard learning problems VL - 30 ER - TY - JOUR AB - Boldyreva, Palacio and Warinschi introduced a multiple forking game as an extension of general forking. The notion of (multiple) forking is a useful abstraction from the actual simulation of a cryptographic scheme to the adversary in a security reduction, and is achieved through the intermediary of a so-called wrapper algorithm. Multiple forking has turned out to be a useful tool in the security arguments of several cryptographic protocols. However, a reduction employing multiple forking incurs a significant degradation of O(q^{2n}), where q denotes the upper bound on the underlying random oracle calls and n
, the number of forkings. In this work we take a closer look at the reasons for the degradation, with a tighter security bound in mind. We nail down the exact set of conditions for success in the multiple forking game. A careful analysis of the cryptographic schemes and the corresponding security reductions employing multiple forking leads to the formulation of 'dependence' and 'independence' conditions pertaining to the output of the wrapper in different rounds. Based on the (in)dependence conditions we propose a general framework of multiple forking and a General Multiple Forking Lemma. Leveraging (in)dependence to the full allows us to improve the degradation factor in the multiple forking game by a factor of O(q^n). By implication, the cost of a single forking involving two random oracles (augmented forking) matches that involving a single random oracle (elementary forking). Finally, we study the effect of these observations on the concrete security of existing schemes employing multiple forking. We conclude that by careful design of the protocol (and the wrapper in the security reduction) it is possible to harness our observations to the full extent. AU - Kamath Hosdurg, Chethan AU - Chatterjee, Sanjit ID - 1177 IS - 4 JF - Algorithmica TI - A closer look at multiple-forking: Leveraging (in)dependence for a tighter bound VL - 74 ER - TY - CONF AB - Computational notions of entropy have recently found many applications, including leakage-resilient cryptography, deterministic encryption and memory delegation. The two main types of results which make computational notions so useful are (1) chain rules, which quantify by how much the computational entropy of a variable decreases if conditioned on some other variable, and (2) transformations, which quantify to what extent one type of entropy implies another. Such chain rules and transformations typically lose a significant amount in the quality of the entropy, and are the reason why, applying these results, one gets rather weak quantitative security bounds. In this paper we for the first time prove lower bounds in this context, showing that existing results for transformations are, unfortunately, basically optimal for non-adaptive black-box reductions (and it's hard to imagine how non-black-box reductions or adaptivity could be useful here). A variable X has k bits of HILL entropy of quality (ϵ,s) if there exists a variable Y with k bits of min-entropy which cannot be distinguished from X with advantage ϵ by distinguishing circuits of size s. A weaker notion is Metric entropy, where we switch quantifiers, and only require that for every distinguisher of size s, such a Y exists. We first describe our result concerning transformations. By definition, HILL implies Metric without any loss in quality. Metric entropy often comes up in applications, but must be transformed to HILL for meaningful security guarantees. The best known result states that if a variable X has k bits of Metric entropy of quality (ϵ,s), then it has k bits of HILL entropy with quality (2ϵ, s·ϵ^2). We show that this loss of a factor Ω(ϵ^{-2}) in circuit size is necessary. In fact, we show the stronger result that this loss is already necessary when transforming so-called deterministic real-valued Metric entropy to randomised boolean Metric entropy (both these variants of Metric entropy are implied by HILL without loss in quality).
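In symbols, the two definitions above differ only in the order of quantifiers (our transcription of the prose): HILL entropy requires
$$\exists Y\,\big(H_\infty(Y)\ge k\big)\;\forall D\,\big(\mathrm{size}(D)\le s\big):\;\big|\,\mathbb{E}[D(X)]-\mathbb{E}[D(Y)]\,\big|\le \epsilon,$$
while Metric entropy only requires $\forall D\,\exists Y$ with the same conditions; the transformation result just stated says that restoring the $\exists Y\,\forall D$ order costs a factor $\Theta(\epsilon^{-2})$ in circuit size.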
The chain rule for HILL entropy states that if X has k bits of HILL entropy of quality (ϵ,s), then for any variable Z of length m, X conditioned on Z has k−m bits of HILL entropy with quality (ϵ, s·ϵ^2/2^m). We show that a loss of Ω(2^m/ϵ) in circuit size is necessary here. Note that this still leaves a gap of ϵ between the known bound and our lower bound. AU - Pietrzak, Krzysztof Z AU - Skórski, Maciej ID - 1179 TI - Pseudoentropy: Lower-bounds for chain rules and transformations VL - 9985 ER - TY - CONF AB - We study the time and memory complexities of the problem of computing labels of (multiple) randomly selected challenge nodes in a directed acyclic graph. The w-bit label of a node is the hash of the labels of its parents, and the hash function is modeled as a random oracle. Specific instances of this problem underlie both proofs of space [Dziembowski et al. CRYPTO'15] as well as popular memory-hard functions like scrypt. As our main tool, we introduce the new notion of a probabilistic parallel entangled pebbling game, a new type of combinatorial pebbling game on a graph, which is closely related to the labeling game on the same graph. As a first application of our framework, we prove that for scrypt, when the underlying hash function is invoked n times, the cumulative memory complexity (CMC) (a notion recently introduced by Alwen and Serbinenko (STOC'15) to capture amortized memory-hardness for parallel adversaries) is at least Ω(w · (n/log(n))^2). This bound holds for adversaries that can store many natural functions of the labels (e.g., linear combinations), but still not arbitrary functions thereof. We then introduce and study a combinatorial quantity, and show how a sufficiently small upper bound on it (which we conjecture) extends our CMC bound for scrypt to hold against arbitrary adversaries. We also show that such an upper bound solves the main open problem for proofs-of-space protocols: namely, establishing that the time complexity of computing the label of a random node in a graph on n nodes (given an initial kw-bit state) reduces tightly to the time complexity for black pebbling on the same graph (given an initial k-node pebbling). AU - Alwen, Joel F AU - Chen, Binyi AU - Kamath Hosdurg, Chethan AU - Kolmogorov, Vladimir AU - Pietrzak, Krzysztof Z AU - Tessaro, Stefano ID - 1231 TI - On the complexity of scrypt and proofs of space in the parallel random oracle model VL - 9666 ER - TY - CONF AB - About three decades ago it was realized that implementing private channels between parties which can be adaptively corrupted requires an encryption scheme that is secure against selective opening attacks. Whether standard (IND-CPA) security implies security against selective opening attacks has been a major open question since. The only known reduction from selective opening to IND-CPA security loses an exponential factor. A polynomial reduction is only known for the very special case where the distribution considered in the selective opening security experiment is a product distribution, i.e., the messages are sampled independently from each other. In this paper we give a reduction whose loss is quantified via the dependence graph (where message dependencies correspond to edges) of the underlying message distribution. In particular, for some concrete distributions including Markov distributions, our reduction is polynomial.
AU - Fuchsbauer, Georg AU - Heuer, Felix AU - Kiltz, Eike AU - Pietrzak, Krzysztof Z ID - 1233 TI - Standard security does imply security against selective opening for Markov distributions VL - 9562 ER - TY - CONF AB - A memory-hard function (MHF) f is equipped with a space cost σ and a time cost τ such that repeatedly computing f_{σ,τ} on an application-specific integrated circuit (ASIC) is not economically advantageous relative to a general-purpose computer. Technically, we would like any (generalized) circuit for evaluating an iMHF f_{σ,τ} to have area × time (AT) complexity at least Θ(σ^2 · τ). A data-independent MHF (iMHF) has the added property that it can be computed with almost optimal memory and time complexity by an algorithm which accesses memory in a pattern independent of the input value. Such functions can be specified by fixing a directed acyclic graph (DAG) G on n = Θ(σ · τ) nodes representing its computation graph. In this work we develop new tools for analyzing iMHFs. First we define and motivate a new complexity measure capturing the amount of energy (i.e. electricity) required to compute a function. We argue that, in practice, this measure is at least as important as the more traditional AT complexity. Next we describe an algorithm A for repeatedly evaluating an iMHF based on an arbitrary DAG G. We upper bound both its energy and AT complexities per instance evaluated in terms of a certain combinatorial property of G. Next we instantiate our attack for several general classes of DAGs which include those underlying many of the most important iMHF candidates in the literature. In particular, we obtain the following results, which hold for all choices of parameters σ and τ (and thread-count) such that n = σ · τ. - The Catena-Dragonfly function of [FLW13] has AT and energy complexities O(n^{1.67}). - The Catena-Butterfly function of [FLW13] has complexities O(n^{1.67}). - The Double-Buffer and the Linear functions of [CGBS16] both have complexities in O(n^{1.67}). - The Argon2i function of [BDK15] (winner of the Password Hashing Competition [PHC]) has complexities O(n^{7/4} log(n)). - The Single-Buffer function of [CGBS16] has complexities O(n^{7/4} log(n)). - Any iMHF can be computed by an algorithm with complexities O(n^2/log^{1−ε}(n)) for all ε > 0. In particular, when τ = 1 this shows that the goal of constructing an iMHF with AT complexity Θ(σ^2 · τ) is unachievable. Along the way we prove a lemma upper-bounding the depth-robustness of any DAG, which may prove to be of independent interest. AU - Alwen, Joel F AU - Blocki, Jeremiah ID - 1365 TI - Efficiently computing data-independent memory-hard functions VL - 9815 ER - TY - CONF AB - We study the problem of devising provably secure PRNGs with input based on the sponge paradigm. Such constructions are very appealing, as efficient software/hardware implementations of SHA-3 can easily be translated into a PRNG in a nearly black-box way. The only existing sponge-based construction, proposed by Bertoni et al. (CHES 2010), fails to achieve the security notion of robustness recently considered by Dodis et al. (CCS 2013), for two reasons: (1) the construction is deterministic, and thus there are high-entropy input distributions on which the construction fails to extract random bits, and (2) the construction is not forward secure, and presented solutions aiming at restoring forward security have not been rigorously analyzed.
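To fix ideas, a toy sketch of a sponge-style PRNG with input (the parameters, permutation, and class name are illustrative stand-ins of ours; the seeded construction proposed next differs in essential details):

import hashlib

RATE, CAPACITY = 16, 16            # toy byte-level parameters
WIDTH = RATE + CAPACITY

def permutation(state: bytes) -> bytes:
    # Toy stand-in for the public permutation (Keccak-f in SHA-3).
    return hashlib.sha256(state).digest()[:WIDTH]

class SpongePRNG:
    def __init__(self):
        self.state = bytes(WIDTH)

    def absorb(self, data: bytes):
        # Feed (weakly random) input into the rate part, then permute.
        for i in range(0, len(data), RATE):
            block = data[i:i + RATE].ljust(RATE, b"\x00")
            outer = bytes(a ^ b for a, b in zip(self.state[:RATE], block))
            self.state = permutation(outer + self.state[RATE:])

    def squeeze(self) -> bytes:
        # Output the rate part, then permute; forward security requires
        # additional state erasure, one of the issues raised above.
        out = self.state[:RATE]
        self.state = permutation(self.state)
        return out

prng = SpongePRNG()
prng.absorb(b"weakly random environmental input")
print(prng.squeeze().hex())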
We propose a seeded variant of Bertoni et al.'s PRNG with input which we prove secure in the sense of robustness, delivering in particular concrete security bounds. On the way, we make what we believe to be an important conceptual contribution, developing a variant of the security framework of Dodis et al. tailored to the ideal permutation model that captures PRNG security in settings where the weakly random inputs are provided from a large class of possible adversarial samplers which are also allowed to query the random permutation. As a further application of our techniques, we also present an efficient sponge-based key-derivation function (which can be instantiated from SHA-3 in a black-box fashion), which we also prove secure when fed with samples from permutation-dependent distributions. AU - Gazi, Peter AU - Tessaro, Stefano ID - 1366 TI - Provably robust sponge-based PRNGs and KDFs VL - 9665 ER - TY - JOUR AB - A modular approach to constructing cryptographic protocols leads to simple designs but often inefficient instantiations. On the other hand, ad hoc constructions may yield efficient protocols at the cost of losing conceptual simplicity. We suggest a new design paradigm, structure-preserving cryptography, that provides a way to construct modular protocols with reasonable efficiency while retaining conceptual simplicity. A cryptographic scheme over a bilinear group is called structure-preserving if its public inputs and outputs consist of elements of the bilinear groups and their consistency can be verified by evaluating pairing-product equations. As structure-preserving schemes smoothly interoperate with each other, they are useful as building blocks in the modular design of cryptographic applications. This paper introduces structure-preserving commitment and signature schemes over bilinear groups with several desirable properties. The commitment schemes include homomorphic, trapdoor and length-reducing commitments to group elements, and the structure-preserving signature schemes are the first ones that yield constant-size signatures on multiple group elements. A structure-preserving signature scheme is called automorphic if the public keys lie in the message space, which cannot be achieved by compressing inputs via a cryptographic hash function, as this would destroy the mathematical structure we are trying to preserve. Automorphic signatures can be used for building certification chains underlying privacy-preserving protocols. Among the vast number of applications of structure-preserving protocols, we present an efficient round-optimal blind-signature scheme and a group signature scheme with an efficient and concurrently secure protocol for enrolling new members. AU - Abe, Masayuki AU - Fuchsbauer, Georg AU - Groth, Jens AU - Haralambiev, Kristiyan AU - Ohkubo, Miyako ID - 1592 IS - 2 JF - Journal of Cryptology TI - Structure-preserving signatures and commitments to group elements VL - 29 ER - TY - CONF AB - At Crypto 2015, Fuchsbauer, Hanser and Slamanig (FHS) presented the first standard-model construction of efficient round-optimal blind signatures that does not require complexity leveraging. It is conceptually simple and builds on the primitive of structure-preserving signatures on equivalence classes (SPS-EQ). FHS prove the unforgeability of their scheme assuming EUF-CMA security of the SPS-EQ scheme and hardness of a version of the DH inversion problem. Blindness under adversarially chosen keys is proven under an interactive variant of the DDH assumption.
We propose a variant of their scheme whose blindness can be proven under a non-interactive assumption, namely a variant of the bilinear DDH assumption. We moreover prove its unforgeability assuming only unforgeability of the underlying SPS-EQ, but no additional assumptions as needed for the FHS scheme. AU - Fuchsbauer, Georg AU - Hanser, Christian AU - Kamath Hosdurg, Chethan AU - Slamanig, Daniel ID - 1225 TI - Practical round-optimal blind signatures in the standard model from weaker assumptions VL - 9841 ER -