TY - THES
AB - A search problem lies in the complexity class FNP if a solution to a given instance of the problem can be verified efficiently. The complexity class TFNP consists of all search problems in FNP that are total, in the sense that a solution is guaranteed to exist. TFNP contains a host of interesting problems from fields such as algorithmic game theory, computational topology, number theory and combinatorics. Since TFNP is a semantic class, it is unlikely to have a complete problem. Instead, one studies its syntactic subclasses, which are defined based on the combinatorial principle used to argue totality. Of particular interest is the subclass PPAD, for which important problems such as computing a Nash equilibrium in a bimatrix game and the computational counterparts of several fixed-point theorems are complete. In this thesis, we undertake the study of the average-case hardness of TFNP, and in particular of its subclass PPAD. Almost nothing was known about the average-case hardness of PPAD before a series of recent results showed how to achieve it using a cryptographic primitive called program obfuscation. However, it is currently not known how to construct program obfuscation from standard cryptographic assumptions. Therefore, it is desirable to relax the assumption under which the average-case hardness of PPAD can be shown. In this thesis we take a step in this direction. First, we show that, assuming the (average-case) hardness of a number-theoretic problem related to the factoring of integers, which we call Iterated-Squaring, PPAD is hard on average in the random-oracle model. We then strengthen this result to show that the average-case hardness of PPAD reduces to the (adaptive) soundness of the Fiat-Shamir transform, a well-known technique used to compile a public-coin interactive protocol into a non-interactive one. As a corollary, we obtain average-case hardness for PPAD in the random-oracle model assuming the worst-case hardness of #SAT. Moreover, the above results can all be strengthened to obtain average-case hardness for the class CLS ⊆ PPAD. Our main technical contribution is the construction of incrementally-verifiable procedures for computing Iterated-Squaring and #SAT. By incrementally-verifiable, we mean that every intermediate state of the computation includes a proof of its correctness, and that the proof can be updated and verified in polynomial time. Previous constructions of such procedures relied on strong, non-standard assumptions. Instead, we introduce a technique called recursive proof-merging to obtain the same from weaker assumptions.
AU - Kamath Hosdurg, Chethan
ID - 7896
SN - 2663-337X
TI - On the average-case hardness of total search problems
ER -
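For intuition, the Iterated-Squaring problem named in the abstract above has a compact description: given (N, x, T), compute x^(2^T) mod N. The following is a minimal Python sketch (the function names and toy parameters are ours, not the thesis's) contrasting the conjectured-sequential computation with the fast trapdoor computation available to anyone who knows the factorization of N:

```python
# Iterated-Squaring: given (N, x, T), compute x^(2^T) mod N.
# Without the factorization of N, T sequential squarings are
# conjectured to be necessary.
def iterated_squaring(x: int, T: int, N: int) -> int:
    for _ in range(T):
        x = x * x % N
    return x

# With the trapdoor (the factorization N = p*q), Euler's theorem lets us
# reduce the exponent 2^T modulo phi(N) and finish with two fast modular
# exponentiations (assumes gcd(x, N) = 1).
def iterated_squaring_trapdoor(x: int, T: int, p: int, q: int) -> int:
    N, phi = p * q, (p - 1) * (q - 1)
    e = pow(2, T, phi)
    return pow(x, e, N)

if __name__ == "__main__":
    p, q = 1000003, 1000033  # toy primes, far too small for security
    assert iterated_squaring(5, 1 << 12, p * q) == \
           iterated_squaring_trapdoor(5, 1 << 12, p, q)
```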
TY - JOUR
AB - Cryptographic security is usually defined as a guarantee that holds except when a bad event with negligible probability occurs, and nothing is guaranteed in that bad case. However, in settings where such failure can happen with substantial probability, one needs to provide guarantees even for the bad case. A typical example is where a (possibly weak) password is used instead of a secure cryptographic key to protect a session, the bad event being that the adversary correctly guesses the password. In a situation with multiple such sessions, a per-session guarantee is desired: any session for which the password has not been guessed remains secure, independently of whether other sessions have been compromised. A new formalism for stating such gracefully degrading security guarantees is introduced and applied to analyze the examples of password-based message authentication and password-based encryption. While a natural per-message guarantee is achieved for authentication, the situation for password-based encryption is more delicate: a per-session confidentiality guarantee only holds against attackers for which the distribution of password-guessing effort over the sessions is known in advance. In contrast, for more general attackers without such a restriction, a strong, composable notion of security cannot be achieved.
AU - Demay, Gregory
AU - Gazi, Peter
AU - Maurer, Ueli
AU - Tackmann, Bjorn
ID - 5887
IS - 1
JF - Journal of Computer Security
SN - 0926-227X
TI - Per-session security: Password-based cryptography revisited
VL - 27
ER -

TY - CONF
AB - We construct a verifiable delay function (VDF) by showing how the Rivest-Shamir-Wagner time-lock puzzle can be made publicly verifiable. Concretely, we give a statistically sound public-coin protocol to prove that a tuple (N, x, T, y) satisfies y = x^(2^T) (mod N), where the prover does not know the factorization of N and its running time is dominated by solving the puzzle, that is, computing x^(2^T), which is conjectured to require T sequential squarings. To get a VDF we make this protocol non-interactive using the Fiat-Shamir heuristic. The motivation for this work comes from the Chia blockchain design, which uses a VDF as a key ingredient. For typical parameters (T ≤ 2^40, 2048-bit N), our proofs are of size around 10 KB, verification costs around three RSA exponentiations, and computing the proof is 8000 times faster than solving the puzzle, even without any parallelism.
AU - Pietrzak, Krzysztof Z
ID - 6528
SN - 1868-8969
T2 - 10th Innovations in Theoretical Computer Science Conference
TI - Simple verifiable delay functions
VL - 124
ER -
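To make the halving idea behind the abstract above concrete, here is a minimal Python sketch. It is our own simplification: toy parameters, a bare SHA-256 challenge, and none of the group-theoretic safeguards the paper relies on for soundness, so it illustrates the structure only. A claim (x, T, y) with y = x^(2^T) mod N is repeatedly folded in half around the midpoint μ = x^(2^(T/2)) until it can be checked with a single squaring, with Fiat-Shamir supplying the challenges:

```python
import hashlib

def eval_vdf(x: int, T: int, N: int) -> int:
    """Solve the time-lock puzzle: T sequential squarings mod N."""
    for _ in range(T):
        x = x * x % N
    return x

def challenge(x: int, T: int, y: int, mu: int, N: int) -> int:
    """Fiat-Shamir: derive the verifier's challenge from the claim itself."""
    h = hashlib.sha256(repr((x, T, y, mu, N)).encode()).digest()
    return int.from_bytes(h[:16], "big")

def prove(x: int, T: int, y: int, N: int) -> list[int]:
    """Fold the claim y = x^(2^T) in half until T = 1 (T a power of two).
    A real prover caches intermediate squarings instead of recomputing mu."""
    proof = []
    while T > 1:
        mu = eval_vdf(x, T // 2, N)      # midpoint x^(2^(T/2))
        r = challenge(x, T, y, mu, N)
        x = pow(x, r, N) * mu % N        # merge the two half-claims ...
        y = pow(mu, r, N) * y % N        # ... into one claim of length T/2
        T //= 2
        proof.append(mu)
    return proof

def verify(x: int, T: int, y: int, N: int, proof: list[int]) -> bool:
    """Replay the folding; the final claim needs just one squaring to check."""
    for mu in proof:
        r = challenge(x, T, y, mu, N)
        x = pow(x, r, N) * mu % N
        y = pow(mu, r, N) * y % N
        T //= 2
    return T == 1 and y == x * x % N

if __name__ == "__main__":
    N = 1000003 * 1000033                # toy modulus; in practice an RSA
    x, T = 5, 1 << 10                    # modulus of unknown factorization
    y = eval_vdf(x, T, N)
    assert verify(x, T, y, N, prove(x, T, y, N))
```

Verification here does one folding step per halving, i.e. O(log T) small exponentiations, which is why proofs stay short while solving the puzzle still costs T sequential squarings.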
TY - CHAP
AB - Randomness is an essential part of any secure cryptosystem, but many constructions rely on distributions that are not uniform. This is particularly true for lattice-based cryptosystems, which more often than not make use of discrete Gaussian distributions over the integers. For practical purposes it is crucial to evaluate the impact that approximation errors have on the security of a scheme, to provide the best possible trade-off between security and performance. Recent years have seen surprising results allowing the use of relatively low precision while maintaining high levels of security. A key insight in these results is that sampling a distribution with low relative error can provide very strong security guarantees. Since floating point numbers provide guarantees on the relative approximation error, they seem a suitable tool in this setting, but it is not obvious which sampling algorithms can actually profit from them. While previous works have shown that inversion sampling can be adapted to provide a low relative error (Pöppelmann et al., CHES 2014; Prest, ASIACRYPT 2017), other works have called into question whether this is possible for other sampling techniques (Zheng et al., Eprint report 2018/309). In this work, we consider all sampling algorithms that are popular in the cryptographic setting and analyze the relationship between floating point precision and the resulting relative error. We show that all of the algorithms either natively achieve a low relative error or can be adapted to do so.
AU - Walter, Michael
ED - Buchmann, J
ED - Nitaj, A
ED - Rachidi, T
ID - 6726
SN - 0302-9743
T2 - Progress in Cryptology – AFRICACRYPT 2019
TI - Sampling the integers with low relative error
VL - 11627
ER -

TY - CONF
AB - It is well established that the notion of min-entropy fails to satisfy the \emph{chain rule} of the form H(X,Y) = H(X|Y) + H(Y) known for Shannon entropy. Such a property would help to analyze how min-entropy is split among smaller blocks. Problems of this kind arise, for example, when constructing extractors and dispersers. We show that any sequence of variables exhibits a very strong block-source structure (conditional distributions of blocks are nearly flat) when we \emph{spoil few correlated bits}. This implies, conditioned on the spoiled bits, that \emph{splitting-recombination properties} hold. In particular, we obtain many nice properties that min-entropy does not obey in general, for example strong chain rules, "information can't hurt" inequalities, equivalences of average- and worst-case conditional entropy definitions, and others. Quantitatively, for any sequence X_1, …, X_t of random variables over an alphabet X we prove that, when conditioned on m = t · O(log log |X| + log log(1/ϵ) + log t) bits of auxiliary information, all conditional distributions of the form X_i|X