10.1007/978-3-642-40041-4_4
Alwen, Joel F
Krenn, Stephan
Pietrzak, Krzysztof Z
Wichs, Daniel
Learning with rounding, revisited: New reduction properties and applications
LNCS
Springer
2013
2018-12-11T11:56:37Z
2020-01-16T12:36:31Z
conference
https://research-explorer.app.ist.ac.at/record/2259
https://research-explorer.app.ist.ac.at/record/2259.json
587898 bytes
application/pdf
The learning with rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen at EUROCRYPT ’12, is a variant of learning with errors (LWE), where one replaces random errors with deterministic rounding. The LWR problem was shown to be as hard as LWE for a setting of parameters where the modulus and modulus-to-error ratio are super-polynomial. In this work we resolve the main open problem and give a new reduction that works for a larger range of parameters, allowing for a polynomial modulus and modulus-to-error ratio. In particular, a smaller modulus gives us greater efficiency, and a smaller modulus-to-error ratio gives us greater security, which now follows from the worst-case hardness of GapSVP with polynomial (rather than super-polynomial) approximation factors.
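To make the rounding idea concrete, here is a minimal illustrative sketch of drawing one LWR sample: instead of adding a random error to the inner product ⟨a, s⟩ mod q (as in LWE), the value is deterministically rounded from Z_q down to a smaller ring Z_p. The concrete parameters (q = 1024, p = 8) and helper names below are illustrative choices, not ones taken from the paper.

```python
import random

def round_q_to_p(x, q, p):
    """Round an element x of Z_q to the nearest element of Z_p.

    Uses exact integer arithmetic: floor((p*x + q/2) / q) mod p,
    i.e. round-to-nearest of (p/q) * x.
    """
    return ((p * (x % q) + q // 2) // q) % p

def lwr_sample(s, q, p, rng=random):
    """One LWR sample for secret s in Z_q^n.

    Returns (a, b) where a is uniform in Z_q^n and
    b = round_q_to_p(<a, s> mod q) -- no random error term is added,
    which is exactly what distinguishes LWR from LWE.
    """
    a = [rng.randrange(q) for _ in range(len(s))]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    return a, round_q_to_p(inner, q, p)
```

Because the error is replaced by deterministic rounding, the same secret and the same vector a always yield the same sample, which is what makes LWR useful for deterministic primitives such as the deterministic encryption and reusable extractors mentioned below.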
As a tool in the reduction, we show that there is a “lossy mode” for the LWR problem, in which LWR samples only reveal partial information about the secret. This property gives us several interesting new applications, including a proof that LWR remains secure with weakly random secrets of sufficient min-entropy, and very simple constructions of deterministic encryption, lossy trapdoor functions and reusable extractors.
Our approach is inspired by a technique of Goldwasser et al. from ICS ’10, which implicitly showed the existence of a “lossy mode” for LWE. By refining this technique, we also improve on the parameters of that work, requiring only a polynomial (instead of super-polynomial) modulus and modulus-to-error ratio.