Learning parity with noise (LPN)

tl;dr: A very useful cryptographic assumption that is related to coding theory.

$ \def\F{\mathbb{F}} \def\bydef{\stackrel{\mathsf{def}}{=}} \def\randget{\stackrel{\$}{\gets}} \def\b{\mathbf{b}} \def\e{\mathbf{e}} \def\s{\mathbf{s}} \def\A{\mathbf{A}} \def\E{\mathcal{E}} \def\G{\mathbf{G}} $

Preliminaries

  • bolded, lowercase variables (e.g., $\s$) denote column vectors in $\F^m\bydef \F^{m\times 1}$.
  • $n$ is the dimension of the secret
  • $m$ is the number of samples

Introduction

The learning parity with noise (LPN) assumption was proposed in a 1993 CRYPTO paper by Blum, Furst, Kearns and Lipton [1].

Informally, the computational variant says that, for a public matrix $\A \in \F^{m \times n}$, a secret vector $\s\in \F^n$ and noise $\e\in \F^m$ sampled from an error distribution $\E$, it is hard to recover $\s$ from $(\A,\A\s+\e)$. (Typically, $\F = \F_2$ and $\E$ sets each entry of $\e$ to 1 independently with some small probability $\tau < 1/2$.)
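
To make this concrete, here is a minimal sketch (in Python with NumPy, not from the paper) of sampling a computational LPN instance, assuming the typical instantiation above ($\F = \F_2$, i.i.d. Bernoulli noise); the parameter names `n`, `m` and `tau` are illustrative.

```python
import numpy as np

def sample_lpn_instance(n: int, m: int, tau: float, rng=None):
    """Sample a computational LPN instance (A, b = A s + e) over F_2.

    Illustrative choices: F = F_2 and i.i.d. Bernoulli(tau) noise;
    n is the secret's dimension, m is the number of samples.
    """
    rng = rng or np.random.default_rng()
    A = rng.integers(0, 2, size=(m, n), dtype=np.int64)  # public matrix, uniform over F_2^{m x n}
    s = rng.integers(0, 2, size=n, dtype=np.int64)       # secret vector in F_2^n
    e = (rng.random(m) < tau).astype(np.int64)           # Bernoulli(tau) noise in F_2^m
    b = (A @ s + e) % 2                                  # noisy parities of s, arithmetic mod 2
    return A, b, s, e

# Example: 128-bit secret, 512 samples, 12.5% noise rate.
A, b, s, e = sample_lpn_instance(n=128, m=512, tau=0.125)
```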

There is also a decisional variant, which says it is hard to distinguish $(\A,\A\s+\e)$ from $(\A,\b)$, where $\b\randget \F^m$ is sampled uniformly at random and $\A$, $\s$ and $\e$ are sampled as above.
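
The decisional variant can be pictured as a guessing game: a challenger flips a coin and hands out either noisy parities or a uniformly random vector. Below is a minimal sketch of such a challenger, under the same $\F_2$/Bernoulli assumptions as before.

```python
import numpy as np

def decisional_lpn_challenge(n: int, m: int, tau: float, rng=None):
    """Return (A, b) where b is either A s + e or uniform, each with prob. 1/2.

    Decisional LPN asserts that no efficient algorithm can guess which
    case occurred with advantage noticeably better than 1/2.
    """
    rng = rng or np.random.default_rng()
    A = rng.integers(0, 2, size=(m, n), dtype=np.int64)
    if rng.integers(2) == 0:                             # "real" case: noisy parities
        s = rng.integers(0, 2, size=n, dtype=np.int64)
        e = (rng.random(m) < tau).astype(np.int64)
        b = (A @ s + e) % 2
    else:                                                # "random" case: uniform b
        b = rng.integers(0, 2, size=m, dtype=np.int64)
    return A, b
```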

There is also a dual LPN variant, introduced by Micciancio and Mol [2], which says that for a public matrix $\G\randget \mathcal{G}$, where $\mathcal{G}$ is a generator distribution, and noise $\e$ sampled from $\E$, it is hard to recover $\e$ from $(\G,\G\e)$.
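
Here is a minimal sketch of sampling a dual LPN instance, assuming, purely for illustration, a uniform generator distribution $\mathcal{G}$ over $\F_2^{k\times m}$ and Bernoulli noise; the dimension `k` is a hypothetical parameter, not fixed by the statement above.

```python
import numpy as np

def sample_dual_lpn_instance(k: int, m: int, tau: float, rng=None):
    """Sample a dual LPN instance (G, G e) over F_2.

    Illustrative choices only: G is drawn uniformly from F_2^{k x m}
    (the actual assumption allows a more general generator distribution)
    and e is i.i.d. Bernoulli(tau) noise.
    """
    rng = rng or np.random.default_rng()
    G = rng.integers(0, 2, size=(k, m), dtype=np.int64)  # "generator" matrix G
    e = (rng.random(m) < tau).astype(np.int64)           # noise vector in F_2^m
    y = (G @ e) % 2                                      # compressed noise G e
    return G, y, e
```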

References

For cited works, see below 👇👇

  1. Cryptographic Primitives Based on Hard Learning Problems, by Avrim Blum, Merrick Furst, Michael Kearns and Richard J. Lipton, in Advances in Cryptology – CRYPTO '93, 1994

  2. Pseudorandom Knapsacks and the Sample Complexity of LWE Search-to-Decision Reductions, by Daniele Micciancio and Petros Mol, in Advances in Cryptology – CRYPTO 2011, 2011