DeKART: ZK range proofs from univariate polynomials

 

tl;dr: We fix up our previous non-ZK, univariate DeKART scheme and also speed up its verifier by trading off prover time. This is joint work with Dan Boneh, Trisha Datta, Kamilla Nazirkhanova and Rex Fernando.

$ % \def\one#1{\left[#1\right]_\textcolor{green}{1}} \def\two#1{\left[#1\right]_\textcolor{red}{2}} \def\three#1{\left[#1\right]_\textcolor{blue}{\top}} \def\pair#1#2{e\left(#1, #2\right)} \def\GGen{\mathsf{GGen}} $

$ \def\FS{\mathcal{FS}} \def\fsget{\stackrel{\FS}{\leftarrow}} \def\FSo{ { \FS(\cdot) } } $

$ % % Field operations % % #1 is the number of field additions \def\Fadd#1{#1\ \green{\F^+}} % #1 is the number of field multiplications \def\Fmul#1{#1\ \red{\F}^\red{\times}} % % Abstract group % % #1 is the group % #2 is the # of group additions \def\Gadd#1#2{#2\ \green{#1}^\green{+}} % #2 is the # of scalar muls \def\Gmul#1#2{#2\ \orange{#1}^\orange{\times}} % #2 is the MSM size \def\msm#1#2{\red{#1}^{#2}} % do not use directly use either \fmsm or \vmsm \def\vmsm#1#2{\red{\mathsf{var}}\text{-}\msm{#1}{#2}} \def\fmsm#1#2{\msm{#1}{#2}} \def\fmsmSmall#1#2#3{\fmsm{#1}{#2}/{#3}} % ...#3 is the max scalar size \def\vmsmSmall#1#2#3{\vmsm{#1}{#2}/{#3}} % % \mathbb{G} group % \def\GaddG#1{\Gadd{\Gr}{#1}} \def\GmulG#1{\Gmul{\Gr}{#1}} \def\msmG#1{\msm{\Gr}{#1}} \def\vmsmG#1{\vmsm{\Gr}{#1}} \def\fmsmG#1{\fmsm{\Gr}{#1}} \def\fmsmGSmall#1#2{\fmsmSmall{\Gr}{#1}/{#2}} \def\vmsmGSmall#1#2{\vmsmSmall{\Gr}{#1}/{#2}} % % G_1 group % % Note: replicating the colors here because cannot get subscript to align with superscript (e.g., $\msmOne{n}$ would render akwardly) \def\GaddOne#1{\Gadd{\Gr}{#1}_\green{1}} \def\GmulOne#1{\Gmul{\Gr}{#1}_\orange{1}} \def\msmOne#1{\msm{\Gr}{#1}_\red{1}} \def\vmsmOne#1{\vmsm{\Gr}{#1}_\red{1}} \def\fmsmOne#1{\fmsm{\Gr}{#1}_\red{1}} \def\fmsmOneSmall#1#2{\fmsmSmall{\Gr}{#1}_\red{1}/{#2}} \def\vmsmOneSmall#1#2{\vmsmSmall{\Gr}{#1}_\red{1}/{#2}} % % G_2 group % % Note: same replication as for G_1 \def\GaddTwo#1{\Gadd{\Gr}{#1}_\green{2}} \def\GmulTwo#1{\Gmul{\Gr}{#1}_\orange{2}} \def\msmTwo#1{\msm{\Gr}{#1}_\red{2}} \def\vmsmTwo#1{\vmsm{\Gr}{#1}_\red{2}} \def\fmsmTwo#1{\fmsm{\Gr}{#1}_\red{2}} \def\fmsmTwoSmall#1#2{\fmsmSmall{\Gr}{#1}_\red{2}/{#2}} \def\vmsmTwoSmall#1#2{\vmsmSmall{\Gr}{#1}_\red{2}/{#2}} % % Target group % % Note: same replication as for G_1 \def\GaddTarget#1{\Gadd{\Gr}{#1}_\green{T}} \def\GmulTarget#1{\Gmul{\Gr}{#1}_\orange{T}} \def\msmTarget#1{\msm{\Gr}{#1}_\red{T}} \def\vmsmTarget#1{\vmsm{\Gr}{#1}_\red{T}} 
\def\fmsmTarget#1{\fmsm{\Gr}{#1}_\red{T}} \def\fmsmTargetSmall#1#2{\fmsmSmall{\Gr}{#1}_\red{T}/{#2}} \def\vmsmTargetSmall#1#2{\vmsmSmall{\Gr}{#1}_\red{T}/{#2}} % % A single pairing \def\pairing{\mathbb{P}} % #1 is the # of pairings \def\multipair#1{\mathbb{P}^{#1}} $

$ \def\crs#1{\textcolor{green}{#1}} \def\tauOne{\crs{\one{\tau}}} \def\tauTwo{\crs{\two{\tau}}} \def\xiOne{\crs{\one{\xi}}} \def\xiTwo{\crs{\two{\xi}}} % \def\dekart{\mathsf{DeKART}} \def\dekartUni{\dekart^\mathsf{FFT}} \def\dekartSetup{\dekart.\mathsf{Setup}} \def\dekartProve{\dekart.\mathsf{Prove}} % \def\bad#1{\textcolor{red}{\text{#1}}} \def\good#1{\textcolor{green}{\text{#1}}} % \def\S{\mathbb{S}} \def\lagrS{\mathcal{S}} \def\sOne#1{\crs{\one{\lagrS_{#1}(\tau)}}} \def\VS{V^*_\S} \def\vanishSfrac{\frac{X^{n+1} - 1}{X - 1}} \def\vanishS{(X^{n+1} - 1)/(X - 1)} % \def\L{\mathbb{L}} \def\lagrL{\mathcal{L}} \def\lOne#1{\crs{\one{\lagrL_{#1}(\tau)}}} \def\VL{V^*_\L} \def\vanishL{\frac{X^L - 1}{X - 1}} % \def\vkHkzg{\vk_\mathsf{HKZG}} \def\hkzgSetup{\mathsf{HKZG.Setup}} \def\hkzgCommit{\mathsf{HKZG.Commit}} \def\hkzgOpen{\mathsf{HKZG.Open}} \def\hkzgVerify{\mathsf{HKZG.Verify}} % \def\piPok{\pi_\mathsf{PoK}} \def\zkpokProve{\Sigma_\mathsf{PoK}.\mathsf{Prove}} \def\zkpokVerify{\Sigma_\mathsf{PoK}.\mathsf{Verify}} \def\relPok{\mathcal{R}_\mathsf{pok}} $

Notation

The notation for this blog post is the same as in the old post.

  • Let $\GGen(1^\lambda) \rightarrow \mathcal{G}$ denote a probabilistic polynomial-time algorithm that outputs bilinear groups $\mathcal{G} \bydef (\Gr_1, \Gr_2, \Gr_T)$ of prime order $p\approx 2^{2\lambda}$, denoted additively, such that:
    • $\one{1}$ generates $\Gr_1$
    • $\two{1}$ generates $\Gr_2$
    • $\three{1}$ generates $\Gr_T$
  • We use $\one{a}\bydef a\cdot \one{1}$ and $\two{b}\bydef b\cdot \two{1}$ and $\three{c}\bydef c\cdot \three{1}$ to denote multiplying a scalar by the group generator
  • Let $\F$ denote the scalar field of order $p$ associated with the bilinear groups

  • For time complexities, we use:
    • $\Fadd{n}$ for $n$ field additions in $\F$
    • $\Fmul{n}$ for $n$ field multiplications in $\F$

    • $\Gadd{\Gr}{n}$ for $n$ additions in $\Gr$
    • $\Gmul{\Gr}{n}$ for $n$ individual scalar multiplications in $\Gr$
    • $\fmsm{\Gr}{n}$ for a size-$n$ MSM in $\Gr$ where the group element bases are known ahead of time (i.e., fixed-base)
      • when the scalars are always from a set $S$, then we use $\fmsmSmall{\Gr}{n}{S}$
    • $\vmsm{\Gr}{n}$ for a size-$n$ MSM in $\Gr$ where the group element bases are not known ahead of time (i.e., variable-base)
      • when the scalars are always from a set $S$, then we use $\vmsmSmall{\Gr}{n}{S}$
    • $\pairing$ for one pairing
    • $\multipair{n}$ for a size-$n$ multipairing
  • We use $a\fsget S$ to denote sampling from a set $S$ in a deterministic manner using the Fiat-Shamir transcript $\FS$ derived so far

Preliminary: ZKPoKs

We assume a ZK PoK for the following relation: \begin{align} \term{\relPok}(X, X_1, X_2; w_1, w_2) = 1 \Leftrightarrow X = w_1 \cdot X_1 + w_2 \cdot X_2 \end{align}

What kind of soundness assumption do we need? Is the 2-special soundness of $\Sigma_\mathsf{PoK}$ enough?

$\Sigma_\mathsf{PoK}.\mathsf{Prove}(X, X_1, X_2; w_1, w_2)\rightarrow \pi$

Step 1: Add $(X, X_1, X_2)$ to the $\FS$ transcript.

Step 2: Compute the commitment: \begin{align} x_1, x_2 &\randget \F\\
\term{A} &\gets x_1 \cdot X_1 + x_2 \cdot X_2 \end{align}

Step 3: Add $A$ to the $\FS$ transcript.

Step 4: Derive the challenge pseudo-randomly via Fiat-Shamir: \begin{align} \term{e} &\fsget \F \end{align}

Step 5: Compute the response \begin{align} \term{\sigma_1} &\gets x_1 - e \cdot w_1\\
\term{\sigma_2} &\gets x_2 - e \cdot w_2 \end{align}

The final proof is: \begin{align} \pi \gets (A, \sigma_1, \sigma_2) \in \Gr\times \F^2 \end{align}

$\Sigma_\mathsf{PoK}.\mathsf{Verify}(X, X_1, X_2; \pi)$

Step 0: Parse the proof as: \begin{align} (A, \sigma_1, \sigma_2) \parse \pi \end{align}

Step 1: Add $(X, X_1, X_2)$ to the $\FS$ transcript.

Step 2: Add $A$ to the $\FS$ transcript.

Step 3: Derive the challenge pseudo-randomly via Fiat-Shamir: \begin{align} \term{e} &\fsget \F \end{align}

Step 4: Add $(\sigma_1,\sigma_2)$ to the $\FS$ transcript.

Step 5: Check the proof: \begin{align} \textbf{assert}\ A \equals e\cdot X + \sigma_1 \cdot X_1 + \sigma_2 \cdot X_2 \end{align}

Correctness holds because: \begin{align} A &\equals e\cdot X + \sigma_1 \cdot X_1 + \sigma_2 \cdot X_2\Leftrightarrow\\
x_1 X_1 + x_2 X_2 &\equals e\cdot (w_1 X_1 + w_2 X_2) + (x_1 - e w_1) \cdot X_1 + (x_2 - e w_2) \cdot X_2\Leftrightarrow\\
x_1 X_1 + x_2 X_2 &\equals e w_1 X_1 + e w_2 X_2 + x_1 X_1 - e w_1 X_1 + x_2 X_2 - e w_2 X_2\Leftrightarrow\\
x_1 X_1 + x_2 X_2 &\equals x_1 X_1 + x_2 X_2\Leftrightarrow 1 \stackrel{!}{=} 1 \end{align}
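To make the algebra above concrete, here is a minimal Python sketch of $\Sigma_\mathsf{PoK}$ over a toy additive group (scalars mod a small prime standing in for $\Gr$; the modulus and the SHA-256-based Fiat–Shamir hash are our own toy choices). It is not hiding or secure in any way; it only sanity-checks the verification identity and the transcript flow.

```python
import hashlib
import secrets

# Toy parameters (assumption): "group elements" X, X1, X2 are plain scalars mod P,
# so this is NOT a secure group -- it only checks the Sigma-protocol algebra.
P = 2**61 - 1

def fs_challenge(*transcript):
    """Derive the challenge e deterministically from the transcript so far."""
    digest = hashlib.sha256(b"|".join(str(t).encode() for t in transcript)).digest()
    return int.from_bytes(digest, "big") % P

def pok_prove(X, X1, X2, w1, w2):
    x1, x2 = secrets.randbelow(P), secrets.randbelow(P)  # Step 2: random masks
    A = (x1 * X1 + x2 * X2) % P                          # Step 2: commitment
    e = fs_challenge(X, X1, X2, A)                       # Steps 3-4: challenge
    sigma1 = (x1 - e * w1) % P                           # Step 5: responses
    sigma2 = (x2 - e * w2) % P
    return (A, sigma1, sigma2)

def pok_verify(X, X1, X2, pi):
    A, sigma1, sigma2 = pi
    e = fs_challenge(X, X1, X2, A)
    # Step 5: A =? e*X + sigma1*X1 + sigma2*X2
    return A == (e * X + sigma1 * X1 + sigma2 * X2) % P
```

For $X = w_1 \cdot X_1 + w_2 \cdot X_2$ the check passes exactly by the derivation above; tampering with a response shifts the right-hand side by a multiple of $X_1$ or $X_2$ and fails.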

Preliminary: Hiding KZG

This hiding KZG variant was (first?) introduced in the Zeromorph paper1.

$\mathsf{HKZG.Setup}(m; \mathcal{G}, \xi, \tau) \rightarrow (\mathsf{vk},\mathsf{ck})$

The algorithm is given:

  1. a bilinear group $\term{\mathcal{G}}$ with generators $\one{1},\two{1},\three{1}$ and associated field $\F$, as explained in the preliminaries
  2. random trapdoors $\term{\xi,\tau}\in \F$

Pick a primitive $m$th root of unity $\term{\theta}$ and let: \begin{align} \term{\mathbb{H}} &\bydef \{\theta^0, \theta^1, \ldots, \theta^{m-1}\}\\
\term{\ell_i(X)} &\bydef \prod_{j\in[m), j\ne i} \frac{X - \theta^j}{\theta^i - \theta^j} \end{align}

Return the public parameters: \begin{align} \vk &\gets (\xiTwo, \tauTwo)\\
\ck &\gets (\xiOne, \tauOne, (\crs{\one{\ell_i(\tau)}})_{i\in[m)}) \end{align}

Note: We assume that the bilinear group $\mathcal{G}$ is implicitly part of the VK and CK above.

$\mathsf{HKZG.Commit}(\mathsf{ck}, f; \rho) \rightarrow C$

Parse the commitment key: \begin{align} \left(\xiOne, \cdot, \left(\crs{\one{\ell_i(\tau)}}\right)_{i\in[m)}\right) \parse\ck \end{align}

Commit to $f$, but additively blind by $\rho\cdot \xiOne$: \begin{align} C &\gets \rho \cdot \xiOne + \sum_{i\in[m)} f(\theta^i) \cdot \crs{\one{\ell_i(\tau)}}\\
&\bydef \rho \cdot \xiOne + \one{f(\tau)} \end{align}

$\mathsf{HKZG.Open}(\mathsf{ck}, f, \rho, x; s) \rightarrow \pi$

Parse the commitment key: \begin{align} \left(\xiOne, \tauOne, \left(\crs{\one{\ell_i(\tau)}}\right)_{i\in[m)}\right) \parse\ck \end{align}

Assuming $x\notin\mathbb{H}$, commit to a blinded quotient polynomial: \begin{align} \label{eq:kzg-pi-1} \pi_1 &\leftarrow s \cdot \xiOne + \sum_{i \in [m)} \frac{f(\theta^i) - f(x)}{\theta^i - x} \cdot \crs{\one{\ell_i(\tau)}}\\
&\bydef s \cdot \xiOne + \one{\frac{f(\tau) - f(x)}{\tau - x}}\\
\label{eq:kzg-pi-2} &\bydef \hkzgCommit\left(\ck, \frac{f(X) - f(x)}{(X - x)}; s\right) \end{align}

When $x\notin \mathbb{H}$, we can evaluate $f(x)$ in $\Fmul{O(m)}$ operations given the $f(\theta^i)$’s via the Barycentric formula and create the proof via Eq. \ref{eq:kzg-pi-1}. (Batch inversion should be used to compute all the $(\theta^i - x)^{-1}$’s fast.) When $x\in \mathbb{H}$, we could use differentiation tricks to interpolate the quotient $\frac{f(X) - f(x)}{X - x}$ in the Lagrange basis and create the proof via Eq. \ref{eq:kzg-pi-2}.

Compute an additional blinded component: \begin{align} \pi_2 \leftarrow \one{\rho} - s \cdot (\tauOne - \one{x}) \end{align}

Return the proof: \begin{align} \pi\gets (\pi_1,\pi_2) \end{align}

$\mathsf{HKZG.Verify}(\mathsf{vk}, C, x, y; \pi) \rightarrow \{0,1\}$

Parse the verification key: \begin{align} \left(\xiTwo, \tauTwo\right) \parse \vk \end{align}

Parse the proof $(\pi_1,\pi_2)\parse\pi$ and assert that: \begin{align} e(C - \one{y}, \two{1}) \equals e(\pi_1, \tauTwo - \two{x}) + e(\pi_2,\xiTwo) \end{align}

Correctness of openings

Correctness holds since, assuming $C \bydef \hkzgCommit(\ck, f; \rho)$ and $\pi \bydef \hkzgOpen(\ck, f, \rho, x; s)$, the pairing check in $\hkzgVerify(\vk, C, x, f(x); \pi)$ is equivalent to: \begin{align} e(\cancel{\rho\cdot\xiOne} + \one{f(\tau)} - \one{f(x)}, \two{1}) &\equals \pair{s \cdot \xiOne + \one{\frac{f(\tau) - f(x)}{\tau - x}}}{ \tauTwo - \two{x}} + e(\cancel{\one{\rho}}-s\cdot(\tauOne-\one{x}),\cancel{\xiTwo})\Leftrightarrow\\
e(\one{f(\tau)} - \one{f(x)}, \two{1}) &\equals \bluedashedbox{\pair{s \cdot \xiOne + \one{\frac{f(\tau) - f(x)}{\tau - x}}}{ \tauTwo - \two{x}}} - e(s\cdot(\tauOne-\one{x}), \xiTwo)\Leftrightarrow\\
e(\one{f(\tau)} - \one{f(x)}, \two{1}) &\equals \bluedashedbox{\pair{\one{\frac{f(\tau) - f(x)}{\tau - x}}}{ \tauTwo - \two{x}} + \cancel{\pair{s\cdot\xiOne}{\tauTwo - \two{x}}}} - \cancel{e(s\cdot(\tauOne-\one{x}), \xiTwo)}\Leftrightarrow\\
e(\one{f(\tau)} - \one{f(x)}, \two{1}) &\equals \pair{\one{\frac{f(\tau) - f(x)}{\cancel{\tau - x}}}}{\cancel{\tauTwo - \two{x}}}\Leftrightarrow\\
e(\one{f(\tau)} - \one{f(x)}, \two{1}) &\stackrel{!}{=} \pair{\one{f(\tau) - f(x)}}{\two{1}}\\
\end{align}
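The derivation above can be sanity-checked numerically by tracking discrete logs directly and modeling the pairing as $\pair{\one{a}}{\two{b}} = a\cdot b$. The Python sketch below does exactly that (the prime, trapdoors, polynomial, and blinders are made-up toy values that a real setup would of course never expose):

```python
# Sanity check "in the exponent" (assumption: toy numbers). We work with
# discrete logs directly, so e([a]_1, [b]_2) becomes a*b mod P.
P = 2**61 - 1

def poly_eval(coeffs, x):
    """Evaluate a polynomial given low-to-high coefficients, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

tau, xi = 1234567, 7654321      # KZG trapdoors (hidden in a real setup)
f = [7, 5, 3, 11]               # f(X) = 7 + 5X + 3X^2 + 11X^3
rho, s = 42, 99                 # commitment and opening blinders
x = 5
y = poly_eval(f, x)             # claimed evaluation f(x)

C = (rho * xi + poly_eval(f, tau)) % P                     # HKZG.Commit
q = (poly_eval(f, tau) - y) * pow(tau - x, P - 2, P) % P   # (f(tau)-f(x))/(tau-x)
pi1 = (s * xi + q) % P
pi2 = (rho - s * (tau - x)) % P

# e(C - [y]_1, [1]_2) =? e(pi1, [tau]_2 - [x]_2) + e(pi2, [xi]_2)
lhs = (C - y) % P
rhs = (pi1 * (tau - x) + pi2 * xi) % P
assert lhs == rhs
```

This is only a check of the algebra, not of security: with the trapdoors in the clear, nothing here is binding or hiding.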

The scheme

A few notes:

  • Values are represented in radix $\term{b}$
    • e.g., $\term{z_{i,j}}\in[b)$ denotes the $j$th chunk of $z_i \bydef \sum_{j\in[\ell)} b^j \cdot \emph{z_{i,j}}$
  • The goal is to prove that each value $z_i \in [b^\ell)$ by exhibiting a valid radix-$b$ decomposition as shown above
    • $\term{\ell}$ is the number of chunks ($z_{i,j}$’s) in this decomposition
  • We will have $\term{n}$ values we want to prove ($z_i$’s)
  • The degrees of committed polynomials will be either $n$ or $(b-1)n$
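A minimal sketch of the radix-$b$ decomposition being proven (the parameters $b = 4$, $\ell = 8$ and the sample value are arbitrary toy choices of ours):

```python
# Toy radix-b decomposition (assumption: b = 4, ell = 8 chosen for illustration).
b, ell = 4, 8

def decompose(z):
    """The chunks z_{i,j} in [b) of z, least-significant first."""
    assert 0 <= z < b**ell, "z must be in [b^ell)"
    return [(z // b**j) % b for j in range(ell)]

def recompose(chunks):
    """z = sum_j b^j * z_{i,j}."""
    return sum(b**j * c for j, c in enumerate(chunks))
```

Proving $z_i \in [b^\ell)$ then amounts to exhibiting chunks with `recompose(chunks) == z_i` and showing each chunk lies in $[b)$.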

$\mathsf{DeKART}_b^\mathsf{FFT}.\mathsf{Setup}(n; \mathcal{G})\rightarrow (\mathsf{prk},\mathsf{ck},\mathsf{vk})$

Assume $n+1=2^c$ for some $c\in\N$ s.t. $(n+1) \mid p-1$ (where $p$ is the order of the bilinear group $\mathcal{G}$) and let $\term{L} \bydef b(n+1) = 2^{d}$, for some $d\in\N$ s.t. $L \mid p-1$ as well.

For efficiency, we restrict ourselves to $(n+1)$ and $b$ that are powers of two, so that $L \bydef b(n+1)$ is a power of two as well. Ideally though, since the highest-degree polynomial involved in our scheme is $(b-1)n$, we could have used a smaller $L = (b-1)n + 1$ $= bn - (n - 1)$. But this $L$ may not be a power of two, which means FFTs would be trickier.

Pick random trapdoors for the hiding KZG scheme: \begin{align} \term{\xi,\tau}\randget\F \end{align}

Compute KZG public parameters for committing to polynomials interpolated from $n+1$ evaluations: \begin{align} ((\xiTwo,\tauTwo), \term{\ck_\S}) \gets \hkzgSetup(n+1; \mathcal{G}, \xi, \tau) \end{align} where:

  • $\term{\S}\bydef\{\omega^0,\omega^1,\ldots,\omega^{\emph{n}}\}$
  • $\term{\omega}$ is a primitive $(n+1)$th root of unity in $\F$
  • $\term{\lagrS_i(X)} \bydef \prod_{j\in[n+1), j\ne i} \frac{X - \omega^j}{\omega^i - \omega^j}, \forall i\in[n+1)$
  • $\term{\VS(X)}\bydef \vanishSfrac$ is a vanishing polynomial of degree $n$ whose $n$ roots are in $\S\setminus\{\omega^0\}$
  • $\emph{\ck_\S} \bydef \left(\xiOne,\tauOne, (\sOne{i})_{i\in[n+1)}\right)$

Compute KZG public parameters, reusing the same $(\xi,\tau)$, for committing to polynomials interpolated from $L$ evaluations: \begin{align} (\cdot, \term{\ck_\L}) \gets \hkzgSetup(L; \mathcal{G}, \xi, \tau) \end{align} where:

  • $\term{\L}\bydef\{\zeta^0,\zeta^1,\ldots,\zeta^{\emph{L-1}}\}$
  • $\term{\zeta}$ is a primitive $L$th root of unity in $\F$
  • $\term{\lagrL_i(X)} \bydef \prod_{j\in[L), j\ne i} \frac{X - \zeta^j}{\zeta^i - \zeta^j}, \forall i\in[L)$

Note: The Lagrange polynomial $\lagrS_i(X)$ is of degree $n$, while $\lagrL_i(X)$ is of degree $L-1$.

Compute the range proof’s verification, commitment and proving keys: \begin{align} \term{\vk} &\gets \left(\overbrace{\xiTwo, \tauTwo}^{\term{\vkHkzg}}, \xiOne, \sOne{0}\right)\\
\term{\ck} &\gets \ck_\S\\
\term{\prk} &\gets \left(\vk, \ck_\S, \ck_\L\right) \end{align}

When $b=2$, we will be able to simplify by letting $L = n+1$ and thus $\S = \L$ and $\ck_\L = \ck_\S$.

$\mathsf{DeKART}_b^\mathsf{FFT}.\mathsf{Commit}(\ck,z_1,\ldots,z_{n}; \rho)\rightarrow C$

Parse the commitment key: \begin{align} \left(\xiOne, \tauOne, \left(\sOne{i}\right)_{i\in[n+1)}\right) \parse\ck \end{align}

Represent the $n$ values and a prepended $0$ value as a degree-$n$ polynomial: \begin{align} \term{f(X)} \bydef 0\cdot \lagrS_0(X) + \sum_{i\in[n]} z_i \cdot \lagrS_i(X) \end{align}

Commit to the polynomial via hiding KZG: \begin{align} \term{\rho} &\randget \F\\
C &\gets \hkzgCommit(\ck_\S, f; \rho) \bydef \rho \cdot \xiOne + \one{f(\tau)} = \rho\cdot \xiOne + \sum_{i\in[n]} z_i \cdot \sOne{i} \end{align}

Note that $f(\omega^i) = z_i,\forall i\in[n]$ but the $f(\omega^0)$ evaluation is set to zero.

$\mathsf{DeKART}_b^\mathsf{FFT}.\mathsf{Prove}(\mathsf{prk}, C, \ell; z_1,\ldots,z_{n}, \rho)\rightarrow \pi$

Step 1a: Parse the public parameters: \begin{align} \left(\vk, \ck_\S, \ck_\L\right)\parse \prk\\
\left(\xiOne, \tauOne, \left(\sOne{i}\right)_{i\in[n+1)}\right) \parse \ck_\S\\
\left(\xiOne, \tauOne, \left(\lOne{i}\right)_{i\in[L)}\right)\parse \ck_\L \end{align}

Step 1b: Add $(\vk, C, b, \ell)$ to the $\FS$ transcript.

Step 2a: Re-randomize the commitment $C\bydef \rho\cdot \xiOne+\one{f(\tau)}$ and mask the degree-$n$ committed polynomial $f(X)$: \begin{align} \term{r}, \term{\Delta{\rho}} &\randget \F\\
\term{\hat{f}(X)} &\bydef r \cdot \lagrS_0(X) + \emph{f(X)}\\
\term{\hat{C}} &\gets \Delta{\rho} \cdot \xiOne + r\cdot \sOne{0} + \emph{C}\\
&\bydef \hkzgCommit(\ck_\S, \hat{f}; \rho + \Delta{\rho}) \end{align}

Step 2b: Add $\hat{C}$ to the $\FS$ transcript.

Step 3a: Prove knowledge of $r$ and $\Delta{\rho}$ such that $\hat{C} - C = \Delta{\rho} \cdot \xiOne + r\cdot \sOne{0}$. \begin{align} \term{\piPok} \gets \zkpokProve\left(\underbrace{(\hat{C}-C, \xiOne, \sOne{0})}_{\text{statement}}; \underbrace{(\Delta{\rho}, r)}_{\text{witness}}\right) \end{align}

Step 3b: Add $\piPok$ to the $\FS$ transcript.

Step 4a: Represent all $j$th chunks $(z_{1,j},\ldots,z_{n,j})$ as a degree-$n$ polynomial and commit to it: \begin{align} \term{r_j}, \term{\rho_j} &\randget \F\\
\term{f_j(X)} &\bydef r_j \cdot \lagrS_0(X) + \sum_{i\in[n]} z_{i,j}\cdot \lagrS_i(X)\\
\term{C_j} &\gets \rho_j \cdot \xiOne + r_j\cdot \sOne{0} + \sum_{i\in[n]} z_{i,j}\cdot \sOne{i}\\
&\bydef \hkzgCommit(\ck_\S, f_j; \rho_j) \end{align}

Step 4b: Add $(C_j)_{j\in[\ell)}$ to the $\FS$ transcript.

Step 5a: For each $j\in[\ell)$, define a quotient polynomial, whose existence would show that, $\forall i\in[n]$, $f_j(\omega^i) \in [b)$: \begin{align} \forall j\in[\ell), \term{h_j(X)} &\bydef \frac{f_j(X)(f_j(X) - 1) \cdots \left(f_j(X) - (b-1)\right)}{\VS(X)}\\
\end{align}

Note: Numerator is degree $bn$ and denominator is degree $n \Rightarrow h_j(X)$ is degree $(b-1)n$

Step 5b: Define a(nother) quotient polynomial, whose existence would show that, $\forall i\in[n]$, $\hat{f}(\omega^i) = \sum_{j\in[\ell)} b^j \cdot f_j(\omega^i)$: \begin{align} \term{g(X)} &\bydef \frac{\hat{f}(X) - \sum_{j\in[\ell)} b^j \cdot f_j(X)}{\VS(X)}\\
\end{align}

Note: Numerator is degree $n$ and denominator is degree $n \Rightarrow g(X)$ is degree 0! (A constant!)
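A tiny numeric check of why the Step 5a quotients exist: since $\VS(X)$ has simple roots exactly at $\omega^1,\ldots,\omega^n$, the numerator $f_j(X)(f_j(X)-1)\cdots(f_j(X)-(b-1))$ is divisible by $\VS(X)$ iff it vanishes at those $n$ points, i.e., iff every chunk lies in $[b)$. The Python sketch below verifies this pointwise (the field $p=257$, $n$, $b$ and the chunk values are our own toy choices):

```python
# Toy check (assumptions: p = 257, n = 7, b = 4, made-up chunk values) that the
# Step-5a numerator vanishes on S \ {omega^0} exactly when all chunks are in [b).
p = 257
n = 7                                     # so the domain S has n+1 = 8 points
omega = pow(3, (p - 1) // (n + 1), p)     # primitive (n+1)-th root of unity (3 generates F_257^*)
domain = [pow(omega, i, p) for i in range(n + 1)]   # S = {omega^0, ..., omega^n}
b = 4

r_j = 123                                 # random mask, placed at omega^0
chunks = [0, 1, 2, 3, 3, 1, 0]            # n chunk values, each in [b)
f_vals = [r_j] + chunks                   # f_vals[i] = f_j(domain[i])

def numerator(v):
    """f_j(x)(f_j(x)-1)...(f_j(x)-(b-1)) at a point where f_j evaluates to v."""
    out = 1
    for k in range(b):
        out = out * (v - k) % p
    return out

# Divisibility by V*_S <=> the numerator vanishes at omega^1..omega^n:
assert all(numerator(f_vals[i]) == 0 for i in range(1, n + 1))
assert numerator(f_vals[0]) != 0          # the mask at omega^0 is unconstrained
```

Note how the mask $r_j$ at $\omega^0$ does not break divisibility, since $\omega^0$ is not a root of $\VS(X)$.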

Step 6: Combine all the quotients into a single one, using (pseudo)random challenges from the verifier: \begin{align} \term{\beta,\beta_0, \ldots,\beta_{\ell-1}} &\fsget \{0,1\}^\lambda\\
\label{eq:hx} \term{h(X)} &\gets \beta \cdot g(X) + \sum_{j\in[\ell)} \beta_j \cdot h_j(X) \end{align}

Note: The goal of the prover is to convince the verifier that: \begin{align} \label{eq:hx-check} h(X) \cdot \VS(X) \equals \beta \cdot \left(\hat{f}(X) - \sum_{j\in[\ell)} b^j \cdot f_j(X)\right) + \sum_{j\in[\ell)} \beta_j\cdot f_j(X)(f_j(X) - 1) \cdots \left(f_j(X) - (b-1)\right)\\
\end{align}

Step 7a: Commit to $h(X)$, of degree $(b-1)n$, by interpolating it over the larger $\L$ domain: \begin{align} \term{\rho_h} &\randget \F\\
\label{eq:D} \term{D} &\gets \rho_h\cdot \xiOne + \sum_{i\in[L)} h(\zeta^i) \cdot \lOne{i}\\
&\bydef \hkzgCommit(\ck_\L, h; \rho_h) \end{align}

Note: We discuss how to interpolate $h(X)$ efficiently by either evaluating it at all $\omega^i$’s (when $b=2$) or at all $\zeta^i$’s (when $b>2$) in the appendix.

Step 7b: Add $D$ to the $\FS$ transcript.

Step 8: The verifier asks us to take a (pseudo)random linear combination of $h(X)$, $\hat{f}(X)$ and the $f_j(X)$’s: \begin{align} \term{\mu, \mu_h, \mu_0,\ldots,\mu_{\ell-1}} &\fsget \{0,1\}^\lambda\\
\label{eq:ux} \term{u(X)} &\bydef \mu \cdot \hat{f}(X) + \mu_h\cdot h(X) + \sum_{j\in[\ell)} \mu_j\cdot f_j(X) \end{align}

Step 9: We get a (pseudo)random challenge from the verifier and open $u(X)$ at it: \begin{align} \term{\gamma} &\fsget \F\setminus\S\\
\term{a} &\gets \hat{f}(\gamma)\\
\term{a_h} &\gets h(\gamma)\\
\term{a_j} &\gets f_j(\gamma),\forall j\in[\ell)\\
\end{align}

Step 10: We compute a hiding KZG opening proof for $u(\gamma)$: \begin{align} \term{s} &\randget \F\\
\term{\pi_\gamma} &\gets \hkzgOpen(\ck_\L, u, \term{\rho_u}, \gamma; s) \end{align} where $\emph{\rho_u} \bydef \mu \cdot (\rho + \Delta{\rho}) + \mu_h \cdot \rho_h + \sum_{j\in[\ell)} \mu_j\cdot \rho_j$ is the blinding factor for the implicit commitment to $u(X)$, which the prover need not compute: \begin{align} \term{U} &\bydef \mu \cdot \hat{C} + \mu_h \cdot D + \sum_{j\in[\ell)} \mu_j\cdot C_j\\
&\bydef \hkzgCommit(\ck_\L, u; \rho_u) \end{align}

When $b > 2$, committing to $u(X)$ requires evaluating $u(\zeta^i)$ for all $i\in[L)$, which in turn requires having all $\hat{f}(\zeta^i)$’s and $f_j(\zeta^i)$’s. Fortunately, we already have them from the differentiation-based $h(X)$ interpolation for $b > 2$.

Return the proof $\pi$: \begin{align} \term{\pi}\gets \left(\hat{C}, \piPok, (C_j)_{j\in[\ell)}, D, a, a_h, (a_j)_{j\in[\ell)}, \pi_\gamma\right) \end{align}

Proof size and prover time

Proof size:

  • $(\ell+2)\Gr_1$ for the $\hat{C}$, $C_j$’s and $D$
  • $2$ $\Gr_1$ for $\pi_\gamma$
  • $1 \Gr_1 + 2\F$ for $|\piPok|$
  • $(\ell+2)\F$ for $a, a_h$ and the $a_j$’s (i.e., for $\hat{f}(\gamma), h(\gamma)$, and the $f_j(\gamma)$’s)

$\Rightarrow$ in total, $\emph{|\pi|=(\ell+5)\Gr_1 + (\ell+4)\F}$.
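As a quick sanity check, the byte counts in the benchmark tables below follow from this formula with compressed BLS12-381 element sizes (48-byte $\Gr_1$, 32-byte $\F$; the helper name is ours):

```python
# Proof size in bytes for |pi| = (ell+5) G1 + (ell+4) F, assuming compressed
# BLS12-381 serialization (48-byte G1 points, 32-byte scalars).
def dekart_proof_size(ell, g1_bytes=48, fr_bytes=32):
    return (ell + 5) * g1_bytes + (ell + 4) * fr_bytes
```

For $\ell = 8, 16, 32, 64$ this gives 1,008, 1,648, 2,928 and 5,488 bytes, matching the tables in the benchmarks section.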

Prover time is dominated by:

  • $\GaddOne{\ell n}$ for all $C_j$’s
    • Assuming precomputed $2\cdot \sOne{i}, \ldots, (b-1)\cdot \sOne{i},\forall i\in[n]$
    • i.e., one for each possible chunk value in $[b)$
  • $1$ $\fmsmOne{2}$ MSM to blind $\hat{C}$ with $r$ and $\Delta{\rho}$
  • $(\ell+1)$ $\fmsmOne{2}$ MSMs to blind all $C_j$’s with $r_j$ and $\rho_j$
  • $1 \fmsmOne{2}$ for $\zkpokProve$
  • $\Fmul{O(\ell L\log{L})}$ to interpolate $h(X)$, where $L\bydef b(n+1)$
  • 1 $\fmsmOne{L+1}$ MSM for committing to $h(X)$
  • $\Fmul{O(\ell n)}$ to evaluate $\hat{f}(\gamma)$ and the $f_j(\gamma)$’s via the Barycentric formula
    • (Note: $h(\gamma)$ can be evaluated directly via Eq. \ref{eq:hx}.)
    • there will be $(\ell+1)$ size-$(n+1)$ Barycentric evaluations
    • the following need only be done once:
      • batch invert all the $\frac{1}{\gamma -\omega^i}$’s $\Rightarrow \Fmul{O(n)}$
      • compute $\frac{\gamma^{n+1} - 1}{n+1}$ in $\Fmul{\log_2{(n+1)} + 1}$
    • each evaluation will then involve:
      • do $\Fmul{2n}$ and $\Fadd{n}$ (i.e., two $\F$ multiplications for each $y_i \cdot \omega^i \cdot \frac{1}{\gamma-\omega^i}$ term)
      • do $\Fmul{1}$ to multiply in $\frac{\gamma^{n+1} - 1}{n+1}$
  • 1 $\fmsmOne{L+1}$ MSM for computing $\pi_\gamma$ via $\hkzgOpen(\cdot)$
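The Barycentric evaluation with batch inversion from the cost breakdown above can be sketched as follows (a toy Python version over $\F_{257}$; the prime, domain size, and test polynomial are our own assumptions):

```python
# Toy Barycentric evaluation over a roots-of-unity domain, with Montgomery's
# batch-inversion trick. Assumptions: p = 257 (so 2^k | p - 1), domain size 8.
p = 257
m = 8
theta = pow(3, (p - 1) // m, p)            # primitive m-th root of unity
domain = [pow(theta, i, p) for i in range(m)]

def batch_invert(xs):
    """Invert every element of xs using a single field inversion."""
    prefix, acc = [], 1
    for v in xs:
        prefix.append(acc)
        acc = acc * v % p
    inv = pow(acc, p - 2, p)               # one inversion via Fermat
    out = [0] * len(xs)
    for i in reversed(range(len(xs))):
        out[i] = prefix[i] * inv % p
        inv = inv * xs[i] % p
    return out

def barycentric_eval(ys, x):
    """f(x) = (x^m - 1)/m * sum_i ys[i]*theta^i/(x - theta^i), for x not in the domain."""
    invs = batch_invert([(x - d) % p for d in domain])
    acc = sum(y * d * inv for y, d, inv in zip(ys, domain, invs)) % p
    scale = (pow(x, m, p) - 1) * pow(m, p - 2, p) % p
    return scale * acc % p
```

For example, interpolating $f(X) = 3X^2 + 5X + 7$ from its evaluations on the domain and evaluating at $x = 2$ (which is not in the domain) returns $f(2) = 29$.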

$\mathsf{DeKART}_b^\mathsf{FFT}.\mathsf{Verify}(\mathsf{vk}, C, \ell; \pi)\rightarrow \{0,1\}$

Step 1: Parse the $\vk$ and the proof $\pi$: \begin{align} \left(\vkHkzg, \xiOne, \sOne{0}\right) &\parse \vk\\
\left(\hat{C}, \piPok, (C_j)_{j\in[\ell)}, D, a, a_h, (a_{j})_{j\in[\ell)}, \pi_\gamma\right) &\parse \pi \end{align}

Step 2a: Add $(\vk, C, b, \ell)$ to the $\FS$ transcript.

Step 2b: Add $\hat{C}$ to the $\FS$ transcript.

Step 3: Verify ZKPoK: \begin{align} \textbf{assert}
\zkpokVerify\left(\hat{C}-C, \xiOne, \sOne{0}; \piPok\right) \equals 1 \end{align}

Step 4a: Add $\piPok$ to the $\FS$ transcript.

Step 4b: Add $(C_j)_{j\in[\ell)}$ to the $\FS$ transcript.

Step 5: Generate (pseudo)random challenges for combining the quotient polynomials: \begin{align} \beta,\beta_0,\ldots,\beta_{\ell-1} &\fsget \{0,1\}^\lambda \end{align}

Step 6: Add $D$ to the $\FS$ transcript.

Step 7: Generate (pseudo)random challenges for the batch KZG opening on $\hat{f}(X), h(X)$ and the $f_j(X)$’s: \begin{align} \mu,\mu_h,\mu_0,\ldots,\mu_{\ell-1}\fsget \{0,1\}^\lambda \end{align}

Step 8: Reconstruct the commitment to $u(X)$ from Eq. \ref{eq:ux} \begin{align} \term{U} \gets \mu\cdot \hat{C} + \mu_h\cdot D + \sum_{j\in[\ell)} \mu_j \cdot C_j \end{align}

Step 9: Generate a (pseudo)random evaluation point for the batch KZG opening: \begin{align} \gamma \fsget \F\setminus\S \end{align}

Step 10: Verify that $a \equals \hat{f}(\gamma)$, $a_h \equals h(\gamma)$ and $a_j \equals f_j(\gamma),\forall j\in[\ell)$: \begin{align} \label{eq:kzg-batch-verify} \term{a_u} &\gets \mu \cdot a + \mu_h \cdot a_h + \sum_{j\in[\ell)} \mu_j\cdot a_j\\
\textbf{assert}\ &\hkzgVerify(\vkHkzg, U, \gamma, a_u; \pi_\gamma) \equals 1 \end{align}

Step 11: Make sure that the radix-$b$ representation holds and that chunks are $<b$ as per Eq. \ref{eq:hx-check}: \begin{align} \textbf{assert}\ h(\gamma) \cdot \VS(\gamma) &\equals \beta \cdot \left(\hat{f}(\gamma) - \sum_{j\in[\ell)} b^j \cdot f_j(\gamma)\right) + \sum_{j\in[\ell)} \beta_j\cdot f_j(\gamma)(f_j(\gamma) - 1) \cdots (f_j(\gamma)- (b-1))\Leftrightarrow\\
\Leftrightarrow \textbf{assert}\ a_h \cdot \VS(\gamma) &\equals \beta \cdot \left(a - \sum_{j\in[\ell)} b^j \cdot a_j\right) + \sum_{j\in[\ell)} \beta_j\cdot a_j(a_j - 1) \cdots (a_j - (b-1)) \end{align}

Verifier time

The verifier work is dominated by:

  • 1 $\vmsmOne{3}$ MSM for verifying $\piPok$
  • 1 $\vmsmOne{\ell+2}$ MSM for deriving the KZG commitment $U$
  • $\GmulOne{1} + \GaddOne{1}$ for computing $U - \one{a_u}$ inside $\hkzgVerify(\cdot)$
  • $\GmulTwo{1} + \GaddTwo{1}$ for computing $\tauTwo - \two{\gamma}$ inside $\hkzgVerify(\cdot)$
  • size-$3$ multipairing for the rest of $\hkzgVerify(\cdot)$
  • $\Fmul{(\ell+2)} + \Fadd{(\ell+2)}$ for computing $a_u$
  • $\Fmul{(1 + (\ell+1) + \ell b)} + \Fadd{(1 + \ell + 1 + \ell)}$ for the final zerocheck

Implementation

Benchmarks for $b=2$ over BLS12-381

These benchmarks are from our (in-progress) DeKART implementation in arkworks v0.5.0 here. To reproduce, see this README.

The Bulletproof proof size is $32\times \left(9 + 2\cdot \log_2{(n\cdot \ell)}\right)$ bytes.
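For reference, this formula reproduces the Bulletproofs column in the tables below (a trivial helper; the function name is ours):

```python
from math import log2

def bp_proof_size(n, ell):
    """Bulletproofs proof size in bytes: 32 * (9 + 2*log2(n*ell))."""
    return 32 * (9 + 2 * int(log2(n * ell)))
```

E.g., `bp_proof_size(2, 8)` gives 544 and `bp_proof_size(2048, 8)` gives 1,184 bytes, matching the first and last $\ell = 8$ rows.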

The DeKART verifier time only varies with $\ell$; not with $n$. This means that by using a higher $b$, we can decrease $\ell$ (e.g., from $\log_2{\texttt{MAX_VALUE}}$ to $\log_b{\texttt{MAX_VALUE}}$). This will reduce proof size and speed up our verifier by a factor of $\log_2{b}$. However, it will make proving slower by a factor of $b$ (e.g., 16x slower proving for a 4x faster-to-verify and smaller proof 👌). Nonetheless, this tradeoff will be great for applications like confidential assets.

$\ell = 8$ numbers

Scheme n Proving time (ms) Verify time (ms) Total time (ms) Proof size (bytes)
Bulletproofs 2 3.24 0.52 3.76 544
DeKART (BLS12-381) 1 4.28 (0.76x) 3.04 (0.17x) 7.32 (0.51x) 1,008 (1.85x)
Bulletproofs 4 6.25 0.89 7.14 608
DeKART (BLS12-381) 3 4.34 (1.44x) 2.96 (0.30x) 7.30 (0.98x) 1,008 (1.66x)
Bulletproofs 8 11.66 1.40 13.06 672
DeKART (BLS12-381) 7 4.66 (2.50x) 2.92 (0.48x) 7.58 (1.72x) 1,008 (1.50x)
Bulletproofs 16 22.32 2.48 24.80 736
DeKART (BLS12-381) 15 5.34 (4.18x) 2.98 (0.83x) 8.32 (2.98x) 1,008 (1.37x)
Bulletproofs 32 45.50 4.44 49.94 800
DeKART (BLS12-381) 31 10.34 (4.40x) 2.95 (1.51x) 13.29 (3.76x) 1,008 (1.26x)
Bulletproofs 64 91.64 7.17 98.81 864
DeKART (BLS12-381) 63 11.76 (7.79x) 2.89 (2.48x) 14.65 (6.74x) 1,008 (1.17x)
Bulletproofs 128 171.31 12.73 184.04 928
DeKART (BLS12-381) 127 18.71 (9.16x) 2.93 (4.34x) 21.64 (8.50x) 1,008 (1.09x)
Bulletproofs 256 339.23 23.93 363.16 992
DeKART (BLS12-381) 255 31.73 (10.69x) 2.89 (8.28x) 34.62 (10.49x) 1,008 (1.02x)
Bulletproofs 512 664.35 46.06 710.41 1,056
DeKART (BLS12-381) 511 42.50 (15.63x) 2.90 (15.88x) 45.40 (15.65x) 1,008 (0.95x)
Bulletproofs 1024 1,346.50 90.55 1,437.05 1,120
DeKART (BLS12-381) 1023 75.59 (17.81x) 2.91 (31.12x) 78.50 (18.31x) 1,008 (0.90x)
Bulletproofs 2048 2,653.50 180.76 2,834.26 1,184
DeKART (BLS12-381) 2047 141.58 (18.74x) 2.94 (61.48x) 144.52 (19.61x) 1,008 (0.85x)

$\ell = 16$ numbers

Scheme n Proving time (ms) Verify time (ms) Total time (ms) Proof size (bytes)
Bulletproofs 2 5.46 0.82 6.28 608
DeKART (BLS12-381) 1 6.45 (0.85x) 3.21 (0.26x) 9.66 (0.65x) 1,648 (2.71x)
Bulletproofs 4 11.04 1.37 12.41 672
DeKART (BLS12-381) 3 6.71 (1.65x) 3.36 (0.41x) 10.07 (1.23x) 1,648 (2.45x)
Bulletproofs 8 21.23 2.42 23.65 736
DeKART (BLS12-381) 7 7.05 (3.01x) 3.20 (0.76x) 10.25 (2.31x) 1,648 (2.24x)
Bulletproofs 16 40.63 4.03 44.66 800
DeKART (BLS12-381) 15 8.58 (4.74x) 3.18 (1.27x) 11.76 (3.80x) 1,648 (2.06x)
Bulletproofs 32 83.00 6.98 89.98 864
DeKART (BLS12-381) 31 16.09 (5.16x) 3.22 (2.17x) 19.31 (4.66x) 1,648 (1.91x)
Bulletproofs 64 159.72 12.12 171.84 928
DeKART (BLS12-381) 63 17.97 (8.89x) 3.20 (3.79x) 21.17 (8.12x) 1,648 (1.78x)
Bulletproofs 128 306.54 22.53 329.07 992
DeKART (BLS12-381) 127 28.48 (10.76x) 3.15 (7.15x) 31.63 (10.40x) 1,648 (1.66x)
Bulletproofs 256 600.51 43.83 644.34 1,056
DeKART (BLS12-381) 255 48.73 (12.32x) 3.18 (13.78x) 51.91 (12.41x) 1,648 (1.56x)
Bulletproofs 512 1,197.00 88.54 1,285.54 1,120
DeKART (BLS12-381) 511 61.91 (19.33x) 3.16 (28.02x) 65.07 (19.76x) 1,648 (1.47x)
Bulletproofs 1024 2,369.10 171.48 2,540.58 1,184
DeKART (BLS12-381) 1023 111.55 (21.24x) 3.12 (54.96x) 114.67 (22.16x) 1,648 (1.39x)
Bulletproofs 2048 4,763.80 349.29 5,113.09 1,248
DeKART (BLS12-381) 2047 205.36 (23.20x) 3.15 (110.89x) 208.51 (24.52x) 1,648 (1.32x)

$\ell = 32$ numbers

Scheme n Proving time (ms) Verify time (ms) Total time (ms) Proof size (bytes)
Bulletproofs 2 9.95 1.42 11.37 672
DeKART (BLS12-381) 1 11.32 (0.88x) 3.77 (0.38x) 15.09 (0.75x) 2,928 (4.36x)
Bulletproofs 4 19.91 2.30 22.21 736
DeKART (BLS12-381) 3 11.61 (1.71x) 3.68 (0.62x) 15.29 (1.45x) 2,928 (3.98x)
Bulletproofs 8 40.13 3.96 44.09 800
DeKART (BLS12-381) 7 12.11 (3.31x) 3.69 (1.07x) 15.80 (2.79x) 2,928 (3.66x)
Bulletproofs 16 76.14 6.69 82.83 864
DeKART (BLS12-381) 15 12.74 (5.98x) 3.67 (1.82x) 16.41 (5.05x) 2,928 (3.39x)
Bulletproofs 32 149.45 11.95 161.40 928
DeKART (BLS12-381) 31 27.80 (5.38x) 3.72 (3.21x) 31.52 (5.12x) 2,928 (3.16x)
Bulletproofs 64 288.81 22.40 311.21 992
DeKART (BLS12-381) 63 29.82 (9.69x) 3.68 (6.09x) 33.50 (9.29x) 2,928 (2.95x)
Bulletproofs 128 572.05 42.26 614.31 1,056
DeKART (BLS12-381) 127 47.78 (11.97x) 3.70 (11.42x) 51.48 (11.93x) 2,928 (2.77x)
Bulletproofs 256 1,135.90 83.23 1,219.13 1,120
DeKART (BLS12-381) 255 81.28 (13.98x) 3.86 (21.56x) 85.14 (14.32x) 2,928 (2.61x)
Bulletproofs 512 2,240.80 167.07 2,407.87 1,184
DeKART (BLS12-381) 511 102.28 (21.91x) 3.86 (43.28x) 106.14 (22.69x) 2,928 (2.47x)
Bulletproofs 1024 4,527.10 328.03 4,855.13 1,248
DeKART (BLS12-381) 1023 180.72 (25.05x) 3.66 (89.63x) 184.38 (26.33x) 2,928 (2.35x)
Bulletproofs 2048 8,911.40 663.67 9,575.07 1,312
DeKART (BLS12-381) 2047 343.12 (25.97x) 3.75 (176.98x) 346.87 (27.60x) 2,928 (2.23x)

$\ell = 64$ numbers

Scheme n Proving time (ms) Verify time (ms) Total time (ms) Proof size (bytes)
Bulletproofs 2 19.59 2.42 22.01 736
DeKART (BLS12-381) 1 21.03 (0.93x) 4.34 (0.56x) 25.37 (0.87x) 5,488 (7.46x)
Bulletproofs 4 37.80 3.89 41.69 800
DeKART (BLS12-381) 3 20.99 (1.80x) 4.39 (0.89x) 25.38 (1.64x) 5,488 (6.86x)
Bulletproofs 8 74.05 6.67 80.72 864
DeKART (BLS12-381) 7 21.42 (3.46x) 4.32 (1.54x) 25.74 (3.14x) 5,488 (6.35x)
Bulletproofs 16 143.14 11.62 154.76 928
DeKART (BLS12-381) 15 23.03 (6.22x) 4.37 (2.66x) 27.40 (5.65x) 5,488 (5.91x)
Bulletproofs 32 288.07 21.73 309.80 992
DeKART (BLS12-381) 31 51.75 (5.57x) 4.39 (4.95x) 56.14 (5.52x) 5,488 (5.53x)
Bulletproofs 64 549.63 42.65 592.28 1,056
DeKART (BLS12-381) 63 53.66 (10.24x) 4.35 (9.80x) 58.01 (10.21x) 5,488 (5.20x)
Bulletproofs 128 1,100.30 84.91 1,185.21 1,120
DeKART (BLS12-381) 127 88.91 (12.38x) 4.36 (19.47x) 93.27 (12.71x) 5,488 (4.90x)
Bulletproofs 256 2,208.40 163.01 2,371.41 1,184
DeKART (BLS12-381) 255 149.66 (14.76x) 4.33 (37.65x) 153.99 (15.40x) 5,488 (4.64x)
Bulletproofs 512 4,351.90 329.06 4,680.96 1,248
DeKART (BLS12-381) 511 182.69 (23.82x) 4.28 (76.88x) 186.97 (25.04x) 5,488 (4.40x)
Bulletproofs 1024 8,576.10 650.01 9,226.11 1,312
DeKART (BLS12-381) 1023 320.90 (26.73x) 4.29 (151.52x) 325.19 (28.37x) 5,488 (4.18x)
Bulletproofs 2048 17,469.00 1,307.60 18,776.60 1,376
DeKART (BLS12-381) 2047 619.91 (28.18x) 4.49 (291.22x) 624.40 (30.07x) 5,488 (3.99x)

Conclusion

Your thoughts or comments are welcome on this thread.

Avenues for further reducing proof size and verifier time:

  1. $\approx 300\ \mu$s: Hiding KZG variant that uses 2 pairings instead of 3. (Maybe original KZG but with smaller degree for the blinded polynomial?)

  2. $\approx 350\ \mu$s: Instead of separately computing the $\zkpokVerify$ MSM and the $U$ MSM, could we, in a single MSM, compute $U + A’$ (where $A’$ should equal the commitment $A$ in the $\piPok$ proof) and then subtract $A$ out and check the KZG proof against the result? Seems wrong, but should check.
  3. Somehow include a provably-correct $U$ in the proof
  4. Somehow avoid including the scalars (evaluations) in the proof and just commit to them in a single group element

Acknowledgements

This is joint work with Dan Boneh, Trisha Datta, Kamilla Nazirkhanova and Rex Fernando.

Future work

We could accept $b_\max$ and $n_\max$ as arguments to $\dekartSetup$ and turn the $b$ subscript in $\mathsf{Dekart}_b$ into an actual argument. If we do, then this setup should only output powers-of-$\tau$ of max degree $L-1 = b(n+1) - 1$ instead of Lagrange commitments. Then, a $\dekart.\mathsf{Specialize}$ algorithm can be introduced to obtain a CRS for a specific $n$, including dealing with non-powers of two.

Once $b$ is an input to the setup, proving and verification algorithms, it should be checked to not exceed the $b$ from the setup. (Or should we just check that $b$ and $n$ “match” $L$? Could we allow for a larger $b$ while shrinking $n$?)

Appendix: Computing $h(X)$ for $b=2$

Recall that, when $b=2$, the degree of $h(X)$ is $n \Rightarrow$ we no longer need two different FFT domains: i.e., $\S = \L$ and $L = n + 1$. This is why the algorithm below can stay rather simple.

We borrow differentiation tricks from Groth16 to avoid doing FFT-based polynomial multiplication. This keeps our FFTs of size $(n+1)$, as opposed to size $2(n+1)$.

Our goal will be to obtain all $(h(\omega^i))_{i\in[n+1)}$ evaluations and then do a $\fmsmOne{n+2}$ MSM to commit to it and obtain $\emph{D}$ from Eq. \ref{eq:D}.

Recall that: \begin{align} \label{eq:hx-njx} h(X) &= \frac{\beta \cdot \left(\hat{f}(X) - \sum_{j\in[\ell)} 2^j \cdot f_j(X)\right) + \sum_{j\in[\ell)} \beta_j\cdot f_j(X)(f_j(X) - 1)}{\VS(X)}\\
\label{eq:njx} &= \frac{\beta \cdot \hat{f}(X) - \beta\cdot\sum_{j\in[\ell)} 2^j \cdot f_j(X) + \sum_{j\in[\ell)} \beta_j\cdot \overbrace{f_j(X)(f_j(X) - 1)}^{\bydef \term{N_j(X)}}}{\vanishS}\\
&= \frac{\beta \cdot \hat{f}(X) - \beta\cdot \sum_{j\in[\ell)} 2^j \cdot f_j(X) + \sum_{j\in[\ell)} \beta_j\cdot \term{N_j(X)}}{\vanishS} \end{align} If we try to compute $h(\omega^i)$ using the formula above, we will only succeed for $i = 0$: \begin{align} \label{eq:h_omega_0} h(\omega^0) &= \frac{\beta \cdot \left(r - \sum_{j\in[\ell)} 2^j \cdot r_j\right) + \sum_{j\in[\ell)} \beta_j\cdot r_j(r_j - 1)}{n+1}
\end{align}

Note: $\VS(X) \bydef 1 + X + X^2 + \ldots + X^n \Rightarrow \VS(\omega^0) = \VS(1) = n+1$.

Unfortunately, for $i\in[n]$, the formula yields $0/0$. But we can apply differentiation tricks, which tell us that, $\forall i\in[n]$: \begin{align} \label{eq:h_omega_i} h(\omega^i) &= \left.\frac{\beta \cdot \emph{\hat{f}'(X)} - \beta\cdot\sum_{j\in[\ell)} 2^j \cdot \emph{f_j'(X)} + \sum_{j\in[\ell)}\beta_j\cdot \emph{N_j'(X)}}{\emph{\left(\vanishS\right)'}}\right|_{X=\omega^i} \end{align}

So, as long as we can evaluate the derivatives highlighted above efficiently, we can interpolate $h(X)$!
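To sanity-check the $0/0$ trick, here is a small Python sketch. It is illustration only: floating-point complex arithmetic stands in for the field, and a random polynomial $q(X)$ stands in for $h(X)$. We build a numerator $N(X) = q(X)\cdot\VS(X)$, so that $N/\VS$ is literally $0/0$ at every $\omega^i \ne 1$, and check that $N'(\omega^i)/\VS'(\omega^i)$ recovers $q(\omega^i)$:

```python
import cmath
import random

n = 7
L = n + 1
omega = cmath.exp(2j * cmath.pi / L)  # primitive (n+1)-th root of unity

def poly_mul(a, b):
    # Schoolbook polynomial multiplication (coefficient vectors).
    out = [0j] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_eval(c, x):
    # Horner's rule.
    acc = 0j
    for ck in reversed(c):
        acc = acc * x + ck
    return acc

def poly_deriv(c):
    # d/dX sum_k c_k X^k = sum_{k >= 1} k c_k X^{k-1}
    return [k * ck for k, ck in enumerate(c)][1:]

random.seed(1)
q = [complex(random.uniform(-1, 1)) for _ in range(n + 1)]  # stand-in for h(X)
vs = [1 + 0j] * (n + 1)  # V_S(X) = 1 + X + ... + X^n
num = poly_mul(q, vs)    # numerator N(X) = q(X) * V_S(X)

for i in range(1, L):    # at omega^i (i != 0), N/V_S is 0/0 ...
    x = omega ** i
    direct = poly_eval(q, x)  # ... but the true quotient is q(omega^i)
    via_derivs = poly_eval(poly_deriv(num), x) / poly_eval(poly_deriv(vs), x)
    assert abs(direct - via_derivs) < 1e-9
```

The identity is exact here: $N' = q'\VS + q\VS'$, and at a root of $\VS$ the first term vanishes, so $N'/\VS' = q$ (no limit argument needed).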

Step 1: The derivative of the denominator $\VS(X)$ can be evaluated as follows: \begin{align} (\VS(X))' = \left(\vanishS\right)' &= \frac{(X^{n+1} - 1)'(X-1) - (X-1)'(X^{n+1} - 1)}{(X-1)^2}\\
&= \frac{(n+1)X^n(X-1) - (X^{n+1} - 1)}{(X-1)^2}
\end{align} Plugging in any root of unity $\omega^i\ne \omega^0$, we get: \begin{align} \emph{\left.\left(\vanishS\right)'\right|_{X=\omega^i}} &= \frac{(n+1)(\omega^i)^n(\omega^i-1) - (\overbrace{(\omega^i)^{n+1}}^{1} - 1)}{(\omega^i-1)^2}\\
&= \frac{(n+1)(\omega^i)^n(\omega^i-1)}{(\omega^i-1)^2}\\
&= \frac{(n+1)(\omega^i)^n}{\omega^i-1}\\
\label{eq:vanishSprime_0} &= \frac{(n+1)\omega^{-i}}{\omega^i-1} = \emph{\frac{n+1}{\omega^i(\omega^i - 1)}}
\end{align}

The expression above can only be used, and need only be used, to evaluate the derivative at $\omega^i$ when $i\ne0$. In fact, the inverses of these evaluations should be precomputed during the setup! (It is wiser to precompute the inverses so that we can evaluate Eq. \ref{eq:h_omega_i} without the need for a batch inversion.)
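The closed form and its precomputed inverses can be sanity-checked with a few lines of Python (floating-point complex arithmetic standing in for the field, purely for illustration):

```python
import cmath

n = 7
omega = cmath.exp(2j * cmath.pi / (n + 1))  # primitive (n+1)-th root of unity

def vs_prime(x):
    # Coefficient-wise derivative of V_S(X) = 1 + X + ... + X^n
    return sum(k * x ** (k - 1) for k in range(1, n + 1))

# Eq. (vanishSprime_0) says V_S'(omega^i) = (n+1) / (omega^i (omega^i - 1)) for i != 0,
# so the inverses to precompute during setup are omega^i (omega^i - 1) / (n+1).
inv_denoms = {i: (omega ** i) * (omega ** i - 1) / (n + 1) for i in range(1, n + 1)}

for i in range(1, n + 1):
    assert abs(vs_prime(omega ** i) * inv_denoms[i] - 1) < 1e-9
```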

Step 2: The derivative of $\hat{f}(X)$ must be evaluated at all the roots of unity: First, a size-$(n+1)$ inverse FFT can be used to get the coefficients $\term{\hat{f}_i}$ of $\hat{f}(X)$ s.t.: \begin{align} \hat{f}(X) = \sum_{i\in [n+1)} \hat{f}_i \cdot X^i \end{align}

Second, the derivative’s coefficients can be computed in time $\Fmul{n}$ as: \begin{align} \hat{f}'(X) = \sum_{i\in [1, n+1)} i \cdot \hat{f}_i \cdot X^{i-1} \end{align}

Third, a size-$(n+1)$ FFT can be used to compute all evaluations of $\hat{f}'(X)$ at the roots of unity: \begin{align} \hat{f}'(\omega^0), \hat{f}'(\omega^1), \ldots, \hat{f}'(\omega^n) \end{align}
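The three-step pipeline above (inverse FFT $\to$ differentiate $\to$ FFT) can be sketched in Python. For brevity this uses a naive $O(L^2)$ DFT over the complex numbers instead of a radix-2 FFT over the field:

```python
import cmath

def dft(vals, inverse=False):
    # Naive O(L^2) DFT over C; a real implementation would use a radix-2 FFT.
    L = len(vals)
    sign = -1 if inverse else 1
    out = [sum(v * cmath.exp(sign * 2j * cmath.pi * i * k / L)
               for k, v in enumerate(vals)) for i in range(L)]
    return [v / L for v in out] if inverse else out

def derivative_evals(evals):
    # Step 2: inverse FFT to coefficients, differentiate, FFT back to evaluations.
    coeffs = dft(evals, inverse=True)
    dcoeffs = [k * c for k, c in enumerate(coeffs)][1:] + [0j]  # pad to size L
    return dft(dcoeffs)

# Example: f(X) = X^2 on the size-4 domain, whose derivative is f'(X) = 2X.
L = 4
omega = cmath.exp(2j * cmath.pi / L)
evals = [omega ** (2 * i) for i in range(L)]
for got, want in zip(derivative_evals(evals), [2 * omega ** i for i in range(L)]):
    assert abs(got - want) < 1e-9
```

Note that differentiation drops the degree by one, so the coefficient vector is zero-padded back to length $L$ to stay on the same FFT domain.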

Step 3: The derivatives of $f_j(X)$ must be evaluated at all the roots of unity:

This can be done just like for $\hat{f}(X)$ in Step 2 above: an inverse FFT, a differentiation, followed by an FFT.

We will compute each $f_j'(X)$ derivative individually, rather than compute one derivative for the $\sum_j 2^j\cdot f_j(X)$ polynomial. This is because we will need the individual $f_j'(X)$ derivatives anyway in Step 4 below.

Step 4: The derivative of $N_j(X)$ must be evaluated at all the roots of unity: Recall that: \begin{align} \emph{N_j(X)} &\bydef f_j(X)(f_j(X) - 1)\Rightarrow\\
\term{N_j'(X)} &= \left(f_j(X)(f_j(X) - 1)\right)'\\
&= f_j(X)(f_j(X) - 1)' + f_j'(X)(f_j(X) - 1)\\
&= f_j(X)f_j'(X) + f_j'(X)f_j(X) - f_j'(X)\\
&= 2f_j(X)f_j'(X) - f_j'(X)\\
&= f_j'(X) \left[ 2f_j(X) - 1 \right] \end{align} Note that: \begin{align} \forall i \in [n],\ &f_j(\omega^i)\in\{0,1\}\Rightarrow\\
\forall i \in [n],\ &2f_j(\omega^i) - 1\in \{-1,1\}\Rightarrow\\
\forall i \in [n],\ &N_j'(\omega^i) = \pm f_j'(\omega^i) \end{align} As a result, computing $N_j'(\omega^i)$ will cost at most $\Fadd{1}$.
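The sign trick is a one-liner; the sketch below (with hypothetical example data) checks it against the unsimplified $f_j'(\omega^i)(2f_j(\omega^i) - 1)$ formula, assuming the relevant evaluations of $f_j$ are bits:

```python
def nj_prime_evals(bits, fj_prime_evals):
    # N_j'(omega^i) = f_j'(omega^i) * (2*f_j(omega^i) - 1); since f_j(omega^i) is
    # a bit, this is just +/- f_j'(omega^i): a sign flip, no field multiplication.
    return [fp if b == 1 else -fp for b, fp in zip(bits, fj_prime_evals)]

# Sanity check against the product-rule formula (arbitrary illustrative values).
bits = [1, 0, 0, 1, 1]
fps = [3.0, -2.5, 7.0, 0.5, -1.0]
expected = [fp * (2 * b - 1) for b, fp in zip(bits, fps)]
assert nj_prime_evals(bits, fps) == expected
```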

Time complexity

tl;dr: Time complexity is dominated by: $2(\ell+1)$ size-$(n+1)$ FFTs, $\Fmul{3\ell n}$ and $\Fadd{3\ell n}$.

To compute all $\hat{f}'(\omega^i)$'s and $f_j'(\omega^i)$'s:

  1. $\ell+1$ size-$(n+1)$ inverse FFTs to get the coefficients in the monomial basis
  2. $\Fmul{(\ell+1)n}$ for differentiating the coefficients
  3. $\ell+1$ size-$(n+1)$ FFTs to get the evaluations

Then, to compute all $N_j'(\omega^i)$'s, given the above:

  1. $\Fadd{\ell n}$

Then, to evaluate all $h(\omega^i)$’s, for $i\in[n]$, given the above:

  1. $\Fmul{(2\ell+2)n}$ as per Eq. \ref{eq:h_omega_i_expanded}
    • $\ell$ multiplications come from the first sum over $2^j\cdot f_j'$
    • 1 multiplication comes from multiplying by $\beta$
    • $\ell$ come from the second sum over $\beta_j\cdot (\pm f_j')$
    • 1 last multiplication from dividing by the precomputed inverted denominator
  2. $\Fadd{(2\ell+2)n}$
    • $\ell$ additions come from the first sum
    • $\ell$ additions come from the second sum
    • 2 additions come from adding everything in the numerator

Lastly, to evaluate $h(\omega^0)$, as per Eq. \ref{eq:h_omega_0}:

  1. $\Fmul{3\ell+2}$
    • $\ell$ multiplications come from the first sum over $2^j\cdot r_j$
    • 1 multiplication comes from multiplying by $\beta$
    • $2\ell$ come from the second sum over $\beta_j\cdot r_j(r_j-1)$
    • 1 last multiplication from dividing by the precomputed inverted denominator
  2. $\Fadd{2\ell+2}$, just like with the $h(\omega^i)$’s

Note that, $\forall i\in[n]$, we can rewrite Eq. \ref{eq:h_omega_i} with the insights from above as: \begin{align} \label{eq:h_omega_i_expanded} h(\omega^i) &= \frac{\beta \cdot \emph{\hat{f}'(\omega^i)} - \beta\cdot\sum_{j\in[\ell)} 2^j \cdot \emph{f_j'(\omega^i)} + \sum_{j\in[\ell)}\beta_j\cdot \emph{N_j'(\omega^i)}}{\emph{(n+1)/(\omega^i(\omega^i-1))}}\\
&= \frac{\beta \left(\hat{f}'(\omega^i) - \sum_{j\in[\ell)} 2^j \cdot f_j'(\omega^i)\right) + \sum_{j\in[\ell)}\beta_j\cdot \emph{\left(\pm f_j'(\omega^i)\right)}}{(n+1)/(\omega^i(\omega^i-1))}
\end{align}

Doing this $h(X)$ interpolation faster is an open problem, which is why in the paper we explore a multilinear variant of DeKART3.

Appendix: Computing $h(X)$ for $b\ne 2$

Recall that, when $b > 2$, the formula for $h(X)$ in Eq. \ref{eq:hx-njx} changes to: \begin{align} h(X) &= \frac{\beta \cdot \left(\hat{f}(X) - \sum_{j\in[\ell)} \emph{b}^j \cdot f_j(X)\right) + \sum_{j\in[\ell)} \beta_j\cdot f_j(X)(f_j(X) - 1)\emph{\ldots(f_j(X) - (b-1))}}{\VS(X)}\\
&= \frac{\beta \cdot \left(\hat{f}(X) - \sum_{j\in[\ell)} b^j \cdot f_j(X)\right) + \sum_{j\in[\ell)} \beta_j\cdot \overbrace{f_j(X)(f_j(X) - 1)\ldots(f_j(X) - (b-1))}^{\bydef \term{N_j(X)}}}{\vanishS}\\
&= \frac{\beta \cdot \left(\hat{f}(X) - \sum_{j\in[\ell)} b^j \cdot f_j(X)\right) + \sum_{j\in[\ell)} \beta_j\cdot \term{N_j(X)}}{\vanishS} \end{align}

As a result, the degree of $N_j(X)$ is $bn$. Thus, the degree of $h(X)$ becomes $(b-1)n$.

So, to interpolate $h(X)$, we would have to evaluate it over the larger size-$L$ domain $\L$ (instead of the size-$(n+1)$ domain $\S$ used in Eq. \ref{eq:h_omega_i}).

Our new challenge is that the 0/0 trick used when $b=2$ will no longer apply uniformly here: some of the $\zeta^i\in\L$ will fall in the $h(\zeta^i) = 0/0$ case, while others will not.

Our approach will have two parts. First, we’ll use similar differentiation tricks as in the $b=2$ case to compute $h(\zeta^i)$ for $i$’s that give us 0/0. Then, we’ll re-use the coefficient form of the $\hat{f}$ and $f_j$ polynomials from the first part and do size-$L$ FFTs to get the $\hat{f}(\zeta^i)$ and $f_j(\zeta^i)$ evaluations that do not give us division by 0 errors.

References

For cited works, see below 👇👇

  1. Zeromorph: Zero-Knowledge Multilinear-Evaluation Proofs from Homomorphic Univariate Commitments, by Tohru Kohrita and Patrick Towa, in Cryptology ePrint Archive, Paper 2023/917, 2023, [URL] 

  2. To use DeKART for non-powers of two $n$’s, just run the $\dekartSetup$ algorithm with the smallest $n’ > n$ such that $n’$ is a power of two. Then, run the $\dekartProve$ algorithm with a vector of $n’$ values such that (1) the first $n$ values are the values you want to prove and (2) the last $n’-n$ values are set to zero. 

  3. DekartProof: Efficient Vector Range Proofs and Their Applications, by Dan Boneh, Trisha Datta, Rex Fernando, Kamilla Nazirkhanova and Alin Tomescu, in Cryptology ePrint Archive, Paper 2025/1159, 2025, [URL]