<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://alinush.github.io//feed.xml" rel="self" type="application/atom+xml" /><link href="https://alinush.github.io//" rel="alternate" type="text/html" hreflang="en" /><updated>2026-03-13T16:40:00+00:00</updated><id>https://alinush.github.io//feed.xml</id><title type="html">Alin Tomescu</title><subtitle>You taught me the courage of stars before you left...</subtitle><author><name>Alin Tomescu</name></author><entry><title type="html">Complete vs. full vs. perfect binary trees</title><link href="https://alinush.github.io//binary-trees" rel="alternate" type="text/html" title="Complete vs. full vs. perfect binary trees" /><published>2026-02-05T00:00:00+00:00</published><updated>2026-02-05T00:00:00+00:00</updated><id>https://alinush.github.io//complete-vs-full-vs-perfect-binary-trees</id><content type="html" xml:base="https://alinush.github.io//binary-trees"><![CDATA[<p class="info"><strong>tl;dr:</strong> The terms <em>full</em>, <em>complete</em>, and <em>perfect</em> binary tree are often confused with each other. In this short post, we define each one, give examples, and work out <strong>all</strong> the relationships between them — including the perhaps-surprising fact that <em>full + complete does <strong>not</strong> imply perfect</em>.</p>

<p>Real quickly, the Venn diagram below shows how the three classes relate to each other: <strong>Perfect</strong> is a strict subset of <strong>Full $\cap$ Complete</strong>.</p>

<div style="text-align: center;">
<img src="/pictures/binary-trees/binary-trees-venn.png" alt="Venn diagram of full, complete, and perfect binary trees" style="max-width: 550px; width: 100%;" />
</div>

<!--more-->

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
$</div>
<p><!-- $ --></p>

<h2 id="definitions">Definitions</h2>

<h3 id="full-binary-tree">Full binary tree</h3>

<p>A <strong>full</strong> (a.k.a., <strong>proper</strong> or <strong>strict</strong>) binary tree is one where every node has <strong>0 or 2 children</strong>. In other words, no node has exactly one child.</p>

<p><strong>Example 1</strong> — A single node (trivially full):</p>

<pre><code class="language-mermaid">graph TD
    A(( ))
</code></pre>

<p><strong>Example 2</strong> — A 5-node full tree:</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
    C --- D(( ))
    C --- E(( ))
</code></pre>

<p><strong>Example 3</strong> — A 7-node full tree that is <strong>not</strong> perfect (leaves at depths 1, 2, and 3):</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
    B --- D(( ))
    B --- E(( ))
    D --- F(( ))
    D --- G(( ))
</code></pre>
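<p>The definition translates directly into a short recursive check. Below is a minimal Python sketch (mine, not from any reference implementation), modeling a node as a <code>(left, right)</code> tuple with <code>None</code> for an absent child, so a leaf is <code>(None, None)</code>:</p>

```python
def is_full(node):
    """True if every node in the tree has 0 or 2 children.

    A node is a (left, right) tuple; an absent child is None,
    so a leaf is (None, None).
    """
    if node is None:
        return True
    left, right = node
    if (left is None) != (right is None):  # exactly one child => not full
        return False
    return is_full(left) and is_full(right)

leaf = (None, None)
print(is_full((leaf, (leaf, leaf))))  # the 5-node full tree -> True
print(is_full((leaf, None)))          # a node with one child -> False
```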

<h3 id="complete-binary-tree">Complete binary tree</h3>

<p>A <strong>complete</strong> binary tree must have two properties:</p>
<ol>
  <li>Every level is fully-filled <strong>except possibly the last</strong>,</li>
  <li>The last level is filled <strong>from left to right</strong>.</li>
</ol>

<p class="note">This is the shape you get when you insert elements into a binary heap one by one.</p>

<p><strong>Example 1</strong> — A 4-node complete tree (note: <strong>not</strong> full, since the left child has only one child):</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
    B --- D(( ))
    B --- inv[ ]

    style inv fill-opacity:0, stroke-opacity:0;
    linkStyle 3 stroke:none;
</code></pre>

<p><strong>Example 2</strong> — A 5-node complete tree (this one <strong>is</strong> also full):</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
    B --- D(( ))
    B --- E(( ))
</code></pre>

<p><strong>Example 3</strong> — A 6-node complete tree (note: <strong>not</strong> full, since the right child has only one child):</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
    B --- D(( ))
    B --- E(( ))
    C --- F(( ))
    C --- inv[ ]

    style inv fill-opacity:0, stroke-opacity:0;
    linkStyle 5 stroke:none;
</code></pre>
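<p>Completeness is naturally checked with a level-order (BFS) traversal: once a missing child is seen, no real node may follow it. A hedged Python sketch (mine), where a node is a <code>(left, right)</code> tuple and an absent child is <code>None</code>:</p>

```python
from collections import deque

def is_complete(root):
    """True if every level is full except possibly the last,
    which is filled from left to right.

    A node is a (left, right) tuple; an absent child is None.
    """
    queue = deque([root])
    seen_gap = False  # becomes True once we pass the first missing child
    while queue:
        node = queue.popleft()
        if node is None:
            seen_gap = True
        elif seen_gap:
            return False  # a real node after a gap => not left-filled
        else:
            queue.append(node[0])
            queue.append(node[1])
    return True

leaf = (None, None)
print(is_complete(((leaf, None), leaf)))  # the 4-node tree above -> True
print(is_complete((leaf, (leaf, leaf))))  # full but not complete -> False
```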

<h3 id="perfect-binary-tree">Perfect binary tree</h3>

<p>A <strong>perfect</strong> binary tree must have two properties:</p>
<ol>
  <li><strong>All</strong> internal nodes have exactly 2 children,</li>
  <li>All leaves are at the <strong>same depth</strong>.</li>
</ol>

<p class="note">A perfect binary tree of height $h$ has exactly $2^{h+1} - 1$ nodes.</p>

<p><strong>Example 1</strong> — Height 0 (1 node):</p>

<pre><code class="language-mermaid">graph TD
    A(( ))
</code></pre>

<p><strong>Example 2</strong> — Height 1 (3 nodes):</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
</code></pre>

<p><strong>Example 3</strong> — Height 2 (7 nodes):</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
    B --- D(( ))
    B --- E(( ))
    C --- F(( ))
    C --- G(( ))
</code></pre>
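<p>Both properties can be verified at once by checking that every <code>None</code> pointer sits at the same depth. A hedged Python sketch (mine), where a node is a <code>(left, right)</code> tuple and an absent child is <code>None</code>:</p>

```python
def is_perfect(root):
    """True if all internal nodes have 2 children and all leaves
    share the same depth.

    A node is a (left, right) tuple; an absent child is None.
    Equivalently: every None pointer sits at the same depth.
    """
    def leftmost_depth(node):
        d = 0
        while node is not None:
            node = node[0]
            d += 1
        return d

    def check(node, depth, target):
        if node is None:
            return depth == target
        return (check(node[0], depth + 1, target)
                and check(node[1], depth + 1, target))

    return check(root, 0, leftmost_depth(root))

leaf = (None, None)
print(is_perfect((leaf, leaf)))          # height 1, 3 nodes -> True
print(is_perfect(((leaf, leaf), leaf)))  # 5-node complete tree -> False
```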

<hr />

<h2 id="relationships">Relationships</h2>

<p>There are six pairwise implications to consider. Let’s work through all of them.</p>

<h3 id="full-nrightarrow-complete">Full $\nRightarrow$ Complete</h3>

<p>Counterexample: this tree is full (every node has 0 or 2 children), but <strong>not</strong> complete because the last level is not filled from left to right: its two nodes hang under the right child, while the positions under the left child are empty.</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
    C --- D(( ))
    C --- E(( ))
</code></pre>

<h3 id="full-nrightarrow-perfect">Full $\nRightarrow$ Perfect</h3>

<p>Counterexample: this full tree has leaves at depths 1 and 3, so it is <strong>not</strong> perfect.</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
    B --- D(( ))
    B --- E(( ))
    D --- F(( ))
    D --- G(( ))
</code></pre>

<h3 id="complete-nrightarrow-full">Complete $\nRightarrow$ Full</h3>

<p>Counterexample: this 4-node complete tree has a node with exactly <strong>one</strong> child, violating fullness.</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
    B --- D(( ))
    B --- inv[ ]

    style inv fill-opacity:0, stroke-opacity:0;
    linkStyle 3 stroke:none;
</code></pre>

<h3 id="complete-nrightarrow-perfect">Complete $\nRightarrow$ Perfect</h3>

<p>Counterexample: this 5-node complete tree has leaves at depths 1 and 2.</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
    B --- D(( ))
    B --- E(( ))
</code></pre>

<h3 id="perfect-rightarrow-full">Perfect $\Rightarrow$ Full</h3>

<p>By definition, every internal node of a perfect tree has exactly 2 children, which is exactly the requirement for fullness.</p>

<h3 id="perfect-rightarrow-complete">Perfect $\Rightarrow$ Complete</h3>

<p>In a perfect tree, <em>every</em> level is completely filled (including the last). This trivially satisfies the definition of completeness.</p>

<h3 id="full--complete-nrightarrow-perfect">Full + Complete $\nRightarrow$ Perfect</h3>

<p>Many people assume that if a tree is <em>both</em> full and complete, then it must be perfect. <strong>This is false.</strong></p>

<p><strong>Counterexample:</strong> Consider this 5-node tree.</p>

<pre><code class="language-mermaid">graph TD
    A(( )) --- B(( ))
    A --- C(( ))
    B --- D(( ))
    B --- E(( ))
</code></pre>

<ul>
  <li><strong>Full?</strong> Yes — every node has 0 or 2 children. ✅</li>
  <li><strong>Complete?</strong> Yes — levels 0 and 1 are fully-filled, and level 2 is filled from the left. ✅</li>
  <li><strong>Perfect?</strong> No — level 2 is not fully-filled $\Rightarrow$ all leaves are <strong>not</strong> at the same depth ❌</li>
</ul>

<p>The key insight is that <em>completeness</em> only requires the <strong>last</strong> level to be left-filled; it does <strong>not</strong> require the last level to be <em>fully</em> filled. And <em>fullness</em> only bans single-child nodes; it says nothing about leaf depths. Combined, the two properties are still not strong enough to force all leaves to the same depth.</p>

<h4 id="when-does-full--complete--perfect">When <em>does</em> full + complete $=$ perfect?</h4>

<p>When the number of nodes $n = 2^{h+1} - 1$ for some $h$ (i.e., the last level is fully-filled). In that case — and only that case — a complete tree is also perfect (and, trivially, also full).</p>
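<p>In code, that condition for a tree already known to be complete reduces to a node-count test, which a classic bit trick handles (a sketch of mine, not from the post):</p>

```python
def complete_tree_is_perfect(n: int) -> bool:
    """For a COMPLETE binary tree on n nodes: perfect iff n = 2^(h+1) - 1
    for some h, i.e. iff n + 1 is a power of two."""
    return n > 0 and (n + 1) & n == 0

print([m for m in range(1, 16) if complete_tree_is_perfect(m)])  # [1, 3, 7, 15]
```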

<hr />]]></content><author><name>Alin Tomescu</name></author><category term="trees" /><summary type="html"><![CDATA[tl;dr: The terms full, complete, and perfect binary tree are often confused with each other. In this short post, we define each one, give examples, and work out all the relationships between them — including the perhaps-surprising fact that full + complete does not imply perfect. Real quickly, the Venn diagram below shows how the three classes relate to each other: Perfect is a strict subset of Full $\cap$ Complete.]]></summary></entry><entry><title type="html">TIL: Field multiplications are faster than hashing!</title><link href="https://alinush.github.io//field-muls-vs-hashing" rel="alternate" type="text/html" title="TIL: Field multiplications are faster than hashing!" /><published>2026-02-04T00:00:00+00:00</published><updated>2026-02-04T00:00:00+00:00</updated><id>https://alinush.github.io//til-field-multiplications-are-faster-than-hashing</id><content type="html" xml:base="https://alinush.github.io//field-muls-vs-hashing"><![CDATA[<p class="info"><strong>tl;dr:</strong> I ran some benchmarks and was surprised to learn that multiplying two BLS12-381 scalar field elements is <strong>~5.5x faster</strong> than hashing 64 bytes with Blake3.</p>

<!--more-->

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
$</div>
<p><!-- $ --></p>

<h2 id="benchmark-results">Benchmark results</h2>

<table>
  <thead>
    <tr>
      <th>Operation</th>
      <th>Time</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Blake3 hash (64 bytes)</td>
      <td>~52.7 ns</td>
    </tr>
    <tr>
      <td>BLS12-381 scalar field mul</td>
      <td>~9.5 ns</td>
    </tr>
  </tbody>
</table>

<p>Field multiplication wins handily.</p>

<h2 id="why">Why?</h2>

<p>The scalar field multiplication in <code class="language-plaintext highlighter-rouge">blstrs</code> is a single 256-bit Montgomery multiplication implemented in hand-tuned assembly.
Blake3, while blazingly fast for a hash function, still has to run its compression function which involves many more operations.</p>
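<p>To make the Montgomery idea concrete, here is an illustrative (and vastly slower) Python sketch of Montgomery reduction (REDC) over the BLS12-381 scalar field. The modulus is the real one, but the big-integer arithmetic is nothing like the hand-tuned 4×64-bit assembly in <code class="language-plaintext highlighter-rouge">blstrs</code>:</p>

```python
# BLS12-381 scalar field modulus (a well-known 255-bit prime).
R_MOD = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001
R = 1 << 256                          # Montgomery radix
R_INV = pow(R, -1, R_MOD)             # R^-1 mod r (Python 3.8+)
N_PRIME = (-pow(R_MOD, -1, R)) % R    # -r^-1 mod R

def redc(t):
    """Montgomery reduction: returns t * R^-1 mod r, for 0 <= t < r*R."""
    m = ((t & (R - 1)) * N_PRIME) & (R - 1)   # (t mod R) * n' mod R
    u = (t + m * R_MOD) >> 256                # t + m*r is divisible by R
    return u - R_MOD if u >= R_MOD else u

def mont_mul(a_bar, b_bar):
    """Multiply two field elements held in Montgomery form (x_bar = x*R mod r)."""
    return redc(a_bar * b_bar)

a, b = 123456789, 987654321
a_bar, b_bar = (a * R) % R_MOD, (b * R) % R_MOD
product = (mont_mul(a_bar, b_bar) * R_INV) % R_MOD  # convert back out
assert product == (a * b) % R_MOD
```

<p>The point of the Montgomery form is that <code>redc</code> replaces an expensive division by $r$ with shifts and masks by the power-of-two radix $R$.</p>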

<h2 id="reproduce-it-yourself">Reproduce it yourself</h2>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/alinush/bench-crypto
<span class="nb">cd </span>bench-blake3-vs-field
cargo bench
</code></pre></div></div>

<h2 id="references">References</h2>

<p>For cited works, see below 👇👇</p>]]></content><author><name>Alin Tomescu</name></author><category term="benchmarks" /><summary type="html"><![CDATA[tl;dr: I ran some benchmarks and was surprised to learn that multiplying two BLS12-381 scalar field elements is ~5.5x faster than hashing 64 bytes with Blake3.]]></summary></entry><entry><title type="html">Learning parity with noise (LPN)</title><link href="https://alinush.github.io//lpn" rel="alternate" type="text/html" title="Learning parity with noise (LPN)" /><published>2025-12-29T00:00:00+00:00</published><updated>2025-12-29T00:00:00+00:00</updated><id>https://alinush.github.io//learning-parity-with-noise-lpn</id><content type="html" xml:base="https://alinush.github.io//lpn"><![CDATA[<p class="info"><strong>tl;dr:</strong> A very useful cryptographic assumption that is related to coding theory.</p>

<!--more-->

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
\def\b{\mathbf{b}}
\def\e{\mathbf{e}}
\def\s{\mathbf{s}}
\def\A{\mathbf{A}}
\def\E{\mathcal{E}}
\def\G{\mathbf{G}}
$</div>
<p><!-- $ --></p>

<h2 id="preliminaries">Preliminaries</h2>

<ul>
  <li>bolded, lowercase variables (e.g., $\s$) denote <strong>column</strong> vectors in $\F^m\bydef \F^{m\times 1}$.</li>
  <li>$n$ is the number of samples</li>
  <li>$m$ is the dimension (i.e., the length of the secret)</li>
</ul>

<h2 id="introduction">Introduction</h2>

<p>The <strong>learning parity with noise (LPN)</strong> assumption was proposed in a 1993 CRYPTO paper by Blum, Furst, Kearns and Lipton<sup id="fnref:BFKL94"><a href="#fn:BFKL94" class="footnote" rel="footnote" role="doc-noteref">1</a></sup>.</p>

<p>Informally, the <strong>computational variant</strong> says that, for a public matrix $\A \in \F^{n \times m}$, a secret vector $\s\in \F^m$ and noise $\e\in \F^n$ sampled from an <strong>error distribution</strong> $\E$, it is hard to recover $\s$ from $(\A,\A\s+\e)$.</p>

<p>There is also a <strong>decisional variant</strong> that says it is hard to distinguish $(\A,\A\s+\e)$ from $(\A,\b)$, when $\b\randget \F^n$ (uniformly) and $\e$ is sampled appropriately from $\E$.</p>

<p>There is also a <strong>dual LPN variant</strong> introduced by Micciancio and Mol<sup id="fnref:MM11"><a href="#fn:MM11" class="footnote" rel="footnote" role="doc-noteref">2</a></sup>, which says that for a public matrix $\G\randget \mathcal{G}$, where $\mathcal{G}$ is a <strong>generator distribution</strong>, and noise $\e$ sampled from $\E$, it is hard to recover $\e$ from $(\G,\G\e)$.</p>
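<p>To make the notation concrete, here is a hedged Python sketch (mine, not from any cited paper) that samples a binary-LPN instance with Bernoulli-$\tau$ noise, following the dimensions above ($\A \in \F_2^{n \times m}$, $\s \in \F_2^m$, $\e \in \F_2^n$):</p>

```python
import random

def lpn_sample(n, m, tau, seed=None):
    """Sample a binary LPN instance: A is n-by-m over F_2, s is in F_2^m,
    each e_i ~ Bernoulli(tau), and b = A*s + e (mod 2).

    Recovering s from (A, b) is the (search) LPN problem."""
    rng = random.Random(seed)
    A = [[rng.randrange(2) for _ in range(m)] for _ in range(n)]
    s = [rng.randrange(2) for _ in range(m)]
    e = [1 if rng.random() < tau else 0 for _ in range(n)]
    b = [(sum(A[i][j] * s[j] for j in range(m)) + e[i]) % 2 for i in range(n)]
    return A, s, e, b

A, s, e, b = lpn_sample(n=8, m=4, tau=0.125, seed=42)
# Sanity check: b really equals A*s + e over F_2.
assert all(b[i] == (sum(A[i][j] * s[j] for j in range(4)) + e[i]) % 2
           for i in range(8))
```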

<h2 id="references">References</h2>

<p>For cited works, see below 👇👇</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:BFKL94">
      <p><strong>Cryptographic Primitives Based on Hard Learning Problems</strong>, by Blum, Avrim and Furst, Merrick and Kearns, Michael and Lipton, Richard J., <em>in Advances in Cryptology — CRYPTO’ 93</em>, 1994 <a href="#fnref:BFKL94" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:MM11">
      <p><strong>Pseudorandom Knapsacks and the Sample Complexity of LWE Search-to-Decision Reductions</strong>, by Micciancio, Daniele and Mol, Petros, <em>in Advances in Cryptology – CRYPTO 2011</em>, 2011 <a href="#fnref:MM11" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Alin Tomescu</name></author><category term="LPN" /><summary type="html"><![CDATA[tl;dr: A very useful cryptographic assumption that is related to coding theory.]]></summary></entry><entry><title type="html">Goldwasser-Kalai-Rothblum (GKR) proofs</title><link href="https://alinush.github.io//gkr" rel="alternate" type="text/html" title="Goldwasser-Kalai-Rothblum (GKR) proofs" /><published>2025-12-18T00:00:00+00:00</published><updated>2025-12-18T00:00:00+00:00</updated><id>https://alinush.github.io//goldwasser-kalai-rothblum-gkr-proofs</id><content type="html" xml:base="https://alinush.github.io//gkr"><![CDATA[<p class="info"><strong>tl;dr:</strong> A very much celebrated protocol.</p>

<!--more-->

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
$</div>
<p><!-- $ --></p>

<h2 id="resources">Resources</h2>

<ul>
  <li><a href="https://org.weids.dev/agenda/notes/gkr-sum-check-tutorial.html">GKR sumcheck tutorial</a>, by Angold Wang</li>
</ul>

<h2 id="references">References</h2>

<p>For cited works, see below 👇👇</p>]]></content><author><name>Alin Tomescu</name></author><summary type="html"><![CDATA[tl;dr: A very much celebrated protocol.]]></summary></entry><entry><title type="html">Post-quantum signature schemes</title><link href="https://alinush.github.io//post-quantum-signatures" rel="alternate" type="text/html" title="Post-quantum signature schemes" /><published>2025-12-08T00:00:00+00:00</published><updated>2025-12-08T00:00:00+00:00</updated><id>https://alinush.github.io//post-quantum-signature-schemes</id><content type="html" xml:base="https://alinush.github.io//post-quantum-signatures"><![CDATA[<p class="info"><strong>tl;dr:</strong> Some notes on post-quantum (PQ) signature schemes.</p>

<!--more-->

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
$</div>
<p><!-- $ --></p>

<h2 id="notes">Notes</h2>

<p>This is a research blog post on the state-of-the-art PQ signature schemes.</p>

<p>Steps:</p>

<ul class="task-list">
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" checked="checked" />Surveyed NIST’s Round 2 additional candidates<sup id="fnref:nist-round-2-additional"><a href="#fn:nist-round-2-additional" class="footnote" rel="footnote" role="doc-noteref">1</a></sup>
    <ul>
      <li>Filtered after Round 1. (See status report<sup id="fnref:ABCplus24"><a href="#fn:ABCplus24" class="footnote" rel="footnote" role="doc-noteref">2</a></sup>.)</li>
      <li>See all schemes <a href="/pictures/nist-round-1.png">here</a>, with the advanced ones marked in $\textcolor{blue}{\text{blue}}$
        <ul>
          <li><strong>A criticism:</strong> The MPC-in-the-Head schemes actually rely on all sorts of exotic assumptions. The caption makes it seem like they do not, which is confusing.</li>
        </ul>
      </li>
    </ul>
  </li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" checked="checked" />Looked at <a href="#faest">FAEST</a>
    <ul>
      <li>Not as fast to verify as SPHINCS+</li>
      <li>Better if faster signing is desired (e.g., consensus signatures)</li>
    </ul>
  </li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" checked="checked" />Looked at <a href="#slh-dsa-sphincs">SLH-DSA (SPHINCS+)</a>
    <ul>
      <li>This is ideal for the blockchain setting: minimal assumptions (just hashing), verification time decreases with signature size (by trading off signing time), standardized, sufficiently-succinct (e.g., 7.67 KiB)</li>
    </ul>
  </li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" checked="checked" />Found all FIPS standards<sup id="fnref:fips"><a href="#fn:fips" class="footnote" rel="footnote" role="doc-noteref">3</a></sup>; not that many, hm.</li>
</ul>

<h2 id="faest">FAEST</h2>

<p class="note">See <a href="https://faest.info/">FAEST website</a> for a full list of resources.</p>

<p>Notes:</p>

<ul>
  <li>introduced in 2023<sup id="fnref:BBdplus23"><a href="#fn:BBdplus23" class="footnote" rel="footnote" role="doc-noteref">4</a></sup></li>
  <li>based on AES128/192/256 and <strong>Vector Oblivious Linear Evaluation (VOLE) in the head (VOLEiTH)</strong>
    <ul>
      <li>VOLEitH is constructed only from symmetric-key primitives</li>
    </ul>
  </li>
  <li>NIST cites it as <em>“somewhat complex”</em> and says <em>“the security proof is very technical.”</em><sup id="fnref:ABCplus24:1"><a href="#fn:ABCplus24" class="footnote" rel="footnote" role="doc-noteref">2</a></sup></li>
</ul>

<h3 id="construction">Construction</h3>

<p>\begin{align}
\pk &amp;= (x,y)\\
\sk &amp;= k
\end{align}
such that:
\begin{align}
E_k(x) = y,\ \text{where}\ E\ \text{is a block cipher}
\end{align}</p>

<p>Signing works by doing a ZKPoK of $k$ using VOLEitH and the QuickSilver information-theoretic proof system<sup id="fnref:YSWW21e"><a href="#fn:YSWW21e" class="footnote" rel="footnote" role="doc-noteref">5</a></sup> (under the Fiat-Shamir transform).</p>
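<p>For intuition, here is a toy Python sketch of just the key-generation relation above (no ZK proof). FAEST uses AES as $E$; since Python’s standard library has no AES, the sketch substitutes a small SHA-256-based Feistel permutation as a stand-in block cipher:</p>

```python
import hashlib
import secrets

def toy_prp(k: bytes, x: bytes, rounds: int = 4) -> bytes:
    """A 128-bit keyed Feistel permutation built from SHA-256.
    Stand-in for the AES block cipher E that FAEST actually uses."""
    left, right = x[:8], x[8:]
    for i in range(rounds):
        f = hashlib.sha256(k + bytes([i]) + right).digest()[:8]
        left, right = right, bytes(a ^ b for a, b in zip(left, f))
    return left + right

def keygen():
    """sk is the cipher key k; pk is a pair (x, y) with y = E_k(x).
    Signing would then prove knowledge of k in zero knowledge
    (via VOLEitH + QuickSilver) -- omitted here."""
    k = secrets.token_bytes(16)
    x = secrets.token_bytes(16)
    return k, (x, toy_prp(k, x))

sk, (x, y) = keygen()
assert toy_prp(sk, x) == y  # the relation the signer proves knowledge of
```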

<h3 id="sizes">Sizes</h3>

<p>Secret and public keys are small: 32 bytes at 128-bit security.</p>

<p>Signature sizes are described below.</p>

<h3 id="performance">Performance</h3>

<p>Benchmarked on:</p>
<blockquote>
  <p><em>“a single core of a consumer notebook with an AMD Ryzen 7 5800H processor, with a base clock speed of 3.2 GHz and 16 GiB memory.”</em> 
<em>“Simultaneous Multi-Threading was enabled.”</em>
<em>“The computer was running Linux 6.1.30, and the implementations were built with GCC 12.2.1.”</em></p>
</blockquote>

<p><strong>Unoptimized</strong> reference implementation is <strong>too slow</strong> (tens of milliseconds to verify):</p>

<div align="center"><img style="width:60%" src="/pictures/faest-v1-ref-impl.png" /></div>

<p>But <strong>x86-64 AVX2</strong> implementation could be <strong>practical</strong> at 128-bit security level (0.87 ms to verify 6,336-byte signatures):</p>

<p class="warning">These numbers are <strong>multi-threaded</strong>!</p>

<div align="center"><img style="width:80%" src="/pictures/faest-v1-x86_64-avx2-impl.png" /></div>

<p class="note">Perhaps this recent QuickSilver improvement<sup id="fnref:BCCplus23e"><a href="#fn:BCCplus23e" class="footnote" rel="footnote" role="doc-noteref">6</a></sup> will help?
I quote: <em>“For a circuit of size $\sizeof{C} = 2^{27}$, it shows up to 83.6× improvement on communication, compared to the general VOLE-ZK Quicksilver. In terms of running time, it is 70% faster when bandwidth is 10Mbps”</em>.<br />
<br />
<a href="https://x.com/pratiks_crypto/status/1998113981794529597">Pratik Sarkar suggests</a> it likely does <em>not</em>.
And even if it does, it would require additively-homomorphic encryption like BGV, which would introduce additional assumptions.</p>

<h2 id="slh-dsa-sphincs">SLH-DSA (SPHINCS+)</h2>

<p class="note">See <a href="https://sphincs.org/index.html">SPHINCS website</a> for a full list of resources.</p>

<ul>
  <li>FIPS-standardized<sup id="fnref:FIPS205"><a href="#fn:FIPS205" class="footnote" rel="footnote" role="doc-noteref">7</a></sup></li>
  <li>stateless, hash-based (hence the “SLH” acronym: State<strong>L</strong>ess <strong>H</strong>ash-based)</li>
  <li><em>“an SLH-DSA key pair contains $2^{63}, 2^{64}, 2^{66}$, or $2^{68}$ <strong>forest of random subsets (FORS)</strong> keys”</em></li>
  <li><em>“FORS allows each key pair to safely sign a small number of messages”</em></li>
  <li><em>“An XMSS key consists of $2^{h'}$ WOTS$^+$ keys and can sign $2^{h'}$ messages”</em></li>
  <li>a rather-involved construction; would need to dig deeper to see if there’s a simple design underneath</li>
  <li><em>“The SHA2-based parameter sets are 2x slower than the SHAKE-based ones”</em></li>
</ul>

<h3 id="construction-1">Construction</h3>

<p>Key generation, at a high-level, works like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(sk_seed, sk_prf, pk_seed) ← random()
root ← build_merkle_tree(sk_seed, pk_seed)
SK = (sk_seed, sk_prf, pk_seed, root)
PK = (pk_seed, root)
</code></pre></div></div>

<p>So, in the context of blockchain accounts, the user’s mnemonic should be used to deterministically derive the seeds (<code class="language-plaintext highlighter-rouge">sk_seed</code>, <code class="language-plaintext highlighter-rouge">sk_prf</code> and <code class="language-plaintext highlighter-rouge">pk_seed</code>), so that the whole key pair can be re-generated.
Otherwise, wallet recovery won’t work.</p>
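<p>As an illustration (my sketch, not part of FIPS 205), the seeds could be expanded deterministically from the mnemonic with an XOF, making key generation reproducible at recovery time. A production wallet should use a standardized derivation scheme (e.g., BIP-39-style) instead:</p>

```python
import hashlib

def derive_seeds(mnemonic: str, n: int = 16):
    """Deterministically derive (sk_seed, sk_prf, pk_seed), each n bytes,
    from a mnemonic via SHAKE-256. Illustrative only -- real wallets
    should use a standardized KDF / derivation path."""
    material = hashlib.shake_256(mnemonic.encode("utf-8")).digest(3 * n)
    return material[:n], material[n:2 * n], material[2 * n:]

sk_seed, sk_prf, pk_seed = derive_seeds("abandon ability able ...")
# Same mnemonic => same seeds => same key pair after re-running keygen.
assert derive_seeds("abandon ability able ...") == (sk_seed, sk_prf, pk_seed)
```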

<h3 id="sizes-1">Sizes</h3>

<p>Key and signature sizes from FIPS-205<sup id="fnref:FIPS205:1"><a href="#fn:FIPS205" class="footnote" rel="footnote" role="doc-noteref">7</a></sup>:</p>

<div align="center"><img style="width:80%" src="/pictures/slh-dsa-sizes.png" /></div>

<h3 id="sphincs-shake-128f-benchmarks"><code class="language-plaintext highlighter-rouge">sphincs-shake-128f</code> benchmarks</h3>

<p class="note">16.69 KiB signature size, signing time is 17 ms and verification is 1.1 ms!</p>

<p>Benchmarking the <strong>reference implementation</strong> in C<sup id="fnref:sphincsplus-git"><a href="#fn:sphincsplus-git" class="footnote" rel="footnote" role="doc-noteref">8</a></sup> on my Apple MacBook Pro (M1 Max) below.
They only provide an ARM implementation for the SHAKE variant $\Rightarrow$ not sure what the SHA2 numbers would look like on ARM.</p>

<p class="todo">Are these numbers single-threaded?</p>

<p class="todo">Got this <code class="language-plaintext highlighter-rouge">kpc_get_thread_counters failed, run as sudo?</code> error (I think) during the <code class="language-plaintext highlighter-rouge">thash</code> benchmarks.</p>

<p>Building this variant on ARM via:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/sphincs/sphincsplus/
cd shake-a64/
make clean
make benchmark
</code></pre></div></div>

<p>The results, edited for clarity of exposition:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cc -Wall -Wextra -Wpedantic -Wmissing-prototypes -O3 -std=c99 -fomit-frame-pointer -flto -DPARAMS=sphincs-shake-128f  -o test/benchmark test/cycles.c hash_shake.c hash_shakex2.c thash_shake_robustx2.c address.c randombytes.c merkle.c wots.c utils.c utilsx2.c fors.c sign.c fips202.c fips202x2.c f1600x2_const.c f1600x2.s test/benchmark.c
wrong fixed counters count

Parameters: n = 16, h = 66, d = 22, b = 6, k = 33, w = 16

Running 10 iterations.

thash                avg.        0.53 us
f1600x2              avg.        0.28 us
thashx2              avg.        0.58 us

Generating keypair.. avg.     1,294.10 us
  - WOTS pk gen 2x.. avg.       294.70 us
Signing..            avg.    17,255.30 us --&gt; 17 ms
  - FORS signing..   avg.       853.00 us
  - WOTS pk gen x2.. avg.       176.80 us
Verifying..          avg.     1,096.50 us --&gt; 1.1 ms

Signature size: 17,088 bytes (16.69 KiB)

Public key size: 32 bytes (0.03 KiB)
Secret key size: 64 bytes (0.06 KiB)
</code></pre></div></div>

<h3 id="sphincs-shake-128s-benchmarks"><code class="language-plaintext highlighter-rouge">sphincs-shake-128s</code> benchmarks</h3>

<p class="success"><strong>Clear winner:</strong> hash-based, 7.67 KiB signatures created in 336 ms that verify in 0.4 ms!</p>

<p>Building this variant on ARM via:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/sphincs/sphincsplus/
cd shake-a64/
gsed -i 's/shake-128f/shake-128s/g' Makefile
make clean
make benchmark
</code></pre></div></div>

<p class="warning">Note that the <code class="language-plaintext highlighter-rouge">Makefile</code> was modified to build the shorter <code class="language-plaintext highlighter-rouge">s</code>-variant of SPHINCS+.</p>

<p>The results for <code class="language-plaintext highlighter-rouge">sphincs-shake-128s</code>, edited for clarity of exposition:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cc -Wall -Wextra -Wpedantic -Wmissing-prototypes -O3 -std=c99 -fomit-frame-pointer -flto -DPARAMS=sphincs-shake-128s  -o test/benchmark test/cycles.c hash_shake.c hash_shakex2.c thash_shake_robustx2.c address.c randombytes.c merkle.c wots.c utils.c utilsx2.c fors.c sign.c fips202.c fips202x2.c f1600x2_const.c f1600x2.s test/benchmark.c
wrong fixed counters count
Parameters: n = 16, h = 63, d = 7, b = 12, k = 14, w = 16

Running 10 iterations.

thash                avg.        0.94 us
f1600x2              avg.        0.38 us
thashx2              avg.        0.66 us

Generating keypair.. avg.    46,510.80 us
  - WOTS pk gen 2x.. avg.       178.80 us
Signing..            avg.   336,702.20 us --&gt; 336.7 ms
  - FORS signing..   avg.    22,816.50 us
  - WOTS pk gen x2.. avg.       176.40 us
Verifying..          avg.       396.10 us --&gt; 0.396 ms

Signature size: 7,856 bytes (7.67 KiB)

Public key size: 32 bytes (0.03 KiB)
Secret key size: 64 bytes (0.06 KiB)
</code></pre></div></div>

<h3 id="performance-of-rustcryptosignatures">Performance of <code class="language-plaintext highlighter-rouge">RustCrypto/signatures</code></h3>

<p class="note">Seems like a single-threaded implementation.</p>

<p>Benchmarked via:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/RustCrypto/signatures
cd slh-dsa/
cargo bench
</code></pre></div></div>

<p class="warning">SHA2 variants are faster.
It may be because they leverage <a href="https://developer.arm.com/documentation/ddi0602/2025-09/SIMD-FP-Instructions/SHA256SU0--SHA256-schedule-update-0-">native SHA2 instructions</a>.</p>

<table>
  <thead>
    <tr>
      <th>Scheme</th>
      <th>Signing Time</th>
      <th>Verification Time</th>
      <th>Sig. size (bytes)</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>SLH-DSA-SHAKE-128<strong>s</strong></td>
      <td>1.06 s</td>
      <td>0.98 ms</td>
      <td>7,856</td>
    </tr>
    <tr>
      <td>SLH-DSA-SHAKE-192<strong>s</strong></td>
      <td>1.81 s</td>
      <td>1.46 ms</td>
      <td>16,224</td>
    </tr>
    <tr>
      <td>SLH-DSA-SHAKE-256<strong>s</strong></td>
      <td>1.58 s</td>
      <td>2.11 ms</td>
      <td>29,792</td>
    </tr>
    <tr>
      <td>SLH-DSA-SHAKE-128f</td>
      <td>50.29 ms</td>
      <td>3.14 ms</td>
      <td>17,088</td>
    </tr>
    <tr>
      <td>SLH-DSA-SHAKE-192f</td>
      <td>81.56 ms</td>
      <td>4.35 ms</td>
      <td>35,664</td>
    </tr>
    <tr>
      <td>SLH-DSA-SHAKE-256f</td>
      <td>166.38 ms</td>
      <td>4.59 ms</td>
      <td>49,856</td>
    </tr>
    <tr>
      <td>SLH-DSA-SHA2-128<strong>s</strong></td>
      <td>137.45 ms</td>
      <td>144.93 µs</td>
      <td>same</td>
    </tr>
    <tr>
      <td>SLH-DSA-SHA2-192<strong>s</strong></td>
      <td>285.07 ms</td>
      <td>232.93 µs</td>
      <td>same</td>
    </tr>
    <tr>
      <td>SLH-DSA-SHA2-256<strong>s</strong></td>
      <td>254.05 ms</td>
      <td>340.57 µs</td>
      <td>same</td>
    </tr>
    <tr>
      <td>SLH-DSA-SHA2-128f</td>
      <td>6.61 ms</td>
      <td>439.25 µs</td>
      <td>same</td>
    </tr>
    <tr>
      <td>SLH-DSA-SHA2-192f</td>
      <td>12.38 ms</td>
      <td>661.47 µs</td>
      <td>same</td>
    </tr>
    <tr>
      <td>SLH-DSA-SHA2-256f</td>
      <td>24.27 ms</td>
      <td>673.78 µs</td>
      <td>same</td>
    </tr>
  </tbody>
</table>

<!--
Original numbers:
sign: SLH-DSA-SHAKE-128s
                        time:   [1.0467 s 1.0654 s 1.0893 s]
sign: SLH-DSA-SHAKE-192s
                        time:   [1.8093 s 1.8197 s 1.8328 s]
sign: SLH-DSA-SHAKE-256s
                        time:   [1.5799 s 1.5827 s 1.5855 s]
sign: SLH-DSA-SHAKE-128f
                        time:   [50.081 ms 50.292 ms 50.802 ms]
sign: SLH-DSA-SHAKE-192f
                        time:   [81.099 ms 81.560 ms 82.367 ms]
sign: SLH-DSA-SHAKE-256f
                        time:   [165.23 ms 166.38 ms 168.39 ms]
sign: SLH-DSA-SHA2-128s time:   [137.21 ms 137.45 ms 137.75 ms]
sign: SLH-DSA-SHA2-192s time:   [278.83 ms 285.07 ms 294.38 ms]
sign: SLH-DSA-SHA2-256s time:   [253.64 ms 254.05 ms 254.46 ms]
sign: SLH-DSA-SHA2-128f time:   [6.6013 ms 6.6140 ms 6.6239 ms]
sign: SLH-DSA-SHA2-192f time:   [12.268 ms 12.382 ms 12.545 ms]
sign: SLH-DSA-SHA2-256f time:   [24.214 ms 24.279 ms 24.354 ms]

verify: SLH-DSA-SHAKE-128s
                        time:   [980.97 µs 983.51 µs 988.95 µs]
verify: SLH-DSA-SHAKE-192s
                        time:   [1.4479 ms 1.4635 ms 1.4714 ms]
verify: SLH-DSA-SHAKE-256s
                        time:   [2.1014 ms 2.1136 ms 2.1391 ms]
verify: SLH-DSA-SHAKE-128f
                        time:   [3.1024 ms 3.1490 ms 3.2435 ms]
verify: SLH-DSA-SHAKE-192f
                        time:   [4.3089 ms 4.3528 ms 4.3891 ms]
verify: SLH-DSA-SHAKE-256f
                        time:   [4.5738 ms 4.5958 ms 4.6214 ms]
verify: SLH-DSA-SHA2-128s
                        time:   [143.83 µs 144.93 µs 146.24 µs]
verify: SLH-DSA-SHA2-192s
                        time:   [230.55 µs 232.93 µs 234.93 µs]
verify: SLH-DSA-SHA2-256s
                        time:   [339.58 µs 340.57 µs 342.29 µs]
verify: SLH-DSA-SHA2-128f
                        time:   [435.58 µs 439.25 µs 442.55 µs]
verify: SLH-DSA-SHA2-192f
                        time:   [659.27 µs 661.47 µs 665.01 µs]
verify: SLH-DSA-SHA2-256f
                        time:   [672.90 µs 673.78 µs 674.49 µs]

-->

<h2 id="todo">TODO</h2>

<h3 id="sis-based-schemes">SIS-based schemes</h3>

<p class="todo">GPV hash-and-sign signatures and their plain-lattice descendants<sup id="fnref:GPV07e"><a href="#fn:GPV07e" class="footnote" rel="footnote" role="doc-noteref">9</a></sup>.
(Also see <a href="https://www.cs.columbia.edu/~tal/6261/SP13/lecture7-GPV.pdf">this</a>.)
“Fiat-Shamir with Aborts” signatures<sup id="fnref:Lyub09"><a href="#fn:Lyub09" class="footnote" rel="footnote" role="doc-noteref">10</a></sup>$,$<sup id="fnref:Lyub12"><a href="#fn:Lyub12" class="footnote" rel="footnote" role="doc-noteref">11</a></sup>.
<a href="https://csrc.nist.gov/csrc/media/Projects/pqc-dig-sig/documents/round-1/spec-files/Squirrels-spec-web.pdf">Squirrels</a>.
<a href="https://csrc.nist.gov/csrc/media/Projects/pqc-dig-sig/documents/round-1/spec-files/HuFu-spec-web.pdf">HuFu</a>.
(Also see <a href="https://csrc.nist.gov/csrc/media/events/workshop-on-cybersecurity-in-a-post-quantum-world/documents/papers/session9-oneill-paper.pdf">this survey</a>.)</p>

<h3 id="ml-dsa">ML-DSA</h3>

<p class="todo">Investigate <a href="https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.204.pdf">ML-DSA</a>, based on MLWE.</p>

<table>
  <thead>
    <tr>
      <th> </th>
      <th>Private Key (bytes)</th>
      <th>Public Key (bytes)</th>
      <th>Signature Size (bytes)</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>ML-DSA-44</td>
      <td>2,560</td>
      <td>1,312</td>
      <td>2,420</td>
    </tr>
    <tr>
      <td>ML-DSA-65</td>
      <td>4,032</td>
      <td>1,952</td>
      <td>3,309</td>
    </tr>
    <tr>
      <td>ML-DSA-87</td>
      <td>4,896</td>
      <td>2,592</td>
      <td>4,627</td>
    </tr>
  </tbody>
</table>

<h3 class="todo" id="others">Others</h3>
<p><a href="https://hawk-sign.info/">HAWK</a>.</p>

<h2 id="references">References</h2>

<p>For cited works, see below 👇👇</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:nist-round-2-additional">
      <p><a href="https://csrc.nist.gov/projects/pqc-dig-sig/round-2-additional-signatures">Post-Quantum Cryptography: Additional Digital Signature Schemes (Round 2)</a> <a href="#fnref:nist-round-2-additional" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:ABCplus24">
      <p><strong>Status Report on the First Round of the Additional Digital Signature Schemes for the NIST Post-Quantum Cryptography Standardization Process</strong>, by Gorjan Alagic and Maxime Bros and Pierre Ciadoux and David Cooper and Quynh Dang and Thinh Dang and John Kelsey and Jacob Lichtinger and Yi-Kai Liu and Carl Miller and Dustin Moody and Rene Peralta and Ray Perlner and Angela Robinson and Hamilton Silberg and Daniel Smith-Tone and Noah Waller, 2024, <a href="https://nvlpubs.nist.gov/nistpubs/ir/2024/NIST.IR.8528.pdf">[URL]</a> <a href="#fnref:ABCplus24" class="reversefootnote" role="doc-backlink">&#8617;</a> <a href="#fnref:ABCplus24:1" class="reversefootnote" role="doc-backlink">&#8617;<sup>2</sup></a></p>
    </li>
    <li id="fn:fips">
      <p><a href="https://csrc.nist.gov/publications/fips">FIPS publications</a> <a href="#fnref:fips" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:BBdplus23">
      <p><strong>FAEST: Algorithm Specifications (Version 1.0)</strong>, by Carsten Baum and Lennart Braun and Cyprien Delpech de Saint Guilhem and Michael Klooß and Christian Majenz and Shibam Mukherjee and Sebastian Ramacher and Christian Rechberger and Emmanuela Orsini and Lawrence Roy and Peter Scholl, 2023, <a href="https://faest.info/faest-spec-v1.0.pdf">[URL]</a> <a href="#fnref:BBdplus23" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:YSWW21e">
      <p><strong>QuickSilver: Efficient and Affordable Zero-Knowledge Proofs for Circuits and Polynomials over Any Field</strong>, by Kang Yang and Pratik Sarkar and Chenkai Weng and Xiao Wang, <em>in Cryptology ePrint Archive, Paper 2021/076</em>, 2021, <a href="https://eprint.iacr.org/2021/076">[URL]</a> <a href="#fnref:YSWW21e" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:BCCplus23e">
      <p><strong>An Efficient ZK Compiler from SIMD Circuits to General Circuits</strong>, by Dung Bui and Haotian Chu and Geoffroy Couteau and Xiao Wang and Chenkai Weng and Kang Yang and Yu Yu, <em>in Cryptology ePrint Archive, Paper 2023/1610</em>, 2023, <a href="https://eprint.iacr.org/2023/1610">[URL]</a> <a href="#fnref:BCCplus23e" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:FIPS205">
      <p><strong>FIPS 205: Stateless Hash-Based Digital Signature Standard</strong>, by National Institute of Standards and Technology (NIST), 2024, <a href="https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.205.pdf">[URL]</a> <a href="#fnref:FIPS205" class="reversefootnote" role="doc-backlink">&#8617;</a> <a href="#fnref:FIPS205:1" class="reversefootnote" role="doc-backlink">&#8617;<sup>2</sup></a></p>
    </li>
    <li id="fn:sphincsplus-git">
      <p><a href="https://github.com/sphincs/sphincsplus.git">sphincs/sphincsplus</a> <a href="#fnref:sphincsplus-git" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:GPV07e">
      <p><strong>Trapdoors for Hard Lattices and New Cryptographic Constructions</strong>, by Craig Gentry and Chris Peikert and Vinod Vaikuntanathan, <em>in Cryptology ePrint Archive, Paper 2007/432</em>, 2007, <a href="https://eprint.iacr.org/2007/432">[URL]</a> <a href="#fnref:GPV07e" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:Lyub09">
      <p><strong>Fiat-Shamir with Aborts: Applications to Lattice and Factoring-Based Signatures</strong>, by Lyubashevsky, Vadim, <em>in Advances in Cryptology – ASIACRYPT 2009</em>, 2009 <a href="#fnref:Lyub09" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:Lyub12">
      <p><strong>Lattice Signatures without Trapdoors</strong>, by Lyubashevsky, Vadim, <em>in Advances in Cryptology – EUROCRYPT 2012</em>, 2012 <a href="#fnref:Lyub12" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Alin Tomescu</name></author><category term="benchmarks" /><category term="digital signatures" /><category term="post-quantum" /><summary type="html"><![CDATA[tl;dr: Some notes on post-quantum (PQ) signature schemes.]]></summary></entry><entry><title type="html">Curve trees</title><link href="https://alinush.github.io//curve-trees" rel="alternate" type="text/html" title="Curve trees" /><published>2025-12-02T00:00:00+00:00</published><updated>2025-12-02T00:00:00+00:00</updated><id>https://alinush.github.io//curve-trees</id><content type="html" xml:base="https://alinush.github.io//curve-trees"><![CDATA[<p class="info"><strong>tl;dr:</strong> A few notes on the beautiful curve tree<sup id="fnref:CHK22e"><a href="#fn:CHK22e" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> work by Campanelli, Hall-Andersen and Kamp.</p>

<!--more-->

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
$</div>
<p><!-- $ --></p>

<h2 id="mathbbvcash-anonymous-payments-experiments">$\mathbb{V}$cash anonymous payments experiments</h2>

<p>Ran a subset of the (modified) benchmarks, in a <strong>single thread</strong>, on my Apple MacBook Pro M1 Max.
The benchmarked scheme does not implement a proper PRF-based nullifier scheme, AFAICT.
It does prove values are in range using Bulletproofs.
(I think it combines the range proof statement with the curve tree statement over the curve used in the leaves, and proves it all in one.)</p>

<p>See <a href="/files/curve-tree-vcash-benches.diff">diff</a> here.</p>

<p>Results over the Pallas and Vesta curves (the Pasta cycle):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Single_threadedPour_Curves:pasta_L:1024_D:4_ProofSize: 3970 bytes

Single_threadedPour_Curves:pasta_L:1024_D:4/prove
                        time:   [7.1789 s 7.2110 s 7.2427 s]

Single_threadedPour_Curves:pasta_L:1024_D:4_batch_verification/1
                        time:   [298.39 ms 299.27 ms 300.13 ms]

Single_threadedPour_Curves:pasta_L:1024_D:4_batch_verification/100
                        time:   [2.1153 s 2.1255 s 2.1355 s]
</code></pre></div></div>

<p>Results over the secp256k1 and secq256k1 curves:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Single_threadedPour_Curves:secp&amp;q_L:1024_D:4_ProofSize: 3970 bytes

Single_threadedPour_Curves:secp&amp;q_L:1024_D:4/prove
                        time:   [8.4621 s 8.4874 s 8.5097 s]

Single_threadedPour_Curves:secp&amp;q_L:1024_D:4_batch_verification/1
                        time:   [347.31 ms 348.01 ms 348.73 ms]

Single_threadedPour_Curves:secp&amp;q_L:1024_D:4_batch_verification/100
                        time:   [2.4206 s 2.4312 s 2.4428 s]

</code></pre></div></div>

<p class="note">The range proofs can probably be sped up using <a href="/dekart">DeKART</a>.</p>

<h2 id="references">References</h2>

<p>For cited works, see below 👇👇</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:CHK22e">
      <p><strong>Curve Trees: Practical and Transparent Zero-Knowledge Accumulators</strong>, by Matteo Campanelli and Mathias Hall-Andersen and Simon Holmgaard Kamp, <em>in Cryptology ePrint Archive, Paper 2022/756</em>, 2022, <a href="https://eprint.iacr.org/2022/756">[URL]</a> <a href="#fnref:CHK22e" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Alin Tomescu</name></author><category term="benchmarks" /><category term="Merkle" /><category term="accumulators" /><category term="anonymous payments" /><category term="elliptic curves" /><summary type="html"><![CDATA[tl;dr: A few notes on the beautiful curve tree[^CHK22e] work by Campanelli, Hall-Andersen and Kamp.]]></summary></entry><entry><title type="html">Domain separation</title><link href="https://alinush.github.io//domain-separation" rel="alternate" type="text/html" title="Domain separation" /><published>2025-11-19T00:00:00+00:00</published><updated>2025-11-19T00:00:00+00:00</updated><id>https://alinush.github.io//domain-separation</id><content type="html" xml:base="https://alinush.github.io//domain-separation"><![CDATA[<p class="info"><strong>tl;dr:</strong> How to think clearly about domain separation in your protocols.</p>

<!--more-->

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
$</div>
<p><!-- $ --></p>

<h2 id="for-hashing">For hashing</h2>

<p class="todo">Explain Aptos’s strategy.</p>

<h2 id="for-proof-systems">For proof systems</h2>

<p>A <em>domain separator</em> in the context of proof systems (e.g., $\Sigma$-protocols, ZK range proofs, etc.) should consist of three things<sup id="fnref:sigma"><a href="#fn:sigma" class="footnote" rel="footnote" role="doc-noteref">1</a></sup>:</p>

<ol>
  <li><strong>Protocol identifier</strong>, which can often be split up into:
    <ul>
      <li>higher-level protocol identifier: e.g., <em>“Confidential Assets v1 on Aptos”</em></li>
      <li>lower-level relation identifier: e.g., <em>“PedEq”</em></li>
    </ul>
  </li>
  <li><strong>Session identifier</strong>
    <ul>
      <li>chosen by the user</li>
      <li>specifies the context where this proof is valid</li>
      <li>e.g., <em>“Alice (<code class="language-plaintext highlighter-rouge">0x1</code>) is paying Bob (<code class="language-plaintext highlighter-rouge">0x2</code>) at time $t$”</em></li>
      <li>motivation is to prevent replay attacks (e.g., PoK of SK) or cross-protocol attacks</li>
      <li>this one is trickier, I think: in some settings the “session” accumulates naturally in the statement being proven
        <ul>
          <li>e.g., in Aptos Confidential Assets, the “session” is represented by the confidential balances of the users &amp; their addresses</li>
        </ul>
      </li>
    </ul>
  </li>
  <li><strong>Statement identifier</strong>
    <ul>
      <li>i.e., be sure to hash the public statement being proven</li>
      <li>here people forget that “public parameters” are part of the statement!</li>
      <li>e.g., in a Schnorr proof it is crucial to hash the generator $G$!</li>
    </ul>
  </li>
</ol>

<p>This suggests that a domain separator <code class="language-plaintext highlighter-rouge">dst</code> should consist of:</p>
<ul>
  <li>a <code class="language-plaintext highlighter-rouge">protocol_id</code></li>
  <li>a <code class="language-plaintext highlighter-rouge">session_id</code></li>
  <li>a <code class="language-plaintext highlighter-rouge">statement</code>, which is already an argument to a proof system anyway</li>
</ul>
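<p>For instance, a (hypothetical) Fiat-Shamir challenge derivation that binds all three components could look as follows; the function name and field encoding are illustrative, not taken from any standard:</p>

```python
import hashlib

def fs_challenge(protocol_id: bytes, session_id: bytes,
                 statement: bytes, transcript: bytes) -> bytes:
    # Length-prefix every field (8-byte big-endian) so that two different
    # (protocol_id, session_id, statement) triples can never produce the
    # same hash input by shifting bytes across field boundaries.
    h = hashlib.sha256()
    for field in (protocol_id, session_id, statement, transcript):
        h.update(len(field).to_bytes(8, "big"))
        h.update(field)
    return h.digest()

c = fs_challenge(b"Confidential Assets v1 on Aptos/PedEq",  # protocol_id
                 b"0x1 pays 0x2 at time t",                 # session_id
                 b"<public statement, incl. generators>",   # statement
                 b"<prover's first message>")               # transcript so far
```

<p>Changing any one field (even the protocol identifier alone) yields an unrelated challenge, which is exactly what blocks replay and cross-protocol attacks.</p>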

<!--
https://mmaker.github.io/draft-irtf-cfrg-sigma-protocols/draft-irtf-cfrg-fiat-shamir.html#section-4
https://www.openzeppelin.com/news/interactive-sigma-proofs-and-fiat-shamir-transformation-proof-of-concept-implementation-audit
https://github.com/mmaker/draft-irtf-cfrg-sigma-protocols/blob/f427eddc973bc9ef284c342913010b57f935d71a/draft-irtf-cfrg-sigma-protocols.md#generation-of-the-protocol-identifier-protocol-id-generation
https://github.com/mmaker/draft-irtf-cfrg-sigma-protocols/blob/f427eddc973bc9ef284c342913010b57f935d71a/poc/sigma_protocols.sage#L123
https://docs.zkproof.org/pages/standards/accepted-workshop4/proposal-sigma.pdf
-->

<h2 id="references">References</h2>

<p>For cited works, see below 👇👇</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:sigma">
      <p>These are thoughts inspired from talking to Michele Orrù and reading a few of the $\Sigma$-protocol standardization drafts. <a href="#fnref:sigma" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Alin Tomescu</name></author><category term="domain-separation" /><summary type="html"><![CDATA[tl;dr: How to think clearly about domain separation in your protocols.]]></summary></entry><entry><title type="html">Chunky: Weighted PVSS and DKG for field elements</title><link href="https://alinush.github.io//chunky" rel="alternate" type="text/html" title="Chunky: Weighted PVSS and DKG for field elements" /><published>2025-11-18T00:00:00+00:00</published><updated>2025-11-18T00:00:00+00:00</updated><id>https://alinush.github.io//chunky-weighted-pvss-for-field-elements</id><content type="html" xml:base="https://alinush.github.io//chunky"><![CDATA[<p class="info"><strong>tl;dr:</strong> A work-in-progress weighted PVSS for field elements using chunked <a href="/elgamal">ElGamal</a> encryption and <a href="/dekart">DeKART range proofs</a>.</p>

<!--more-->

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
%
\def\sig{\mathsf{Sig}}
\def\sign{\mathsf{Sign}}
%
\def\dekart{\mathsf{DeKART}^\mathsf{FFT}}
\def\setup{\mathsf{Setup}}
\def\commit{\mathsf{Commit}}
\def\prove{\mathsf{Prove}}
\def\verify{\mathsf{Verify}}
\def\piRange{\pi_\mathsf{range}}
%
\def\idx{\mathsf{idx}}
\def\epoch{\mathsf{epoch}}
%
\def\enc{\mathsf{Enc}}
\def\dec{\mathsf{Dec}}
%
\def\scrape{\mathsf{SCRAPE}}
\def\lowdegreetest{\mathsf{LowDegreeTest}}
%
\def\Retk{\mathcal{R}_\mathsf{e2k}}
\def\Retknew{\mathcal{R}'_\mathsf{e2k}}
\def\ctx{\mathsf{ctx}}
\def\sok{\mathsf{SoK}}
\def\piSok{\pi_\mathsf{SoK}}
\def\piSoknew{\pi_\mathsf{SoK}'}
%
\def\maxTotalWeight{W_\mathsf{max}}
\def\totalWeight{W}
\def\threshWeight{t_W}
\def\threshQ{t_Q}
\def\threshS{t_S}
%
\def\trs{\mathsf{trs}}
\def\pp{\mathsf{pp}}
\def\pid{\mathsf{pid}}
\def\ssid{\mathsf{ssid}}
\def\dk{\mathsf{dk}}
\def\ek{\mathsf{ek}}
\def\ssk{\mathsf{ssk}}
\def\spk{\mathsf{spk}}
%
\def\pvss{\mathsf{PVSS}}
\def\deal{\mathsf{Deal}}
\def\decrypt{\mathsf{Decrypt}}
\def\pvssSetup{\pvss.\mathsf{Setup}}
\def\pvssDeal{\pvss.\deal}
\def\pvssVerify{\pvss.\verify}
\def\pvssDecrypt{\pvss.\decrypt}
%
\def\subtrs{\mathsf{subtrs}}
\def\ssPvss{\mathsf{ssPVSS}}
\def\ssPvssDeal{\ssPvss.\deal}
\def\ssPvssVerify{\ssPvss.\verify}
\def\subtranscript{\mathsf{Subtranscript}}
\def\subaggregate{\mathsf{Subaggregate}}
\def\ssPvssSubtranscript{\ssPvss.\subtranscript}
\def\ssPvssSubaggregate{\ssPvss.\subaggregate}
$</div>
<p><!-- $ --></p>

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
%
\def\one#1{\left[#1\right]_\textcolor{green}{1}} <!-- \_ -->
\def\two#1{\left[#1\right]_\textcolor{red}{2}}
\def\three#1{\left[#1\right]_\textcolor{blue}{\top}}
\def\pair#1#2{e\left(#1, #2\right)}
\def\GGen{\mathsf{GGen}}
$</div>
<p><!-- $ --></p>

<div style="display: none;">$
%
% Field operations
%
% #1 is the number of field additions
\def\Fadd#1{#1\ \green{\F^+}}
% #1 is the number of field multiplications
\def\Fmul#1{#1\ \red{\F}^\red{\times}}
%
% Abstract group
%
% #1 is the group
% #2 is the # of group additions
\def\Gadd#1#2{#2\ \green{#1}^\green{+}}
% #2 is the # of scalar muls
\def\Gmul#1#2{#2\ \orange{#1}^\orange{\times}}
% #2 is the MSM size
\def\msm#1#2{\red{#1}^{#2}} % do not use directly; use either \fmsm or \vmsm
\def\vmsm#1#2{\red{\mathsf{var}}\text{-}\msm{#1}{#2}}
\def\fmsm#1#2{\msm{#1}{#2}}
\def\fmsmSmall#1#2#3{\fmsm{#1}{#2}/{#3}}
% ...#3 is the max scalar size
\def\vmsmSmall#1#2#3{\vmsm{#1}{#2}/{#3}}
%
% \mathbb{G} group
%
\def\GaddG#1{\Gadd{\Gr}{#1}}
\def\GmulG#1{\Gmul{\Gr}{#1}}
\def\msmG#1{\msm{\Gr}{#1}}
\def\vmsmG#1{\vmsm{\Gr}{#1}}
\def\fmsmG#1{\fmsm{\Gr}{#1}}
\def\fmsmGSmall#1#2{\fmsmSmall{\Gr}{#1}/{#2}}
\def\vmsmGSmall#1#2{\vmsmSmall{\Gr}{#1}/{#2}}
%
% G_1 group
%
% Note: replicating the colors here because cannot get subscript to align with superscript (e.g., $\msmOne{n}$ would render awkwardly)
\def\GaddOne#1{\Gadd{\Gr}{#1}_\green{1}}
\def\GmulOne#1{\Gmul{\Gr}{#1}_\orange{1}}
\def\msmOne#1{\msm{\Gr}{#1}_\red{1}}
\def\vmsmOne#1{\vmsm{\Gr}{#1}_\red{1}}
\def\fmsmOne#1{\fmsm{\Gr}{#1}_\red{1}}
\def\fmsmOneSmall#1#2{\fmsmSmall{\Gr}{#1}_\red{1}/{#2}}
\def\vmsmOneSmall#1#2{\vmsmSmall{\Gr}{#1}_\red{1}/{#2}}
%
% G_2 group
%
% Note: same replication as for G_1
\def\GaddTwo#1{\Gadd{\Gr}{#1}_\green{2}}
\def\GmulTwo#1{\Gmul{\Gr}{#1}_\orange{2}}
\def\msmTwo#1{\msm{\Gr}{#1}_\red{2}}
\def\vmsmTwo#1{\vmsm{\Gr}{#1}_\red{2}}
\def\fmsmTwo#1{\fmsm{\Gr}{#1}_\red{2}}
\def\fmsmTwoSmall#1#2{\fmsmSmall{\Gr}{#1}_\red{2}/{#2}}
\def\vmsmTwoSmall#1#2{\vmsmSmall{\Gr}{#1}_\red{2}/{#2}}
%
% Target group
%
% Note: same replication as for G_1
\def\GaddTarget#1{\Gadd{\Gr}{#1}_\green{T}}
\def\GmulTarget#1{\Gmul{\Gr}{#1}_\orange{T}}
\def\msmTarget#1{\msm{\Gr}{#1}_\red{T}}
\def\vmsmTarget#1{\vmsm{\Gr}{#1}_\red{T}}
\def\fmsmTarget#1{\fmsm{\Gr}{#1}_\red{T}}
\def\fmsmTargetSmall#1#2{\fmsmSmall{\Gr}{#1}_\red{T}/{#2}}
\def\vmsmTargetSmall#1#2{\vmsmSmall{\Gr}{#1}_\red{T}/{#2}}
%
% A single pairing
\def\pairing{\mathbb{P}}
% #1 is the # of pairings
\def\multipair#1{\mathbb{P}^{#1}}
$</div>
<p><!-- $ --></p>

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
\def\stmt{\mathbf{x}}
\def\witn{\mathbf{w}}
%
\def\td{\mathsf{td}}
%
\def\zkpSetup{\mathsf{ZKP}.\mathsf{Setup}}
\def\zkpProve{\mathsf{ZKP}.\mathsf{Prove}}
\def\zkpVerify{\mathsf{ZKP}.\mathsf{Verify}}
\def\zkpSim{\mathsf{ZKP}.\mathsf{Sim}}
$</div>
<p><!-- $ --></p>

<h2 id="preliminaries">Preliminaries</h2>

<p>We assume familiarity with:</p>
<ul>
  <li>PVSS, as an abstract cryptographic primitive.
    <ul>
      <li>In particular, the notion of a <strong>PVSS transcript</strong> will be used a lot.</li>
    </ul>
  </li>
  <li><a href="/signatures">Digital signatures</a>
    <ul>
      <li>i.e., sign a message $m$ as $\sigma \gets \sig.\sign(\sk, m)$ and verify via $\sig.\verify(\pk, \sigma, m)\equals 1$</li>
    </ul>
  </li>
  <li><a href="/elgamal">ElGamal encryption</a></li>
  <li>Batched range proofs (e.g., <a href="/dekart">DeKART</a>)</li>
  <li>ZKSoKs (i.e., <a href="/sigma">$\Sigma$-protocols</a> that implicitly sign over a message by feeding it into the Fiat-Shamir transform).</li>
  <li>The SCRAPE low-degree test</li>
</ul>

<p>All of these will be described in more detail in the subsections below.</p>

<h3 id="notation">Notation</h3>

<h4 id="pairing-friendly-groups-notation">Pairing-friendly groups notation</h4>
<!-- - Let $\Gr = \langle G \rangle$ denote the fact that $G$ is the generator of a group $\Gr$ -->
<ul>
  <li>Let $\GGen(1^\lambda) \rightarrow \mathcal{G}$ denote a probabilistic polynomial-time algorithm that outputs bilinear groups $\mathcal{G} \bydef (\Gr_1, \Gr_2, \Gr_T)$ of prime order $p\approx 2^{2\lambda}$, denoted <strong>additively</strong>, such that:
    <ul>
      <li>$\one{1}$ generates $\Gr_1$</li>
      <li>$\two{1}$ generates $\Gr_2$</li>
      <li>$\three{1}$ generates $\Gr_T$
<!-- \left(\Gr_1 = \langle G_1\rangle, \Gr_2 =\langle G_2\rangle ,\Gr_T = \langle G_T\rangle\right) --></li>
    </ul>
  </li>
  <li>We use $\one{a}\bydef a\cdot \one{1}$ and $\two{b}\bydef b\cdot \two{1}$ and $\three{c}\bydef c\cdot \three{1}$ to denote multiplying a scalar by the group generator</li>
  <li>
    <p>Let $\F$ denote the scalar field of order $p$ associated with the bilinear groups</p>
  </li>
  <li>We often use capital letters like $G$ or $H$ to denote group elements in $\Gr_1$</li>
  <li>We often use $\widetilde{G}$ or $\widetilde{H}$ letters to denote group elements in $\Gr_2$</li>
</ul>

<h4 id="time-complexity-notation">Time-complexity notation</h4>
<ul>
  <li>For time complexities, we use:
    <ul>
      <li>$\Fadd{n}$ for $n$ field additions in $\F$</li>
      <li>
        <p>$\Fmul{n}$ for $n$ field multiplications in $\F$</p>
      </li>
      <li>$\Gadd{\Gr}{n}$ for $n$ additions in $\Gr$</li>
      <li>$\Gmul{\Gr}{n}$ for $n$ individual scalar multiplications in $\Gr$</li>
      <li>$\fmsm{\Gr}{n}$ for a size-$n$ MSM in $\Gr$ where the group element bases are known ahead of time (i.e., <em>fixed-base</em>)
        <ul>
          <li>when the scalars are always from a set $S$, then we use $\fmsmSmall{\Gr}{n}{S}$</li>
        </ul>
      </li>
      <li>$\vmsm{\Gr}{n}$ for a size-$n$ MSM in $\Gr$ where the group element bases are <strong>not</strong> known ahead of time (i.e., <em>variable-base</em>)
        <ul>
          <li>when the scalars are always from a set $S$, then we use $\vmsmSmall{\Gr}{n}{S}$</li>
        </ul>
      </li>
      <li>$\pairing$ for one pairing</li>
      <li>$\multipair{n}$ for a size-$n$ multipairing</li>
    </ul>
  </li>
</ul>

<h4 id="other-notation">Other notation</h4>

<ul>
  <li>$[n]\bydef \{1,2,\ldots,n\}$</li>
  <li>$[n) \bydef \{0,1,2,\ldots,n-1\}$</li>
</ul>

<h3 id="elgamal-encryption">ElGamal encryption</h3>

<p>Assuming familiarity with <a href="/elgamal">ElGamal encryption</a>:</p>

<h4 id="emathsfkeygen_hrightarrow-mathsfdkmathsfek">$E.\mathsf{KeyGen}_H()\rightarrow (\mathsf{dk},\mathsf{ek})$</h4>

<p>Generate the key-pair:
\begin{align}
\dk &amp;\randget\F\\<br />
\ek &amp;\gets \dk \cdot H
\end{align}</p>

<h4 id="emathsfenc_ghleftmathsfek-v-rright-rightarrow-leftc-rright">$E.\mathsf{Enc}_{G,H}\left(\mathsf{ek}, v; r\right) \rightarrow \left(C, R\right)$</h4>

<p>Compute:
\begin{align}
C &amp;\gets v \cdot G + r \cdot \ek\\<br />
R &amp;\gets r \cdot H
\end{align}</p>

<h4 id="emathsfdec_gleftmathsfdk-c-rright-rightarrow-v">$E.\mathsf{Dec}_{G}\left(\mathsf{dk}, (C, R)\right) \rightarrow v$</h4>

<p>\begin{align}
v &amp;\gets \log_G\left(C - \dk\cdot R\right)\\<br />
        &amp;= \log_G\left((v \cdot G + r \cdot \ek) - \dk\cdot (r \cdot H)\right)\\<br />
        &amp;= \log_G\left(v \cdot G + r \cdot \ek - (r \cdot \dk) \cdot H\right)\\<br />
        &amp;= \log_G\left(v \cdot G + r \cdot \ek - r \cdot \ek\right)\\<br />
        &amp;= \log_G\left(v \cdot G\right) = v\\<br />
\end{align}</p>

<p class="note">Decryption will only work for sufficiently “small” values $v$ such that computing discrete logarithms on $v \cdot G$ is feasible (e.g., anywhere from 32 bits to 64 bits).</p>
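<p>To make the algebra above concrete, here is a toy instantiation in multiplicative notation (so $v \cdot G$ becomes $G^v \bmod P$); the modulus and bases below are illustrative stand-ins, <strong>not</strong> a secure group:</p>

```python
import random

P = 2**64 - 59  # a toy prime modulus (NOT secure; real schemes use elliptic curves)
G, H = 5, 7     # toy bases standing in for the group elements G and H

def keygen(rng=random):
    dk = rng.randrange(1, P - 1)
    return dk, pow(H, dk, P)                 # ek = dk * H, written H^dk here

def enc(ek, v, r):
    return (pow(G, v, P) * pow(ek, r, P) % P,  # C = v*G + r*ek
            pow(H, r, P))                      # R = r*H

def dec(dk, C, R, bound=2**20):
    M = C * pow(pow(R, dk, P), -1, P) % P      # M = C - dk*R = v*G
    acc = 1
    for v in range(bound):                     # brute-force discrete log:
        if acc == M:                           # only feasible for small v,
            return v                           # per the note above
        acc = acc * G % P
    raise ValueError("v not in range")
```

<p>E.g., <code class="language-plaintext highlighter-rouge">dk, ek = keygen()</code>, then <code class="language-plaintext highlighter-rouge">dec(dk, *enc(ek, 31337, r))</code> recovers <code class="language-plaintext highlighter-rouge">31337</code>.</p>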

<h3 id="univariate-dekart-batched-zk-range-proofs">Univariate DeKART batched ZK range proofs</h3>

<p>Assuming familiarity with batched ZK range proofs<sup id="fnref:BBBplus18"><a href="#fn:BBBplus18" class="footnote" rel="footnote" role="doc-noteref">1</a></sup>.
In particular, we will use <a href="/dekart">univariate DeKART</a> as our range proof scheme, formalized below.</p>

<h4 id="dekart_bsetupn-mathcalgrightarrow-mathsfprkmathsfckmathsfvk">$\dekart_b.\setup(N; \mathcal{G})\rightarrow (\mathsf{prk},\mathsf{ck},\mathsf{vk})$</h4>

<p>Sets up the ZK range proof to prove batches of size $\le N$, returning a proving key, a commitment key and a verification key.
(See implementation <a href="/dekart#mathsfdekart_bmathsffftmathsfsetupn-mathcalgrightarrow-mathsfprkmathsfckmathsfvk">here</a>.)</p>

<h4 id="dekart_bcommitckz_1ldotsz_n-rhorightarrow-c">$\dekart_b.\commit(\ck,z_1,\ldots,z_N; \rho)\rightarrow C$</h4>

<p>Returns a commitment $C$ to a vector of $N$ values using randomness $\rho$.
(See implementation <a href="/dekart#mathsfdekart_bmathsffftmathsfcommitckz_1ldotsz_n-rhorightarrow-c">here</a>.)</p>

<h4 id="dekart_bproveprk-c-ell-z_1ldotsz_n-rhorightarrow-pi">$\dekart_b.\prove(\prk, C, \ell; z_1,\ldots,z_N, \rho)\rightarrow \pi$</h4>

<p>Returns a ZK proof $\pi$ that the $N$ values committed in $C$ are all in $[0, b^\ell)$.
(See implementation <a href="/dekart#mathsfdekart_bmathsffftmathsfprovemathsfprk-c-ell-z_1ldotsz_n-rhorightarrow-pi">here</a>.)</p>

<h4 id="dekart_bverifyvk-c-ell-pirightarrow-01">$\dekart_b.\verify(\vk, C, \ell; \pi)\rightarrow \{0,1\}$</h4>

<p>Verifies that the $N$ values committed in $C$ are all in $[0, b^\ell)$.
(See implementation <a href="/dekart#mathsfdekart_bmathsffftmathsfverifymathsfvk-c-ell-pirightarrow-01">here</a>.)</p>

<h3 id="zero-knowledge-signatures-of-knowledge-zksoks">Zero-knowledge Signatures of Knowledge (ZKSoKs)</h3>

<p>Assuming familiarity with ZKSoKs<sup id="fnref:CL06"><a href="#fn:CL06" class="footnote" rel="footnote" role="doc-noteref">2</a></sup>, which typically consist of two algorithms:</p>

<h4 id="sokprovemathcalr-m-stmt-witn-rightarrow-pi">$\sok.\prove(\mathcal{R}, m, \stmt; \witn) \rightarrow \pi$</h4>

<p>Returns a ZK proof of knowledge of $\witn$ s.t. $\mathcal{R}(\stmt;\witn) = 1$ and signs the message $m$ in the process.</p>

<h4 id="sokverifymathcalr-m-stmt-pi-rightarrow-01">$\sok.\verify(\mathcal{R}, m, \stmt; \pi) \rightarrow \{0,1\}$</h4>

<p>Verifies a ZK proof of knowledge of some $\witn$ s.t. $\mathcal{R}(\stmt;\witn) = 1$ and that the message $m$ was signed.</p>
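<p>As a toy example of this interface, consider a Schnorr-style ZKSoK for the relation “I know $x$ such that $X = G^x$”, where the message $m$ is folded into the Fiat-Shamir challenge (multiplicative notation; toy modulus, not secure):</p>

```python
import hashlib, random

P = 2**64 - 59  # toy prime modulus (NOT secure); exponents live mod P-1
G = 5           # toy base

def h(*parts):
    data = b"".join(len(x).to_bytes(8, "big") + x for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1)

def sok_prove(m, X, x, rng=random):
    # ZKSoK of x s.t. X = G^x; the message m enters the challenge hash,
    # so changing m (or X) invalidates the proof -- that is the "signing".
    k = rng.randrange(1, P - 1)
    R = pow(G, k, P)
    c = h(m, str(X).encode(), str(R).encode())
    return R, (k + c * x) % (P - 1)

def sok_verify(m, X, proof):
    R, s = proof
    c = h(m, str(X).encode(), str(R).encode())
    return pow(G, s, P) == R * pow(X, c, P) % P   # G^s == R * X^c
```

<p>Verification accepts because $G^s = G^{k + cx} = R \cdot X^c$; replaying the proof under a different message $m$ changes $c$ and fails.</p>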

<h3 id="the-elgamal-to-kzg-np-relation">The ElGamal-to-KZG NP relation</h3>

<p>One of the key ingredients in our PVSS will be a ZK proof of knowledge of share chunks such that they are both ElGamal-encrypted and <a href="/kzg">KZG-committed</a>.</p>

<p>This is captured via the NP relation below:
\begin{align}
\label{rel:e2k}
\term{\Retk}\left(\begin{array}{l}
\stmt = \left(G, H, \ck, \{\ek_i\}_i,\{C_{i,j,k}\}_{i,j,k}, \{R_{j,k}\}_{j,k}, C\right),\\<br />
\witn = \left(\{s_{i,j,k}\}_{i,j,k}, \{r_{j,k}\}_{j,k}, \rho\right)
\end{array}\right) = 1\Leftrightarrow\\<br />
\Leftrightarrow\left\{\begin{array}{rl} 
    (C_{i,j,k}, R_{j,k}) &amp;= E.\enc_{G,H}(\ek_i, s_{i,j,k}; r_{j,k})\\<br />
    C&amp; = \dekart_2.\commit(\ck, \{s_{i,j,k}\}_{i,j,k}; \rho)\\<br />
\end{array}\right.
\end{align}</p>

<p>where the $s_{i,j,k}$’s will be “flattened” as a vector (in a specific order) before being input to $\dekart_2.\commit(\cdot)$.</p>

<p class="warning">We will explain how this flattening works later in the <a href="#mathsfpvssmathsfdeal_mathsfpplefta_0-t_w-w_i-mathsfek_i_iin-n-mathsfssidright-rightarrow-mathsftrs">$\pvssDeal$</a> algorithm.</p>

<h3 id="the-scrape-low-degree-test">The SCRAPE low-degree test</h3>

<p class="todo">Explain!</p>
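<p>Until then, the gist: to check that alleged shares $(v_1,\ldots,v_n)$ at distinct points $(x_1,\ldots,x_n)$ lie on a polynomial of degree $\le t$, take their inner product with a random codeword of the dual Reed-Solomon code; it is zero for valid shares and nonzero with overwhelming probability otherwise. A minimal sketch over a stand-in prime field:</p>

```python
import random

P = 2**127 - 1  # a Mersenne prime, standing in for the scalar field

def scrape_test(xs, vs, t, rng=random):
    """Accepts iff (w.h.p.) vs are evaluations at xs of a degree-<=t poly."""
    n = len(xs)
    # Dual-code coefficients: mu_i = prod_{j != i} (x_i - x_j)^{-1} mod P.
    mus = []
    for i in range(n):
        prod = 1
        for j in range(n):
            if j != i:
                prod = prod * (xs[i] - xs[j]) % P
        mus.append(pow(prod, -1, P))
    # Random polynomial f of degree <= n - t - 2 (coefficients, leading first).
    f = [rng.randrange(P) for _ in range(n - t - 1)]
    def f_eval(x):
        acc = 0
        for c in f:                      # Horner evaluation
            acc = (acc * x + c) % P
        return acc
    # <v, c> = 0 for every dual codeword c = (mu_i * f(x_i))_i.
    return sum(mu * f_eval(x) % P * v for mu, x, v in zip(mus, xs, vs)) % P == 0
```

<p>This works because $\deg(f \cdot p) \le n - 2$, and for any such polynomial $g$, $\sum_i \mu_i\, g(x_i) = 0$ (it is the vanishing leading coefficient of the Lagrange interpolation of $g$ through $n$ points).</p>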

<h2 id="building-a-dkg-from-a-pvss">Building a DKG from a PVSS</h2>

<p>Our goal is to get a <strong>weighted DKG</strong><sup id="fnref:DPTX24e"><a href="#fn:DPTX24e" class="footnote" rel="footnote" role="doc-noteref">3</a></sup> for field elements amongst the validators of a proof-of-stake blockchain, such that the <strong>DKG (final, shared) secret</strong> $\term{z}$ is only reconstructable by a fraction $&gt; \term{\threshQ}$ of the stake (e.g., $\threshQ = 0.5$ or 50%).</p>

<p>How?
Each validator $i$ will <strong>“contribute”</strong> to $z$ by picking their own secret $\term{z_i} \in \F$ and dealing it to the other validators via $\term{\pvssDeal}$ in a <strong>non-malleable</strong> fashion such that only a $&gt; \emph{\threshQ}$ fraction of the stake can reconstruct $z_i$.
The DKG secret will be set to $z \bydef \sum_{i\in Q} z_i$, where $\term{Q}$ is the <strong>eligible (sub)set</strong> of validators who correctly dealt their $z_i$.</p>

<p>Crucially, $Q$ must be large “enough”: i.e., it must have “enough” validators to guarantee that no malicious subset of them can learn (or can bias the choice of) $z$.
For example, we could assume only 33% of the stake<sup id="fnref:aptos-Q"><a href="#fn:aptos-Q" class="footnote" rel="footnote" role="doc-noteref">4</a></sup> is malicious and require that $Q$ have more stake than that.
We denote the stake of the validators in $Q$ by $\term{\norm{Q}}$.</p>

<p class="note">The DKG is parameterized by $\norm{Q}$ and by $\threshQ$.
Since, typically, the same set of validators deal a secret amongst themselves in a DKG, $\norm{Q}$ and $\threshQ$ are set to the same value.
Otherwise, if $\norm{Q} &lt; \threshQ$, then the validators in $Q$ could reconstruct the secret even though they do not have $&gt; \threshQ$ of the stake, which defeats the point.
Alternatively, if $\norm{Q} &gt; \threshQ$, then the protocol would require more validators to contribute than needed for secrecy, since any subset with $&gt; \threshQ$ of the stake can already reconstruct.</p>

<p><em>First</em>, to publicly prove that $\norm{Q}$ is large “enough”, each validator will <strong>digitally sign</strong> their dealt PVSS transcript in a <a href="/domain-separation">domain-separated fashion</a> (part of the domain separator will be the current consensus epoch number).
Without such authentication, $Q$ could be filled with transcripts from one malicious validator impersonating the other ones.
Therefore, that malicious validator would have full knowledge of the final DKG secret $z$.
($\Rightarrow$ Not good.)</p>

<p><strong>Implication:</strong> The DKG protocol needs to be carefully crafted to sign the PVSS transcripts.
If done right, the validators’ public keys used to sign blockchain consensus messages can be safely reused as <strong>signing public keys</strong> for the transcript.
(If done right.)</p>

<p><em>Second</em>, we require that PVSS transcripts obtained from $\pvssDeal$ be <strong>non-malleable</strong>.
To see why this is necessary, consider the following scenario:</p>
<ul>
  <li>two validators $i$ and $j$ have enough stake to form an eligible subset $Q = \{i,j\}$ with $\norm{Q} &gt; \threshQ$</li>
  <li>$j$ by itself does not have enough stake</li>
  <li>$i$ deals $z_i \in \F$ and signs the transcript</li>
  <li>$j$ removes $i$’s signature and mauls $i$’s transcript to deal $-z_i + r$ for some $r\randget\F$ it knows</li>
  <li>$j$ signs this mauled transcript
    <ul>
      <li>$\Rightarrow j$ would have full knowledge of the final DKG secret $z = z_i + (- z_i + r) = r$.</li>
    </ul>
  </li>
</ul>

<p><strong>Implication:</strong> The PVSS transcript will include a <strong>zero-knowledge signature of knowledge (ZKSoK)</strong> of the dealt secret $z_i$.
This way, the dealt secret cannot be mauled without rendering the transcript invalid.
Importantly, the ZKSoK signature will include the signing public key of the dealer.
This way, validator $j$ cannot bias the final DKG secret $z$ by appropriating validator $i$’s transcript as their own (i.e., by stripping validator $i$’s signature from the transcript, adding their own signature and leaving the dealt secret $z_i$ untouched).</p>

<h2 id="chunky-a-weighted-non-malleable-pvss">Chunky: A weighted, non-malleable PVSS</h2>

<p>Notation:</p>

<ul>
  <li>Let $\term{n}$ denote the number of players
    <ul>
      <li><em>Note:</em> In our setting, the PoS validators will act as the players</li>
    </ul>
  </li>
  <li>Let $\term{\maxTotalWeight}$ denote the <strong>maximum total weight</strong> $\Leftrightarrow$ maximum # of shares that we will ever want to deal in the PVSS</li>
  <li>Let $\term{\ell}$ denote the <strong>chunk bit-size</strong> (e.g., $\ell=32$ for 32-bit chunks)</li>
  <li>Let $\term{m} = \ceil{\log_2{\sizeof{\F}} / \ell}$ denote the <strong>number of chunks per share</strong></li>
  <li>Let $\term{B}\bydef 2^\ell$ denote the <strong>chunk upper bound</strong>, i.e., chunks take values in $[0, B)$ (e.g., $B=2^{32}$ for 32-bit chunks)</li>
</ul>
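<p>To make the notation above concrete, here is a small Python sketch computing $m$ and $B$ from $\ell$ and the field's bit-size. (The 255-bit field below is illustrative only, e.g., a BLS12-381-style scalar field; it is not prescribed by the scheme.)</p>

```python
import math

def chunk_params(field_bits: int, ell: int) -> tuple[int, int]:
    """Given log2|F| (the field's bit-size) and the chunk bit-size ell,
    return (m, B): the number of chunks per share and the chunk upper bound."""
    m = math.ceil(field_bits / ell)  # m = ceil(log2|F| / ell)
    B = 2 ** ell                     # B = 2^ell
    return m, B

# A 255-bit scalar field with 32-bit chunks needs m = 8 chunks.
m, B = chunk_params(field_bits=255, ell=32)
assert (m, B) == (8, 2 ** 32)
```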

<p>The algorithms below describe <strong>Chunky</strong>, a weighted PVSS where only subsets of players with combined weight $&gt; \threshWeight$ can reconstruct the shared secret.</p>

<h3 id="mathsfpvssmathsfsetupell-w_mathsfmax-mathcalg-widetildeg-rightarrow-mathsfpp">$\mathsf{PVSS}.\mathsf{Setup}(\ell, W_\mathsf{max}; \mathcal{G}, \widetilde{G}) \rightarrow \mathsf{pp}$</h3>

<p>Recall that $\emph{\maxTotalWeight}$ is the max. total weight, $\emph{\ell}$ is the # of bits per chunk and $\emph{m}$ is the number of chunks a share is split into.</p>

<p>$\term{\widetilde{G}}\in\Gr_2$ will be the base used to commit to the shares in $\pvssDeal$.</p>

<p><strong>Step 1:</strong> Set up the ElGamal encryption:
\begin{align}
\term{G},\term{H} &amp;\randget \Gr_1
\end{align}</p>

<p><strong>Step 2:</strong> Set up the ZK range proof to batch prove that $\le \maxTotalWeight\cdot m$ share chunks are all $\ell$-bit wide:</p>

<p>\begin{align}
(\prk,\ck,\vk) \gets \dekart_2.\setup(\maxTotalWeight\cdot m; \mathcal{G})
\end{align}</p>

<p>Note that DeKART assumes that the field $\F$ admits a primitive $2^\kappa$-th root of unity, where $2^\kappa$ is the smallest power of two $\ge \maxTotalWeight\cdot m + 1$.
(The ZK range proof needs FFTs of size $\maxTotalWeight\cdot m$.)</p>

<p>Return the public parameters:
\begin{align}
\pp \gets (\ell, \maxTotalWeight, G, \widetilde{G}, H, \prk,\ck,\vk)
\end{align}</p>

<h3 id="mathsfpvssmathsfdeal_mathsfpplefta_0-t_w-w_i-mathsfek_i_iin-n-mathsfssidright-rightarrow-mathsftrs">$\mathsf{PVSS}.\mathsf{Deal}_\mathsf{pp}\left(a_0, t_W, \{w_i, \mathsf{ek}_i\}_{i\in [n]}, \mathsf{ssid}\right) \rightarrow \mathsf{trs}$</h3>

<p class="smallnote">$a_0$ is the dealt secret.
<!--$\pid\in [n]$ is the dealer's player ID.-->
The $w_i$’s are the weights of each player, including the dealer’s.
$\ssid$ is a session identifier, which will be set to the consensus epoch number in which the DKG is taking place and calls this PVSS deal algorithm.</p>

<p>Parse public parameters:
\begin{align}
(\ell, \maxTotalWeight, G, \widetilde{G}, H, \prk,\ck,\vk)\parse\pp
\end{align}</p>

<p>Compute the <strong>total weight</strong> and assert that the public parameters can accommodate it:
\begin{align}
\label{eq:W}
\term{W} &amp;\gets \sum_{i\in[n]} w_i\\<br />
\textbf{assert}\ W &amp;\le \maxTotalWeight
\end{align}</p>

<p>Find a $2^\kappa$-th <strong>root of unity</strong> $\term{\omega} \in \F$ such that we can efficiently compute FFTs of size $W$ (i.e., $2^\kappa$ is the smallest power of two $\ge W$).</p>

<p><strong>Step 1:</strong> Pick the degree-$\threshWeight$ random secret sharing polynomial and compute the $j$th share of player $i$:
\begin{align}
\term{a_1,\ldots,a_{\threshWeight}} &amp;\randget \F\\<br />
\term{f(X)} &amp;\bydef \emph{a_0} + a_1 X + a_2 X^2 + \ldots + a_{\threshWeight} X^{\threshWeight}\\<br />
\label{eq:eval}
\term{s_{i, j}} &amp;\gets f\left(\term{\chi_{i,j}}\right),\forall i\in[n],\forall j\in[w_i]
\end{align}</p>

<p><em>Note:</em> Assuming the evaluation points $\emph{\{\chi_{i,j}\}}$ are <em>wisely</em> set to the first $W$ roots of unity $\{\omega^{i’}\}_{i’\in [0,W)}$, the $s_{i,j}$’s can be computed in $\Fmul{O(W\log{W})}$ via an FFT.</p>

<p><strong>Step 2:</strong> Commit to the shares, $\forall i\in[n],\forall j\in[w_i]$:
\begin{align}
\label{eq:share-commitments}
\term{\widetilde{V}_{i,j}} &amp;\gets s_{i,j} \cdot \widetilde{G} \in \Gr_2\\<br />
\label{eq:dealt-pubkey}
\term{\widetilde{V}_0} &amp;\gets a_0 \cdot \widetilde{G}
\end{align}</p>

<p><strong>Step 3:</strong> Split each share $s_{i,j}$ into $\emph{m}\bydef \ceil{\log_2{\sizeof{\F}} / \ell}$ chunks $\term{s_{i,j,k}}$ of $\ell$ bits each, such that:
\begin{align}
s_{i,j} 
    &amp;= \sum_{k\in[m]} (2^\ell)^{k-1} \cdot \emph{s_{i,j,k}}\\<br />
    &amp;\bydef \sum_{k\in[m]} \emph{B}^{k-1} \cdot s_{i,j,k}\\<br />
\end{align}</p>

<p><em>Note:</em> Each $s_{i,j,k} \in [0, B)$, where $B = 2^\ell$.</p>
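<p>A minimal sketch of this chunking (and its inverse, used later in $\pvssDecrypt$), with plain Python integers standing in for field elements:</p>

```python
def split_into_chunks(s: int, ell: int, m: int) -> list[int]:
    """Split share s into m chunks of ell bits each (little-endian),
    so that s = sum_k B^(k-1) * s_k with B = 2^ell."""
    B = 2 ** ell
    chunks = []
    for _ in range(m):
        chunks.append(s % B)  # low ell bits
        s //= B
    return chunks

def assemble_chunks(chunks: list[int], ell: int) -> int:
    """Inverse of split_into_chunks: s = sum_k B^(k-1) * s_k."""
    B = 2 ** ell
    return sum(c * B ** k for k, c in enumerate(chunks))

s = 0xDEADBEEF_CAFEBABE
chunks = split_into_chunks(s, ell=32, m=2)
assert all(0 <= c < 2 ** 32 for c in chunks)   # each chunk is in [0, B)
assert assemble_chunks(chunks, ell=32) == s
```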

<p><strong>Step 4:</strong> $\forall i \in[n], j\in[w_i], k\in[m]$, encrypt the $k$th chunk of the $j$th share of player $i$:
\begin{align}
    \term{r_{j,k}} &amp;\randget \F\ \text{s.t.}\ \sum_{k\in[m]} B^{k-1}\cdot r_{j,k} = 0\\<br />
    \label{eq:share-ciphertexts}
    \term{(C_{i,j,k}, R_{j,k})} &amp;\gets E.\enc_{G,H}(\ek_i, s_{i,j,k}; r_{j,k})\\<br />
        &amp;\bydef \left(\begin{array}{l}
            s_{i,j,k} \cdot G + r_{j,k}\cdot \ek_i\\<br />
            r_{j,k}\cdot H\end{array}\right)
\end{align}</p>

<p><em>Observation 1:</em> The randomness has been correlated such that:
\begin{align}
\label{eq:correlated}
\sum_{k\in[m]} B^{k-1} \cdot C_{i,j,k} 
    &amp;= \sum_{k\in[m]} B^{k-1} \cdot (s_{i,j,k} \cdot G + r_{j,k}\cdot \ek_i)\\<br />
    &amp;= \underbrace{\sum_{k\in[m]} (B^{k-1} \cdot s_{i,j,k})}_{s_{i,j}} \cdot G + \underbrace{\sum_{k\in [m]} (B^{k-1} \cdot r_{j,k})}_{0}\cdot \ek_i\\<br />
    &amp;= s_{i,j} \cdot G + 0 \cdot \ek_i = s_{i,j} \cdot G
\end{align}</p>
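<p>To see the correlation trick concretely, here is a toy sketch over integers modulo a prime, modeling the group elements additively "in the exponent". (All parameters here, including the prime and $\ell = 8$, $m = 4$, are hypothetical; this illustrates the algebra only and is in no way a secure instantiation.)</p>

```python
import random

p = (1 << 61) - 1        # toy prime standing in for the group order
ell, m = 8, 4
B = 2 ** ell

def correlated_randomness(m, B, p):
    """Sample r_1..r_m with sum_k B^(k-1) * r_k = 0 (mod p):
    pick the first m-1 freely, then solve for the last one."""
    r = [random.randrange(p) for _ in range(m - 1)]
    partial = sum(B ** k * r_k for k, r_k in enumerate(r)) % p
    # B^(m-1) * r_m = -partial  =>  r_m = -partial * (B^(m-1))^-1 mod p
    r_last = (-partial) * pow(B ** (m - 1), -1, p) % p
    return r + [r_last]

G, ek = random.randrange(p), random.randrange(p)  # toy "group elements"
s = random.randrange(B ** m)                      # a share, m chunks wide
chunks = [(s >> (ell * k)) % B for k in range(m)]
r = correlated_randomness(m, B, p)
C = [(c * G + r_k * ek) % p for c, r_k in zip(chunks, r)]  # chunk "ciphertexts"

# Observation 1: the B-weighted sum of the C's collapses to s * G.
assert sum(B ** k * C_k for k, C_k in enumerate(C)) % p == s * G % p
```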

<p><em>Observation 2:</em> Different players $i$ will safely re-use the same $r_{j,k}$ randomness.</p>

<p><em>Observation 3:</em> $\sizeof{\{R_{j,k}\}_{j,k}} = m\cdot \max_{i\in[n]}{(w_i)}$</p>

<p class="definition">The <strong>cumulative weight up to (but excluding) $i$</strong> is $\term{W_i}$ such that $\emph{W_1} = 0$ and 
<!-- 
$\term{W_i} = W_{i-1} + w_{i-1}$.
-->
$\emph{W_i} = \sum_{i’\in [1, i)} w_{i’}$. 
(Note that $W \bydef W_{n+1}$.)
This notion helps us “flatten” all the share chunks $s_{i,j,k}$ into an array $\{z_{i’}\}_{i’\in [W \cdot m]}$, where $z_{i’} \bydef s_{i,j,k}$ with $i’\gets \left(\emph{W_i} + (j-1)\right)\cdot m + k \bydef \term{\idx(i,j,k)}$ (see <a href="#appendix-the-igets-mathsfidxijk-indexing">appendix</a> for how the indexing was derived).</p>

<p><strong>Step 5:</strong> Prove that the share chunks are correctly encrypted <strong>and</strong> are all $\ell$-bit long.</p>

<p>First, “flatten” all the shares into a vector. $\forall i\in[n], j\in[w_i],\forall k\in[m]$:
\begin{align}
\term{z_{i’}} \gets s_{i,j, k},\ \text{where}\ i’ 
    &amp;\bydef \emph{\idx}(i,j,k)\in[W\cdot m]
\end{align}</p>

<p>Second, KZG commit to the share chunks and prove they are all in range:
\begin{align}
\rho &amp;\randget \F\\<br />
\term{C} &amp;\gets \dekart_2.\commit(\ck, z_1, \ldots, z_{W \cdot m}; \rho)\\<br />
\term{\piRange} &amp;\gets \dekart_2.\prove(\prk, C, \ell, z_1, \ldots, z_{W\cdot m}; \rho) 
\end{align}</p>

<p><strong>Step 6:</strong> Compute a signature of knowledge of the dealt secret key $a_0$ over the session ID: 
<a id="step-6-deal"></a>
\begin{align}
\term{\ctx} &amp;\gets (\threshWeight, \{w_i\}_i, \ssid)\\<br />
\term{\piSok} &amp;\gets \sok.\prove\left(\begin{array}{l}
    \Retk, \emph{\ctx},\\<br />
    \underbrace{G, H, \ck, \{\ek_i\}_i,\{C_{i,j,k}\}_{i,j,k}, \{R_{j,k}\}_{j,k}, C}_{\stmt},\\<br />
    \underbrace{\{s_{i,j,k}\}_{i,j,k}, \{r_{j,k}\}_{j,k}, \rho}_{\witn}
\end{array}\right)
\end{align}</p>

<p>Return the transcript:
\begin{align}
\label{eq:proof}
\term{\pi}  &amp;\gets \left(C, \piRange, \piSok\right)\\<br />
\label{eq:trs}
\trs &amp;\gets \left(\widetilde{V}_0, \{\widetilde{V}_{i,j}\}_{i,j\in[w_i]}, \{C_{i,j,k}\}_{i,j\in[w_i],k}, \{R_{j,k}\}_{j\in[\max_i{w_i}],k}, \emph{\pi}\right)
\end{align}</p>

<h3 id="mathsfpvssmathsfverify_mathsfppleftmathsftrs-t_w-w_i-mathsfek_i_iinn-mathsfssidright-rightarrow-01">$\mathsf{PVSS}.\mathsf{Verify}_\mathsf{pp}\left(\mathsf{trs}, t_W, \{w_i, \mathsf{ek}_i\}_{i\in[n]}, \mathsf{ssid}\right) \rightarrow \{0,1\}$</h3>

<p>Parse public parameters:
\begin{align}
(\ell, \cdot, G, \widetilde{G}, H, \cdot,\cdot,\vk)\parse\pp
\end{align}</p>

<p>Parse the transcript:
\begin{align}
\left(\widetilde{V}_0, \{\widetilde{V}_{i,j}\}_{i,j\in[w_i]}, \{C_{i,j,k}\}_{i,j\in[w_i],k}, \{R_{j,k}\}_{j\in[\max_i{w_i}],k}, \left(C, \piRange, \piSok\right)\right)\parse\trs 
\end{align}</p>

<p>Let the <em>total weight</em> $W$ be defined as before in Eq. \ref{eq:W}.</p>

<p><strong>Step 1:</strong> Verify that the committed shares encode a degree-$\threshWeight$ polynomial via the SCRAPE LDT<sup id="fnref:CD17"><a href="#fn:CD17" class="footnote" rel="footnote" role="doc-noteref">5</a></sup>:
\begin{align}
\term{\alpha} &amp;\randget \F\\<br />
\textbf{assert}\ &amp;\scrape.\lowdegreetest(\{(0, \widetilde{V}_0)\} \cup \{(\chi_{i,j}, \widetilde{V}_{i,j})\}_{i,j}, \threshWeight, W; \emph{\alpha}) \equals 1
\end{align}</p>

<p><em>Note:</em> Recall that the $\emph{\chi_{i,j}}$’s are the roots of unity used to evaluate the secret-sharing polynomial $f(X)$ during dealing (see Eq. \ref{eq:eval}).</p>

<p class="todo">May need to feed in the size of the evaluation domain to SCRAPE for the super-efficient algorithm.</p>

<p><strong>Step 2:</strong> Check that ciphertexts encrypt the committed shares:
<a id="step-2-verify"></a>
\begin{align}
\term{\beta_{i,j}} &amp;\randget\{0,1\}^\lambda\\<br />
\label{eq:multi-pairing-check}
\textbf{assert}\ 
    &amp;\pair{\sum_{i\in[n],j\in[w_i],k\in[m]} (B^{k-1}\cdot\emph{\beta_{i,j}})\cdot C_{i,j,k}}{\widetilde{G}} 
        \equals
    \pair{G}{\sum_{i\in[n],j\in[w_i]} \emph{\beta_{i,j}}\cdot \widetilde{V}_{i,j}}
\end{align}</p>

<details>
 <summary><b>Q:</b> <i>But how was this derived?</i> <b>A:</b> Click to expand and understand...</summary>
  <p style="margin-left: .3em; border-left: .15em solid black; padding-left: .5em;">
   First, recall from Eq. \ref{eq:correlated} that the randomness has been correlated such that $\sum_k C_{i,j,k} = s_{i,j}\cdot G$.
   <br />
   Second, observe that, using a pairing, we can check that the share chunked in the $C_{i,j,k}$’s is the same as the one committed in $\widetilde{V}_{i,j}$:
   \begin{align}
        \pair{\sum_{k\in[m]} B^{k-1}\cdot C_{i,j,k}}{\widetilde{G}} &amp;\equals \pair{G}{\widetilde{V}_{i,j}}
   \end{align}
   <br />
   Third, observe that we can batch all these pairing checks into one by taking linear combination of the verification equations using random $\beta_{i,j}$’s:
   \begin{align}
        \sum_{i,j}\beta_{i,j}\cdot\pair{\sum_{k\in[m]} B^{k-1}\cdot C_{i,j,k}}{\widetilde{G}} &amp;\equals \sum_{i,j} \beta_{i,j}\cdot \pair{G}{\widetilde{V}_{i,j}}\\\
   \end{align}
   Moving the sum inside the pairing by leveraging the bilinearity gives exactly Eq. \ref{eq:multi-pairing-check}.
  </p>
</details>

<p><strong>Step 3:</strong> Verify the range proof:
\begin{align}
\textbf{assert}\ \dekart_2.\verify(\vk, C, \ell; \piRange) \equals 1 
\end{align}</p>

<p><strong>Step 4:</strong> Verify the SoK:
<a id="step-4-verify"></a>
\begin{align}
\term{\ctx} &amp;\gets (\threshWeight, \{w_i\}_i, \ssid)\\<br />
\textbf{assert}\ &amp;\sok.\verify\left(\begin{array}{l}
    \Retk, \emph{\ctx},\\<br />
    \underbrace{G, H, \ck, \{\ek_i\}_i,\{C_{i,j,k}\}_{i,j,k}, \{R_{j,k}\}_{j,k}, C}_{\stmt};\\<br />
    \piSok
\end{array}\right) \equals 1
\end{align}</p>

<h3 id="mathsfpvssmathsfdecrypt_mathsfppleftmathsftrs-mathsfdk-i-w_iright-rightarrow-s_ij_j-in-f">$\mathsf{PVSS}.\mathsf{Decrypt}_\mathsf{pp}\left(\mathsf{trs}, \mathsf{dk}, i, w_i\right) \rightarrow \{s_{i,j}\}_j \in \F$</h3>

<p class="smallnote">$i\in[n]$ is the ID of the player who is decrypting their share(s) from the transcript.
Recall that $\emph{m}\bydef \ceil{\log_2{\sizeof{\F}} / \ell}$ is the number of chunks per share.</p>

<p>Parse public parameters:
\begin{align}
(\ell, \cdot, G, \cdot, \cdot, \cdot,\cdot,\cdot)\parse\pp
\end{align}</p>

<p>Parse the transcript:
\begin{align}
\left(\cdot, \cdot, \{C_{i,j,k}\}_{i,j\in[w_i],k}, \{R_{j,k}\}_{j\in[\max_i{w_i}],k},\cdot\right)\parse\trs
\end{align}</p>

<p><strong>Step 1:</strong> Decrypt all of player $i$’s share chunks $\{s_{i,j,k}\}_{j\in[w_i],k\in[m]}$:
\begin{align}
s_{i,j,k}\gets E.\dec_{G}\left(\dk_i, (C_{i,j,k}, R_{j,k})\right)
\end{align}</p>

<p><strong>Step 2:</strong> Assemble the chunks back into shares:
\begin{align}
s_{i,j}\gets \sum_{k\in[m]} (2^\ell)^{k-1} \cdot s_{i,j,k}
\end{align}</p>

<h2 id="weighted-dkg-protocol">Weighted DKG protocol</h2>

<p>Below, we give a high-level sketch of our $\threshWeight$-out-of-$\{w_i\}_{i\in[n]}$ weighted DKG with contributions from a fraction $&gt; \emph{\threshQ}$ of the stake.</p>

<p>But first, we have to slightly augment our notion of a non-malleable PVSS, denoted by $\pvss$, into a <strong>signed, subaggregatable and non-malleable PVSS</strong>, denoted by $\term{\ssPvss}$.
This will make building a DKG protocol much easier.</p>

<p><strong>First,</strong> recall <a href="#building-a-dkg-from-a-pvss">from before</a> that validators must sign their PVSS transcripts in the DKG protocol.
Thus, the $\term{\ssPvss.\deal}$ and $\term{\ssPvss.\verify}$ algorithms will differ slightly:</p>
<ol>
  <li>dealing now takes a <strong>signing secret key</strong> $\term{\sk}$ as input and additionally returns a signature $\term{\sigma}$</li>
  <li>verification now takes a <strong>signing pubkey</strong> $\term{\pk}$ and the signature $\sigma$ as input</li>
</ol>

<p><strong>Second</strong>, we introduce a useful notion of an <strong>aggregatable PVSS subtranscript</strong> $\term{\subtrs}$ which excludes the non-aggregatable components of the PVSS transcript $\emph{\trs}$ from Eq. \ref{eq:trs} (i.e., the proof $\pi$ from Eq. \ref{eq:proof}).</p>

<p><strong>Third,</strong> we define a new $\term{\ssPvssSubtranscript}$ algorithm which returns such a $\subtrs$.
In Chunky’s case, this will consist of only:</p>

<ol>
  <li>The dealt pubkey $\widetilde{V}_0$ as defined in Eq. \ref{eq:dealt-pubkey}</li>
  <li>The share commitments (i.e., all share commitments $\widetilde{V}_{i,j}$ as defined in Eq. \ref{eq:share-commitments})</li>
  <li>The share chunk ciphertexts (i.e., all share ciphertexts $(C_{i,j,k}, R_{j,k})$ as defined in Eq. \ref{eq:share-ciphertexts})</li>
</ol>

<p><strong>Fourth</strong>, and last, we will also define a $\term{\ssPvssSubaggregate}$ algorithm which takes several subtranscripts $\{\subtrs_i\}_i$ and aggregates them into a single $\subtrs$.
This way, two subtranscripts $\subtrs_1$ and $\subtrs_2$ dealing secrets $z_1$ and $z_2$, respectively, can be succinctly combined into a $\subtrs$ dealing $z_1 + z_2$ (such that $\sizeof{\subtrs} = \sizeof{\subtrs_i}, \forall i\in\{1,2\}$).</p>
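<p>The reason subtranscripts aggregate so cleanly is that Shamir sharings are additively homomorphic: summing two sharings share-by-share yields a sharing of the sum of the secrets. A toy, unweighted Python sketch of this fact over a prime field (all parameters hypothetical):</p>

```python
import random

p = 2 ** 61 - 1  # toy prime field

def deal(secret, t, xs):
    """Degree-t Shamir sharing of `secret`, evaluated at points xs."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t)]
    return {x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p for x in xs}

def reconstruct(shares):
    """Lagrange interpolation at X = 0 from a dict {x_i: y_i}."""
    total = 0
    for xi, yi in shares.items():
        num = den = 1
        for xj in shares:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total += yi * num * pow(den, -1, p)
    return total % p

xs, t = [1, 2, 3, 4, 5], 2
z1, z2 = random.randrange(p), random.randrange(p)
trs1, trs2 = deal(z1, t, xs), deal(z2, t, xs)
agg = {x: (trs1[x] + trs2[x]) % p for x in xs}        # "subaggregation"
# Any t+1 aggregated shares reconstruct z1 + z2.
assert reconstruct({x: agg[x] for x in xs[: t + 1]}) == (z1 + z2) % p
```

<p>In Chunky, the same linearity applies in the exponent, which is why the share commitments and chunk ciphertexts can simply be summed component-wise.</p>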

<p>We detail the new algorithms for this signed, subaggregatable, non-malleable PVSS below.
(Note that the $\setup$ and $\decrypt$ algorithms remain the same.)</p>

<h3 id="mathsfsspvssmathsfdeal_mathsfppleftmathsfsk-a_0-t_w-w_i-mathsfek_i_iin-n-mathsfssidright-rightarrow-mathsftrssigma">$\mathsf{ssPVSS}.\mathsf{Deal}_\mathsf{pp}\left(\mathsf{sk}, a_0, t_W, \{w_i, \mathsf{ek}_i\}_{i\in [n]}, \mathsf{ssid}\right) \rightarrow (\mathsf{trs},\sigma)$</h3>

<p>Deal a normal PVSS transcript via <a href="#mathsfpvssmathsfdeal_mathsfpplefta_0-t_w-w_i-mathsfek_i_iin-n-mathsfssidright-rightarrow-mathsftrs">$\pvssDeal$</a> <strong>but</strong> also sign over it and over the session ID:
\begin{align}
\trs &amp;\gets \pvssDeal(a_0, \threshWeight, \{w_i,\ek_i\}_{i\in[n]}, \ssid)\\<br />
(\tilde{V}_0,\cdot,\cdot,\cdot,\cdot)&amp;\parse \trs\\<br />
\sigma &amp;\gets \sig.\sign(\sk, (\tilde{V}_0, \ssid))
\end{align}</p>

<h3 id="mathsfsspvssmathsfverify_mathsfppleftpk-mathsftrs-sigma-t_w-w_i-mathsfek_i_iinn-mathsfssidright-rightarrow-01">$\mathsf{ssPVSS}.\mathsf{Verify}_\mathsf{pp}\left(\pk, \mathsf{trs}, \sigma, t_W, \{w_i, \mathsf{ek}_i\}_{i\in[n]}, \mathsf{ssid}\right) \rightarrow \{0,1\}$</h3>

<p>Do a normal PVSS transcript verification via <a href="#mathsfpvssmathsfverify_mathsfppleftmathsftrs-t_w-w_i-mathsfek_i_iinn-mathsfssidright-rightarrow-01">$\pvssVerify$</a> <strong>but</strong> also verify the signature over it and the session ID:
\begin{align}
\textbf{assert}\ \pvssVerify(\trs, \threshWeight, \{w_i,\ek_i\}_{i\in[n]}, \ssid) &amp;\equals 1\\<br />
(\tilde{V}_0,\cdot,\cdot,\cdot,\cdot) &amp;\parse \trs\\<br />
\textbf{assert}\ \sig.\verify(\pk, \sigma, (\tilde{V}_0, \ssid)) &amp;\equals 1
\end{align}</p>

<h3 id="mathsfsspvssmathsfsubtranscriptleftmathsftrsright-rightarrow-mathsfsubtrs">$\mathsf{ssPVSS}.\mathsf{Subtranscript}\left(\mathsf{trs}\right) \rightarrow \mathsf{subtrs}$</h3>

<p>Parse the transcript as defined in Eq. \ref{eq:trs}:
\begin{align}
\left(\widetilde{V}_0, \{\widetilde{V}_{i,j}\}_{i,j\in[w_i]}, \{C_{i,j,k}\}_{i,j\in[w_i],k}, \{R_{j,k}\}_{j\in[\max_i{w_i}],k}, \cdot \right)\parse\trs 
\end{align}</p>

<p>Return the <em>aggregatable</em> subtranscript:
\begin{align}
\label{eq:subtrs}
\subtrs &amp;\gets \left(\widetilde{V}_0, \{\widetilde{V}_{i,j}\}_{i,j\in[w_i]}, \{C_{i,j,k}\}_{i,j\in[w_i],k}, \{R_{j,k}\}_{j\in[\max_i{w_i}],k}\right)\\<br />
\end{align}</p>

<h3 id="mathsfsspvssmathsfsubaggregate_mathsfppleftmathsfsubtrs_i_iright-rightarrow-mathsfsubtrs">$\mathsf{ssPVSS}.\mathsf{Subaggregate}_\mathsf{pp}\left(\{\mathsf{subtrs}_{i’}\}_{i’}\right) \rightarrow \mathsf{subtrs}$</h3>

<p>Parse public parameters:
\begin{align}
(\ell, \cdot, \cdot, \cdot, \cdot, \cdot,\cdot,\cdot)\parse\pp
\end{align}</p>

<p>Parse all the <em>aggregatable</em> subtranscripts, for all $i’$:
\begin{align}
\left(\widetilde{V}^{(i’)}_0, \{\widetilde{V}^{(i’)}_{i,j}\}_{i,j\in[w_i]}, \{C^{(i’)}_{i,j,k}\}_{i,j\in[w_i],k}, \{R^{(i’)}_{j,k}\}_{j\in[\max_i{w_i}],k}\right)\parse \subtrs_{i’}\\<br />
\end{align}</p>

<p>Recall that $\emph{n}$ denotes the number of players that a transcript deals to and recall that $\emph{m} = \ceil{\log_2{\sizeof{\F}} / \ell}$ denotes the number of chunks per share.</p>

<p>Aggregate:
\begin{align}
\term{\widetilde{V}_0} &amp;\gets \sum_{i’} \widetilde{V}^{(i’)}_0\\\ 
\forall i\in[n],j\in[w_i], \term{\widetilde{V}_{i,j}} &amp;\gets \sum_{i’} \widetilde{V}^{(i’)}_{i,j}\\<br />
\forall i\in[n],j\in[w_i],k\in[m], \term{C_{i,j,k}} &amp;\gets \sum_{i’} C^{(i’)}_{i,j,k}\\<br />
\forall j\in[\max_i{w_i}],k\in[m], \term{R_{j,k}} &amp;\gets \sum_{i’} R^{(i’)}_{j,k}\\<br />
\end{align}</p>

<p>Return the aggregated subtranscript:
\begin{align}
\subtrs &amp;\gets \left(\widetilde{V}_0, \{\widetilde{V}_{i,j}\}_{i,j\in[w_i]}, \{C_{i,j,k}\}_{i,j\in[w_i],k}, \{R_{j,k}\}_{j\in[\max_i{w_i}],k}\right)\\<br />
\end{align}</p>

<h3 id="dkg-overview">DKG overview</h3>

<p>A DKG will occur within the context of a consensus epoch $\term{\epoch}$.
All validators know each other’s public keys.
Specifically, every validator $i’$ has a signing pubkey $\term{\pk_{i’}}$ (with signing secret key $\term{\sk_{i’}}$) and an encryption key $\ek_{i’}$<sup id="fnref:reuse"><a href="#fn:reuse" class="footnote" rel="footnote" role="doc-noteref">6</a></sup>.</p>

<p><strong>Dealing phase:</strong> Each validator $i’\in[n]$ picks a random secret $\term{z_{i’}}\in\F$ and computes a PVSS transcript that deals it:
\begin{align}
    \emph{z_{i’}} &amp;\randget \F\\<br />
    \term{\ssid_{i’}} &amp;\gets (i’, \emph{\pk_{i’}}, \emph{\epoch})\\<br />
    \term{\trs_{i’}, \sigma_{i’}} &amp;\gets \ssPvssDeal(\emph{\sk_{i’}}, z_{i’}, \threshWeight, \{w_i,\ek_i\}_{i\in[n]}, \emph{\ssid_{i’}})\\<br />
\end{align}</p>

<p class="smallnote">Our current $\ssPvssDeal$ Rust implementation in <code class="language-plaintext highlighter-rouge">aptos-dkg</code> returns a <code class="language-plaintext highlighter-rouge">chunky::Transcript</code> struct that will contain both the actual transcript $\trs_{i’}$ and its signature $\sigma_{i’}$.</p>

<p>Then, each validator $i’$ (best-effort) disseminates $(\trs_{i’}, \sigma_{i’})$ to all other validators.
Eventually, each validator $i’$ will have its own view of a set $\term{Q_{i’}}$ of validators who correctly dealt a (single) transcript, as well as the actual signed transcripts themselves.</p>

<p><strong>Agreement phase:</strong> In this phase, validators will agree on an aggregated subtranscript $\term{\subtrs}$ obtained from a “large-enough” eligible set $\emph{Q}$ of honest validators.
More formally, the agreed-upon $(Q,\subtrs)$ will have the following three properties:
\begin{align}
   &amp;\norm{Q} &gt; \threshQ\\<br />
   \label{eq:trs-verifies}
   &amp;\forall j’ \in Q, \exists (\term{\trs_{j’},\sigma_{j’}}),\ \text{s.t.}\ \ssPvssVerify(\pk_{j’}, \emph{\trs_{j’}, \sigma_{j’}}, \threshWeight, \{w_i,\ek_i\}_{i\in[n]}, (\underbrace{j’, \pk_{j’}, \epoch}_{\emph{\ssid_{j’}}})) \goddamnequals 1\\<br />
   \label{eq:subtrs-aggr}
   &amp;\emph{\subtrs} \goddamnequals \ssPvssSubaggregate(\{\ssPvssSubtranscript(\trs_{j’})\}_{j’ \in Q})
\end{align}</p>

<p class="note">Agreement on $Q$ could be reached inefficiently by running a Byzantine agreement phase for each transcript: i.e., validator $i’$ proposes its $(\trs_{i’}, \sigma_{i’})$ and, if it collects “enough” <strong>attestations</strong> (e.g., signatures from a fraction $&gt; \term{\threshS}$ of the stake, say, 33%<sup id="fnref:vaba"><a href="#fn:vaba" class="footnote" rel="footnote" role="doc-noteref">7</a></sup>) on it, then $i’$ is added to the accumulated set $Q$.
The downside of this approach is high latency: it requires one Byzantine agreement per contributing validator.
For Aptos, specifically, it would also require sending too many <a href="https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-64.md">validator TXNs</a>.</p>

<p><strong>Proposal sub-phase:</strong> To reach agreement on $(Q,\subtrs)$ efficiently, one of the validators (e.g., the consensus leader) sends a <strong>final DKG subtranscript proposal</strong> $(Q, h)$, where $h \gets H(\subtrs)$ and $H(\cdot)$ is a collision-resistant hash function.</p>

<p>Every validator $i’$ will <strong>attest to</strong> (i.e., sign) this proposal if they can verify that the hashed subtranscript in $h$ was actually aggregated from some set $\{\trs_{j’}\}_{j’\in Q}$ of transcripts that all passed verification as per Eq. \ref{eq:trs-verifies}.</p>

<p>More formally, validator $i’$ will attest to the $(Q, h)$ proposal via a signature $\term{\alpha_{i’}}\bydef \sig.\sign(\sk_{i’}, (Q, h))$, if and only if:</p>
<ol>
  <li>$\norm{Q} &gt; \threshQ$</li>
  <li>$\forall j’\in Q$, validator $i’$ eventually<sup id="fnref:eventually"><a href="#fn:eventually" class="footnote" rel="footnote" role="doc-noteref">8</a></sup> receives a single<sup id="fnref:equivocation"><a href="#fn:equivocation" class="footnote" rel="footnote" role="doc-noteref">9</a></sup> $(\trs_{j’},\sigma_{j’})$ s.t. $\ssPvssVerify(\pk_{j’}, \trs_{j’}, \sigma_{j’}, \threshWeight, \{w_i,\ek_i\}_{i\in[n]}, (j’, \pk_{j’}, \epoch)) \goddamnequals 1$</li>
  <li>$h \equals H(\ssPvssSubaggregate(\{\ssPvssSubtranscript(\trs_{j’})\}_{j’\in Q}))$</li>
</ol>

<p><strong>Commit sub-phase:</strong> If the $(Q, h)$ proposal gathers “enough” attestations (i.e., $&gt; \threshS$), the proposing validator sends a(n Aptos validator) TXN with $(Q, \subtrs, \{\alpha_{j’}\}_{j’\in \term{S}})$ to the chain, where $\emph{S}$ is the set of validators who attested with $\norm{S} &gt; \threshS$.</p>

<p>(Note that this TXN includes the $\subtrs$ corresponding to the hash in the proposal $(Q,h)$.)</p>

<p>This TXN will be succinct as it only contains:</p>
<ol>
  <li>The aggregated subtranscript $\subtrs$
    <ul>
      <li><em>Note:</em> Assuming elliptic curves over 256-bit base fields (e.g., BN254), $\sizeof{\subtrs} \bydef \underbrace{64}_{\widetilde{V}_0} + \underbrace{64 \cdot W}_{\widetilde{V}_{i,j}\text{'s}} + \underbrace{32 \cdot W\cdot m}_{C_{i,j,k}\text{'s}} + 32\cdot \underbrace{\max_i{w_i}\cdot m}_{R_{j,k}\text{'s}}$ as per Eq. \ref{eq:subtrs}</li>
      <li>e.g., for total weight $W = 254$, $m=8$ chunks and $\max_i{w_i} = 5$, the size will be $64 + 64 \cdot 254 + 32 \cdot 254 \cdot 8 + 32 \cdot 5 \cdot 8 =$ 82,624 bytes $=$ 80.6875 KiB</li>
      <li>If we increase $\max_i{w_i}$ to 7, we get $64 + 64 \cdot 254 + 32 \cdot 254 \cdot 8 + 32 \cdot \emph{7} \cdot 8 =$ 83,136 bytes $=$ 81.1875 KiB</li>
    </ul>
  </li>
  <li>Attestations $\alpha_{j’}$’s from at most all $n$ validators.
    <ul>
      <li>e.g., In Aptos, we are using BLS signatures<sup id="fnref:BLS01"><a href="#fn:BLS01" class="footnote" rel="footnote" role="doc-noteref">10</a></sup> over BLS12-381 curves<sup id="fnref:BLS02e"><a href="#fn:BLS02e" class="footnote" rel="footnote" role="doc-noteref">11</a></sup> $\Rightarrow$ since validators are voting by signing over the same proposal $(Q,\subtrs)$, the attestation signatures can be aggregated into a single multi-signature of 48 bytes.</li>
    </ul>
  </li>
</ol>
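<p>The size estimate above is easy to double-check. (The constants below assume 32-byte $\Gr_1$ and 64-byte $\Gr_2$ serializations, as in the BN254 example.)</p>

```python
def subtrs_size_bytes(W: int, m: int, max_w: int) -> int:
    """|subtrs| = |V_0| + W G2 share commitments
                + W*m G1 chunk ciphertexts + max_w*m G1 R's."""
    G1, G2 = 32, 64  # assumed serialized sizes for 256-bit base fields
    return G2 + G2 * W + G1 * W * m + G1 * max_w * m

assert subtrs_size_bytes(W=254, m=8, max_w=5) == 82_624   # = 80.6875 KiB
assert subtrs_size_bytes(W=254, m=8, max_w=7) == 83_136   # = 81.1875 KiB
```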

<p>Once this TXN gets included on-chain, it is sent to execution, where all (honest) validators will:</p>

<ol>
  <li>check that the attestations in $(Q,\subtrs, \{\alpha_{j’}\}_{j’\in S})$ are valid; i.e.,:
    <ul>
      <li>$h \gets H(\subtrs)$</li>
      <li>$\textbf{assert}\ \norm{S} &gt; \threshS$</li>
      <li>$\forall j’\in S, \textbf{assert}\ \sig.\verify(\pk_{j’}, \alpha_{j’}, (Q, h)) \equals 1$</li>
    </ul>
  </li>
  <li>this implies that $\norm{Q} &gt; \threshQ$…
    <ul>
      <li>…and that Eqs. \ref{eq:trs-verifies} and \ref{eq:subtrs-aggr} hold</li>
    </ul>
  </li>
  <li>install the subtranscript on-chain, declaring the DKG complete</li>
</ol>

<p>Now:</p>
<ul>
  <li>The final public key whose corresponding secret key is secret-shared is $\widetilde{V}_0$ from $\subtrs$</li>
  <li>The share commitments $\widetilde{V}_{i,j}$’s in $\subtrs$ can be made public
    <ul>
      <li>e.g., if the DKG is for bootstrapping a weighted <a href="/threshold-bls">threshold BLS signature scheme</a>, then $\widetilde{V}_{i,j}\bydef s_{i,j}\cdot G$ will act as the verification key for the BLS signature share $H(m)^{s_{i,j}}$</li>
    </ul>
  </li>
  <li>Each player can use $\pvssDecrypt$ to obtain their shares from $\subtrs$ <sup id="fnref:dummy"><a href="#fn:dummy" class="footnote" rel="footnote" role="doc-noteref">12</a></sup></li>
</ul>

<h2 id="benchmarks">Benchmarks</h2>

<p>Single-threaded numbers from my Apple MacBook Pro M4 Max:</p>

<table>
  <thead>
    <tr>
      <th>Scheme</th>
      <th>$\ell$</th>
      <th>Setup</th>
      <th>Transcript size</th>
      <th>Deal (ms)</th>
      <th>Serialize (ms)</th>
      <th>Aggregate (ms)</th>
      <th>Verify (ms)</th>
      <th>Decrypt-share (ms)</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Chunky</td>
      <td>32</td>
      <td>129-out-of-219 / 136 players</td>
      <td>259.24 KiB</td>
      <td>373.30</td>
      <td>0.24</td>
      <td>1.29</td>
      <td>63.05</td>
      <td>10.73</td>
    </tr>
    <tr>
      <td>Chunky2</td>
      <td>32</td>
      <td>129-out-of-219 / 136 players</td>
      <td>279.78 KiB</td>
      <td><span style="color:#dc2626">401.96</span> (0.93x)</td>
      <td><span style="color:#dc2626">0.27</span> (0.89x)</td>
      <td><span style="color:#dc2626">1.35</span> (0.96x)</td>
      <td><span style="color:#dc2626">72.45</span> (0.87x)</td>
      <td><span style="color:#dc2626">11.09</span> (0.97x)</td>
    </tr>
  </tbody>
</table>

<p>These numbers can be reproduced by cloning <a href="https://github.com/aptos-labs/aptos-core">aptos-core</a> and running:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/aptos-labs/aptos-core
cd aptos-core/crates/aptos-crypto/benches/
./run-pvss-benches.sh
</code></pre></div></div>

<h2 id="acknowledgements">Acknowledgements</h2>

<p>The weighted PVSS in this blog post has been co-designed with Rex Fernando and Wicher Malten at Aptos Labs.
The weighted DKG built on top of the PVSS has been co-designed with Daniel Xiang and Balaji Arun.
Thanks to Ittai Abraham for helping me think through the DKG protocol from the lens of validated Byzantine agreement.
Thanks to Wicher Malten for the initial write-up of <strong>Chunky 2</strong>, which I later modified.</p>

<h2 id="appendix-the-igets-mathsfidxijk-indexing">Appendix: The $i’\gets \mathsf{idx}(i,j,k)$ indexing</h2>

<p>It may be easiest to understand the $\idx(i,j,k) = (W_i + (j-1))\cdot m + k$ formula by considering an example.</p>

<p>Say the number of chunks per share is $m = 3$ and that we have $n=4$ players with weights $[ w_1, w_2, w_3, w_4 ] = [2, 1, 3, 2]$</p>

<p>Then, the cumulative weights will be $[ W_1, W_2, W_3, W_4 ] = [ 0, 2, 3, 6 ]$</p>

<p>“Flattening out” the shares, we’d get:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Player 1:

    s_{1,1,1}, s_{1,1,2}, s_{1,1,3},
    1          2          3         

    s_{1,2,1}, s_{1,2,2}, s_{1,2,3},
    4          5          6         

Player 2:
    s_{2,1,1}, s_{2,1,2}, s_{2,1,3},
    7          8          9         

Player 3:
    s_{3,1,1}, s_{3,1,2}, s_{3,1,3},
    10         11         12        

    s_{3,2,1}, s_{3,2,2}, s_{3,2,3},
    13         14         15        

    s_{3,3,1}, s_{3,3,2}, s_{3,3,3},
    16         17         18        

Player 4:
    s_{4,1,1}, s_{4,1,2}, s_{4,1,3},
    19         20         21        

    s_{4,2,1}, s_{4,2,2}, s_{4,2,3},
    22         23         24
</code></pre></div></div>

<p>Observations:</p>
<ol>
  <li>Player $i$’s share chunks start at index $W_i\cdot m + 1$.</li>
  <li>To get to the chunks of the $j$th share (of player $i$) add $(j-1)\cdot m$ to that.</li>
  <li>To get to the $k$th chunk (of the $j$th share of player $i$) add $k-1$ to that.</li>
</ol>

<p>So:
\begin{align}
\idx(i,j,k) 
    &amp;= (W_i \cdot m + 1) + (j-1) \cdot m + (k-1)\\<br />
    &amp;= W_i \cdot m + (j-1) \cdot m + k\\<br />
    &amp;= (W_i + (j-1)) \cdot m + k
\end{align}</p>

<p>For example, when $i = 3, j = 3, k = 2$, we get:
\begin{align}
    (W_i + (j-1)) \cdot m + k
  &amp;= (W_3 + (3-1)) \cdot 3 + 2\\<br />
  &amp;= (3 + 2) \cdot 3 + 2\\<br />
  &amp;= 5 \cdot 3 + 2 = 17
\end{align}
as expected for $s_{3,3,2}$.</p>
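<p>The indexing formula is simple to implement and check against the worked example above:</p>

```python
def idx(i: int, j: int, k: int, weights: list[int], m: int) -> int:
    """1-based flattened index of chunk k of share j of player i.
    W_i is the cumulative weight of players 1..i-1."""
    W_i = sum(weights[: i - 1])
    return (W_i + (j - 1)) * m + k

weights, m = [2, 1, 3, 2], 3
assert [sum(weights[:i]) for i in range(4)] == [0, 2, 3, 6]  # W_1..W_4
assert idx(3, 3, 2, weights, m) == 17   # s_{3,3,2}, as computed above
assert idx(1, 1, 1, weights, m) == 1    # very first chunk
assert idx(4, 2, 3, weights, m) == 24   # very last chunk: W * m
```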

<h2 id="appendix-chunky-2">Appendix: Chunky 2</h2>

<p>We present a modified version of <strong>Chunky</strong>, henceforth called <strong>Chunky 2</strong>, which shifts almost all of the verifier’s pairing work into the dealing algorithm. (In principle, this should yield a faster verifier; in the single-threaded benchmarks above, however, Chunky 2’s verifier actually came out $\approx$13% slower.)</p>

<p>To avoid redundancy, we describe only the modifications we made, rather than restating the entire algorithm from scratch.</p>

<p>The <strong>key idea</strong> is to modify the ElGamal-to-KZG relation $\emph{\Retk}$ from Eq. $\ref{rel:e2k}$ to also prove that the $\widetilde{V}_{i,j}$ share commitments from Eq. $\ref{eq:share-commitments}$ are computed correctly.
This will speed up <a href="#step-2-verify">Step 2 in $\pvss.\verify$</a>, which checks that what’s encrypted in the $C_{i,j,k}$’s from Eq. \ref{eq:share-ciphertexts} is what’s committed in the $\widetilde{V}_{i,j}$’s.</p>

<p>The modified $\Retknew$ relation follows below, with changes $\bluedashedbox{\text{highlighted in blue}}$:
\begin{align}
\term{\Retknew}\left(\begin{array}{l}
\stmt = \left(G, H, \ck, \{\ek_i\}_i,\{C_{i,j,k}\}_{i,j,k}, \{R_{j,k}\}_{j,k}, C, \bluedashedbox{\{\widetilde{V}_{i,j}\}_{i,j}}    \right),\\<br />
\witn = \left(\{s_{i,j,k}\}_{i,j,k}, \{r_{j,k}\}_{j,k}, \rho\right)
\end{array}\right) = 1\Leftrightarrow\\<br />
\Leftrightarrow\left\{\begin{array}{rl} 
    (C_{i,j,k}, R_{j,k}) &amp;= E.\enc_{G,H}(\ek_i, s_{i,j,k}; r_{j,k})\\<br />
    C&amp; = \dekart_2.\commit(\ck, \{s_{i,j,k}\}_{i,j,k}; \rho)\\<br />
    \bluedashedbox{\widetilde{V}_{i,j}} &amp; = \bluedashedbox{\left( \sum_{k \in [m]} B^{k-1} s_{i,j,k} \right) \cdot \widetilde{G}}
\end{array}\right.
\end{align}</p>

<p>This modification moves almost all verification work in <a href="#step-2-verify">Step 2 of $\pvss.\verify$</a> into the dealing algorithm.
Furthermore, it reduces total computation across the dealing and verification algorithms.
We summarize below:</p>

<table>
  <thead>
    <tr>
      <th>Scheme</th>
      <th>Proving work</th>
      <th>Verification work</th>
      <th>Transcript size change</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Chunky</td>
      <td>0</td>
      <td>$\vmsmOne{W\cdot m} + \vmsmTwo{W} + \multipair{2}$</td>
      <td>0</td>
    </tr>
    <tr>
      <td><strong>Chunky 2</strong></td>
      <td>$\GmulTwo{W}$</td>
      <td>$\vmsmTwo{2W+1}$</td>
      <td>${} + W |\Gr_2|$</td>
    </tr>
  </tbody>
</table>

<p class="note">The $\Sigma$-protocol verifier extra work will be of the form $\psi(\mathbf{\sigma}) \equals \mathbf{A} + e\cdot [\widetilde{V}_{i,j}]_{i,j}$ and can be done in a size-$(2W+1)$ MSM because the group elements in $\psi(\mathbf{\sigma})$ will all have the same base $\widetilde{G}$.</p>

<p>Then, we modify <a href="#step-6-deal"><strong>Step 6</strong> of the $\pvss.\deal$ algorithm</a> to prove this new relation: 
\begin{align}
\bluedashedbox{\piSoknew} &amp;\gets \sok.\prove\left(\begin{array}{l}
    \bluedashedbox{\Retknew}, \ctx,\\<br />
    G, H, \ck, \{\ek_i\}_i,\{C_{i,j,k}\}_{i,j,k}, \{R_{j,k}\}_{j,k}, C, \bluedashedbox{\{\widetilde{V}_{i,j}\}_{i,j}},\\<br />
    \{s_{i,j,k}\}_{i,j,k}, \{r_{j,k}\}_{j,k}, \rho
\end{array}\right)
\end{align}</p>

<p>Then, we modify <a href="#step-4-verify"><strong>Step 4</strong> of the $\pvss.\verify$ algorithm</a> to verify the proof from above:
\begin{align}
\textbf{assert}\ &amp;\sok.\verify\left(\begin{array}{l}
    \bluedashedbox{\Retknew}, \ctx,\\<br />
    G, H, \ck, \{\ek_i\}_i,\{C_{i,j,k}\}_{i,j,k}, \{R_{j,k}\}_{j,k}, C, \bluedashedbox{\{\widetilde{V}_{i,j}\}_{i,j}};\\<br />
    \bluedashedbox{\piSoknew}
\end{array}\right) \equals 1
\end{align}</p>

<p>Lastly, we remove <a href="#step-2-verify"><strong>Step 2</strong> of the $\pvss.\verify$ algorithm</a>, since the check is now performed above.</p>

<h2 id="references">References</h2>

<p>For cited works, see below 👇👇</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:BBBplus18">
      <p><strong>Bulletproofs: Short Proofs for Confidential Transactions and More</strong>, by B. Bünz and J. Bootle and D. Boneh and A. Poelstra and P. Wuille and G. Maxwell, <em>in 2018 IEEE Symposium on Security and Privacy (SP)</em>, 2018 <a href="#fnref:BBBplus18" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:CL06">
      <p><strong>On Signatures of Knowledge</strong>, by Melissa Chase and Anna Lysyanskaya, <em>in Cryptology ePrint Archive, Report 2006/184</em>, 2006, <a href="https://ia.cr/2006/184">[URL]</a> <a href="#fnref:CL06" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:DPTX24e">
      <p><strong>Distributed Randomness using Weighted VRFs</strong>, by Sourav Das and Benny Pinkas and Alin Tomescu and Zhuolun Xiang, <em>in Cryptology ePrint Archive, Paper 2024/198</em>, 2024, <a href="https://eprint.iacr.org/2024/198">[URL]</a> <a href="#fnref:DPTX24e" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:aptos-Q">
      <p>In an abundance of caution, in Aptos, we require that $Q$ contains $&gt;$ 66% of the stake. <a href="#fnref:aptos-Q" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:CD17">
      <p><strong>SCRAPE: Scalable Randomness Attested by Public Entities</strong>, by Cascudo, Ignacio and David, Bernardo, <em>in Applied Cryptography and Network Security</em>, 2017 <a href="#fnref:CD17" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:reuse">
      <p>Recall that in Aptos, we will safely reuse the validator signing keys as encryption keys. <a href="#fnref:reuse" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:vaba">
      <p>This can be viewed through the lens of collecting $f+1$ attestations in validated Byzantine agreement (VABA). <a href="#fnref:vaba" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:eventually">
      <p>This may require that each validator $i’$ poll other validators for the transcripts in the proposed set $Q$ that $i’$ is missing. <a href="#fnref:eventually" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:equivocation">
      <p>If $i’$ receives two transcripts signed by the same validator $j’$, then that constitutes equivocation and would be provable misbehavior. So $i’$ should (or, perhaps, may) not attest to $Q$, since it includes a malicious player $j’$. <a href="#fnref:equivocation" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:BLS01">
      <p><strong>Short Signatures from the Weil Pairing</strong>, by Boneh, Dan and Lynn, Ben and Shacham, Hovav, <em>in Advances in Cryptology — ASIACRYPT 2001</em>, 2001 <a href="#fnref:BLS01" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:BLS02e">
      <p><strong>Constructing Elliptic Curves with Prescribed Embedding Degrees</strong>, by Paulo S.  L.  M.  Barreto and Ben Lynn and Michael Scott, <em>in Cryptology ePrint Archive, Paper 2002/088</em>, 2002, <a href="https://eprint.iacr.org/2002/088">[URL]</a> <a href="#fnref:BLS02e" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:dummy">
      <p>Technically, they have to add a dummy proof to the <em>subtranscript</em>, obtaining a proper <em>transcript</em>, which they can now feed in to $\pvssDecrypt$ in a type-safe way. <a href="#fnref:dummy" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Alin Tomescu</name></author><category term="PVSS" /><category term="ElGamal" /><category term="range proofs" /><category term="polynomials" /><category term="sigma protocols" /><category term="distributed key generation (DKG)" /><category term="KZG" /><summary type="html"><![CDATA[tl;dr: A work-in-progress weighted PVSS for field elements using chunked ElGamal encryption and DeKART range proofs.]]></summary></entry><entry><title type="html">$\Sigma$-phore</title><link href="https://alinush.github.io//sigmaphore" rel="alternate" type="text/html" title="$\Sigma$-phore" /><published>2025-11-13T00:00:00+00:00</published><updated>2025-11-13T00:00:00+00:00</updated><id>https://alinush.github.io//sigmaphore</id><content type="html" xml:base="https://alinush.github.io//sigmaphore"><![CDATA[<p class="info"><strong>tl;dr:</strong> A primitive much more powerful than Semaphore, that we’d like to build without zkSNARKs.</p>

<!--more-->

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
\def\rt{\mathsf{rt}}
\def\cm{\mathsf{cm}}
\def\com{\mathsf{Com}}
\def\comRerand{\com.\mathsf{Rerand}}
\def\mht{\mathsf{MHT}}
\def\mhtVerifyMem{\mht.\mathsf{VerifyMem}}
$</div>
<p><!-- $ --></p>

<h2 id="some-thoughts">Some thoughts</h2>

<p>Assume we have a full/complete tree data structure that stores commitments in its leaves, such that all leaves stay at the same level: i.e., the tree grows from left to right by appending new leaves.
(Otherwise, there are privacy challenges: a leaf’s depth leaks when its membership is revealed via a non-constant-sized ZK proof.)</p>

<p>Assume we can authenticate the tree by having each parent Pedersen commit to the Pedersen commitments in the children, in a somewhat-structure-preserving way (e.g., via cycles or chains of elliptic curves).</p>

<p>Can we come up with a scheme that proves in ZK that a commitment $\cm’$ was obtained by taking some commitment $\cm$ in such a tree with root $\rt$ and re-randomizing it using some secret blinder $\Delta{r}$?</p>

<p>Let $\com$ denote the Pedersen commitment scheme.
Let $\comRerand(\cdot)$ denote the naturally-defined commitment re-randomization algorithm.</p>

<p>Let $\mht$ denote such a tree-based append-only authenticated data structure, henceforth an <strong>accumulator</strong>.
Let $\mhtVerifyMem(\cdot)$ denote the naturally-defined algorithm for verifying a membership proof for a leaf in this tree.</p>

<p>More formally, the NP relation we’d like to prove looks like:
\begin{align}
\mathcal{R}(\rt,\cm’; \cm, \pi,\Delta{r})=1 \Leftrightarrow\begin{cases}
1 &amp;= \mhtVerifyMem(\rt, \cm; \pi)\\<br />
\cm’ &amp;= \comRerand(\cm; \Delta{r})
\end{cases}
\end{align}</p>
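<p>As a sanity check on the $\comRerand$ half of this relation, here is a minimal toy sketch of Pedersen re-randomization: $\cm’ = \cm + \Delta{r}\cdot H$ commits to the same message under blinder $r + \Delta{r}$. The group ($\mathbb{Z}_p$ under addition) and the generators $G, H$ are hypothetical stand-ins; a real instantiation would use an elliptic-curve group.</p>

```python
# Toy Pedersen commitment + re-randomization in an additive stand-in
# group Z_p. All parameters (p, G, H) are illustrative, not real.

p = 2**31 - 1   # toy prime group order
G, H = 5, 11    # toy "generators"

def commit(m, r):
    """Pedersen commitment cm = m*G + r*H (toy additive group)."""
    return (m * G + r * H) % p

def rerand(cm, delta_r):
    """Com.Rerand: cm' = cm + delta_r * H; same message, blinder r + delta_r."""
    return (cm + delta_r * H) % p

cm = commit(42, 1000)
cm_prime = rerand(cm, 77)
assert cm_prime == commit(42, 1000 + 77)   # opens to the same message 42
```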

<h3 id="reduction-to-proving-scalar-multiplication-of-committed-points-by-a-witness-scalar">Reduction to proving scalar multiplication of committed points by a witness scalar</h3>

<p>Given a parent node’s commitment $\cm$ to its children $(\cm_0,\cm_1)$, the main challenge lies in proving that $\comRerand(\cm_b;\Delta{r}),b\in\{0,1\}$ was computed correctly over one of the children of $\cm$ without leaking which one ($b$) nor the blinding factor ($\Delta{r}$).</p>

<p>There are of course many ways of proving this quite effectively using the right zkSNARK.
These solutions would fall under the category of “improvements upon <em>curve trees</em><sup id="fnref:CH22e"><a href="#fn:CH22e" class="footnote" rel="footnote" role="doc-noteref">1</a></sup>”.
Very interesting, but not my goal.</p>

<p>My obsession is to see if we can leverage (and improve upon) some existing techniques and do this efficiently with only $\Sigma$-protocols and/or structure-preserving cryptography.
(A bit tricky, since there are some impossibilities around structure-preserving and compressing commitments to group elements.)</p>

<p class="todo">Write exact relation!</p>

<h2 id="references">References</h2>

<p>For cited works, see below 👇👇</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:CH22e">
      <p><strong>Curve Trees: Practical and Transparent Zero-Knowledge Accumulators</strong>, by Matteo Campanelli and Mathias Hall-Andersen, <em>in Cryptology ePrint Archive, Paper 2022/756</em>, 2022, <a href="https://eprint.iacr.org/2022/756">[URL]</a> <a href="#fnref:CH22e" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Alin Tomescu</name></author><category term="zero-knowledge proofs (ZKPs)" /><category term="Merkle" /><summary type="html"><![CDATA[tl;dr: A primitive much more powerful than Semaphore, that we’d like to build without zkSNARKs.]]></summary></entry><entry><title type="html">Notes on scaling nullifier sets</title><link href="https://alinush.github.io//nullifiers" rel="alternate" type="text/html" title="Notes on scaling nullifier sets" /><published>2025-11-12T00:00:00+00:00</published><updated>2025-11-12T00:00:00+00:00</updated><id>https://alinush.github.io//notes-on-scaling-nullifier-sets</id><content type="html" xml:base="https://alinush.github.io//nullifiers"><![CDATA[<p class="info"><strong>tl;dr:</strong> Trying to organize some thoughts on how to scale nullifier sets.</p>

<!--more-->

<!-- Here you can define LaTeX macros -->
<div style="display: none;">$
$</div>
<p><!-- $ --></p>

<h2 id="approach-1-sharded-herkle-trees">Approach 1: Sharded Herkle trees</h2>

<p>High-level:</p>

<ul>
  <li>Build depth-256 (or compressed, if possible) <a href="/please-solve#efficient-homomorphic-merkle-herkle-trees">homomorphic Merkle prefix tree</a> over global nullifier sets (e.g., via SADS<sup id="fnref:PSTY13"><a href="#fn:PSTY13" class="footnote" rel="footnote" role="doc-noteref">1</a></sup>)</li>
  <li>Shard the tree into <strong>“triangles”</strong> (e.g., a top triangle with $k$ leaves and $k$ other bottom triangles, but can have multiple levels of triangles; in fact can add them dynamically, as the tree “extends downwards”)</li>
  <li>Each triangle is managed by its own <strong>proof-serving node (PSN)</strong></li>
  <li>Importantly, when a leaf $i$ changes by $\delta_i$ every PSN can locally update its portion of the path w/o waiting for other nodes or communicating with them $\Rightarrow$ extremely simple sharding</li>
  <li>Proof serving nodes can be incentivized to serve proofs (see Hyperproofs<sup id="fnref:SCPplus22"><a href="#fn:SCPplus22" class="footnote" rel="footnote" role="doc-noteref">2</a></sup>)</li>
</ul>
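<p>The “local path update” bullet above is the crux, so here is a toy sketch of it: model the homomorphic hash of an internal node as the sum (mod $p$) of the values below it, so a leaf change by $\delta_i$ updates every ancestor by $+\delta_i$ with no sibling data needed — exactly the property that lets each PSN update its triangle independently. Real schemes (e.g., SADS-style lattice hashes) have the same additive-update shape; everything here is a hypothetical stand-in.</p>

```python
# Toy homomorphic Merkle-style tree: parent = (left + right) mod p.
# A leaf delta propagates up the path by pure addition, so any shard
# ("triangle") holding part of the path can update it locally.

p = 2**31 - 1

def build(leaves):
    """Return list of levels, leaves first; parent = (left + right) % p."""
    levels = [leaves[:]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([(prev[i] + prev[i + 1]) % p
                       for i in range(0, len(prev), 2)])
    return levels

def apply_delta(levels, leaf_idx, delta):
    """Each node on the leaf's root path updates locally by +delta."""
    idx = leaf_idx
    for level in levels:
        level[idx] = (level[idx] + delta) % p
        idx //= 2

levels = build([10, 20, 30, 40])
apply_delta(levels, 2, 5)                          # leaf 2: 30 -> 35
assert levels[0] == [10, 20, 35, 40]
assert levels[-1][0] == (10 + 20 + 35 + 40) % p    # root updated too
```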

<h2 id="approach-2-tachyon">Approach 2: Tachyon</h2>

<p>Just some loose notes for now (will go into more depth as I understand later) from a few resources:</p>

<ul>
  <li>Sean Bowe’s <a href="https://seanbowe.com/blog/tachyon-scaling-zcash-oblivious-synchronization/">blog in April 2025</a></li>
  <li>Sean Bowe’s original <a href="https://x.com/ebfull/status/1907474914162127002">tweet announcement</a></li>
  <li><a href="https://x.com/alinush407/status/1907507290543980818">Replies between me and Wei Dai on Twitter</a> regarding the interactive payment flow</li>
  <li>Sean Bowe’s notes on <a href="https://hackmd.io/@dJO3Nbl4RTirkR2uDM6eOA/BJOnrTEj1x">a possible SNARK-friendly accumulator scheme</a> for this</li>
  <li>Sean Bowe’s <a href="https://seanbowe.com/blog/tachyaction-at-a-distance/">notes on Tachyon</a></li>
  <li><a href="https://x.com/alinush407/status/1977123515158409616">Tweet about my understanding of Tachyon</a> in Oct. 2025</li>
  <li>Mike O’Connor’s <a href="https://forum.aztec.network/t/reducing-nullifier-set-state-growth/155">post</a></li>
  <li>Mike O’Connor’s <a href="https://x.com/mike_connor/status/1977131650233274749">clarification</a> on how you precompute some nullifiers and then send one to whoever is trying to pay you.
    <ul>
      <li>This makes the recursive proof more efficient because, I suppose, you can prove in batch that all of your precomputed nullifiers have not yet been spent.</li>
    </ul>
  </li>
  <li>Sean Bowe’s <a href="https://zeroknowledge.fm/podcast/388/">zeroknowledge.fm podcast</a> on Tachyon</li>
</ul>

<p>Some disadvantages:</p>

<ol>
  <li>It requires out-of-band communication during payments.</li>
  <li>Funds cannot be recovered from the seed phrase alone; other dynamic state, which is not on-chain, is needed too.</li>
  <li>No more viewing keys (?)</li>
  <li>You cannot just give out your address to get paid: you have to give the sender a place to include the extra info.</li>
</ol>

<p>Quotes from Sean Bowe:</p>

<blockquote>
  <p>as the wallet state updates to reflect new blocks it will continually maintain a proof of its own correctness. Then, when it’s time to spend our funds we will extend our transaction with this proof-carrying data.
This effectively attaches evidence that the transaction is valid up until a certain recent point in the history of the blockchain — the position of the anchor.
The result is that validators are now only responsible for ensuring that the transaction is correct in the presence of the additional transactions that appeared in the intervening time, which just involves checking that the most recent block(s) do not contain the revealed nullifier. [15] 
As a result, almost everything in a block can be permanently pruned by validators and ultimately all users of the system as well. Despite transactions sharing a common state by being indistinguishable from each other, nearly all state contention problems vanish in this new approach.</p>
</blockquote>

<blockquote>
  <p>[15] Together with the proof of the wallet’s validity, this demonstrates that the nullifier did not appear in another transaction that followed the block that created the note commitment being spent.
Notably, this loosens the condition that the nullifier has never been seen before in the history of the blockchain but still manages to prevent double-spending.</p>
</blockquote>

<p>Is the key observation that the TXN proves that the note’s nullifier did not appear in the nullifier set accumulated so far? 
But how do you construct that proof without the full set?
Ah!
You don’t.
You just need the nullifiers created between the note’s creation and the note being spent.
And that’s what the wallet can prove recursively!</p>
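<p>A toy model of that observation — the wallet only has to show its nullifier is absent from the nullifiers created <em>after</em> the note’s creation, not from the whole history — might look as follows. All names here are illustrative; the real construction does this inside recursive proofs, not in the clear.</p>

```python
# Toy model: the wallet recursively extends the claim
# "my nullifier is unseen since block `start`"; validators then only
# check the most recent block(s). Blocks are modeled as sets of
# nullifiers; everything here is a hypothetical stand-in.

def wallet_sync(blocks, start, nullifier):
    """Extend the absence claim over blocks[start:]; return the new
    anchor height, or None if the nullifier appeared (double-spend)."""
    for height in range(start, len(blocks)):
        if nullifier in blocks[height]:
            return None
    return len(blocks)

blocks = [{"n1"}, {"n2"}, set(), {"n3"}]
assert wallet_sync(blocks, 1, "n1") == 4      # n1 created at block 0; unseen after
assert wallet_sync(blocks, 0, "n2") is None   # n2 was spent at block 1
```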

<h2 id="other-approaches">Other approaches</h2>

<ul>
  <li>Stateless validation: removes the validation state but introduces PSNs and needs <a href="#approach-1-sharded-herkle-trees">the right approach</a></li>
  <li><a href="https://github.com/0xMiden/miden-vm/discussions/356">Epoch-based nullifiers</a>: freezes old nullifier sets</li>
  <li>Mutator sets <a href="https://neptune.cash/blog/mutator-sets/">1</a> and <a href="https://www.youtube.com/watch?v=Fjh1PxrgwQo">2</a>: need to investigate</li>
</ul>

<h2 id="references">References</h2>

<p>For cited works, see below 👇👇</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:PSTY13">
      <p><strong>Streaming Authenticated Data Structures</strong>, by Papamanthou, Charalampos and Shi, Elaine and Tamassia, Roberto and Yi, Ke, <em>in EUROCRYPT 2013</em>, 2013 <a href="#fnref:PSTY13" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:SCPplus22">
      <p><strong>Hyperproofs: Aggregating and Maintaining Proofs in Vector Commitments</strong>, by Shravan Srinivasan and Alexander Chepurnoy and Charalampos Papamanthou and Alin Tomescu and Yupeng Zhang, <em>in 31st USENIX Security Symposium (USENIX Security 22)</em>, 2022, <a href="https://www.usenix.org/conference/usenixsecurity22/presentation/srinivasan">[URL]</a> <a href="#fnref:SCPplus22" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Alin Tomescu</name></author><category term="Merkle" /><category term="nullifier" /><category term="anonymous payments" /><summary type="html"><![CDATA[tl;dr: Trying to organize some thoughts on how to scale nullifier sets.]]></summary></entry></feed>