A constant-time sampling algorithm for binary Gaussian distribution over the integers
TL;DR Summary
This paper presents a novel constant-time sampling algorithm for the binary Gaussian distribution, a distribution crucial for lattice-based cryptography. The algorithm requires no pre-computed storage, uses hardware-friendly bitwise operations, has lower computational complexity than the Knuth-Yao method, and consumes less entropy than CDT-based methods.
Abstract
Discrete Gaussian sampling over the integers is one of the fundamental operations in lattice-based cryptography. The binary Gaussian distribution $D_{\mathbb{Z}^+,\sigma_2}$ is a special discrete Gaussian distribution with $\sigma_2 = \sqrt{1/(2\ln 2)}$ and $\mu = 0$ over the set of non-negative integers.
In-depth Reading
1. Bibliographic Information
- Title: A constant-time sampling algorithm for binary Gaussian distribution over the integers
- Authors: Yusong Du, Baoying Fan, Baodian Wei (Guangdong Province Key Laboratory of Information Security Technology, School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China)
- Journal/Conference: Information Processing Letters. This is a reputable peer-reviewed scientific journal that publishes short articles on various topics in computer science.
- Publication Year: 2022 (Accepted January 16, 2022; Available online January 21, 2022)
- Abstract: The paper addresses the problem of discrete Gaussian sampling, a crucial operation in lattice-based cryptography. It focuses on a specific distribution called the binary Gaussian distribution ($D_{\mathbb{Z}^+,\sigma_2}$), which can serve as a base for sampling from more general Gaussian distributions. The authors propose a new constant-time sampling algorithm for this distribution. Key features of their algorithm are that it requires no pre-computed storage, relies mainly on simple bitwise operations (making it hardware-friendly), has lower computational complexity than the standard Knuth-Yao method, and consumes less entropy (randomness) than cumulative distribution table (CDT) based methods. Additionally, its security analysis is simpler.
- Original Source Link: /files/papers/68ef2fe8e77486f6f3192e66/paper.pdf (This link is a relative path from the source file. The paper is published and accessible through the journal's official channels.)
2. Executive Summary
- Background & Motivation (Why):
- Core Problem: Many modern cryptographic systems, especially those designed to be secure against quantum computers (lattice-based cryptography), rely on sampling numbers from a discrete Gaussian distribution. A major security vulnerability in cryptographic implementations is the timing attack, where an attacker can infer secret information by measuring how long different operations take. To prevent this, cryptographic algorithms must run in constant time, meaning their execution time is independent of any secret data. Existing algorithms for discrete Gaussian sampling are often either not constant-time or are inefficient when made so.
- Importance & Gaps: The binary Gaussian distribution is a fundamental building block for more complex samplers. The original algorithm for it is not constant-time, and making standard methods like Knuth-Yao or the CDT method constant-time results in significant performance penalties or high memory/randomness costs. There is a need for a sampling algorithm for the binary Gaussian distribution that is secure (constant-time), efficient, and practical for hardware implementations.
- Innovation: This paper introduces a novel algorithm (Algorithm 2) that achieves constant-time sampling through a pure rejection sampling approach. Its design avoids large pre-computed tables and complex arithmetic, relying instead on simple bit-level operations, making it uniquely suited for hardware.
- Main Contributions / Findings (What):
- A Novel Constant-Time Algorithm: The paper presents Algorithm 2, a new algorithm for sampling from the binary Gaussian distribution that runs in constant time per attempt.
- Efficiency: The proposed algorithm is shown to be more efficient than existing constant-time alternatives. It has lower computational complexity (fewer bits to scan) than the full-tree Knuth-Yao method and lower entropy consumption (fewer random bits needed) than the full-table-access CDT method.
- No Precomputation: Unlike CDT or Knuth-Yao methods, this algorithm does not require any pre-computed tables to be stored, saving memory and simplifying implementation.
- Simplified Security Analysis: The security of the algorithm can be analyzed using Rényi divergence without the complexities of "precision issues" that affect table-based methods. This makes it easier to formally prove its security for a given parameter set.
3. Prerequisite Knowledge & Related Work
- Foundational Concepts:
- Lattice-Based Cryptography: A field of public-key cryptography based on the presumed difficulty of solving certain mathematical problems on lattices (grids of points). It is a leading candidate for post-quantum cryptography, i.e., cryptographic systems that can resist attacks from both classical and quantum computers.
- Discrete Gaussian Distribution ($D_{\mathbb{Z},\sigma,\mu}$): A probability distribution defined over the integers. A random integer is drawn with a probability proportional to a Gaussian (bell curve) function centered at $\mu$ with a standard deviation $\sigma$. The probability of picking an integer $x$ is proportional to $\rho_{\sigma,\mu}(x) = \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$. Sampling from this distribution is a core operation in many lattice-based schemes.
- Binary Gaussian Distribution ($D_{\mathbb{Z}^+,\sigma_2}$): A specific, simplified case of the discrete Gaussian distribution over the non-negative integers ($\mathbb{Z}^+ = \{0, 1, 2, \ldots\}$) with center $\mu = 0$ and a fixed standard deviation $\sigma_2 = \sqrt{1/(2\ln 2)}$. With this specific value of $\sigma$, the relative probability of sampling an integer $x$ simplifies beautifully to $2^{-x^2}$. This distribution is important because it can be used as a simple "base sampler" to construct samplers for more general discrete Gaussian distributions.
- Timing Attack: A type of side-channel attack where an attacker analyzes the time taken to execute a cryptographic algorithm. If the execution time depends on secret data (like a secret key), the attacker can deduce information about that data.
- Constant-Time Algorithm: An algorithm whose execution time is independent of the values of its inputs, especially secret inputs. This is the primary defense against timing attacks.
- Rejection Sampling: A general technique for generating samples from a target distribution. It works by sampling from a simpler, proposal distribution and then "accepting" or "rejecting" the sample based on a probabilistic test. If accepted, the sample is returned; if rejected, the process repeats.
- Inversion Sampling (CDT Method): A sampling method that uses the Cumulative Distribution Table (CDT). The CDT stores the cumulative probability $P(i) = \sum_{x \le i} p(x)$ for each possible outcome $i$. To sample, one generates a uniform random number $u \in [0, 1)$ and finds the smallest $i$ such that $u < P(i)$. A constant-time version requires scanning the entire table every time (full-table access); a minimal sketch of this idea appears after this list.
- Knuth-Yao Method (DDG Tree): A method that uses a binary tree (a Discrete Distribution Generating tree) where a path from the root to a leaf, determined by a sequence of random bits, corresponds to sampling a specific value. A constant-time version requires traversing the entire tree structure every time (full-tree Knuth-Yao).
- Rényi Divergence: A mathematical function that measures the "distance" or divergence between two probability distributions. In cryptography, it is used to quantify the security loss when an ideal probability distribution (e.g., a perfect Gaussian) is replaced by a practical, imperfect one (e.g., a truncated or approximated distribution). A smaller divergence means the two distributions are very similar, and the security loss is minimal.
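For concreteness, here is a minimal sketch of the full-table-access CDT idea (my own illustration, not code from the paper; the 16-bit precision, the table values, and the toy distribution over {0, 1, 2, 3} are made up for the example):

```python
import secrets

# Hypothetical 16-bit CDT for a toy distribution over {0, 1, 2, 3}
# with probabilities (1/2, 1/4, 1/8, 1/8), scaled by 2^16.
CDT = [32768, 49152, 57344, 65536]

def cdt_sample_full_access() -> int:
    u = secrets.randbits(16)  # uniform u in [0, 2^16)
    result = 0
    # Constant-time style: scan the WHOLE table with no early exit,
    # so the access pattern does not depend on the sampled value.
    for threshold in CDT:
        result += 1 if u >= threshold else 0  # branchless compare in real code
    return result
```

The sample is the number of cumulative thresholds that $u$ reaches, which is exactly the inversion rule "smallest $i$ with $u < P(i)$".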
- Previous Works:
- Ducas et al. [5]: Introduced the binary Gaussian distribution and proposed the first sampling algorithm for it (Algorithm 1 in this paper). However, their algorithm is not constant-time because the number of loops and random bits it consumes depends on the output value. For example, sampling 0 is much faster than sampling a larger number.
- CDT-based Constant-Time Samplers [6]: To achieve constant time with the inversion method, one must always scan the entire pre-computed CDT. This is efficient in software but can consume a large number of random bits (high entropy consumption) to achieve the required precision.
- Knuth-Yao-based Constant-Time Samplers [12]: To make the Knuth-Yao method constant-time, one must effectively traverse the entire DDG tree structure (or scan the entire probability matrix). This incurs a significant performance overhead, especially in hardware.
- Polynomial Approximation [6, 7]: A modern technique where the binary Gaussian sampler is used as a base. Samples from the base are used as inputs to a pre-computed polynomial that approximates the target distribution's sampling ratio. This whole process can be constant-time if the base sampler is constant-time. This context motivates the need for a better constant-time binary Gaussian sampler.
- Differentiation: The proposed algorithm differs from previous works in several key ways:
- vs. Ducas et al. [5]: The new algorithm is constant-time, fixing the main security flaw of the original.
- vs. CDT/Knuth-Yao [6, 12]: It requires no pre-computed tables, saving memory. It is also more computationally efficient (fewer operations/scans) and has lower entropy consumption than its constant-time counterparts.
- vs. Boolean Function Samplers [12, 14]: While samplers based on Boolean functions are also constant-time and efficient, they require complex precomputation and large memory to store the functions, resulting in a larger code or circuit size. The proposed algorithm is simpler in this regard.
4. Methodology (Core Technology & Implementation)
The core of the paper is Algorithm 2, a constant-time sampling algorithm for the binary Gaussian distribution.
Algorithm 2: Sampling from $D_{\mathbb{Z}^+,\sigma_2}$
Input: an integer $N_1 \ge 7$ and $N_0 = N_1 \cdot (N_1 - 1)$
Output: a non-negative integer approximately from $D_{\mathbb{Z}^+,\sigma_2}$
1. Sample $N_1 + 1$ random bits $b_1, b_2, \ldots, b_{N_1+1}$.
2. Set $n_1 = 0$ if $b_1 = 0$; otherwise set $n_1$ to the maximum integer such that $b_1 = b_2 = \cdots = b_{n_1} = 1$.
3. Sample $N_0$ random bits $b'_1, b'_2, \ldots, b'_{N_0}$.
4. Set $n_0 = 0$ if $b'_1 = 1$; otherwise set $n_0$ to the maximum integer such that $b'_1 = b'_2 = \cdots = b'_{n_0} = 0$.
5. Set $n \leftarrow (n_1 - 1) \cdot n_1$.
6. Return $n_1$ if $n_0 \ge n$; otherwise go to Step 1.
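To make the control flow concrete, here is a minimal Python transcription of Algorithm 2 (my own sketch, not the authors' code; it uses the standard `secrets` module as a stand-in source of random bits and makes no constant-time guarantees):

```python
import secrets

def sample_binary_gaussian(N1: int = 9) -> int:
    """Sample from the tailcut binary Gaussian D_{Z+, sigma_2} via Algorithm 2."""
    assert N1 >= 7
    N0 = N1 * (N1 - 1)
    while True:
        # Steps 1-2: n1 = length of the leading run of 1s among N1 + 1 bits.
        bits1 = [secrets.randbits(1) for _ in range(N1 + 1)]
        n1 = 0
        while n1 < len(bits1) and bits1[n1] == 1:
            n1 += 1
        # Steps 3-4: n0 = length of the leading run of 0s among N0 bits.
        bits0 = [secrets.randbits(1) for _ in range(N0)]
        n0 = 0
        while n0 < len(bits0) and bits0[n0] == 0:
            n0 += 1
        # Steps 5-6: accept n1 with probability 2^(-n1*(n1-1)).
        n = (n1 - 1) * n1
        if n0 >= n:
            return n1
```

Tallying a few million outputs of this sketch reproduces the target weights $2^{-x^2}$ (normalized over $\{0, \ldots, N_1\}$) to within sampling noise.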
- Principles: The algorithm is a clever implementation of rejection sampling. The goal is to produce an integer $x$ with a probability proportional to $2^{-x^2}$. The algorithm achieves this by breaking the probability down into two components that are easy to generate using random bits, via $2^{-x^2} = 2^{-x} \cdot 2^{-x(x-1)}$. The overall probability of outputting a value $x$ in a single run of the loop (before a potential restart) is constructed to be proportional to $2^{-x^2}$.
- Steps & Procedures:
  - Generate $n_1$ (Steps 1-2):
    - $N_1 + 1$ random bits are sampled. $n_1$ is determined by the number of leading 1s.
    - If the first bit is 0, $n_1$ is 0. The probability of this is $1/2$.
    - If the first $x$ bits are 1s and the $(x+1)$-th bit is 0, the value $n_1 = x$ is chosen. The probability of this event is $2^{-(x+1)}$. This step generates a number $x$ with a probability proportional to $2^{-x}$.
  - Generate $n_0$ (Steps 3-4):
    - $N_0$ random bits are sampled. $n_0$ is determined by the number of leading 0s.
    - The probability of getting a specific value $n_0 = k$ is $2^{-(k+1)}$ (for $k < N_0$), but what matters here is the probability that $n_0$ is at least some value $n$.
    - The probability that $n_0 \ge n$ (i.e., the first $n$ bits are all 0s) is exactly $2^{-n}$.
  - The Rejection Test (Steps 5-6):
    - A target value $n = (n_1 - 1) \cdot n_1$ is calculated.
    - The algorithm accepts and returns $n_1$ if $n_0 \ge n$. The probability of this acceptance condition, given $n_1 = x$, is $2^{-x(x-1)}$.
  - Overall Probability in One Loop:
    - The total probability of generating and accepting a value $x$ in a single iteration is the product of the probability of generating $x$ and the probability of accepting it: $2^{-(x+1)} \cdot 2^{-x(x-1)} = \tfrac{1}{2} \cdot 2^{-x^2}$.
    - This shows that the probability of outputting $x$ in one go is proportional to the target probability density $2^{-x^2}$. The "goto Step 1" handles the rejection case, ensuring that eventually a sample is produced from the correct final distribution. (A numeric check of this derivation follows the list.)
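The one-iteration probabilities can be verified with a few lines of arithmetic (my own check, not code from the paper):

```python
from math import isclose

N1 = 9
gen = lambda x: 2.0 ** (-(x + 1))      # Steps 1-2: Pr[n1 = x]
acc = lambda x: 2.0 ** (-x * (x - 1))  # Step 6: Pr[accept | n1 = x]

for x in range(N1 + 1):
    # One-iteration output probability equals (1/2) * 2^(-x^2), as derived above.
    assert isclose(gen(x) * acc(x), 0.5 * 2.0 ** (-x * x))

# Acceptance probability of a single iteration (the rest restarts the loop):
p_accept = sum(gen(x) * acc(x) for x in range(N1 + 1))
print(p_accept)  # ~0.782, i.e., about 1.28 iterations per sample on average
```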
- Mathematical Formulas & Key Details (Theorem 1):
- Theorem 1 states that Algorithm 2 produces a tailcut binary Gaussian distribution. This means it samples from the binary Gaussian distribution but restricted to the set of integers $\{0, 1, \ldots, N_1\}$.
- Proof Intuition:
  - Let $q(x)$ be the probability of successfully outputting $x$ in one iteration, so $q(x) = \tfrac{1}{2} \cdot 2^{-x^2}$.
  - The total probability of success in one iteration is $q = \sum_{x=0}^{N_1} q(x)$. The probability of failure (rejection) is $1 - q$.
  - The final probability of outputting a value $x$ is the sum of probabilities of succeeding on the first try, or failing then succeeding on the second, and so on: $\Pr[\text{output } x] = \sum_{k=0}^{\infty} (1-q)^k \, q(x) = \frac{q(x)}{q}$.
  - Substituting the expressions for $q(x)$ and $q$: $\Pr[\text{output } x] = \frac{2^{-x^2}}{\sum_{y=0}^{N_1} 2^{-y^2}}$.
  - This final expression is precisely the definition of the tailcut binary Gaussian distribution over the set $\{0, 1, \ldots, N_1\}$.
- Constant-Time Implementation Details:
  - The algorithm can be made constant-time because the number of bits sampled in each loop iteration is fixed: $N_1 + 1$ bits in Step 1 and $N_0$ bits in Step 3.
  - Steps 2 and 4 (finding the number of leading 1s or 0s) can be implemented by always scanning all $N_1 + 1$ or $N_0$ bits, with no early exit.
  - Step 6 (the comparison $n_0 \ge n$) can be implemented in constant time using a signed integer subtraction and checking the sign bit. A sketch of both patterns follows.
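The following fragment illustrates the two branch-free patterns just described (my own sketch; Python cannot actually guarantee constant-time execution, so this only demonstrates the data-independent structure one would implement in C or hardware):

```python
def count_leading(bits: list[int], target: int) -> int:
    """Length of the leading run of `target` bits, scanning ALL bits (Steps 2 and 4)."""
    count = 0
    still_leading = 1            # stays 1 until the first mismatch, then latches to 0
    for b in bits:               # fixed trip count: no early exit
        match = 1 - (b ^ target) # 1 iff this bit equals target
        still_leading &= match
        count += still_leading   # only increments while the run continues
    return count

def ct_ge(a: int, b: int, width: int = 32) -> int:
    """Constant-time a >= b for small non-negative ints (Step 6): sign bit of a - b."""
    diff = (a - b) & ((1 << width) - 1)   # two's-complement subtraction
    return 1 - ((diff >> (width - 1)) & 1)
```

Here `count_leading(bits1, 1)` realizes Step 2, `count_leading(bits0, 0)` realizes Step 4, and `ct_ge(n0, n)` realizes the acceptance test of Step 6.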
5. Experimental Setup
The paper's evaluation is theoretical, focusing on security and complexity analysis rather than empirical benchmarks on specific hardware.
- Datasets: No datasets are used. The analysis is purely algorithmic.
- Evaluation Metrics:
- Rényi Divergence:
  - Conceptual Definition: A measure of how different two probability distributions are. A value close to 1 indicates the distributions are very similar. It is used here to measure the "statistical distance" between the algorithm's actual output distribution (the tailcut one, $P$) and the ideal, infinite binary Gaussian distribution ($Q$). This quantifies the security impact of using an approximate sampler. (A worked computation follows this list.)
  - Mathematical Formula: The Rényi divergence of order $a$ is
    $$R_a(P \,\|\, Q) = \left( \sum_{x \in \operatorname{Supp}(P)} \frac{P(x)^a}{Q(x)^{a-1}} \right)^{\frac{1}{a-1}}.$$
  - Symbol Explanation:
    - $P, Q$: Two probability distributions. In this paper, $P$ is the algorithm's output distribution and $Q$ is the ideal distribution.
    - $a$: The order of the divergence, where $1 < a < +\infty$. For cryptographic security analysis, it is often set to $a = 2\kappa$, where $\kappa$ is the bit-security level.
    - $\operatorname{Supp}(P)$: The support of the distribution $P$, i.e., the set of all possible outcomes.
- Computational Complexity:
  - $G$: The average number of random bits that need to be generated to produce one output sample.
  - $S$: The average number of bits that need to be scanned to produce one output sample. In the constant-time implementation of Algorithm 2, $G = S$.
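As a worked illustration of this metric (my own computation with the formula above; the paper reports such values in Table 1), one can evaluate the divergence between the tailcut output and the ideal binary Gaussian at high precision:

```python
from mpmath import mp, mpf, power, nsum, inf

mp.dps = 200  # the divergence is astronomically close to 1, so use high precision

def renyi_divergence_tailcut(N1: int = 9, a: int = 512):
    # Normalization constants: ideal support Z+, tailcut support {0, ..., N1}.
    Z_ideal = nsum(lambda x: power(2, -x * x), [0, inf])
    Z_cut = sum(power(2, -x * x) for x in range(N1 + 1))
    P = lambda x: power(2, -x * x) / Z_cut    # tailcut output of Algorithm 2
    Q = lambda x: power(2, -x * x) / Z_ideal  # ideal binary Gaussian
    s = sum(P(x) ** a / Q(x) ** (a - 1) for x in range(N1 + 1))
    return s ** (mpf(1) / (a - 1))

print(renyi_divergence_tailcut() - 1)  # on the order of 2^(-100): negligible
```

For this particular pair of distributions the order-$a$ terms cancel and the result collapses to $Z_{\text{ideal}}/Z_{\text{cut}}$, which makes the near-1 value easy to sanity-check by hand.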
- Baselines:
- Full-tree Knuth-Yao: A constant-time sampler based on the Knuth-Yao method, where the entire probability matrix is scanned for each sample.
- Full-table access CDT: A constant-time sampler based on inversion sampling, where the entire Cumulative Distribution Table is read for each sample.
6. Results & Analysis
The paper presents its results in two tables, which are reproduced (as far as the source allows) and analyzed below.
- Core Results: Security Analysis (Table 1)
  Only the column headings of Table 1 survive in this transcription: for each value of $N_1$, the table reports the Rényi divergence and the maximum numbers of queries and signatures at which 256-bit security is preserved.
- Analysis of Table 1:
- This table evaluates the security impact of the "tailcut" (truncation) for different values of the parameter $N_1$, which sets the output range. The analysis is for a security level of $\kappa = 256$ bits, so the Rényi divergence order is $a = 2\kappa = 512$.
- The divergence column shows values extremely close to 1, indicating that the output distribution is statistically very close to the ideal one; the gap shrinks rapidly as $N_1$ grows and is negligible for any practical purpose.
- The remaining columns translate this divergence into practical security terms using Lemma 1. They show the maximum number of queries ($q$) or digital signatures ($q_s$) an adversary can make before the 256-bit security level might drop to 255 bits. Already for small $N_1$, the scheme remains secure after an astronomically large number of signatures.
- Key Insight: A small value of $N_1$ (like 9) is sufficient to achieve a very high level of security. This is important because $N_1$ directly impacts the algorithm's complexity.
- Simpler Analysis: The authors highlight that this analysis is simpler than for CDT/Knuth-Yao methods because it only involves analyzing the tailcut probability. It does not have to deal with additional errors introduced by finite-precision representation of probabilities in tables, known as the "precision issue".
- Core Results: Complexity Analysis (Table 2)
  The following is a reconstruction of Table 2 from the paper.

  | Sampler | Parameter | $G$ (bits generated) | $S$ (bits scanned) |
  |---|---|---|---|
  | Algorithm 2 | $N_1 = 7$ | 63.9 | 63.9 |
  | Algorithm 2 | $N_1 = 8$ | 83.1 | 83.1 |
  | Algorithm 2 | $N_1 = 9$ | 104.8 | 104.8 |
  | full-tree Knuth-Yao | $p = 64$ | 64 | 256 |
  | full-tree Knuth-Yao | $p = 96$ | 96 | 432 |
  | full-tree Knuth-Yao | $p = 128$ | 128 | 640 |

- Analysis of Table 2:
- This table compares the computational complexity of Algorithm 2 with the full-tree Knuth-Yao (K.Y.) method. Complexity is measured by bits generated ($G$) and bits scanned ($S$).
- For Algorithm 2, the complexity ($G = S$) is the expected number of bits needed per sample, calculated as the bits per iteration, $N_1 + 1 + N_0$, divided by the acceptance probability of one iteration (about 0.78). For $N_1 = 9$, this is about 104.8 bits.
- For the Knuth-Yao method, to sample from the same range $\{0, \ldots, N_1\}$, the probability matrix has $N_1 + 1$ rows; the number of columns is the precision $p$. For $N_1 = 9$ (so 10 rows) and a typical precision of $p = 128$, the sampler must generate 128 random bits ($G = 128$) and scan a large number of bits in the matrix ($S$ is estimated as half the matrix, $p(N_1 + 1)/2 = 640$, to account for optimizations).
- Key Insight: Algorithm 2 is significantly more efficient. For $N_1 = 9$, it requires fewer random bits to be generated (104.8 vs. 128) and far fewer bits to be scanned (104.8 vs. 640). This makes it much faster, especially in hardware where scanning memory can be costly. (The sketch below reproduces the Algorithm 2 column.)
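The Algorithm 2 figures can be reproduced from the expected-bits formula just described (my own calculation, matching the table):

```python
def expected_bits(N1: int) -> float:
    """Expected random bits per sample: bits per iteration / acceptance probability."""
    N0 = N1 * (N1 - 1)
    bits_per_iteration = (N1 + 1) + N0
    p_accept = sum(0.5 * 2.0 ** (-x * x) for x in range(N1 + 1))  # ~0.782
    return bits_per_iteration / p_accept

for N1 in (7, 8, 9):
    print(N1, round(expected_bits(N1), 1))  # -> 63.9, 83.1, 104.8
```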
- Comparison with CDT-based Sampler:
  - The paper also compares entropy consumption with a constant-time CDT-based sampler from Zhao et al. [6], which outputs integers from 0 to 9. That sampler requires 126 random bits per sample. Algorithm 2 with $N_1 = 9$ provides the same output range but requires only 104.8 bits of entropy on average. This is a ~17% reduction in the randomness required, which can be a significant advantage.
7. Conclusion & Reflections
- Conclusion Summary: The paper successfully introduces a new constant-time sampling algorithm for the binary Gaussian distribution. This algorithm is a strong alternative to existing methods, offering superior performance in terms of computational complexity and entropy consumption. Its key advantages are that it requires no pre-computed storage, is well-suited for hardware implementation due to its reliance on bitwise operations, and allows for a more straightforward security analysis. It serves as an excellent base sampler for building more complex, constant-time discrete Gaussian samplers in lattice-based cryptography.
- Limitations & Future Work:
- The paper does not provide an empirical performance analysis on specific hardware (like FPGAs or ASICs), which would be a valuable next step to concretely demonstrate its hardware-friendliness.
- The total execution time is still probabilistic, as the algorithm might loop multiple times due to rejection. While each loop is constant-time (protecting against simple timing attacks), an attacker observing the total time over many operations might still gain some statistical information. However, the probability of rejection is constant and independent of any secret, making this a very difficult attack vector.
- The choice of $N_1$ is a trade-off between the output range, security (statistical distance), and performance. The paper provides clear guidance for this trade-off, but it remains a parameter that implementers must choose carefully.
- Personal Insights & Critique:
- The algorithm's design is elegant and insightful. It cleverly decomposes the target probability $2^{-x^2}$ into two components, $2^{-x}$ and $2^{-x(x-1)}$, which can be sampled and tested probabilistically by counting leading bits. This is a non-obvious and highly effective approach.
- The most significant practical contribution is arguably the combination of constant-time security, high efficiency, and zero precomputation. In resource-constrained environments (like smart cards or IoT devices), eliminating the need for storing large tables (CDT or probability matrices) is a major advantage.
- The simplification of the security proof is a noteworthy academic contribution. By sidestepping the "precision issue" inherent in table-based methods, the authors make it easier for cryptographers to validate and deploy implementations with confidence in their security bounds. This work is a solid step forward in the practical implementation of secure post-quantum cryptography.