cronokirby

Asymptotic Analysis of Ternary Sparse LWE

2026-03-31

Abstract

We present an asymptotic analysis of the ternary variant of Sparse Learning with Errors (spLWE), a structured LWE variant proposed by Jain--Lin--Saha (CRYPTO'24) in which each equation involves only $k \ll n$ of the $n$ secret coordinates, enabling significantly more efficient computation than dense LWE. Unlike standard LWE, the small-secret regime of spLWE is not automatically reducible to its large-secret counterpart, leaving its asymptotic hardness unclear, particularly when $k$ is very small.

We develop a two-pronged attack framework that depends explicitly on the sparsity parameter $k$. In the geometric regime $q > 3^k$, each sparse row reduces to a short-vector problem in a $k$-dimensional lattice, yielding complexity $2^{0.292k}$ via a sieving algorithm. In the statistical regime $q \leq 3^k$, we propose a greedy coordinate-recovery attack with running time $O(m \cdot k \cdot 3^k)$, where $m$ is the number of samples.
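As a rough illustration (not from the paper), the regime split and the heuristic cost estimates described above can be sketched as a small helper; the function name and the choice to report costs in $\log_2$ are our own for exposition:

```python
import math

def attack_cost_log2(k: int, q: int, m: int) -> float:
    """Rough log2 cost estimate for the two regimes sketched above.

    Geometric regime (q > 3^k): sieving in a k-dimensional lattice,
    heuristic cost 2^{0.292 k}.
    Statistical regime (q <= 3^k): greedy coordinate recovery,
    cost m * k * 3^k.
    Illustrative only; this helper is not from the paper.
    """
    if q > 3 ** k:
        return 0.292 * k
    return math.log2(m * k * 3 ** k)

# Example with sparsity k = 16 and m = 2^20 samples:
# 3^16 ~ 2^25.4, so q = 2^26 lands in the geometric regime
# and q = 2^10 in the statistical regime.
print(attack_cost_log2(16, 2 ** 26, 2 ** 20))  # geometric regime
print(attack_cost_log2(16, 2 ** 10, 2 ** 20))  # statistical regime
```

Note that the dimension $n$ does not appear: both estimates are exponential only in $k$, which is the point of the attack framework.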

Heuristically, under mild assumptions, full recovery holds with high probability once the sample size is large enough; the resulting complexity is exponential only in $k$ and otherwise polynomial in $n$ (up to polylogarithmic factors), which makes very small $k$ vulnerable even at large dimensions.

Experiments on toy instances confirm the predicted sharp transition. Comparisons with prior work indicate lower attack complexity on some of their parameter sets, while also identifying regimes where our method does not apply.