We present an asymptotic analysis of the ternary variant of Sparse Learning with Errors (spLWE), a structured LWE variant proposed by Jain--Lin--Saha (CRYPTO'24) in which each equation involves only $k$ of the secret coordinates, enabling significantly more efficient computation than dense LWE. Unlike standard LWE, the small-secret regime of spLWE is not automatically reducible to its large-secret counterpart, leaving its asymptotic hardness unclear, particularly when $k$ is very small.
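As a purely illustrative toy model of such sparse equations, the sketch below draws spLWE-style samples in which each row has exactly $k$ nonzero coordinates and the secret is ternary; the parameter names and the uniform nonzero-coefficient and error distributions are our assumptions, not the paper's actual definitions:

```python
import random

def sparse_lwe_sample(n, k, q, s, err_bound=1):
    """One toy spLWE-style sample: a k-sparse row a and
    b = <a, s> + e (mod q). The uniform nonzero coefficients and
    small uniform error are illustrative assumptions."""
    support = random.sample(range(n), k)        # k active coordinates
    a = [0] * n
    for i in support:
        a[i] = random.randrange(1, q)           # nonzero coefficient
    e = random.randint(-err_bound, err_bound)   # small noise term
    b = (sum(a[i] * s[i] for i in support) + e) % q
    return a, b

# Usage: a ternary secret and m samples.
random.seed(0)
n, k, q, m = 32, 3, 97, 50
s = [random.choice([-1, 0, 1]) for _ in range(n)]
samples = [sparse_lwe_sample(n, k, q, s) for _ in range(m)]
```

Each sample touches only $k$ of the $n$ secret coordinates, which is what both attack regimes below exploit.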
We develop a two-pronged attack framework whose cost depends explicitly on the sparsity parameter $k$. In the geometric regime (large $k$), each sparse row reduces to a short-vector problem in a $k$-dimensional lattice, yielding complexity $2^{O(k)}$ via a sieving algorithm. In the statistical regime (small $k$), we propose a greedy coordinate-recovery attack whose running time is exponential in $k$ but polynomial in the dimension $n$ and the number of samples $m$.
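The abstract only names the greedy coordinate-recovery attack; as a hedged, noiseless stand-in (not the paper's algorithm), the sketch below uses a simple peeling strategy: repeatedly take a sample whose support contains exactly one still-unknown coordinate and solve that single equation mod $q$. All parameters and the mix of support sizes are assumptions made for illustration:

```python
import random

def noiseless_sample(n, q, s, size):
    """Toy noiseless spLWE-style sample with a support of `size`."""
    support = random.sample(range(n), size)
    a = [0] * n
    for i in support:
        a[i] = random.randrange(1, q)
    return a, sum(a[i] * s[i] for i in support) % q

def greedy_recover(samples, n, q):
    """Peeling-style illustration (not the paper's actual attack):
    solve any sample with exactly one unknown coordinate, repeat."""
    s_hat = [None] * n
    progress = True
    while progress:
        progress = False
        for a, b in samples:
            unknown = [i for i in range(n) if a[i] and s_hat[i] is None]
            if len(unknown) != 1:
                continue
            i = unknown[0]
            known = sum(a[j] * s_hat[j] for j in range(n)
                        if a[j] and j != i) % q
            # Solve a[i] * s_i = b - known (mod q); q prime, a[i] != 0.
            s_hat[i] = (b - known) * pow(a[i], -1, q) % q
            progress = True
    return s_hat

# Usage: supports of size 1..3 so the peeling has entry points.
random.seed(1)
n, k, q, m = 16, 3, 97, 300
s = [random.choice([-1, 0, 1]) for _ in range(n)]
samples = [noiseless_sample(n, q, s, i % k + 1) for i in range(m)]
s_hat = greedy_recover(samples, n, q)   # entries in [0, q) or None
```

In the noiseless case every recovered entry is exact (as a residue mod $q$); handling the error term and the actual sample-size threshold are where the paper's analysis does the real work.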
Heuristically, under mild assumptions, full recovery succeeds with high probability once the sample size is large enough; the resulting complexity is exponential only in $k$ and otherwise mild (up to polylogarithmic factors), i.e., polynomial in $n$, which makes very small $k$ vulnerable even at large dimensions.
Experiments on toy instances confirm the predicted sharp transition between the two regimes. Complexity comparisons with prior work indicate lower complexity on a few of their parameter sets, while also identifying regimes where our method is not applicable.