In recent years, the shuffle model has emerged as a prevalent paradigm in privacy-preserving data analysis, centered on the principle of \textit{privacy amplification via shuffling}: an individual user's report is obscured by the ``background noise'' of the other users' messages, a phenomenon intuitively known as the \textit{privacy blanket}. This paper initiates a foundational and systematic study of this mechanism from an information-theoretic perspective. We investigate the following optimal noise-design problem: given that a target user's message $X_1$ follows a distribution $P$, what is the blanket noise distribution $Q$ that maximizes privacy? Specifically, when $X_1 \sim P$ and the remaining $n-1$ messages, drawn i.i.d.\ from $Q$, are shuffled together with $X_1$ to produce an output $Y$, we seek the $Q$ that affords the strongest protection for $X_1$ under various metrics, including the mutual information $I(X_1; Y)$, the total-variation-information $T(X_1; Y)$, the message-recovery advantage, and the expected posterior variance.
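Schematically (in our own notation: $P$ for the target user's message distribution, $Q$ for the blanket distribution, $Y$ for the shuffled output), the design problem for, say, the mutual-information metric reads
\[
Q^\star \;\in\; \operatorname*{arg\,min}_{Q}\; I(X_1; Y),
\qquad X_1 \sim P,\quad X_2,\dots,X_n \stackrel{\text{i.i.d.}}{\sim} Q,\quad Y = \mathsf{Shuffle}(X_1,\dots,X_n),
\]
with the analogous problems obtained by substituting the other leakage metrics for $I(X_1;Y)$.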
Our analysis reveals a series of non-intuitive results that challenge the conventional heuristic of setting $Q = P$. First, we prove that the optimal noise distribution generally \textit{deviates} from the target distribution $P$. For binary alphabets, we show that the (near-)uniform distribution is optimal in a strong sense. For general finite alphabets, we derive an explicit analytical form that achieves asymptotic optimality for mutual information. Furthermore, we demonstrate that our analytical framework transcends the shuffle model, yielding new security insights into broader cryptographic primitives such as the \textit{ideal cipher model} and \textit{honey encryption}.
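As a quick numerical illustration of the binary case (a sketch in our own notation, not the paper's code: on a binary alphabet the shuffled output is equivalent to the count of ones, so the leakage can be computed exactly):

```python
from math import comb, log2

def binom_pmf(k, n, q):
    """P[Bin(n, q) = k], zero outside the support."""
    if k < 0 or k > n:
        return 0.0
    return comb(n, k) * q**k * (1 - q)**(n - k)

def leakage(p, q, n):
    """I(X1; Y) in bits: X1 ~ Bernoulli(p) is the target message, the
    n-1 blanket messages are i.i.d. Bernoulli(q), and Y is the number
    of ones among all n shuffled messages (a sufficient statistic for
    the shuffled multiset on a binary alphabet)."""
    mi = 0.0
    for x, px in ((0, 1 - p), (1, p)):
        for y in range(n + 1):
            py_x = binom_pmf(y - x, n - 1, q)  # P[Y = y | X1 = x]
            py = (1 - p) * binom_pmf(y, n - 1, q) + p * binom_pmf(y - 1, n - 1, q)
            if py_x > 0:
                mi += px * py_x * log2(py_x / py)
    return mi

# Matching the blanket to a skewed target vs. using a uniform blanket:
p, n = 0.9, 10
print(f"Q = P:     I = {leakage(p, p, n):.4f} bits")
print(f"Q uniform: I = {leakage(p, 0.5, n):.4f} bits")
```

With these illustrative parameters the uniform blanket leaks noticeably less than the matched choice $Q = P$, consistent with the binary-alphabet result above.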
Finally, we extend our results to the shuffle-DP paradigm, where messages are outputs of $\varepsilon$-locally differentially private mechanisms. We establish a new, tight information-theoretic upper bound on the resulting leakage. This result provides a sharp characterization that matches the optimal privacy-amplification parameters known in the literature, while offering a novel interpretation of the shuffling gain.
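The shuffle-DP setting can be probed with a toy computation (our own construction, not the paper's mechanism): the target user reports a private bit via $\varepsilon$-LDP binary randomized response, and, under the simplifying assumption that the other users' inputs are uniform, their reports are i.i.d. fair coin flips, so the post-shuffle leakage is again an exact sum.

```python
from math import comb, exp, log2

def binom_pmf(k, n, q):
    """P[Bin(n, q) = k], zero outside the support."""
    if k < 0 or k > n:
        return 0.0
    return comb(n, k) * q**k * (1 - q)**(n - k)

def shuffle_dp_leakage(eps, n):
    """I(B; Y) in bits for the target user's private bit B ~ Bernoulli(1/2),
    reported via eps-LDP binary randomized response.  The other n-1 users'
    inputs are assumed uniform, so their reports are i.i.d. Bernoulli(1/2);
    Y is the count of ones among all n shuffled reports."""
    keep = exp(eps) / (1.0 + exp(eps))  # prob. the report equals the true bit

    def cond(p1, y):
        # P[Y = y] when the target's report is Bernoulli(p1)
        return (1 - p1) * binom_pmf(y, n - 1, 0.5) + p1 * binom_pmf(y - 1, n - 1, 0.5)

    mi = 0.0
    for y in range(n + 1):
        py0, py1 = cond(1 - keep, y), cond(keep, y)  # conditionals for B = 0, 1
        py = 0.5 * (py0 + py1)
        for py_b in (py0, py1):
            if py_b > 0:
                mi += 0.5 * py_b * log2(py_b / py)
    return mi

for n in (10, 100, 1000):
    print(f"eps = 1, n = {n:4d}: I = {shuffle_dp_leakage(1.0, n):.6f} bits")
```

The printed leakage shrinks as $n$ grows, which is the privacy-amplification effect the bound quantifies; the toy model says nothing about the tightness claimed in the paper.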