Differential privacy (DP) has been widely used in communication systems, especially those employing federated learning or distributed computing. DP is applied at the data-preparation stage, before line coding and transmission. In contrast to the literature, where differential privacy is discussed mainly from a data-science or computer-science point of view, in this paper we approach DP from a communications perspective. From this perspective, we highlight the contrast and opposition between the maximum a posteriori (MAP) detection problem in communications and the DP problem. We consider two DP mechanisms, namely the Gaussian Mechanism (GM) and the Laplacian Mechanism (LM), and explain why the LM yields ϵ-DP whereas the GM can only guarantee (ϵ, δ)-DP. Furthermore, we derive a new lower bound on the perturbation noise required for the GM to guarantee (ϵ, δ)-DP. Although no closed form is obtained for the new lower bound, a very simple one-dimensional search algorithm suffices to find the lowest possible noise variance. Since the perturbation noise is known to degrade the performance of the subsequent data analysis (such as the convergence of federated learning), the new lower bound on the perturbation noise is expected to improve performance over the classical GM. Moreover, we derive the perturbation noise required for both the LM and the GM when the adversary has auxiliary information in the form of the prior probabilities of the different databases. We show that having auxiliary information is equivalent to reducing the tolerable privacy leakage, and hence it requires more perturbation noise. Finally, we analytically derive the border between the region where the GM is preferable and the region where the LM is preferable.
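To illustrate the idea of calibrating the GM's noise by a one-dimensional search rather than a closed-form rule, the sketch below uses the exact (ϵ, δ) condition of the analytic Gaussian mechanism (Balle and Wang, 2018) as the feasibility test inside a bisection search, and compares the result with the classical calibration σ = Δ√(2 ln(1.25/δ))/ϵ. This is not the lower bound derived in this paper; it is only an assumed stand-in showing how a simple search over σ can beat the classical bound. All function names are hypothetical.

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gm_delta(sigma: float, sens: float, eps: float) -> float:
    """Smallest delta for which adding N(0, sigma^2) noise to a query with
    L2-sensitivity `sens` is (eps, delta)-DP, using the exact Gaussian-
    mechanism condition of Balle & Wang (2018). Decreasing in sigma."""
    a = sens / (2.0 * sigma)
    b = eps * sigma / sens
    return phi(a - b) - math.exp(eps) * phi(-a - b)

def min_sigma(sens: float, eps: float, delta: float, tol: float = 1e-9) -> float:
    """One-dimensional (bisection) search for the smallest sigma that
    satisfies the (eps, delta)-DP constraint."""
    lo, hi = 1e-6, 1.0
    while gm_delta(hi, sens, eps) > delta:   # grow hi until feasible
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gm_delta(mid, sens, eps) <= delta:
            hi = mid                          # mid is feasible, shrink
        else:
            lo = mid
    return hi                                 # hi always satisfies the constraint

def classical_sigma(sens: float, eps: float, delta: float) -> float:
    """Classical GM calibration (valid for eps < 1), for comparison."""
    return sens * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

sigma_opt = min_sigma(sens=1.0, eps=0.5, delta=1e-5)
sigma_cls = classical_sigma(1.0, 0.5, 1e-5)
# sigma_opt is strictly smaller than sigma_cls while meeting the same
# (eps, delta) target, i.e. less perturbation noise for the same privacy.
```

The same search structure applies to any monotone privacy profile, which is why a closed form for the bound is not needed in practice.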