The Cost of Privacy
Growing concern that statistical analysis of datasets containing sensitive personal information may compromise individual privacy has given rise to statistical methods that provide privacy guarantees at the cost of statistical accuracy. However, for many important statistical problems, the optimal tradeoff between accuracy and privacy remains poorly understood.
Differential privacy, introduced in Dwork et al. (2006), is arguably the most widely adopted definition of privacy in statistical data analysis.
A common approach to designing differentially private algorithms is to perturb the output of a non-private algorithm with random noise:
- When the observations are continuous, differential privacy can be guaranteed by adding Laplace or Gaussian noise to the non-private output.
- For discrete outputs (e.g., selecting one candidate from a finite set), differential privacy can be achieved by adding Gumbel noise to the utility scores and reporting the noisy maximizer, which is equivalent to the exponential mechanism.
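The two noise-addition schemes above can be sketched as follows. This is a minimal illustration, not the paper's estimators; the bounded data range, sensitivity calculations, and Gumbel scale $2\Delta/\varepsilon$ are standard textbook choices.

```python
import numpy as np

def private_mean(x, epsilon, lower=0.0, upper=1.0):
    """Release the sample mean with epsilon-differential privacy via the
    Laplace mechanism (a standard sketch; data are clipped to [lower, upper]
    so that one record changes the mean by at most (upper - lower) / n)."""
    x_clipped = np.clip(x, lower, upper)       # bound each record's influence
    sensitivity = (upper - lower) / len(x)     # max change from one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return x_clipped.mean() + noise

def private_argmax(scores, epsilon, sensitivity=1.0):
    """Select the highest-utility candidate with epsilon-differential privacy
    by adding Gumbel noise to each utility score; reporting the noisy argmax
    is equivalent to sampling from the exponential mechanism."""
    scores = np.asarray(scores, dtype=float)
    noise = np.random.gumbel(scale=2 * sensitivity / epsilon, size=scores.shape)
    return int(np.argmax(scores + noise))
```

Note how the noise scale shrinks as `epsilon` grows: a looser privacy budget requires less perturbation, which is exactly the accuracy–privacy tradeoff the paper quantifies.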
Naturally, the perturbed output suffers some loss of accuracy, and the goal of the paper is to quantitatively characterize the tradeoff between differential privacy guarantees and statistical accuracy within a statistical minimax framework.
Specifically, the paper considers mean estimation and linear regression, in both the classical low-dimensional and the high-dimensional settings, under the $(\varepsilon,\delta)$-differential privacy constraint.
The original definition in Dwork et al. (2006) is as follows: a randomized algorithm $M$ is $(\varepsilon,\delta)$-differentially private if, for every pair of datasets $X$ and $X'$ differing in a single record and every measurable set $S$ of outputs,
$$\mathbb{P}\big(M(X) \in S\big) \le e^{\varepsilon}\, \mathbb{P}\big(M(X') \in S\big) + \delta.$$
According to the definition, the two parameters $\varepsilon$ and $\delta$ control the level of privacy against an adversary who attempts to detect the presence of a particular subject in the sample. Roughly speaking, $\varepsilon$ is an upper bound on the amount of influence an individual's record has on the information released, and $\delta$ is the probability that this bound fails to hold.
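As a quick sanity check on this reading of $\varepsilon$, one can verify numerically that the Laplace mechanism's output densities on two neighboring datasets never differ by more than a factor of $e^{\varepsilon}$. The sample size, privacy budget, and dataset means below are hypothetical choices for illustration.

```python
import math

def laplace_pdf(t, mean, scale):
    """Density of a Laplace(mean, scale) random variable at t."""
    return math.exp(-abs(t - mean) / scale) / (2 * scale)

# Hypothetical setup: two neighboring datasets of size n = 100 whose means
# differ by the sensitivity 1/n; the Laplace mechanism adds noise with
# scale = sensitivity / epsilon.
n, epsilon = 100, 1.0
sensitivity = 1.0 / n
scale = sensitivity / epsilon
mean, mean_adj = 0.50, 0.51  # one record changed between the datasets

# Worst-case density ratio over a grid of outputs: this is the "influence"
# that epsilon bounds, and it never exceeds exp(epsilon).
worst_ratio = max(
    laplace_pdf(t, mean, scale) / laplace_pdf(t, mean_adj, scale)
    for t in (i / 1000 for i in range(1001))
)
print(worst_ratio)  # attains exp(epsilon) = e, so the bound is tight
```

Here $\delta = 0$: the pure Laplace mechanism satisfies the bound everywhere, with no failure probability.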
The authors establish the necessary cost of privacy by providing minimax risk lower bounds under the $(\varepsilon, \delta)$-differential privacy constraint.