From Moritz's slides
I have updated the previous entry with the following text, but it may be new to some, hence the reposting. Here is the additional relevant information:
- Andy has a blog post on the use of this holdout-reuse technique, Holdout reuse, and the attendant Reddit comments.
- John Mount wrote on the Win-vector blog about Using differential privacy to reuse training data
- Nina Zumel, A Simpler Explanation of Differential Privacy
- Moritz's slides
- Reddit comments on the Google Blog entry.
Also from the comment section, it looks like one of the authors mentioned the following:
Note that the NIPS 2015 paper relies on results from two papers by other authors that came after the STOC 2015 paper:
Bassily et al: http://arxiv.org/abs/1503.04843
Nissim and Stemmer: http://arxiv.org/abs/1504.05800
Those two papers provide tighter generalization guarantees than the STOC 2015 paper. The NIPS 2015 and Science papers rely on those intermediate results.
The two papers are:
More General Queries and Less Generalization Error in Adaptive Data Analysis
Raef Bassily, Adam Smith, Thomas Steinke, Jonathan Ullman
Adaptivity is an important feature of data analysis---typically the choice of questions asked about a dataset depends on previous interactions with the same dataset. However, generalization error is typically bounded in a non-adaptive model, where all questions are specified before the dataset is drawn. Recent work by Dwork et al. (STOC '15) and Hardt and Ullman (FOCS '14) initiated the formal study of this problem, and gave the first upper and lower bounds on the achievable generalization error for adaptive data analysis.
Specifically, suppose there is an unknown distribution P, and a set of n independent samples x is drawn from P. We seek an algorithm that, given x as input, "accurately" answers a sequence of adaptively chosen "queries" about the unknown distribution P. How many samples n must we draw from the distribution, as a function of the type of queries, the number of queries, and the desired level of accuracy?
In this work we make two new contributions towards resolving this question:
* We give upper bounds on the number of samples n that are needed to answer statistical queries that improve over the bounds of Dwork et al.
* We prove the first upper bounds on the number of samples required to answer more general families of queries. These include arbitrary low-sensitivity queries and the important class of convex risk minimization queries.
As in Dwork et al., our algorithms are based on a connection between differential privacy and generalization error, but we feel that our analysis is simpler and more modular, which may be useful for studying these questions in the future.
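The connection between differential privacy and generalization error that this abstract invokes can be illustrated with the basic Laplace mechanism for answering a single statistical query. The sketch below is illustrative only; the function names, the choice of ϵ, and the synthetic data are my own, not taken from either paper:

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample from a Laplace(0, scale) distribution via inverse-CDF transform.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_statistical_query(sample, query, epsilon, rng):
    """Answer a statistical query (the average of a [0,1]-valued function
    over the sample) with epsilon-differential privacy.

    A statistical query has sensitivity 1/n: changing one record moves the
    empirical mean by at most 1/n, so Laplace noise of scale 1/(n*epsilon)
    suffices for the Laplace mechanism.
    """
    n = len(sample)
    empirical = sum(query(x) for x in sample) / n
    return empirical + laplace_noise(1.0 / (n * epsilon), rng)

# Usage: estimate the fraction of points above 0.5 in a synthetic sample.
rng = random.Random(0)
sample = [rng.random() for _ in range(10_000)]
answer = private_statistical_query(sample, lambda x: float(x > 0.5),
                                   epsilon=0.5, rng=rng)
```

At this sample size the noise scale is 1/(10000 · 0.5) = 0.0002, so the private answer stays very close to the empirical mean; the generalization results above say that answers produced this way remain accurate even when the queries are chosen adaptively.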
On the Generalization Properties of Differential Privacy
Kobbi Nissim, Uri Stemmer
A new line of work, started with Dwork et al., studies the task of answering statistical queries using a sample and relates the problem to the concept of differential privacy. By the Hoeffding bound, a sample of size O(log k / α²) suffices to answer k non-adaptive queries within error α, where the answers are computed by evaluating the statistical queries on the sample. This argument fails when the queries are chosen adaptively (and can hence depend on the sample). Dwork et al. showed that if the answers are computed with (ϵ,δ)-differential privacy then O(ϵ) accuracy is guaranteed with probability 1−O(δ^ϵ). Using the Private Multiplicative Weights mechanism, they concluded that the sample size can still grow polylogarithmically with k.
Very recently, Bassily et al. presented an improved bound and showed that (a variant of) the private multiplicative weights algorithm can answer k adaptively chosen statistical queries using sample complexity that grows logarithmically in k. However, their results no longer hold for every differentially private algorithm, and require modifying the private multiplicative weights algorithm in order to obtain their high probability bounds.
We greatly simplify the results of Dwork et al. and improve on the bound by showing that differential privacy guarantees O(ϵ) accuracy with probability 1−O(δ·log(1/ϵ)/ϵ). It would be tempting to guess that an (ϵ,δ)-differentially private computation should guarantee O(ϵ) accuracy with probability 1−O(δ). However, we show that this is not the case, and that our bound is tight (up to logarithmic factors).
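As a back-of-the-envelope check of the non-adaptive baseline quoted in the abstract, the Hoeffding-plus-union-bound sample size O(log k / α²) can be computed explicitly. This is a sketch under my own assumptions (the failure probability β and the function name are mine):

```python
import math

def hoeffding_sample_size(k, alpha, beta=0.05):
    """Sample size n such that k fixed (non-adaptive) statistical queries
    are each answered within error alpha with probability at least 1 - beta.

    Hoeffding's inequality plus a union bound over the k queries gives
        2k * exp(-2 n alpha^2) <= beta,
    i.e. n >= log(2k / beta) / (2 alpha^2).
    """
    return math.ceil(math.log(2 * k / beta) / (2 * alpha ** 2))

# The required sample size grows only logarithmically in k:
n_small = hoeffding_sample_size(k=1_000, alpha=0.05)      # 2120
n_large = hoeffding_sample_size(k=1_000_000, alpha=0.05)  # 3501
```

A thousandfold increase in the number of (non-adaptive) queries raises the required sample size only modestly; the results surveyed above are about recovering this kind of mild dependence on k even when the queries are adaptive.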
Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.