The topic of sampling in an adversarial context is in fashion these days when it comes to Machine Learning and the training of deep neural networks. Today, we have it in the context of Group Testing. I like the way it sounds. From the paper:

'...While the current literature includes several works on the privacy in GT algorithms for digital objects (e.g., [7]–[10]), these works are based on cryptographic schemes, thus ensuing a high computational burden. More importantly, cryptographic schemes using, for example, public-key or additively homomorphic encryption are based on the assumption that the computational power of the eavesdropper is limited [11],[12]. Information theoretic security, on the other hand, offers privacy at the price of rate [12], [13], or, in this case, additional tests...."

I wonder if some information theoretic scheme like this one could be used in a blockchain or cryptocurrency scenario (how do you algorithmically restore trust? See this 2008 Nuit Blanche post entitled Trust But Verify). I am also wondering how different this is from most of the phase transitions we have noticed so far. Without further ado:

Secure Group Testing by Alejandro Cohen, Asaf Cohen, Omer Gurewitz

The principal mission of Group Testing (GT) is to identify a small subset of "defective" items from a large population, by grouping items into as few test pools as possible. The test outcome of a pool is positive if it contains at least one defective item, and is negative otherwise. GT algorithms are utilized in numerous applications, and in most of them the privacy of the tested subjects, namely, whether they are defective or not, is critical.

In this paper, we consider a scenario where there is an eavesdropper (Eve) which is able to observe a subset of the GT outcomes (pools). We propose a new non-adaptive Secure Group Testing (SGT) algorithm based on information theoretic principles, which keeps the eavesdropper ignorant regarding the items' status. Specifically, when the fraction of tests observed by Eve is 0 ≤ δ < 1, we prove that the number of tests required for both correct reconstruction at the legitimate user (with high probability) and negligible mutual information at Eve's side is 1/(1-δ) times the number of tests required with no secrecy constraint.
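To make the plain (non-secure) group testing setup in the abstract concrete, here is a minimal toy sketch: random Bernoulli pools and the classic COMP-style decoder, where any item appearing in a negative pool is certainly non-defective and everything left over is declared defective. All names and parameter values (`n_items`, `pool_prob`, the defective set) are my own illustrative choices, not from the paper, and this sketch has no secrecy mechanism at all; the paper's result is that protecting such a scheme against an eavesdropper who sees a fraction δ of the pools costs a multiplicative 1/(1-δ) blow-up in tests (e.g., δ = 1/2 doubles them).

```python
import random

def run_group_testing(n_items=100, defectives=frozenset({3, 41, 77}),
                      n_tests=120, pool_prob=0.1, seed=0):
    """Toy noiseless non-adaptive group testing with a COMP-style decoder.

    Each of the n_tests pools includes every item independently with
    probability pool_prob; a pool tests positive iff it contains at
    least one defective item.
    """
    rng = random.Random(seed)
    pools = [{i for i in range(n_items) if rng.random() < pool_prob}
             for _ in range(n_tests)]
    outcomes = [bool(pool & defectives) for pool in pools]

    # COMP decoding: any item seen in a negative pool is certainly
    # non-defective; every item never cleared is declared defective.
    cleared = set()
    for pool, positive in zip(pools, outcomes):
        if not positive:
            cleared |= pool
    return set(range(n_items)) - cleared

print(sorted(run_group_testing()))
```

By construction the decoder never misses a true defective (defectives never appear in negative pools), and with enough tests it typically clears all non-defective items as well, recovering the defective set exactly with high probability.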

**Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!**

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
