Monday, August 01, 2011

Group Testing and Its Connection to Compressed Sensing.

Today is the first day of the Coding, Complexity and Sparsity Workshop at the University of Michigan. Some of the organizers of this meeting are also featured in this post. I look forward to some of these presentations.

Two preprints and a thesis just came out that provide some insight into group testing and how it relates to compressed sensing. If you recall, group testing is probably one of the techniques most easily understandable by non-specialists, and is therefore a nice way to explain compressed sensing in layman's terms (see some examples here, which include finding Cylons on some of Galactica's ships). A recent meeting was held on the subject. It is very nice to see some sort of convergence between these two subjects.

Recovering Simple Signals by Anna Gilbert, Brett Hemenway, Atri Rudra, Martin Strauss, and Mary Wootters. The abstract reads:
The primary goal of compressed sensing and (non-adaptive) combinatorial group testing is to recover a sparse vector x from an underdetermined set of linear equations \phi x = y. Both problems entail solving \phi x = y given \phi and y, but they use different models of arithmetic, different randomness models for \phi, and different guarantees upon the solution x and the class of signals from which it is drawn. In [32], Lipton introduced a model for error correction where the channel is computationally bounded, subject to standard cryptographic assumptions, and produces the error vector x that must be found and then corrected. This has been extended in [24, 34] to create more efficient schemes against polynomial and logspace bounded channels. Inspired by these results in error correction, we view compressed sensing and combinatorial group testing as an adversarial process, where Mallory the adversary produces the vector x to be measured, with limited information about the matrix \phi. We define a number of computationally bounded models for Mallory and show that there are significant gains (in the minimum number of measurements) to be had by relaxing the model from adversarial to computationally or information-theoretically bounded, and not too much (in some cases, nothing at all) is lost by assuming these models over oblivious or statistical models. We also show that differences in adversarial power give rise to different lower bounds for the number of measurements required to defeat such an adversary. By contrast, we show that a randomized one-pass log-space streaming Mallory is almost as powerful as a fully adversarial one for group testing, while for compressed sensing such an adversary is as weak as an oblivious one.
The introduction starts with:
....Group testing was introduced by Dorfman in his seminal 1943 paper to identify soldiers in WWII who had syphilis [15]. Combinatorial group testing has found applications in drug and DNA library screening (the literature is large, see [37, 26, 46, 25] and references therein), multiple access control protocols [5, 45], live baiting of DoS attackers [27], data forensics [23] and data streams [13], among others. The reader is referred to the standard monograph on group testing for more details [16]. Compressed sensing was introduced much more recently in 2006 by [8, 14] as a method for acquiring signals in a “compressed” fashion and then reconstructing good (i.e., sparse) approximations to the signals from those observations. The problem has many practical applications from analog-to-digital conversion [28, 43, 31], novel optical device design [47, 42, 17], pooling designs for rare allele detection [20], and video acquisition [44], to cite a few examples. Even though they were introduced more than half a century apart, compressed sensing and (non-adaptive) combinatorial group testing are similar. Their primary goal is to recover a sparse vector x from an underdetermined set of linear equations \phi x = y. Both problems include solving \phi x = y given \phi and y, but they use different models of arithmetic, different randomness models for \phi, and different guarantees upon the solution x and the class of signals from which it is drawn. Compressed sensing attempts to recover a sparse signal x \in R^N from a small number of linear measurements \phi x. The signal x is usually assumed to be composed of a “head”, the k largest entries (in magnitude) of the vector, supported on a small number of indices, and a tail whose l_1 or l_2 norm is small. The goal is to recover a close approximation x^ to x from the measurements \phi x. In the combinatorial group testing model, the binary matrix \phi represents a pooling design and the goal is to recover a small set (of size d) of “defective” elements from a set of tests \phi x. (Here x \in {0, 1}^N.) In the group testing model, a test fails if any items in the group being tested are defective, so the matrix-vector product \phi x is done using boolean OR....
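That last distinction is the whole difference between the two measurement models: same matrix-vector picture, different arithmetic. A minimal sketch in NumPy (the Bernoulli pooling design and the parameters are my own illustrative choices, not from the paper) shows that an OR-based test outcome can be computed from the same \phi and x:

```python
import numpy as np

rng = np.random.default_rng(0)

N, m, d = 100, 30, 2              # items, tests, number of defectives
phi = rng.random((m, N)) < 0.1    # random binary pooling design

x = np.zeros(N, dtype=bool)
x[rng.choice(N, size=d, replace=False)] = True  # the d defective items

# Group testing arithmetic: a test is positive iff it pools at least one
# defective item, i.e. the "matrix-vector product" phi x uses boolean OR
# instead of real addition. Thresholding the integer product gives the OR.
y = phi @ x.astype(int) > 0

# Same thing computed directly as an OR over the pooled defectives:
assert np.array_equal(y, np.any(phi[:, x], axis=1))
```

In compressed sensing the analogous line would be the real-valued product `phi @ x` with no thresholding, which is why the two fields can share designs for \phi while needing different decoders.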

The abstract of the second preprint reads:
We consider the problem of detecting a small subset of defective items from a large set via non-adaptive "random pooling" group tests. We consider both the case when the measurements are noiseless, and the case when the measurements are noisy (the outcome of each group test may be independently faulty with probability q). Order-optimal results for these scenarios are known in the literature. We give information-theoretic lower bounds on the query complexity of these problems, and provide corresponding computationally efficient algorithms that match the lower bounds up to a constant factor. To the best of our knowledge this work is the first to explicitly estimate such a constant that characterizes the gap between the upper and lower bounds for these problems.
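For the noiseless random-pooling setting, the simplest computationally efficient decoder is the classic COMP rule: any item appearing in a negative test cannot be defective, and whatever is never cleared is declared defective. This is a generic sketch of that standard rule, not the specific algorithm of the preprint; the parameters are illustrative:

```python
import numpy as np

def comp_decode(phi, y):
    """COMP decoder for noiseless OR tests: an item that appears in at
    least one negative test cannot be defective; declare everything
    else defective. Always returns a superset of the true defectives."""
    cleared = np.any(phi[~y], axis=0)   # items seen in some negative test
    return ~cleared

rng = np.random.default_rng(1)
N, m, d = 200, 60, 3
phi = rng.random((m, N)) < 1.0 / d      # Bernoulli random pooling design
x = np.zeros(N, dtype=bool)
x[rng.choice(N, size=d, replace=False)] = True
y = np.any(phi[:, x], axis=1)           # noiseless OR test outcomes

x_hat = comp_decode(phi, y)
assert np.all(x_hat[x])                 # no true defective is ever cleared
```

With enough random tests, the superset coincides with the true defective set with high probability; handling the noisy case (each test flipped with probability q) is where the harder algorithmic and information-theoretic work of the paper comes in.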

The last item is a PhD thesis entitled Applications of Derandomization Theory in Coding by Mahdi Cheraghchi. The abstract reads:
Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity-achieving codes.

Image taken from a slide presentation entitled Group Testing: How to find out what’s important in life by Mark Iwen and Tsvetanka Sendova explaining group testing for educational outreach purposes.
