
Monday, August 05, 2013

Around the blogs in 78 summer hours: What does a high dimensional body look like? Group testing redux, and more.




In response to the Sampling and high-dimensional convex geometry entry, Gabriel Synnaeve tweeted back for me to check out the Spikey Sphere analogy. This is a good one, but recently, in a cube, a starfish, a thin shell, and the central limit theorem (from which I took the figure above), Djalil Chafai showed us where most of the volume of a high-dimensional object is really located. To me, it looks neither like a hedgehog nor a starfish, but rather like a Hatpin Urchin with long, elongated spikes.
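For what it's worth, two standard back-of-the-envelope computations (mine, not from Djalil's post) already hint at both the thin shell and the long spikes: in dimension d, almost all of a ball's volume sits within a tiny sliver of its surface, while the corners of the cube stick out a factor of sqrt(d) beyond the inscribed ball.

import math

for d in (2, 10, 100, 1000):
    # Fraction of a ball's volume lying within 1% of its surface:
    # 1 - (1 - 0.01)**d, which tends to 1 as d grows (the thin shell).
    shell_fraction = 1 - (1 - 0.01) ** d
    # Half-diagonal of the unit cube vs. the radius of its inscribed ball:
    # the corners (the "spikes") stick out by a factor of sqrt(d).
    corner_over_radius = math.sqrt(d)
    print(f"d={d:4d}  shell fraction={shell_fraction:.4f}  "
          f"corner / inscribed radius={corner_over_radius:.1f}")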

Anna Gilbert and I had a short exchange on her blog entry Combinatorial Group Testing Magic Trick, featured earlier here:
Igor

Here is another question/discussion: is there a way to design this trick so that fewer draws are made, at the risk of the magician being the laughingstock of the audience (because he did not find the right number)? And what is the probability of failing, given a number of draws and a specifically designed “game”?

Anna kindly responded with

That’s a great set of questions, too! There are a number of different ways to extend your question, as well. The design in the magic trick is the simplest one possible for finding one “defective.” It’s a deterministically generated matrix that is guaranteed to work no matter what number the participant picks.
You could, instead, allow the matrix to be drawn at random and then stipulate that, with high probability over the choice of the matrix, the magician find any possible number (using the same randomly drawn matrix). Alternatively, you could draw a new random matrix for every number chosen and ask to succeed with high probability over the choice of the matrix on each instance or performance of the trick. And, finally, you could impose a random model on the choice of numbers (assume that your participant is going to draw a number uniformly at random from 0 to 31, say), and ask to succeed with high probability over the choice of the number.
To use more compressive sensing terminology, the first model is the “for all” model of signal recovery, the second the “for each”, and the third the “oblivious.” Unfortunately, the distinctions in these models (at least with respect to the number of tests) don’t appear when you are searching for only one number, but the issues with generating a set of tests either deterministically or randomly, and guaranteeing their success on all performances or just one performance at a time, do come up. As does the difference between non-adaptive and adaptive tests—note that the cards are set before the magician asks the participant any questions.
Lastly, when you have two people in the roles of magician and participant, the anthropomorphization of signal recovery into a recovery algorithm versus an adversary makes a lot more sense!
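My reading of Anna's “simplest possible” deterministic design for one defective, written out as a small sketch (the 0-to-31 range is taken from her example above; the rest is my reconstruction): it is the classic binary card trick, where card i lists every number whose i-th bit is 1, and the magician simply adds the powers of 2 for the cards that got a “yes”.

# Hypothetical sketch of the deterministic one-defective design (binary card trick).
N, tests = 32, 5                      # 2**5 = 32 possible numbers, so 5 cards suffice
cards = [[n for n in range(N) if (n >> i) & 1] for i in range(tests)]

def recover(secret):
    # Non-adaptive: the cards are fixed before any question is asked.
    answers = [secret in cards[i] for i in range(tests)]
    return sum(2 ** i for i, yes in enumerate(answers) if yes)

assert all(recover(n) == n for n in range(N))  # guaranteed for every possible number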

Then I went a little further
One more thing: I wonder when somebody is going to make it more obvious that group testing is some sort of nonlinear / 2-bit compressive sensing, and how making that statement would enrich both fields, as you did when you first made the connection between CS and Group Testing.
Some links related to this are featured in today’s blog entry featuring your trick at

Anna replied:

All one has to do is look at the respective upper and lower bounds for group testing and 1-bit CS to see that they are *quite* different. GT for k defectives requires at least $k^2 \log(N)/\log(k)$ tests, whereas 1-bit CS can recover the signal with only $k \log(N)$ tests. The arithmetic with which measurements are taken really matters. It’s not just what values the measurements themselves take on. In both 1-bit CS and GT, the measurements are 0/1 valued, but *how* those are computed is really quite different. Take a look at the Learning Parity with Noise problem, which is also a problem of learning information about a function from linear observations and which produces measurements with values in 0/1 (sounds familiar?!). It’s believed to be NP-hard and is used as the basis for some crypto-systems.
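To make that last point concrete, here is a toy sketch (entirely my own construction, not from Anna's comment) of how the same kind of 0/1 measurement is computed with three very different arithmetics: an OR for group testing, the sign of a real inner product for 1-bit CS, and an inner product mod 2 for (noiseless) parity learning.

import numpy as np

rng = np.random.default_rng(0)
n, k = 32, 2
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 1.0  # k-sparse 0/1 signal (the "defectives")
a = rng.integers(0, 2, size=n)                 # one 0/1 test vector (a pooling pattern)

y_gt = int(np.any((a == 1) & (x > 0)))         # group testing: OR over the pool
g = rng.choice([-1.0, 1.0], size=n)
y_1bit = int(g @ x >= 0)                       # 1-bit CS: sign of a real inner product
y_parity = int(a @ x) % 2                      # parity learning: XOR, i.e. mod-2 sum

print(y_gt, y_1bit, y_parity)                  # all 0/1, yet computed very differently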

Thanks Anna; I am going to have to check out this Learning Parity with Noise problem.

I have been asked why I thought group testing could be considered a 2-bit problem. For what it's worth, I take the example of the 12-balls / coins problem featured in the Compressed Sensing Puzzle page (statement, solution), which can be summarized as follows: find the one odd ball out of twelve with three comparative weighings, i.e., measurements. The ideal answers for the weighing scheme are +1 (left leaning), -1 (right leaning) or 0 (same weight on both sides), which fits in a 2-bit answer. In real life, there is never a true 0 position: the scale always tilts a little to the left or to the right, so one could consider that there are two 0s, a 0+ and a 0-, which still gets us into 2-bit compressive sensing territory.

In that case, we have $y = \phi(Ax)$, with $\phi$ a nonlinear function and $A$ a linear operator. Each element of $x$ represents the mass of one ball/coin, and $\phi$ looks like this:
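A sketch of that $\phi$, in my reading of the scale analogy: acting componentwise, it is simply the sign function,

$$\phi(t) = \operatorname{sign}(t) = \begin{cases} +1, & t > 0 \text{ (left pan heavier)} \\ \phantom{+}0, & t = 0 \text{ (balanced)} \\ -1, & t < 0 \text{ (right pan heavier)} \end{cases}$$

with the $0^+$/$0^-$ reading above amounting to splitting the middle case into two tiny-tilt outcomes.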



In other news, we had the following interesting blog entries:

Thomas Arildsen has a blog at: http://thomas.arildsen.org/
Mikael
Dustin: AIM Workshop: Frame Theory Intersects Geometry
Josh: A Defense of Computational Physics
Hein: Unanswered Internet Queries
Bob: Using MPTK as a library, part 1
Larry: The Steep Price of Sparsity
Michael: Grounding
Muthu: Computer Science amid Others
Christian: Gelman’s course in Paris, next term!
Vladimir
Charles: Causality vs Correlation: the Projection Operator way
David


In the meantime, on Nuit Blanche we had:




Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
