Sunday, March 27, 2011

CS: Least Squares vs Basis Pursuit: Multiplicative Noise Effect on the Donoho-Tanner Phase Transition (Part 5)

Let me summarize what we have so far (Part 1, 2, 3, 4). We are interested in whether, given y (the result of applying A to x), one can recover that same x by solving y = (A + \epsilon*E) x with a specific solver. A and E are random Gaussian matrices, while \epsilon is chosen at will. We evaluate the possibility of recovering x as a function of parameters such as k/N, the sparsity level of x, and m/N, the ratio of the number of measurements to the ambient dimension. The linear system is underdetermined when m/N is less than 1, exactly determined when m = N, and overdetermined when m is greater than N.
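For concreteness, here is a minimal sketch of that setup in Python (the dimensions N, m, k, the noise level \epsilon and the seed are illustrative choices, not the exact values behind the figures):

import numpy as np

rng = np.random.default_rng(0)

N, m, k = 200, 100, 10    # ambient dimension, measurements, sparsity
eps = 0.005               # multiplicative noise level (0.5%)

# k-sparse signal x with Gaussian nonzeros at random positions
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, N))   # true measurement matrix
E = rng.standard_normal((m, N))   # perturbation matrix
y = A @ x                         # y is produced by the true A ...
A_used = A + eps * E              # ... but the solver only sees A + eps*E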

In the noiseless case, i.e. \epsilon = 0, we have the following graph when using an L_1 solver (leading to the well-known Donoho-Tanner phase transition for sparse recovery) and when using the Least Squares solver (a trivial graph):
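As a reference point, here is a hedged sketch of the two recovery routes being compared, reusing the variables above (set eps = 0 there for the noiseless case). Basis Pursuit is cast here as a linear program via scipy.optimize.linprog; in the actual experiments an L1 solver such as SL0 was used, and the 1e-2 success threshold is an arbitrary illustrative choice:

from scipy.optimize import linprog

# Least Squares: the minimum-l2-norm solution of the underdetermined system
x_ls, *_ = np.linalg.lstsq(A_used, y, rcond=None)

# Basis Pursuit: min ||x||_1  s.t.  A_used x = y,
# written as an LP with x = u - v, u >= 0, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A_used, -A_used])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_bp = res.x[:N] - res.x[N:]

# Declare a reconstruction successful when the relative error is small
for name, xh in (("Least Squares", x_ls), ("Basis Pursuit", x_bp)):
    ok = np.linalg.norm(xh - x) / np.linalg.norm(x) < 1e-2
    print(name, "recovered x" if ok else "failed")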



In the noisy case, \epsilon = 0.5%, here is the case for the Least Squares solver

and the same case with the SL0 solver (solving the Basis Pursuit problem)
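SL0 is the Smoothed-L0 algorithm of Mohimani, Babaie-Zadeh and Jutten; a bare-bones sketch, with illustrative parameter values, looks roughly like this:

def sl0(A, y, sigma_min=1e-4, sigma_decrease=0.5, mu0=2.0, L=3):
    # Smoothed-L0: start from the minimum-norm solution, then descend a
    # Gaussian-smoothed l0 cost while staying on the set {s : A s = y}
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ y
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(L):
            delta = s * np.exp(-s**2 / sigma**2)
            s = s - mu0 * delta               # gradient step on smoothed cost
            s = s - A_pinv @ (A @ s - y)      # project back onto A s = y
        sigma *= sigma_decrease
    return s

x_sl0 = sl0(A_used, y)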

and when using the Cross and Bouquet approach with the SL0 solver, we have:
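The Cross and Bouquet approach augments the dictionary with an identity block so that the solver can absorb the matrix perturbation as a dense error term e. A sketch, reusing the sl0 function above:

AI = np.hstack([A_used, np.eye(m)])   # the [A I] dictionary
z = sl0(AI, y)                        # z stacks the estimates of x and e
x_cab, e_cab = z[:N], z[N:]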

What does this tell us?

With \epsilon (the imprecision on A) at 0.5%, the L1 solver provides a small advantage over Least Squares in the underdetermined range. This small advantage is limited to sparsity levels below 10% and to numbers of measurements (equations) less than half the size of the ambient space.

A smaller \epsilon implies a better return on investment when using an L1 solver (BP) in the underdetermined case.

A larger \epsilon implies that the Least Squares solver can find a solution in the overdetermined case only if that solution does not exceed a critical sparsity threshold.

What is the takeaway so far?

If you want to do compressive sensing and are using Basis Pursuit, you'd better know the coefficients of your measurement matrix to within 0.5%.

If you just want to sense a signal and are using Least Squares for recovery, an uncertainty of 2% means that you need more than six times as many measurements as unknowns, and you will still be limited in the sparsity level of the solution.

Next, we'll revisit these cases with an implementation of the LASSO instead of an implementation of Basis Pursuit (with or without the Cross and Bouquet approach).

2 comments:

Bob et Carla said...

Hi Igor,

Five days later, I am still confused by what you are showing. What are the colors? What does "No Trick delta ..." mean? What are the axes? Why do you show these results as images and not lines? Are you evaluating recoveries for every x,y pair? If so, how many?

Igor said...

Bob,

I am glad we are having this discussion after five posts :-)

Yes, every (well, maybe not every; see my explanation below) pixel in these graphs represents the result of 20 reconstructions. I need to detail the nitty-gritty tweaking of the algorithm at the end of this series of posts. The line is the border between the region where most reconstructions fail and the region where most reconstructions work. In particular, the line is drawn in light blue when more than 2 reconstructions out of 20 have failed. The region where most reconstructions work is always below it. The colors do not matter as much, but I will be releasing the code later.

'No Trick' means that I am not using the [A I] trick from the Cross and Bouquet approach; in other words, I am doing the vanilla reconstruction with whatever solver I state I am using...

Not all pixels were computed: since I have an old machine, I need to get these computations running fast. I compute the pixels starting from below (k/N = 1/N) all the way up to the border. When I reach the border, the algorithm moves on to the next m/N and recomputes some pixels below the border, again working up to the next pixels on the border. It is a simple adaptive algorithm, but obviously any publication would need the full computations done. Right now, I am going the way of the pirates. A rough sketch of the idea is below.
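In Python, the sweep looks roughly like this (reusing the sl0 function sketched earlier; the dimensions, grid steps and thresholds are illustrative, not the exact values used for the figures):

def trial(m, k, N=200, eps=0.005):
    # one reconstruction attempt at a given (m, k) pixel
    rng = np.random.default_rng()
    x = np.zeros(N)
    x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, N))
    E = rng.standard_normal((m, N))
    x_hat = sl0(A + eps * E, A @ x)   # y = A x, the solver sees A + eps*E
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x) < 1e-2

def border(N=200, n_trials=20, max_fail=2):
    # for each m, climb in k from the bottom of the diagram until more
    # than max_fail of the n_trials reconstructions fail; that k marks
    # the empirical phase-transition border for this column
    ks = {}
    for m in range(10, N + 1, 10):
        k = 1
        while k <= m:
            fails = sum(not trial(m, k, N) for _ in range(n_trials))
            if fails > max_fail:
                break
            k += 1
        ks[m] = k
    return ks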
