Tuesday, March 29, 2011

CS: LASSO: Multiplicative Noise Effect on the Donoho-Tanner Phase Transition (Part 6)

Instead of performing Basis Pursuit and Least Squares to recover solutions of systems that have multiplicative noise, we now investigate a LASSO solver. An implementation can be found in SPGL1. With \epsilon = 0.5%, we obtain the following Donoho-Tanner phase transition:
For \epsilon = 1%, we have:
Obviously, we now get a better result in the underdetermined regime. What is also interesting is that, as \epsilon grows, the LASSO solver also becomes sensitive to the sparsity of the signal of interest. This is the same unexpected (to me at least) behavior we observed with the traditional Least Squares solver.
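
To make the setup concrete, here is a minimal Python sketch of a single recovery trial. This is not the code behind the figures: the multiplicative-noise model (an entrywise perturbation of the sensing matrix by a factor 1 + \epsilon * randn), the success threshold, and the projected-gradient solver standing in for SPGL1's LASSO mode (minimize ||A z - b||_2 subject to ||z||_1 <= \tau) are assumptions made for illustration only. The parameter \tau is set to the known sparsity of the 0/1 test vector, as discussed in the comments below.

    import numpy as np

    def project_l1_ball(v, tau):
        # Euclidean projection of v onto the l1 ball of radius tau (Duchi et al. 2008).
        if np.abs(v).sum() <= tau:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
        theta = (css[rho] - tau) / (rho + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def lasso_trial(m, k, N, eps, n_iter=1000, tol=1e-2, rng=None):
        # One trial: 0/1 k-sparse signal, Gaussian sensing matrix, multiplicative
        # noise of level eps on the matrix used for measurement (assumed model),
        # LASSO recovery with the nominal matrix and the oracle tau = k.
        rng = np.random.default_rng() if rng is None else rng
        x = np.zeros(N)
        x[rng.choice(N, k, replace=False)] = 1.0
        A = rng.standard_normal((m, N)) / np.sqrt(m)
        A_noisy = A * (1.0 + eps * rng.standard_normal((m, N)))
        b = A_noisy @ x                # measurements taken with the perturbed matrix
        tau = float(k)                 # ||x||_1 = k for a 0/1 vector with k ones
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient of 0.5*||Az-b||^2
        z = np.zeros(N)
        for _ in range(n_iter):        # projected gradient, a stand-in for spg_lasso
            z = project_l1_ball(z - A.T @ (A @ z - b) / L, tau)
        return np.linalg.norm(z - x) / np.linalg.norm(x) < tol

A call such as lasso_trial(m=50, k=10, N=200, eps=0.01) then plays the role of one of the 20 reconstructions behind each pixel of the graphs.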

In the previous installment, Bob kindly pointed out the following:

Hi Igor,
Five days later, I am still confused by what you are showing. What are the colors? What does "No Trick delta ..." mean? What are the axes? Why do you show these results as images and not lines? Are you evaluating recoveries for every x,y pair? If so, how many? 


To which I replied (I have added some details to the initial reply):

Bob, 

I am glad we are having this discussion after five posts :-)

Yes, every pixel in these graphs (well, maybe not every pixel; see my explanation below) represents the result of 20 reconstructions. I will detail the nitty-gritty tweaking of the algorithm at the end of this series of posts. The line is the border between the region where most reconstructions fail and the region where most reconstructions work. In particular, the line is drawn in light blue wherever more than 2 reconstructions out of 20 have failed. The region where most reconstructions work is always below the line. The color does not matter as much (the redder a pixel, the more reconstructions failed there), but I will be releasing the code later. The x-axis is m/N (measurements over ambient dimension/number of unknowns) and the y-axis is k/N (sparsity over ambient dimension/number of unknowns).
'No trick' means that I am not using the [A I] trick of the Cross-and-Bouquet approach. In other words, 'No Trick' means I am doing the vanilla reconstruction with whatever solver I state I am using...
Not all pixels were computed: since I have an old machine, I need these computations to run fast. For each m/N, I compute the pixels starting from below (k/N = 1/N) all the way up to the transition border. When I reach the border, the algorithm moves on to the next m/N and again computes pixels from below the border up to the next points on the border. It is a simple adaptive algorithm, but obviously any publication would need the full computations done. Right now, I am going the way of the pirates.
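
For the record, here is a simplified Python sketch of that adaptive sweep (not the actual code, which I will release later): each column restarts its climb from k = 1 rather than from just below the previous border, and trial(m, k) stands for a single reconstruction test such as the lasso_trial function sketched earlier in this post.

    import numpy as np

    def phase_transition_border(N, trial, n_trials=20, max_failures=2):
        # For each m = 1..N, climb k upward from 1 and record the last k at which
        # at most max_failures of the n_trials reconstructions fail; that k traces
        # the empirical border of the phase transition for the solver under test.
        border = np.zeros(N + 1, dtype=int)
        for m in range(1, N + 1):
            for k in range(1, N + 1):
                failures = sum(not trial(m, k) for _ in range(n_trials))
                if failures > max_failures:
                    break              # crossed the transition, move on to the next m
                border[m] = k
        return border

Plotting border[m] / N against m / N would then trace an empirical version of the border line shown in the figures above, e.g. via phase_transition_border(200, lambda m, k: lasso_trial(m, k, 200, eps=0.01)).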


Thanks Bob! Looks like you are getting some satisfaction out of CMP.

3 comments:

  1. I'm interested in knowing what "solving LASSO" entails, i.e., how is your regularization/constraint parameter chosen? It seems like it would be a pretty crucial component of your results, unless you are solving SPGL1 to completion, but then you would be solving BPDN?

  2. Hello Tim,

    Thanks for asking this question. I guess you are asking what parameter \tau I am using (in the SPGL1 sense)?

    My known solution is a vector of zeros and ones. I also know the sparsity of the "unknown" vector beforehand, so I know what the \tau parameter should be (see the short note after the comments). So in essence, I am really showing the best DT transition, since I feed the exact \tau parameter to SPGL1 when I ask it to solve my problem. Does that answer your question?

    Cheers,

    Igor.

  3. Ah yes, so this is an "oracle" lasso solution. That makes sense now, thanks!

    PS: the implications of this for really large systems that experience numerical round-off errors are very scary!

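A short note to make the exchange above concrete (my summary, not part of the original comment thread). SPGL1 handles three related problems:

  (BP)     minimize ||x||_1        subject to A x = b
  (BPDN)   minimize ||x||_1        subject to ||A x - b||_2 <= \sigma
  (LASSO)  minimize ||A x - b||_2  subject to ||x||_1 <= \tau

Since the test vector has exactly k entries equal to 1 and the rest equal to 0, its l1 norm is ||x||_1 = k, so the oracle choice of the LASSO parameter is simply \tau = k.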