
Sunday, April 03, 2011

CS: The Dantzig Selector solvers: Multiplicative Noise Effect on the Donoho-Tanner Phase Transition (Part 7)

I have run two Dantzig Selector solvers after watching Vincent Rivoirard's presentation on The Dantzig selector for high dimensional statistical problems. I don't know if it is because I was introduced to the subject a second time around, but something clicked. I'll have to come back to it later, as it looks to me like some instance of coded aperture imaging. Anyway, I tried both the DS solver of Salman Asif and Justin Romberg at Georgia Tech and that of Emmanuel Candes and Justin Romberg in the L1-magic package. Because of the way these solvers are set up, I had to change the problem: in this entry, I am looking only at non-complex measurement matrices.
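
For readers who want to reproduce this kind of trial, here is a minimal sketch of one recovery experiment, assuming a Gaussian measurement matrix, a small multiplicative perturbation of its entries, and the Dantzig Selector written as a convex program in cvxpy; the figures below were produced with the Homotopy and L1-magic solvers, not with this code.

import numpy as np
import cvxpy as cp

def ds_trial(n=200, m=80, k=10, noise_level=0.005, lam=0.01, seed=0):
    # One trial: recover a k-sparse x from y = A_noisy x with the Dantzig Selector.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian measurement matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    # Multiplicative noise: every entry of A is perturbed by ~0.5 percent.
    A_noisy = A * (1.0 + noise_level * rng.standard_normal((m, n)))
    y = A_noisy @ x_true
    # Dantzig Selector: min ||x||_1  s.t.  ||A^T (A x - y)||_inf <= lam
    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(x)),
                      [cp.norm(A.T @ (A @ x - y), 'inf') <= lam])
    prob.solve()
    return np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true)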

Here are the results for the Donoho-Tanner phase transition with a 0.5% multiplicative noise to the measurement matrices.

Using the Homotopy solver
Using the solver in L1-magic

Clearly, I must not be doing something right (perhaps not setting the right parameter) with the homotopy solver. It is, however, very fast, which is an advantage in the noiseless case. At this level of noise, LASSO and the Dantzig Selector seem to provide the same phase transition. Increasing the noise to 1% and using the L1-magic solver, we get:



Comparing this to the LASSO solver, it looks like LASSO provides a larger area of the DT phase transition where one can recover sparse vectors.

Maybe I should now focus on the beginning of the x-axis, i.e. a small number of measurements, since my computations have a resolution of 2% (1/50).
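
For the record, the phase transition plot itself comes from sweeping the (delta = m/n, rho = k/m) plane and recording the fraction of successful recoveries. A sketch of that outer loop at the 1/50 resolution mentioned above, using the hypothetical ds_trial function sketched earlier and an arbitrary success threshold, could look like this.

import numpy as np

def phase_transition(n=200, grid=50, trials=5, tol=1e-2):
    # success[j, i] = fraction of trials recovered at (delta_i, rho_j)
    success = np.zeros((grid, grid))
    for i, delta in enumerate(np.linspace(1.0 / grid, 1.0, grid)):
        m = max(1, int(round(delta * n)))
        for j, rho in enumerate(np.linspace(1.0 / grid, 1.0, grid)):
            k = max(1, int(round(rho * m)))
            errs = [ds_trial(n=n, m=m, k=k, seed=t) for t in range(trials)]
            success[j, i] = np.mean([e < tol for e in errs])
    return success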

2 comments:

  1. Hi,

    Thanks for your blog, it's an excellent source for inter-disciplinary spreading of CS-related ideas.

    By any chance, could you comment on the differences between l1-magic and LASSO? I am more exposed to the statistics community, so I don't hear about l1-magic, and it seemed to me that LASSO was state of the art with regard to l1 penalization, with LARS or coordinate descent the best optimization routines to solve the problem, depending on the shape of your data.

  2. Gael,

    As a first approximation, I'd say that l1-magic solves its problems with interior-point methods, which I think are supposedly slow compared to the methods you are mentioning.


    You may also want to check which problems L1-magic actually solves:
    http://www.acm.caltech.edu/l1magic/downloads/l1magic.pdf

    They do not include the LASSO, and this is probably a difference (I am making a guess here). In the LASSO you solve:
    min_x ||Ax - b||_2   subject to   ||x||_1 <= tau

    In most CS problems, we don't really have access to tau, and most solvers therefore solve problems like Basis Pursuit or Basis Pursuit Denoise (see the SPGL1 presentation: http://www.cs.ubc.ca/labs/scl/spgl1/ )
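
    To make the distinction concrete, here are the three formulations side by side, in the notation of the SPGL1 documentation (tau is the l1 budget, sigma the noise level):

        LASSO:  min_x ||Ax - b||_2   subject to   ||x||_1 <= tau
        BP:     min_x ||x||_1        subject to   Ax = b
        BPDN:   min_x ||x||_1        subject to   ||Ax - b||_2 <= sigma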

    So when I use LASSO in the case of this entry, I am really using some sort of Oracle-LASSO, because I know in advance what tau is.

    Hope this helps,

    Igor.
