I'm interested in knowing what "solving LASSO" entails, i.e., how is your regularization/constraint parameter chosen? It seems like it would be a pretty crucial component of your results, unless you are running SPGL1 to completion, but then you would be solving BPDN?
To which I responded:
Thanks for asking this question. I guess you are asking what parameter \tau I am using (in the SPGL1 sense)?
My known solution is an instance of zeros and ones. I also know the sparsity of the "unknown" vector beforehand, so I know what the \tau parameter should be. So in essence, I am really showing the best possible DT transition, since I feed SPGL1 the exact \tau parameter when I ask it to solve my problem. Does that answer your question?
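To make the setup concrete, here is a minimal sketch of this "oracle" LASSO experiment. The problem sizes, the Gaussian sensing matrix, and the projected-gradient solver are my own illustrative assumptions (the actual experiments use SPGL1); the point is only that since the unknown x0 has entries in {0, 1}, the oracle constraint parameter is simply tau = ||x0||_1 = k, the known sparsity.

```python
import numpy as np

def project_l1_ball(v, tau):
    # Euclidean projection onto {x : ||x||_1 <= tau} via sorting.
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_constrained(A, b, tau, n_iter=1000):
    # Projected gradient for min ||Ax - b||_2^2  s.t.  ||x||_1 <= tau,
    # a simple stand-in for the LASSO subproblem SPGL1 solves.
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1_ball(x - A.T @ (A @ x - b) / L, tau)
    return x

rng = np.random.default_rng(0)
m, n, k = 80, 200, 10                 # hypothetical sizes for illustration
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = 1.0   # unknown is zeros and ones
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x0

x_hat = lasso_constrained(A, b, tau=float(k))  # oracle tau = ||x0||_1 = k
print(np.linalg.norm(x_hat - x0))
```

With the exact oracle tau, recovery succeeds whenever (k/m, m/n) sits below the Donoho-Tanner transition curve; with a mis-specified tau the recovered support degrades, which is exactly why the parameter choice matters.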
Tim then responded with:
Ah yes, so this is an "oracle" lasso solution. That makes sense now, thanks!
PS: the implications of this on really large systems that experience numerical roundoff errors are very scary!
Tim is really nice; he diplomatically calls it an "oracle".
As for numerical roundoff on very large systems, this may indeed be an issue. While watching the videos of Emmanuel Candes and Vincent Rivoirard, I was reminded of the work Sparse Recovery in Linear Models with Matrix Uncertainty by Alexandre Tsybakov and Mathieu Rosenbaum. The main paper is here. Maybe I ought to look into the Dantzig Selector and the influence of multiplicative noise on the Donoho-Tanner phase transition.