I'm interested in knowing what "solving LASSO" entails, i.e., how is your regularization/constraint parameter chosen? It seems like it would be a pretty crucial component of your results, unless you are solving SPGL1 to completion, but then you would be solving BPDN?
To which I responded:
Hello Tim,
Thanks for asking this question. I guess you are asking what parameter \tau I am using (in the SPGL1 sense)?
My known solution is an instance of zeros and ones. I also know the sparsity of the "unknown" vector beforehand, so I know what the \tau parameter should be. So in essence, I am really showing the best Donoho-Tanner (DT) transition, since I know the exact \tau parameter when I ask SPGL1 to solve my problem. Does that answer your question?
Cheers,
Igor.
Tim then responded with:
Ah yes, so this is an "oracle" lasso solution. That makes sense now, thanks!
PS: the implications of this for really large systems that experience numerical roundoff errors are very scary!
Tim is really nice; he diplomatically calls it an "oracle".
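To make the "oracle" choice concrete: since the known solution is a vector of zeros and ones with known sparsity k, its l1 norm is exactly k, so the solver can be handed \tau = k. Here is a minimal self-contained sketch of that setup; it uses plain NumPy with a projected-gradient LASSO solve standing in for SPGL1, and the problem sizes and the projection helper are mine, purely illustrative:

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection onto {x : ||x||_1 <= tau}
    (the sorting/thresholding construction of Duchi et al.)."""
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(0)
m, n, k = 80, 200, 10              # measurements, dimension, sparsity

# known 0/1 solution with exactly k ones: the "oracle" tau is its l1 norm, i.e. k
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = 1.0
tau = np.abs(x0).sum()             # = k, known beforehand

A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x0

# LASSO in the SPGL1 sense: minimize ||Ax - b||_2 subject to ||x||_1 <= tau,
# solved here by projected gradient descent
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(2000):
    x = project_l1_ball(x - (A.T @ (A @ x - b)) / L, tau)

print("recovery error:", np.linalg.norm(x - x0))
```

With \tau set from the true sparsity, the recovery sits at the best possible point of the DT curve; in practice \tau is unknown, which is exactly Tim's point.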
We will discuss a generalization of the Shannon Sampling Theorem that allows for reconstruction of signals in arbitrary bases. Not only can one reconstruct in arbitrary bases, but this can also be done in a completely stable way. When extra information is available, such as sparsity or compressibility of the signal in a particular basis, one may reduce the number of samples dramatically. This is done via Compressed Sensing techniques; however, the usual finite-dimensional framework is not sufficient. To overcome this obstacle I'll introduce the concept of Infinite-Dimensional Compressed Sensing.
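For readers who want the shape of the problem: in the infinite-dimensional setting one observes finitely many samples taken against one basis and reconstructs coefficients in another. A sketch of the resulting program, in my own notation rather than necessarily the speaker's (U is the infinite change-of-basis matrix between the reconstruction and sampling bases, \Omega the finite set of observed sample indices, and \hat{f} the vector of samples \langle f, s_j \rangle):

```latex
\min_{x \in \ell^{1}(\mathbb{N})} \; \|x\|_{1}
\quad \text{subject to} \quad
P_{\Omega}\, U x = P_{\Omega} \hat{f},
```

where $P_{\Omega}$ restricts an infinite vector to the indices in $\Omega$; the point of the infinite-dimensional theory is that this remains stable even though $U$ is an infinite matrix.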
In this talk we discuss the numerical solution of minimisation problems promoting higher-order sparsity properties. In particular, we are interested in total variation minimisation, which enforces sparsity on the gradient of the solution. Several methods presented in the literature perform total variation minimisation very efficiently, e.g., for image processing problems of small or medium size. Because of their iterative-sequential formulation, however, none of them is able to address extremely large problems in real time, such as 4D imaging (spatial plus temporal dimensions) for functional magnetic resonance in nuclear medical imaging, astronomical imaging, or global terrestrial seismic tomography. For these cases, we propose subspace splitting techniques, which accelerate the numerics by dimension reduction and preconditioning. A careful analysis of these algorithms is furnished, together with a presentation of their application to some imaging tasks.
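As a point of reference for what total variation minimisation asks of the numerics, here is a toy 2D TV denoiser: gradient descent on a smoothed TV energy in plain NumPy. This is the small-scale baseline, not the subspace-splitting method of the talk, and all parameter values are my own illustrative choices:

```python
import numpy as np

def tv_denoise(f, lam=0.2, eps=0.05, step=0.01, n_iter=500):
    """Minimise ||u - f||^2 / (2*lam) + sum_ij sqrt(|grad u|_ij^2 + eps^2)
    by explicit gradient descent on the smoothed TV energy."""
    u = f.copy()
    for _ in range(n_iter):
        # forward differences with Neumann boundary (last difference is 0)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # backward differences of the normalised gradient = discrete divergence
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) / lam - div)   # descend the full energy gradient
    return u

# smoke test on a noisy step image
rng = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
print("noisy error   :", np.abs(noisy - clean).mean())
print("denoised error:", np.abs(tv_denoise(noisy) - clean).mean())
```

Each iteration touches every pixel and the iterations are inherently sequential; scaling this to a 4D volume is what motivates the dimension reduction and preconditioning mentioned in the abstract.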
Measurement of radio waves forms the basis for a number of sensing applications, including medical imaging (MRI), remote sensing (synthetic aperture radar), and electronic warfare (wideband spectral monitoring). This talk will discuss the application of compressed sensing to these different RF-based sensing/imaging problems. In each case the application of compressed sensing depends crucially on the signal model. We will consider the different issues raised by each application and the potential of compressed sensing to transform the sensing technology.
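To illustrate why the signal model matters, here is a toy version of the wideband-monitoring case: a signal that is sparse in frequency, observed through a few random time samples, and recovered by iterative soft thresholding (ISTA) on its DFT coefficients. All sizes and parameters are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 512, 128, 5                # length, samples kept, active tones

# signal model: sparse in frequency
c0 = np.zeros(n, dtype=complex)
support = np.sort(rng.choice(n, k, replace=False))
c0[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
x = np.fft.ifft(c0, norm="ortho")    # time-domain signal

idx = np.sort(rng.choice(n, m, replace=False))
y = x[idx]                           # m random time samples

# forward map A c = (unitary ifft of c) restricted to idx, and its adjoint
def A(c):
    return np.fft.ifft(c, norm="ortho")[idx]

def AH(r):
    z = np.zeros(n, dtype=complex)
    z[idx] = r
    return np.fft.fft(z, norm="ortho")

# ISTA: gradient step on ||A c - y||^2 / 2, then complex soft threshold
lam = 1e-3
c = np.zeros(n, dtype=complex)
for _ in range(500):                 # unit step is safe since ||A|| <= 1
    g = c - AH(A(c) - y)
    c = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - lam, 0.0)

print("true support     :", support)
print("estimated support:", np.sort(np.argsort(np.abs(c))[-k:]))
```

Swap the sparsity basis (say, a wavelet frame for SAR scenes) and the measurement operator, and the same recovery machinery applies; the signal model is what changes from application to application.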
The Dantzig selector was introduced by Emmanuel Candes and Terence Tao in an outstanding paper dealing with prediction and variable selection in the high-dimensional setting extensively considered in statistics recently. Under sparsity assumptions, the variable selection performed by the Dantzig selector can improve estimation accuracy by effectively identifying the subset of important predictors, and thus enhance model interpretability through parsimonious representations. The goal of this talk is to present the main ideas of the paper by Candes and Tao and the remarkable results they obtained. We also wish to emphasize some of the extensions proposed in different settings, in particular for density estimation considered in the dictionary approach. Finally, connections between the Dantzig selector and the popular lasso procedure will also be highlighted.
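For reference, the two programs being compared, in standard notation (observations y = X\beta + noise, design matrix X, tuning parameter \lambda):

```latex
% Dantzig selector: constrain the residual's correlation with the predictors
\hat{\beta}_{\mathrm{DS}} \in \arg\min_{\beta} \|\beta\|_{1}
  \quad \text{subject to} \quad
  \|X^{\top}(y - X\beta)\|_{\infty} \le \lambda ;
% lasso: penalise the l1 norm of the coefficients directly
\hat{\beta}_{\mathrm{lasso}} \in \arg\min_{\beta}
  \tfrac{1}{2}\, \|y - X\beta\|_{2}^{2} + \lambda\, \|\beta\|_{1} .
```

One connection worth keeping in mind: the lasso optimality conditions force \|X^{\top}(y - X\hat{\beta}_{\mathrm{lasso}})\|_{\infty} \le \lambda, so every lasso solution is feasible for the Dantzig selector at the same \lambda.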