Thursday, March 31, 2011

CS: Q&A with Tim Lin, the Dantzig Selector and the DT Phase Transition?

In the comment section of the entry featuring the LASSO results on the effect of multiplicative noise on the Donoho-Tanner phase transition, Tim Lin kindly asked the following question:

I'm interested in knowing what "solving LASSO" entails, i.e., how is your regularization/constraint parameter chosen? It seems like it would be a pretty crucial component of your results, unless you are running SPGL1 to completion, but then you would be solving BPDN?

To which I responded:

Hello Tim,

Thanks for asking this question. I guess you are asking what parameter \tau I am using (in the SPGL1 sense)?

My known solution is a vector of zeros and ones. I also know the sparsity of the "unknown" vector beforehand, so I know what the \tau parameter should be. In essence, I am really showing the best possible DT transition, since I hand SPGL1 the exact \tau parameter when I ask it to solve my problem. Does that answer your question?

Cheers,

Igor.

Tim then responded with:

Ah yes, so this is an "oracle" lasso solution. That makes sense now, thanks!

PS: the implications of this for really large systems that experience numerical roundoff errors are very scary!

Tim is really nice; he diplomatically calls it an "oracle".

As for numerical roundoff on very large systems, this may indeed be an issue. While watching the videos of Emmanuel Candes and Vincent Rivoirard, I was reminded of Sparse Recovery in Linear Models with Matrix Uncertainty by Alexandre Tsybakov and Mathieu Rosenbaum. The main paper is here. Maybe I ought to look into how multiplicative noise influences the Donoho-Tanner phase transition for the Dantzig selector.
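For readers wondering what the "oracle" \tau looks like in practice, here is a minimal sketch (my own toy illustration, not the actual experiment script): the unknown is a k-sparse vector of zeros and ones, so \tau = ||x||_1 = k exactly, and the LASSO problem min ||Ax - b||_2 s.t. ||x||_1 <= \tau can be solved with a simple projected-gradient loop standing in for SPGL1. The helper names are made up.

```python
import numpy as np

def project_l1_ball(v, tau):
    # Euclidean projection onto the l1 ball {x : ||x||_1 <= tau}
    # via the standard sort-and-threshold construction.
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]                      # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - tau))[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)            # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_oracle_tau(A, b, tau, n_iter=3000):
    # Projected gradient for  min ||Ax - b||_2  s.t.  ||x||_1 <= tau,
    # with the oracle tau supplied by the experimenter.
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L for the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1_ball(x - step * (A.T @ (A @ x - b)), tau)
    return x

# A k-sparse 0/1 vector: the oracle value is simply tau = k.
rng = np.random.default_rng(0)
p, n, k = 50, 30, 4
x0 = np.zeros(p)
x0[:k] = 1.0
A = rng.standard_normal((n, p)) / np.sqrt(n)
b = A @ x0
x_hat = lasso_oracle_tau(A, b, tau=float(k))
```

With the exact \tau and noiseless data, the recovered x_hat matches the planted 0/1 vector; misestimating \tau is precisely where the non-oracle difficulty lives.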




Wednesday, March 30, 2011

How To Become a Rockstar

Besides publishing good papers, make sure that whenever you do a presentation somewhere, there is a camera recording it.
(from here)

I have added some additional videos to yesterday's post. They are all here.

CS: LMS Invited Lecturer Series 2011, Cambridge UK: Emmanuel Candes, Carola Schoenlieb, Anders Hansen, Mike Davies, Vincent Rivoirard

Last week, Emmanuel Candes was invited to the Centre for Mathematical Sciences in Cambridge, UK, to give a series of lectures; other speakers were invited as well. Here are the audio or video recordings of the talks given at the LMS Invited Lecturer Series 2011:


Emmanuel Candes, Lecture 1: Some history and a glossy introduction

[Available as MPEG-4 video (640x360), Flash video (484x272), iPod video (480x270), and MP3 audio; the same formats are offered for each of the talks below.]

Lecture 2: Probabilistic approach to compressed sensing



Lecture 3: Deterministic approach to compressed sensing




Lecture 4: Incoherent sampling theorem




Lecture 5: Noisy compressed sensing/sparse regression





Lecture 6: Matrix completion




Lecture 7: Robust principal components analysis and some numerical optimization




Lecture 8: Some Applications and Hardware Implementations






Anders Hansen, Generalized sampling and infinite-dimensional compressed sensing



We will discuss a generalization of the Shannon Sampling Theorem that allows for reconstruction of signals in arbitrary bases. Not only can one reconstruct in arbitrary bases, but this can also be done in a completely stable way. When extra information is available, such as sparsity or compressibility of the signal in a particular basis, one may reduce the number of samples dramatically. This is done via Compressed Sensing techniques; however, the usual finite-dimensional framework is not sufficient. To overcome this obstacle I'll introduce the concept of Infinite-Dimensional Compressed Sensing.

Carola Schoenlieb, Minimisation of sparse higher-order energies for large-scale problems in imaging



In this talk we discuss the numerical solution of minimisation problems promoting higher-order sparsity properties. In particular, we are interested in total variation minimisation, which enforces sparsity on the gradient of the solution. Several methods presented in the literature perform total variation minimisation very efficiently, e.g., for image processing problems of small or medium size. Because of their iterative-sequential formulation, none of them is able to address in real time extremely large problems, such as 4D imaging (spatial plus temporal dimensions) for functional magnetic resonance in nuclear medical imaging, astronomical imaging or global terrestrial seismic tomography. For these cases, we propose subspace splitting techniques, which accelerate the numerics by dimension reduction and preconditioning. A careful analysis of these algorithms is furnished with a presentation of their application to some imaging tasks.
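As a side note, the simplest instance of the energies Carola mentions, one-dimensional total variation denoising, can be sketched in a few lines via projected gradient on the dual (a Chambolle-style iteration). This is my own toy illustration under standard assumptions, not code from the talk, and the function names are hypothetical.

```python
import numpy as np

def div(p):
    # Discrete divergence, the negative adjoint of the forward difference np.diff.
    return np.concatenate([[p[0]], np.diff(p), [-p[-1]]])

def tv_denoise_1d(b, lam, n_iter=1000):
    # Solves  min_x 0.5*||x - b||^2 + lam * sum_i |x[i+1] - x[i]|
    # by projected gradient on the dual variable p, |p| <= 1,
    # recovering the primal solution as x = b + lam * div(p).
    p = np.zeros(len(b) - 1)
    step = 1.0 / (4.0 * lam)          # safe step: ||div||^2 <= 4 in 1D
    for _ in range(n_iter):
        x = b + lam * div(p)
        p = np.clip(p + step * np.diff(x), -1.0, 1.0)
    return b + lam * div(p)

# Piecewise-constant signal: its gradient is sparse, which is exactly
# what the TV penalty exploits.
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy, lam=0.3)
```

The dual view is what makes the problem smooth enough for a plain gradient loop; the large-scale 4D problems in the abstract are where this simple sequential scheme breaks down and the proposed subspace splitting comes in.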

Mike Davies, Compressed Sensing in RF



Measurement of radio waves forms the basis for a number of sensing applications including: medical imaging (MRI), remote sensing (Synthetic Aperture Radar), and electronic warfare (wideband spectral monitoring). This talk will discuss the application of compressed sensing to these different RF-based sensing/imaging problems. In each case the application of compressed sensing depends crucially on the signal model. We will consider the different issues raised by each application and the potential of compressed sensing to transform the sensing technology.

Vincent Rivoirard, The Dantzig selector for high dimensional statistical problems



The Dantzig selector was introduced by Emmanuel Candes and Terence Tao in an outstanding paper that deals with prediction and variable selection in the high-dimensional setting extensively considered in statistics recently. Under sparsity assumptions, variable selection performed by the Dantzig selector can improve estimation accuracy by effectively identifying the subset of important predictors, and can enhance the model interpretability afforded by parsimonious representations. The goal of this talk is to present the main ideas of the paper by Candes and Tao and the remarkable results they obtained. We also wish to emphasize some of the extensions proposed in different settings, in particular for density estimation considered in the dictionary approach. Finally, connections between the Dantzig selector and the popular lasso procedure will also be highlighted.

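Since the Dantzig selector comes up both in Vincent's talk and in the earlier Q&A, here is a small sketch of how it reduces to a linear program (my own illustration, not material from the lecture): the estimator is min ||x||_1 subject to ||A^T(Ax - b)||_inf <= lambda, which becomes an LP once |x| is majorized by auxiliary variables u.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, b, lam):
    # Dantzig selector:  min ||x||_1  s.t.  ||A.T @ (A @ x - b)||_inf <= lam,
    # recast as an LP in (x, u): minimize sum(u) with |x| <= u and the
    # correlation constraint split into two one-sided inequalities.
    n, p = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    I, Z = np.eye(p), np.zeros((p, p))
    c = np.concatenate([np.zeros(p), np.ones(p)])   # objective: sum(u)
    A_ub = np.block([[ I,  -I],                     #  x - u <= 0
                     [-I,  -I],                     # -x - u <= 0
                     [ AtA, Z],                     #  A'A x <= lam + A'b
                     [-AtA, Z]])                    # -A'A x <= lam - A'b
    b_ub = np.concatenate([np.zeros(2 * p), lam + Atb, lam - Atb])
    bounds = [(None, None)] * p + [(0, None)] * p   # x free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Noiseless sanity check: with a tiny lambda the constraint pins x near
# the planted sparse vector, and the l1 objective selects it.
rng = np.random.default_rng(0)
n, p = 20, 8
x0 = np.zeros(p)
x0[[1, 4]] = [1.5, -2.0]
A = rng.standard_normal((n, p)) / np.sqrt(n)
b = A @ x0
x_ds = dantzig_selector(A, b, lam=1e-6)
```

The LASSO admits a quadratic program of similar shape, which is one concrete way to see the kinship between the two procedures that the talk highlights.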