Last week, Emmanuel Candes was invited to the Centre for Mathematical Sciences in Cambridge, UK, to give a series of lectures; other speakers were invited as well. Here are the audio-only and video recordings of the talks from the LMS Invited Lecturer Series 2011:
Emmanuel Candes, Lecture 1: Some history and a glossy introduction
Available formats: MPEG-4 Video (640x360, 1.84 Mbit/s, 1.21 GB), Flash Video (484x272, 568.67 kbit/s, 372.57 MB), iPod Video (480x270, 506.21 kbit/s, 331.65 MB), MP3 (44100 Hz, 125.0 kbit/s, 81.70 MB).
Lecture 2: Probabilistic approach to compressed sensing
Lecture 3: Deterministic approach to compressed sensing
Lecture 4: Incoherent sampling theorem
Lecture 5: Noisy compressed sensing/sparse regression
Available formats: MPEG-4 Video (640x360, 1.84 Mbit/s, 1.18 GB), Flash Video (484x272, 568.7 kbit/s, 361.90 MB), iPod Video (480x270, 506.2 kbit/s, 322.13 MB), MP3 (44100 Hz, 125.01 kbit/s, 79.35 MB).
Lecture 6: Matrix completion
Lecture 7: Robust principal components analysis and some numerical optimization
Lecture 8: Some applications and hardware implementations
Anders Hansen, Generalized sampling and infinite-dimensional compressed sensing
We will discuss a generalization of the Shannon Sampling Theorem that allows for reconstruction of signals in arbitrary bases. Not only can one reconstruct in arbitrary bases, but this can also be done in a completely stable way. When extra information is available, such as sparsity or compressibility of the signal in a particular basis, one may reduce the number of samples dramatically. This is done via compressed sensing techniques; however, the usual finite-dimensional framework is not sufficient. To overcome this obstacle I'll introduce the concept of Infinite-Dimensional Compressed Sensing.
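To make the finite-dimensional setting concrete (the point of the talk is precisely that this framework needs to be extended), here is a minimal sketch, not taken from the talk, of sparse recovery from a small number of random measurements. The Gaussian sensing matrix, the synthetic sparse signal, and the use of orthogonal matching pursuit as the recovery method are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 256, 4  # measurements, signal length, sparsity

# Random Gaussian sensing matrix with unit-norm columns
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)

# A k-sparse ground-truth signal on a random support
x = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x[support] = rng.standard_normal(k)

y = A @ x  # noiseless measurements, n << m

# Orthogonal matching pursuit: greedily pick the column most
# correlated with the residual, then re-fit by least squares.
residual, idx = y.copy(), []
for _ in range(k):
    idx.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    residual = y - A[:, idx] @ coef

x_hat = np.zeros(m)
x_hat[idx] = coef
print(np.linalg.norm(x_hat - x))  # recovery error
```

With 64 random measurements of a 4-sparse length-256 signal, exact recovery succeeds with high probability; the infinite-dimensional theory of the talk asks what replaces this picture when the signal lives in a function space.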
Carola Schoenlieb, Minimisation of sparse higher-order energies for large-scale problems in imaging
In this talk we discuss the numerical solution of minimisation problems promoting higher-order sparsity properties. In particular, we are interested in total variation minimisation, which enforces sparsity on the gradient of the solution. Several methods in the literature perform total variation minimisation very efficiently, e.g., for image processing problems of small or medium size. Because of their iterative-sequential formulation, however, none of them can address extremely large problems in real time, such as 4D imaging (spatial plus temporal dimensions) for functional magnetic resonance in nuclear medical imaging, astronomical imaging, or global terrestrial seismic tomography. For these cases, we propose subspace splitting techniques, which accelerate the numerics by dimension reduction and preconditioning. A careful analysis of these algorithms is furnished with a presentation of their application to some imaging tasks.
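As a toy illustration of what total variation minimisation does, here is a minimal 1D sketch, unrelated to the subspace splitting methods of the talk: it solves the ROF denoising model by projected gradient ascent on the dual problem, on a made-up piecewise-constant signal. The step size, iteration count, and regularisation weight are illustrative choices:

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=1000, tau=0.25):
    """Solve min_x 0.5*||x - y||^2 + lam*||Dx||_1 (1D ROF model),
    where D is the forward-difference operator, by projected
    gradient ascent on the dual variable p with |p_i| <= lam."""
    p = np.zeros(len(y) - 1)  # one dual variable per jump
    for _ in range(n_iter):
        x = y + np.diff(p, prepend=0, append=0)  # x = y - D^T p
        p = np.clip(p + tau * np.diff(x), -lam, lam)
    return y + np.diff(p, prepend=0, append=0)

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.3], 50)  # piecewise-constant signal
noisy = truth + 0.1 * rng.standard_normal(truth.size)
denoised = tv_denoise_1d(noisy, lam=0.5)
print(np.linalg.norm(noisy - truth), np.linalg.norm(denoised - truth))
```

Because the gradient of a piecewise-constant signal is sparse, the TV penalty removes the noise within segments while preserving the jumps; the talk's concern is scaling this kind of computation to 4D data.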
Mike Davies, Compressed Sensing in RF
Measurement of radio waves forms the basis for a number of sensing applications, including medical imaging (MRI), remote sensing (Synthetic Aperture Radar), and electronic warfare (wideband spectral monitoring). This talk will discuss the application of compressed sensing to these different RF-based sensing/imaging problems. In each case the application of compressed sensing depends crucially on the signal model. We will consider the different issues raised by each application and the potential of compressed sensing to transform the sensing technology.
Vincent Rivoirard, The Dantzig selector for high dimensional statistical problems
The Dantzig selector was introduced by Emmanuel Candes and Terence Tao in an outstanding paper dealing with prediction and variable selection in the high-dimensional settings extensively studied in statistics in recent years. Under sparsity assumptions, the variable selection performed by the Dantzig selector can improve estimation accuracy by effectively identifying the subset of important predictors, and can enhance model interpretability through parsimonious representations. The goal of this talk is to present the main ideas of the paper by Candes and Tao and the remarkable results they obtained. We also wish to emphasize some extensions proposed in different settings, in particular for density estimation in the dictionary approach. Finally, connections between the Dantzig selector and the popular lasso procedure will also be highlighted.
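For readers who want to see the estimator itself: the Dantzig selector minimises ||b||_1 subject to ||X^T(y - Xb)||_inf <= lambda, which is a linear program. Here is a minimal sketch on synthetic data, not from the talk; the design matrix, sparsity level, and lambda are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, p, k = 100, 20, 3
X = rng.standard_normal((n, p)) / np.sqrt(n)  # roughly unit-norm columns
beta = np.zeros(p)
beta[rng.choice(p, size=k, replace=False)] = [1.5, -2.0, 1.0]
y = X @ beta  # noiseless, for simplicity
lam = 1e-3

# Dantzig selector: min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= lam.
# Split b = u - v with u, v >= 0 (linprog's default bounds) to get
# a linear program in the 2p nonnegative variables (u, v).
XtX, Xty = X.T @ X, X.T @ y
A_ub = np.block([[XtX, -XtX], [-XtX, XtX]])
b_ub = np.concatenate([Xty + lam, lam - Xty])
res = linprog(c=np.ones(2 * p), A_ub=A_ub, b_ub=b_ub)
beta_hat = res.x[:p] - res.x[p:]
print(np.linalg.norm(beta_hat - beta))  # estimation error
```

The l1 objective drives most coordinates of the estimate to exactly zero, which is the variable selection the abstract refers to; the lasso solves a closely related problem with the correlation constraint replaced by a penalised residual.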