Tuesday, April 21, 2009

CS: Compressive Diffraction Tomography for Weakly Scattering, On Streaming, a course and some ML python code

The good stuff keeps on coming; this is very interesting:

Compressive Diffraction Tomography for Weakly Scattering by Lianlin Li, Wenji Zhang, Fang Li. The abstract reads:
The well-known diffraction tomography (DT) faces an appealing requirement: successful reconstruction from few-view and limited-angle data. Inspired by the well-known compressive sensing (CS), accurate super-resolution reconstruction from highly sparse data for weakly scattering obstacles is investigated in this paper. To realize the compressive data measurement, and in particular to obtain super-resolution reconstruction from highly sparse data, a compressive system realized by surrounding the probed obstacles with random media is proposed and empirically studied. Several interesting conclusions are drawn: (a) if the desired resolution is within the range from […] to […], the K-sparse, N-unknown imaging problem can be solved exactly from […] measurements, which is comparable to the number of measurements required by a Gaussian random matrix in the compressive sensing literature. (b) By incorporating the random media, which enforce the multi-path effect of wave propagation, the resulting measurement matrix is incoherent with the wavelet matrix; in other words, when the probed obstacles are sparse in a wavelet frame, the required number of measurements for successful reconstruction is similar to the above. (c) If the expected resolution is lower than […], the required number of measurements of the proposed compressive system is almost identical to the free-space case. (d) There is also a tradeoff to be made between imaging resolution and the number of measurements. In addition, by introducing complex Gaussian variables, a fast sparse Bayesian algorithm is slightly modified to deal with complex-valued optimization with sparsity constraints.


The approach parallels the Random Lens Imager of Bill Freeman et al [2] or the in-situ random medium scattering of Larry Carin et al [1]. I'll add this new technique to the Compressive Sensing Hardware page.
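Out of curiosity, here is a small numerical sketch of the general idea: a complex Gaussian matrix stands in for the randomized multipath measurement operator, and plain orthogonal matching pursuit stands in for the authors' sparse Bayesian solver (both stand-ins are mine, and all sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5   # unknowns, measurements, sparsity (illustrative values)

# Complex Gaussian matrix as a stand-in for the random-media operator
A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)

# K-sparse complex scene and its noiseless compressive measurements
x = np.zeros(N, dtype=complex)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
y = A @ x

def omp(A, y, K):
    """Orthogonal matching pursuit for complex-valued data."""
    residual, index_set = y.copy(), []
    for _ in range(K):
        # pick the column most correlated with the residual
        index_set.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, index_set], y, rcond=None)
        residual = y - A[:, index_set] @ coef
    x_hat = np.zeros(A.shape[1], dtype=complex)
    x_hat[index_set] = coef
    return x_hat

x_hat = omp(A, y, K)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```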

Angshul Majumdar has modified my implementation of the re-weighted L_p algorithm of Rick Chartrand and Wotao Yin [3] so that "it can handle operators instead of explicit matrices. It requires Sparco." It is here. Thanks Angshul!
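For readers who want the gist without Sparco, here is a minimal dense-matrix sketch of the Chartrand-Yin iteratively reweighted least squares scheme; Angshul's version replaces the explicit matrix products below with operator calls:

```python
import numpy as np

def irls_lp(A, y, p=0.5, iters=100, eps=1.0):
    """Iteratively reweighted least squares for L_p minimization, p < 1,
    following the Chartrand-Yin scheme with an annealed smoothing eps."""
    x = A.T @ np.linalg.solve(A @ A.T, y)        # minimum-norm starting point
    for _ in range(iters):
        q = (x**2 + eps) ** (1 - p / 2)          # inverse weights, Q = diag(q)
        # weighted least-norm solution: x = Q A^T (A Q A^T)^{-1} y
        x_new = q * (A.T @ np.linalg.solve((A * q) @ A.T, y))
        if np.linalg.norm(x_new - x) < np.sqrt(eps) / 100:
            eps /= 10                            # shrink eps once iterates settle
        x = x_new
    return x

# Quick check on a synthetic K-sparse problem
rng = np.random.default_rng(1)
M, N, K = 40, 128, 6
A = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x_hat = irls_lp(A, A @ x0)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```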

Muthu Muthukrishnan has a post up on the release of some of the slides of the recent DIMACS workshop on Streaming.

Laurent Duval pointed out to me that Ron DeVore will give a course in Paris. From the flyer:

High Dimensional Approximation: Theory and Algorithms

Ronald DeVore
Chaire d'excellence de la Fondation Sciences Mathématiques de Paris

Schedule: Tuesday, June 16, from 10:30 to 12:30; Thursday, June 18, Tuesday, June 23, and Thursday, June 25, from 10:00 to 12:30.

Location: room 2E1, Laboratoire Jacques-Louis Lions, 175 rue du Chevaleret, 75013 Paris.

This course will study scientific problems that are challenged by the fact that they live naturally in a Euclidean space R^N with N very large. We mention three settings which will be considered in this short course.

Broad-banded signals: It is well known that a bandlimited signal (a function on R) can be captured exactly from its equispaced time samples taken at the Nyquist rate. However, when the bandwidth of the signal is large, the required sampling rate cannot be implemented accurately in circuitry. This leads us to consider other models for signals which are still realistic but enable capturing the signal with far fewer measurements. We shall study one such setting, called Compressed Sensing (CS), which models signals as having a sparse or compressible representation in a suitably chosen basis of waveforms. We shall develop the mathematical foundations of compressed sensing, culminating with a derivation of the optimal rate/distortion that can be achieved with CS and a discussion of the encoders/decoders which achieve this optimality.
Mathematical learning: The mathematical theory of data fitting in a stochastic setting is known as learning. Many interesting learning problems are set in high dimension and must engage the problem of approximating functions of a large number of variables. Classical approaches to approximation in several variables suffer from the curse of dimensionality: the approximation rate deteriorates severely as the dimension increases. We shall consider possible ways to avoid the curse of dimensionality through variable reduction and sparsity.
Stochastic PDEs: The classical approach to solving stochastic PDEs is the Monte Carlo method. While such methods are robust, they have severe limits in terms of their rate/distortion bounds. We shall study alternatives to Monte Carlo methods which utilize Wiener chaos expansions to convert the stochastic PDE to a deterministic parametric PDE. The number of parameters in this approach is infinite, and therefore its success depends on capturing a function of an infinite number of variables in a numerically realistic way. We shall derive properties of the solutions to such parametric problems in the elliptic case that show these solutions have highly accurate sparse approximations. We shall then derive potential numerical approaches to capturing these sparse approximants. Theory and algorithms for high dimensional problems are in their infancy, and so this course will expose many interesting open questions.
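As a quick numerical aside on the first topic of the flyer (my own toy illustration, not material from the course): a signal with a few active DCT coefficients can be recovered from far fewer random measurements than its bandwidth would suggest, here with a basic ISTA solver for the L1-regularized problem (solver choice and all sizes are mine):

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(2)
N, M, K = 512, 96, 8          # signal length, measurements, sparsity (assumed)

# Signal with K active DCT coefficients
alpha0 = np.zeros(N)
alpha0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
signal = idct(alpha0, norm="ortho")

# Random Gaussian measurements and the effective matrix acting on alpha
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ signal
A = Phi @ idct(np.eye(N), norm="ortho", axis=0)

# ISTA: gradient step on 0.5*||y - A a||^2, then soft threshold for the L1 term
step = 1.0 / np.linalg.norm(A, 2) ** 2
alpha, lam = np.zeros(N), 0.01
for _ in range(1000):
    alpha = alpha - step * (A.T @ (A @ alpha - y))
    alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - step * lam, 0.0)

print("relative error:", np.linalg.norm(alpha - alpha0) / np.linalg.norm(alpha0))
```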
Thanks Laurent!
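One more toy aside before moving on, this time on the stochastic PDE topic above (again my own illustration): for a one-dimensional model problem whose parametric solution is smooth in the random coefficient, a handful of deterministic quadrature points in parameter space beat many Monte Carlo samples.

```python
import numpy as np

# Model problem: -(a u')' = 1 on (0,1), u(0) = u(1) = 0, with random
# coefficient a = 1 + xi/2, xi ~ Uniform(-1, 1). The parametric solution
# at the midpoint is u(1/2; xi) = 1 / (8 a(xi)), smooth in xi.
u_mid = lambda xi: 1.0 / (8.0 * (1.0 + 0.5 * xi))

exact = np.log(3.0) / 8.0                            # E[u(1/2)] in closed form

rng = np.random.default_rng(3)
mc = u_mid(rng.uniform(-1.0, 1.0, 10_000)).mean()    # Monte Carlo, 10^4 samples

nodes, weights = np.polynomial.legendre.leggauss(5)  # 5-point Gauss-Legendre
quad = 0.5 * (weights @ u_mid(nodes))                # expectation w.r.t. U(-1,1)

print(f"exact {exact:.8f}   MC(1e4) {mc:.8f}   GL(5) {quad:.8f}")
```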

On a different note, the Python code implementing the examples of the new book Machine Learning: An Algorithmic Perspective by Stephen Marsland is here.

References:
[1] Lawrence Carin, Dehong Liu, Wenbin Lin and Bin Guo, Compressive Sensing for Numerical Multi-Static Scattering Analysis.
[2] Rob Fergus, Antonio Torralba and William T. Freeman, Random Lens Imaging.
[3] Rick Chartrand and Wotao Yin, Iteratively Reweighted Algorithms for Compressive Sensing.
