The good stuff keeps on coming; this is very interesting:
Compressive Diffraction Tomography for Weakly Scattering by Lianlin Li, Wenji Zhang, Fang Li. The abstract reads:
An appealing requirement of the well-known diffraction tomography (DT) is successful reconstruction from few-view and limited-angle data. Inspired by the well-known compressive sensing (CS), accurate super-resolution reconstruction from highly sparse data for weak scatterers is investigated in this paper. To realize the compressive data measurement, and in particular to obtain super-resolution reconstruction from highly sparse data, a compressive system realized by surrounding the probed obstacles with random media is proposed and empirically studied. Several interesting conclusions are drawn: (a) if the desired resolution is within the range from … to …, the K-sparse, N-unknown imaging can be obtained exactly from … measurements, which is comparable to the number of measurements required by the Gaussian random matrix in the compressive sensing literature. (b) By incorporating the random media, which are used to enforce the multi-path effect of wave propagation, the resulting measurement matrix is incoherent with the wavelet matrix; in other words, when the probed obstacles are sparse in the wavelet framework, the required number of measurements for successful reconstruction is similar to the above. (c) If the expected resolution is lower than …, the required number of measurements of the proposed compressive system is almost identical to the free-space case. (d) There is also a tradeoff to be made between the imaging resolution and the number of measurements. In addition, through the introduction of a complex Gaussian variable, a fast sparse Bayesian algorithm has been slightly modified to deal with complex-valued optimization under sparsity constraints.
The approach parallels the Random Lens Imager of Bill Freeman et al. [2] or the in-situ random-medium scattering of Larry Carin et al. [1]. I'll add this new technique to the Compressive Sensing Hardware page.
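As an aside, the complex-valued twist mentioned at the end of the abstract usually reduces to a standard trick: embedding the complex linear system y = Ax into an equivalent real one, with the real and imaginary parts of each unknown grouped together. Here is a minimal sketch of that generic embedding; it is not the paper's sparse Bayesian algorithm, and the toy dimensions are mine:

```python
# Standard real embedding of a complex linear system y = A x. This is NOT
# the paper's modified sparse Bayesian solver; it only shows the generic
# trick of recasting complex-valued sparse recovery as a real-valued one.
import numpy as np

def real_embedding(A, y):
    """Map complex y = A x to an equivalent real system y_r = A_r x_r,
    where x_r stacks the real and imaginary parts of x."""
    A_r = np.block([[A.real, -A.imag],
                    [A.imag,  A.real]])
    y_r = np.concatenate([y.real, y.imag])
    return A_r, y_r

# Toy check: 20 complex measurements of a 3-sparse length-50 vector.
rng = np.random.default_rng(0)
M, N, K = 20, 50, 3
A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
x = np.zeros(N, dtype=complex)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
y = A @ x

A_r, y_r = real_embedding(A, y)
x_r = np.concatenate([x.real, x.imag])
print(np.allclose(A_r @ x_r, y_r))  # True: the two systems are equivalent
```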
Angshul Majumdar has modified my implementation of the re-weighted L_p algorithm of Rick Chartrand and Wotao Yin [3] so that "it can handle operators instead of explicit matrices. It requires Sparco." It is here. Thanks, Angshul!
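For readers new to [3], here is a simplified dense-matrix sketch of the underlying iteratively reweighted least squares (IRLS) iteration. It is a plain restatement of the Chartrand-Yin scheme, not Angshul's operator-based version, which would replace the direct solve with conjugate gradients on A W A^T:

```python
# Simplified dense-matrix sketch of iteratively reweighted least squares
# (IRLS) for l_p minimization (0 < p <= 1) subject to A x = y, in the
# spirit of Chartrand & Yin [3]. An operator version, as in Angshul
# Majumdar's code, would replace np.linalg.solve with conjugate gradients.
import numpy as np

def irls_lp(A, y, p=0.5, n_iter=200):
    x = np.linalg.lstsq(A, y, rcond=None)[0]     # least-squares start
    eps = 1.0
    for _ in range(n_iter):
        w = (x**2 + eps) ** (1.0 - p / 2.0)      # inverse weights, W^{-1}
        x_new = w * (A.T @ np.linalg.solve((A * w) @ A.T, y))
        if np.linalg.norm(x_new - x) < np.sqrt(eps) / 100.0:
            eps /= 10.0                          # tighten the smoothing
        x = x_new
        if eps < 1e-8:
            break
    return x

# Toy demo: recover a 5-sparse vector from 30 random measurements.
rng = np.random.default_rng(1)
M, N, K = 30, 100, 5
A = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x_hat = irls_lp(A, A @ x0)
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))  # small relative error
```

The eps-reduction schedule mirrors the heuristic in [3]: the smoothing parameter is only decreased once the iterates stall at the current smoothing level.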
Muthu Muthukrishnan has a post up on the release of some of the slides from the recent DIMACS workshop on Streaming; they are:
- Alexandr Andoni, MIT: Approximating Edit Distance in Near-Linear Time
- Choongsoon Bae, Google Inc., U.C. Berkeley: Versions of Random Forests: Properties and Performances
- Amit Chakrabarti, Dartmouth College: Lower Bounds for Gap-Hamming-Distance and Consequences for Data Stream Algorithms
- Graham Cormode, AT&T: Finding Frequent Items in Data Streams
- Wei Dai, University of Illinois at Urbana-Champaign: Weighted Superimposed Codes And Constrained Compressed Sensing
- Piotr Indyk, MIT: Sparse Recovery Using Sparse (Random) Matrices
- Kunal Talwar, MSR SVC: The Price of Privacy and the Limits of LP decoding
- Ping Li, Cornell University: Compressed Counting
- Andrea Montanari, Stanford University: Matrix Completion from Fewer Entries
- Jelani Nelson, MIT: Sketching and Streaming Entropy via Approximation Theory (Parts I and II)
- Ben Recht, Caltech: Low-rank Matrix Completion via Convex Optimization
- Martin J. Strauss, University of Michigan: Euclidean Sparse Recovery with Optimal Measurement (in progress)
- Joel A. Tropp, California Institute of Technology: CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Martin Wainwright, UC Berkeley: High-dimensional graphical model selection: Practical and information-theoretic limits
Also of interest, Ronald DeVore is giving a short course in Paris:

High Dimensional Approximation: Theory and Algorithms
Ronald DeVore, Chaire d'excellence of the Fondation Sciences Mathématiques de Paris
Schedule: Tuesday, June 16, 10:30-12:30; Thursday, June 18, Tuesday, June 23, and Thursday, June 25, 10:00-12:30.
Location: Room 2E1, Laboratoire Jacques-Louis Lions, 175 rue du Chevaleret, 75013 Paris.
This course will study scientific problems that are challenged by the fact that they live naturally in a Euclidean space R^N with N very large. We mention three settings that will be considered in this short course.
Broad-banded signals: It is well known that a bandlimited signal (a function on R) can be captured exactly from its equispaced time samples taken at the Nyquist rate. However, when the bandwidth of the signal is large, that sampling rate cannot be implemented accurately in circuitry. This leads us to consider other models for signals which are still realistic but enable capturing the signal with far fewer measurements. We shall study one such setting, called Compressed Sensing (CS), which models signals as having a sparse or compressible representation in a suitably chosen basis of waveforms. We shall develop the mathematical foundations of compressed sensing, culminating with a derivation of the optimal rate/distortion that can be achieved with CS and a discussion of the encoder/decoders which achieve this optimality.
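As a toy illustration of this setting (using orthogonal matching pursuit, a simple greedy decoder, rather than the optimal encoder/decoder pairs the course will derive), here is a sketch recovering a K-sparse vector from on the order of K log(N/K) random Gaussian measurements:

```python
# Toy compressed-sensing demo: recover a K-sparse x in R^N from M << N
# random Gaussian measurements y = A x, using orthogonal matching pursuit
# (OMP), a simple greedy decoder. Dimensions here are illustrative.
import numpy as np

def omp(A, y, K):
    """Greedy recovery of a K-sparse vector from y = A x."""
    residual, support = y.copy(), []
    for _ in range(K):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
N, K = 256, 8
M = 4 * K * int(np.log(N / K))                  # on the order of K log(N/K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x_hat = omp(A, A @ x0, K)
print(np.linalg.norm(x_hat - x0))               # ~ 0 for exact recovery
```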
Mathematical learning: The mathematical theory of data fitting in a stochastic setting is known as learning. Many interesting learning problems are set in high dimension and must engage the problem of approximating functions of a large number of variables. Classical approaches to approximation in several variables suffer from the curse of dimensionality: the approximation rate degrades severely as the dimension increases. We shall consider possible ways to avoid the curse of dimensionality through variable reduction and sparsity.
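A few lines of arithmetic make the curse concrete for the classical tensor-product approach: resolving each of d variables with m grid points costs m^d samples.

```python
# The curse of dimensionality in one loop: a tensor-product grid with
# m points per coordinate needs m**d samples to resolve d variables.
m = 10
for d in (1, 2, 5, 10, 20):
    print(f"d = {d:2d}: {m**d:.3e} grid points")
# By d = 20 this is 1e20 samples, which is hopeless, and is why one turns
# to variable reduction and sparsity instead.
```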
Stochastic PDEs: The classical approach to solving stochastic PDEs is Monte Carlo methods. While these methods are robust, they have severe limits in terms of their rate/distortion bounds. We shall study alternatives to Monte Carlo methods which utilize Wiener chaos expansions to convert the stochastic PDE into a deterministic parametric PDE. The number of parameters in this approach is infinite, and therefore its success depends on capturing a function of an infinite number of variables in a numerically realistic way. We shall derive properties of the solutions to such parametric problems in the elliptic case that show these solutions have highly accurate sparse approximations. We shall then derive potential numerical approaches to capturing these sparse approximants. Theory and algorithms for high dimensional problems are in their infancy, and so this course will expose many interesting open questions.
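To make the Monte Carlo baseline concrete, here is a toy sketch for a 1-D elliptic problem with a single random parameter; the coefficient, forcing, and quantity of interest are illustrative choices, not taken from the course, and the standard-error printout reflects the slow O(1/sqrt(n)) Monte Carlo rate mentioned above:

```python
# Plain Monte Carlo for a toy stochastic elliptic problem:
#   -(a(x, xi) u')' = 1 on (0,1),  u(0) = u(1) = 0,
# with one random parameter xi ~ Uniform(-1, 1) in the diffusion
# coefficient a(x, xi) = 2 + xi * sin(pi x). All choices are illustrative.
import numpy as np

def solve_fd(xi, n=200):
    """Finite-difference solve of the deterministic problem for one xi,
    returning the solution value at the midpoint x = 0.5."""
    h = 1.0 / n
    x_mid = (np.arange(n) + 0.5) * h            # midpoints where a(x) is sampled
    a = 2.0 + xi * np.sin(np.pi * x_mid)
    main = (a[:-1] + a[1:]) / h**2              # diagonal for interior nodes
    off = -a[1:-1] / h**2                       # off-diagonals
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.linalg.solve(A, np.ones(n - 1))      # right-hand side f = 1
    return u[n // 2 - 1]                        # u at x = 0.5

rng = np.random.default_rng(3)
samples = [solve_fd(rng.uniform(-1.0, 1.0)) for _ in range(1000)]
mean = np.mean(samples)
stderr = np.std(samples) / np.sqrt(len(samples))
print(f"E[u(0.5)] ~ {mean:.5f} +/- {stderr:.1e}  (Monte Carlo, 1/sqrt(n) rate)")
```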
On a different note, the Python code implementing the examples of the new book Machine Learning: An Algorithmic Perspective by Stephen Marsland is here.
References:
[1] Lawrence Carin, Dehong Liu, Wenbin Lin and Bin Guo, Compressive Sensing for Numerical Multi-Static Scattering Analysis.
[2] Rob Fergus, Antonio Torralba and William T. Freeman, Random Lens Imaging.
[3] Rick Chartrand and Wotao Yin, Iteratively Reweighted Algorithms for Compressive Sensing.