
Friday, July 15, 2011

Compressive Sensing: Around the blogs in 80 hours and some papers

Dirk Lorenz has two entries on his blog that are of interest:

Bob Sturm comments on An Improvement to CoSaMP but not to Subspace Pursuit?



This paper develops new theory and algorithms to recover signals that are approximately sparse in some general (i.e., basis, frame, over-complete, or incomplete) dictionary but corrupted by a combination of measurement noise and interference having a sparse representation in a second general dictionary. Particular applications covered by our framework include the restoration of signals impaired by impulse noise, narrowband interference, or saturation, as well as image in-painting, super-resolution, and signal separation. We develop efficient recovery algorithms and deterministic conditions that guarantee stable restoration and separation. Two application examples demonstrate the efficacy of our approach.
Rich has more on his blog.
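
For readers who want to experiment with this kind of problem, here is a small numpy sketch of the general idea behind recovering a signal that is sparse in one dictionary while corrupted by interference that is sparse in another: stack the two dictionaries and solve a joint l1 problem. To be clear, this is my own toy illustration, not the authors' algorithm; the impulse-noise corruption dictionary B, the ISTA solver, and the parameters (lam, iteration count) are all assumptions made for the demo.

import numpy as np

# Toy model: y = A x + B e + noise, with x sparse in A and e sparse in B.
# Recover both at once by running ISTA on the joint problem
#   min_z 0.5 * ||y - [A B] z||_2^2 + lam * ||z||_1,  where z = [x; e].
rng = np.random.default_rng(0)
m, na, nb = 64, 128, 64
A = rng.standard_normal((m, na)) / np.sqrt(m)  # signal dictionary
B = np.eye(m)[:, :nb]                          # corruption dictionary (impulse noise)

x = np.zeros(na); x[rng.choice(na, 5, replace=False)] = rng.standard_normal(5)
e = np.zeros(nb); e[rng.choice(nb, 3, replace=False)] = 5 * rng.standard_normal(3)
y = A @ x + B @ e + 0.01 * rng.standard_normal(m)

C = np.hstack([A, B])
L = np.linalg.norm(C, 2) ** 2                  # Lipschitz constant of the gradient
lam, z = 0.05, np.zeros(na + nb)
for _ in range(500):                           # plain ISTA iterations
    g = z - C.T @ (C @ z - y) / L              # gradient step on the data fit
    z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

x_hat, e_hat = z[:na], z[na:]                  # separated signal and corruption
print("relative error on x:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

At the end, z splits into the signal coefficients x_hat and the corruption coefficients e_hat, which is the separation the abstract refers to.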

We present an imaging method, dSLIM, that combines a novel deconvolution algorithm with spatial light interference microscopy (SLIM), to achieve 2.3x resolution enhancement with respect to the diffraction limit. By exploiting the sparsity of the phase images, which is prominent in many biological imaging applications, and modeling of the image formation via complex fields, the very fine structures can be recovered which were blurred by the optics. With experiments on SLIM images, we demonstrate that significant improvements in spatial resolution can be obtained by the proposed approach. Moreover, the resolution improvement leads to higher accuracy in monitoring dynamic activity over time. Experiments with primary brain cells, i.e. neurons and glial cells, reveal new subdiffraction structures and motions. This new information can be used for studying vesicle transport in neurons, which may shed light on dynamic cell functioning. Finally, the method is flexible to incorporate a wide range of image models for different applications and can be utilized for all imaging modalities acquiring complex field images.
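
The sparsity-exploiting deconvolution idea can be illustrated with the same ISTA machinery as above. The sketch below deblurs a sparse 1-D signal observed through a known Gaussian point-spread function; the PSF, noise level, and weight lam are invented for the demo, and none of this is dSLIM itself, which works on complex field images from SLIM.

import numpy as np

# Deblur a sparse signal: min_x 0.5 * ||h * x - y||_2^2 + lam * ||x||_1,
# with the (circular) convolution by the PSF h applied via the FFT.
rng = np.random.default_rng(1)
n = 256
x_true = np.zeros(n); x_true[rng.choice(n, 8, replace=False)] = 1.0
h = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2); h /= h.sum()  # Gaussian PSF
H = np.fft.rfft(h, n)                                           # PSF in the Fourier domain

blur = lambda v: np.fft.irfft(np.fft.rfft(v) * H, n)             # convolution
blur_T = lambda v: np.fft.irfft(np.fft.rfft(v) * np.conj(H), n)  # its adjoint

y = blur(x_true) + 0.005 * rng.standard_normal(n)
L = np.max(np.abs(H)) ** 2   # Lipschitz constant (max squared frequency response)
lam, x = 1e-3, np.zeros(n)
for _ in range(300):         # ISTA iterations
    g = x - blur_T(blur(x) - y) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)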
Meanwhile, Laurent Duval let me know of this one: Sparse approximation property and stable recovery of sparse signals from noisy measurements by Qiyu Sun. The abstract reads:
In this paper, we introduce a sparse approximation property of order $s$ for a measurement matrix $A$: $\|x_s\|_2 \le D\,\|Ax\|_2 + \beta\,\sigma_s(x)/\sqrt{s}$ for all $x$, where $x_s$ is the best $s$-sparse approximation of the vector $x$ in $\ell^2$, $\sigma_s(x)$ is the $s$-sparse approximation error of the vector $x$ in $\ell^1$, and $D$ and $\beta$ are positive constants. The sparse approximation property for a measurement matrix can be thought of as a weaker version of its restricted isometry property and a stronger version of its null space property. In this paper, we show that the sparse approximation property is an appropriate condition on a measurement matrix to consider stable recovery of any compressible signal from its noisy measurements. In particular, we show that any compressible signal can be stably recovered from its noisy measurements via solving an $\ell^1$-minimization problem if the measurement matrix has the sparse approximation property with $\beta \in (0, 1)$, and conversely the measurement matrix has the sparse approximation property with $\beta \in (0, 1)$ if any compressible signal can be stably recovered from its noisy measurements via solving an $\ell^1$-minimization problem.
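
To unpack the two quantities in that inequality, here is a tiny numpy illustration of the definitions only (the vector and the value of s below are arbitrary choices of mine): best_s_term keeps the s largest-magnitude entries of x, which gives the best s-sparse approximation in every l_p norm, and sigma_s is its l1 error.

import numpy as np

def best_s_term(x, s):
    """Best s-sparse approximation of x: keep the s largest-magnitude entries."""
    xs = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]    # indices of the s biggest entries
    xs[idx] = x[idx]
    return xs

x = np.array([3.0, -0.1, 0.02, 2.5, 0.0, -0.3])
xs = best_s_term(x, 2)                  # x_s in the abstract's notation
sigma_s = np.linalg.norm(x - xs, 1)     # sigma_s(x): the l1 approximation error
print(xs, sigma_s)                      # [3. 0. 0. 2.5 0. 0.] and 0.42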
Finally, this one showed up on my radar screen:

Convergence and Rate Analysis of Neural Networks for Sparse Approximation by Aurele Balavoine, Justin Romberg, and Christopher J. Rozell. The abstract reads:
We present an analysis of the Locally Competitive Algorithm (LCA), a Hopfield-style neural network that solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few non-zero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing, but traditional analysis approaches are difficult because the objective functions are non-smooth. Specifically, we characterize the convergence properties of this system by showing that the LCA is globally convergent to a fixed point corresponding to the exact solution of the objective function, and (under some mild conditions) this solution is reached in finite time. Furthermore, we characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate (that depends on the specifics of a given problem). We support our analysis with several illustrative simulations.
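
The LCA dynamics themselves fit in a few lines, which makes the paper's object of study easy to play with. Below is a minimal Euler-discretized sketch following the standard LCA formulation (internal states u driven by the input, laterally inhibited through the dictionary's Gram matrix, and soft-thresholded into outputs a, so that fixed points solve the l1-regularized least-squares objective); the step size dt, threshold lam, and problem sizes are illustrative choices of mine, not values from the paper.

import numpy as np

rng = np.random.default_rng(2)
m, n, lam, dt = 32, 96, 0.1, 0.05
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)   # unit-norm dictionary atoms

a_true = np.zeros(n); a_true[rng.choice(n, 4, replace=False)] = 1.0
y = Phi @ a_true                     # vector to approximate sparsely

b = Phi.T @ y                        # constant driving input
G = Phi.T @ Phi - np.eye(n)          # lateral inhibition (Gram matrix minus I)
u, a = np.zeros(n), np.zeros(n)      # internal states and thresholded outputs
for _ in range(2000):                # Euler discretization of du/dt = b - u - G a
    u = u + dt * (b - u - G @ a)
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft-threshold activation

print("active nodes:", np.nonzero(a)[0], "true support:", np.nonzero(a_true)[0])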
In other news, the ICCV 2011 papers on the web are here. A working list of people who contribute on the topics of statistics, machine learning and data analysis on Quora lists me under Others/Bloggers/Not Sure: this is good. Pierre Vandergheynst has a profile on the Montreux Jazz Festival that reminds me of when I was asked to provide a speaker on MEMS for SXSW back in 1999.

Image Credit: NASA/JPL/Space Science Institute, W00068470.jpg was taken on July 12, 2011 and received on Earth July 13, 2011. The camera was pointing toward SATURN at approximately 1,317,905 kilometers away, and the image was taken using the CL1 and IR1 filters. 
