
Thursday, December 09, 2010

CS: Bob's continuing investigation, Sparse Approximation and Compressed Sensing Using the Reed-Muller Sieve, Lensfree Fluorescent On-Chip Imaging Using Compressive Sampling

If you want an introduction to compressed sensing in Indonesian, here is one. Bob continues his investigation in Lecture Slides: Sparse Approximation for Audio and Music Signals.

From Bob's quest, here is a paper of interest:

Compressive Imaging using Approximate Message Passing and a Markov-Tree Prior by Subhojit Som, Lee C. Potter, and Philip Schniter. The abstract reads:
We propose a novel algorithm for compressive imaging that exploits both the sparsity and persistence across scales found in the 2D wavelet transform coefficients of natural images. Like other recent works, we model wavelet structure using a hidden Markov tree (HMT) but, unlike other works, ours is based on loopy belief propagation (LBP). For LBP, we adopt a recently proposed “turbo” message passing schedule that alternates between exploitation of HMT structure and exploitation of compressive-measurement structure. For the latter, we leverage Donoho, Maleki, and Montanari’s recently proposed approximate message passing (AMP) algorithm. Experiments on a large image database show that our turbo LBP approach maintains state-of-the-art reconstruction performance at half the complexity.
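For readers unfamiliar with the AMP component the abstract leans on, here is a minimal sketch of plain AMP with scalar soft thresholding and the Onsager correction, in the spirit of Donoho, Maleki, and Montanari. This is not the authors' turbo/HMT scheme; the function names, the residual-based threshold heuristic, and the toy problem sizes are all illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    """Scalar soft-thresholding (the eta denoiser in AMP)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, iters=30):
    """Plain AMP iteration for y = A x with a k-sparse x."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                       # pseudo-data (matched filter of residual)
        tau = np.linalg.norm(z) / np.sqrt(m)  # threshold from residual energy (a common heuristic)
        x = soft(r, tau)
        # Onsager correction: (1/delta) * z * average derivative of eta, delta = m/n
        z = y - A @ x + (z / m) * np.count_nonzero(x)
    return x

# Toy demo: recover a 10-sparse vector from 120 Gaussian measurements.
rng = np.random.default_rng(0)
m, n, k = 120, 256, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)  # unit-norm columns in expectation
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x0
rel_err = np.linalg.norm(amp(A, y) - x0) / np.linalg.norm(x0)
```

The Onsager term is what distinguishes AMP from plain iterative soft thresholding; removing it typically degrades convergence markedly.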
Here are two other new papers, one of which is about inexpensive compressive sensing hardware:

Sparse Approximation and Compressed Sensing Using the Reed-Muller Sieve. The abstract reads:
This paper introduces the Witness-Averaging Algorithm for sparse reconstruction using the Reed-Muller sieve. The Reed-Muller sieve is a deterministic measurement matrix for compressed sensing. The columns of this matrix are obtained by exponentiating codewords in the quaternary second-order Reed-Muller code of length N. For k = Õ(N), the Witness-Averaging Algorithm improves upon prior methods for identifying the support of a k-sparse vector by removing the requirement that the signal entries be independent, and by providing computational efficiency. It also enables local detection; that is, the proposed algorithm detects the presence or absence of a signal at any given position in the data domain without explicitly reconstructing the entire signal. Reconstruction is shown to be resilient to noise in both the measurement and data domains; the average-case ℓ2/ℓ2 error bounds derived in this paper are tighter than the worst-case ℓ2/ℓ1 bounds arising from random ensembles.
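To make the "columns obtained by exponentiating codewords" concrete, here is a hedged sketch of one chirp column of a second-order Reed-Muller frame: the entry indexed by x in F_2^m is i^(x^T P x + 2 b^T x) / sqrt(N), with the exponent taken mod 4, P a binary symmetric matrix and b a binary vector. The function name and the specific choice of (P, b) are illustrative, not taken from the paper; the demo just checks two easy properties (unit norm, and that columns sharing the same P but different b are orthogonal).

```python
import numpy as np
from itertools import product

def rm_column(P, b):
    """One chirp column of a second-order Reed-Muller frame:
    entry at x in F_2^m is i**(x^T P x + 2 b^T x) / sqrt(N), exponent mod 4."""
    m = len(b)
    N = 2 ** m
    col = np.empty(N, dtype=complex)
    for idx, xt in enumerate(product([0, 1], repeat=m)):
        x = np.array(xt)
        e = int((x @ P @ x + 2 * (b @ x)) % 4)
        col[idx] = 1j ** e
    return col / np.sqrt(N)

# Demo with m = 4, i.e. columns of length N = 16.
rng = np.random.default_rng(1)
M = rng.integers(0, 2, size=(4, 4))
P = (M + M.T) % 2                 # binary symmetric (zero diagonal here, for simplicity)
b1 = np.array([0, 1, 1, 0])
b2 = np.array([1, 0, 1, 0])
c1, c2 = rm_column(P, b1), rm_column(P, b2)
ip = np.vdot(c1, c2)              # should vanish: same P, different b
```

Orthogonality within a fixed P follows because the exponents differ by 2(b1 - b2)^T x, so the inner product is an average of (-1)^((b1 ⊕ b2)·x), which sums to zero whenever b1 ≠ b2.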

New ideas in compressive sensing are expanding our imaging capability. Research that models the retina has led to an understanding of the eye’s acuity.
Some of the write-up can be found here.

There was also a recent presentation by Shannon Hughes on “Using the Kernel Trick in Compressive Sensing: Accurate Signal Recovery from Fewer Measurements”:
Since their development in the mid-1990s, kernel methods have dramatically enhanced the capabilities of machine learning and signal processing. In these methods, a clever strategy termed “the kernel trick” is employed to easily extend standard algorithms to perform more complex tasks with little to no increase in computational complexity. In this talk, we show how the kernel trick can be used in a new domain: compressive sensing. Using the kernel trick, we are no longer constrained to model our signal as a sum of many Fourier or wavelet components as in typical compressive sensing. Instead, our signal can be modeled as a complex, nonlinear function of several underlying parameters if we wish. Signals, including sections of natural images, can often be described very efficiently in this way. This more efficient signal description then pays off, allowing us to reconstruct the signal based on very few measurements, sometimes an order of magnitude fewer measurements than required by typical compressive sensing.
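Since there is no paper yet, here is only a generic illustration of the kernel trick itself, not of Hughes's method: a polynomial kernel evaluates an inner product in a higher-dimensional feature space without ever forming the feature vectors. The feature map `phi` below is the standard explicit expansion matching k(x, y) = (x·y + 1)^2 for 2-D inputs; the specific test points are arbitrary.

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map for a 2-D input, chosen so that
    phi(x).phi(y) equals the polynomial kernel (x.y + 1)**2."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 * x1, x2 * x2,
                     np.sqrt(2) * x1 * x2])

def k(x, y):
    """Polynomial kernel: same inner product, computed in 2-D."""
    return (np.dot(x, y) + 1.0) ** 2

x = np.array([0.5, -1.0])
y = np.array([2.0, 0.3])
explicit = np.dot(phi(x), phi(y))  # inner product in the 6-D feature space
implicit = k(x, y)                 # same value, never leaving 2-D: 2.89
```

The point of the trick is that `k` costs the same as a 2-D dot product while implicitly working in the 6-D space; with richer kernels the implicit space can even be infinite-dimensional.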
No paper yet, but I am looking forward to more.

Credit: SpaceX. Congratulations to the SpaceX folks for being the first private entity to return a capsule from orbit.
