
Monday, April 28, 2008

CS: Necessary and Sufficient Conditions on Sparsity Pattern Recovery, Seeing Speech: Capturing Vocal Tract Shaping using Real-time MR and a talk

Here is a fascinating paper. We are generally accustomed to big-O notation for the number of measurements needed to recover signals in compressed sensing; this paper addresses the functional dependence of the constant inside that notation on the signal-to-noise ratio (SNR) and the minimum-to-average ratio (MAR). The paper is Necessary and Sufficient Conditions on Sparsity Pattern Recovery and was written by Alyson Fletcher, Sundeep Rangan, and Vivek Goyal. The abstract reads:

The problem of detecting the sparsity pattern of a k-sparse vector in Rn from m random noisy measurements is of interest in many areas such as system identification, denoising, pattern recognition, and compressed sensing. This paper addresses the scaling of the number of measurements m, with signal dimension n and sparsity level k, for asymptotically-reliable detection. We show a necessary condition for perfect recovery at any given SNR for all algorithms, regardless of complexity, is m = Ω(k log(n − k)) measurements. Conversely, it is shown that this scaling of Ω(k log(n − k)) measurements is sufficient for a remarkably simple “maximum correlation” estimator. Hence this scaling is optimal and does not require more sophisticated techniques such as lasso or matching pursuit. The constants for both the necessary and sufficient conditions are precisely defined in terms of the minimum-to-average ratio of the nonzero components and the SNR. The necessary condition improves upon previous results for maximum likelihood estimation. For lasso, it also provides a necessary condition at any SNR and for low SNR improves upon previous work. The sufficient condition provides the first asymptotically-reliable detection guarantee at finite SNR.
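For intuition, here is a minimal NumPy sketch of the “maximum correlation” estimator the abstract refers to: estimate the support as the k indices whose measurement-matrix columns correlate most strongly with the observation y. The dimensions and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def max_correlation_support(A, y, k):
    """Estimate the sparsity pattern of x from y = A x + noise by
    picking the k columns of A most correlated with y."""
    scores = np.abs(A.T @ y)                  # one correlation score per column
    return np.sort(np.argsort(scores)[-k:])   # indices of the k largest scores

# Illustrative sizes (not from the paper): k-sparse x in R^n, m measurements.
rng = np.random.default_rng(0)
n, k, m = 256, 5, 80
true_support = np.sort(rng.choice(n, size=k, replace=False))
x = np.zeros(n)
x[true_support] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x + 0.01 * rng.normal(size=m)

# Recovery can fail when the smallest nonzero entry is small relative to the
# average: exactly the minimum-to-average ratio (MAR) effect the paper studies.
print(np.array_equal(max_correlation_support(A, y, k), true_support))
```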

Much more can be gathered from reading the paper.

The next paper does not make a specific headline of Compressed Sensing. This is good: it means that the compressed sensing approach is becoming mature (at least in MRI) to the point where it enables new ways of obtaining data, and thereby new ways of matching different types of data (here, MRI and voice recordings). It's a paper by Erik Bresch, Yoon-Chul Kim, Krishna S. Nayak, Dani Byrd, and Shrikanth Narayanan entitled Seeing Speech: Capturing Vocal Tract Shaping using Real-time Magnetic Resonance Imaging (also here). The beginning of the paper reads:

Understanding human speech production is of great interest from engineering, linguistic, and several other research points of view. While several types of data available to speech understanding studies lead to different avenues for research, in this article we focus on real-time (RT) magnetic resonance imaging (MRI) as an emerging technique for studying speech production. We discuss the details and challenges of RT magnetic resonance (MR) acquisition and analysis, and modeling approaches that make use of MRI data for studying speech production.

Here is an interesting new approach whereby compressed sensing is used not to reconstruct an image but to grab essential characteristics of the phenomenon being monitored. This is along the lines of the manifold techniques (the red arrows in the Compressed Sensing Big Picture graphic). While reconstruction tells us how many samples are needed to recover the information, the real El Dorado lies in using the compressed measurements directly. In MRI, there has been much work on finding the right trajectories in k-space to produce adequate full images; the obvious next step is a patchwork of techniques combining different measurement tools and machine learning to enable the right data fusion process, and thereby produce new methodologies and new discoveries (I have yet to write on this process, which I think needs a very good data fusion effort).
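To make the “use the compressed measurements directly” idea concrete, here is a small sketch of my own (an illustration, not a method from the papers above): a fixed random projection approximately preserves pairwise distances (Johnson-Lindenstrauss), so a nearest-neighbor decision can run on the m measurements without ever reconstructing an image. All sizes and signals below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1000, 60          # ambient signal dimension, number of measurements

# A fixed random projection plays the role of the sensor. Pairwise distances
# are approximately preserved, so simple decisions can be made on the m
# measurements directly, with no reconstruction step.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)

templates = rng.normal(size=(10, n))             # 10 made-up reference signals
query = templates[3] + 0.1 * rng.normal(size=n)  # noisy copy of template 3

compressed_templates = templates @ Phi.T         # each row: m measurements
compressed_query = Phi @ query

# Nearest neighbor in the compressed domain identifies the right template.
dists = np.linalg.norm(compressed_templates - compressed_query, axis=1)
print(int(np.argmin(dists)))                     # expected: 3
```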

On a different note, Trac Tran will give a talk at Simon Fraser University on Fast, Efficient, and Practical Algorithms for Compressed Sensing. Let us note the emergence of a new greedy algorithm: GOMP.

Date: Thursday, May 22, 2008
Time: 3:00 pm to 4:00 pm
Location: SFU ASSC-1: Room 10041

It is going to be listed on the CS calendar. The abstract of the talk is:

In the conventional uniform sampling framework, the Shannon/Nyquist theorem tells us to sample a signal at a rate at least two times faster than its bandwidth for the original signal to be perfectly reconstructed from its samples. Recently, compressed sensing has emerged as a revolutionary signal sampling paradigm which shows that the Shannon theorem is indeed overly pessimistic for signals with a high degree of sparsity or compressibility. The compressed sensing framework demonstrates that a small number of random linear projections, called measurements, contains sufficient information for signal reconstruction, even exactly. The two key components of compressed sensing are: (i) the sensing matrix at the encoder must be highly incoherent with the sparsifying signal transformation; and (ii) sophisticated non-linear algorithms such as basis pursuit or orthogonal matching pursuit are employed at the decoder to recover the sparsest signal from the received measurements.

The first part of this talk gives an overview of the new compressed sensing framework along with the most elegant breakthrough results in the field. The second part focuses on two recent compressed sensing discoveries from the JHU Digital Signal Processing Lab. Particularly, a fast and efficient sampling algorithm for compressed sensing based on structurally random matrices will be presented. Our proposed sampling scheme provides several crucial features for practical implementation: fast computable, memory efficient, streaming capable, and hardware friendly, while retaining theoretical performance bounds comparable with current state-of-the-art techniques. Secondly, at the decoder side, we present a novel iterative reconstruction algorithm for compressed sensing called Generalized Orthogonal Matching Pursuit (GOMP) that can adaptively, at each iteration step, admit new atoms into the current selected set from a small candidate set, while discarding from the selected set atoms that might have been highly regarded in previous steps. Simulation results show that GOMP's performance far exceeds that of the best existing iterative algorithms, with reasonable complexity overhead. Finally, future research directions in compressed sensing are also discussed if time permits.
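To ground the two decoding ideas in the abstract, here is a hedged NumPy sketch: first plain orthogonal matching pursuit, then a hypothetical add-and-prune loop matching the abstract's description of GOMP (admit new atoms from a small candidate set, discard previously selected ones). GOMP's details are not public in this post, so the second function is an illustration in the spirit of such algorithms, not the authors' method; all sizes and defaults (n_add, n_iter) are made up.

```python
import numpy as np

def omp(A, y, k):
    """Plain orthogonal matching pursuit (the decoder named in the talk
    abstract): greedily add the column most correlated with the residual,
    then re-fit the coefficients by least squares on the selected set."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

def add_and_prune(A, y, k, n_add=2, n_iter=30):
    """Hypothetical sketch of the add-and-prune pattern the abstract
    ascribes to GOMP (NOT the authors' algorithm): admit a few new atoms
    per step, re-fit, and keep only the k strongest, so atoms chosen in
    earlier steps can later be discarded."""
    n = A.shape[1]
    support, x_hat = set(), np.zeros(n)
    for _ in range(n_iter):
        scores = np.abs(A.T @ (y - A @ x_hat))     # correlation with residual
        if support:
            scores[sorted(support)] = -np.inf      # only consider new atoms
        support |= set(np.argsort(scores)[-n_add:].tolist())
        idx = sorted(support)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        idx = [idx[i] for i in np.argsort(np.abs(coef))[-k:]]  # prune to k
        support = set(idx)
        coef, *_ = np.linalg.lstsq(A[:, sorted(support)], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[sorted(support)] = coef
    return x_hat
```

Unlike plain OMP, which never revisits a selection, the pruning step lets a wrong early pick be evicted once better atoms explain the measurements, which is the behavior the abstract highlights.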
