Friday, May 09, 2008

CS: Just Say No to L1, Kernels of random sign matrices, Compressive Sampling and Lossy Compression and other Compressed Sensing Updates

Muthu Muthukrishnan has a new entry on his blog entitled: Compressed sensing: L0, L2 and no L1. It draws a parallel between CoSaMP and another algorithm of a similar flavor. For readers who have not seen it yet, a minimal sketch of CoSaMP appears right below.
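
Here is a minimal sketch of the CoSaMP iteration (Needell and Tropp), assuming NumPy; the stopping rule, iteration count, and toy dimensions below are simplifications of mine, not a reference implementation.

```python
import numpy as np

def cosamp(A, y, k, n_iter=20, tol=1e-10):
    """Minimal CoSaMP: recover a k-sparse x from y = A @ x."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()                                         # current residual
    for _ in range(n_iter):
        proxy = A.T @ r                                  # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * k:]       # 2k largest entries
        support = np.union1d(omega, np.flatnonzero(x))   # merge supports
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        b = np.zeros(n)
        b[support] = coef                                # LS fit on merged support
        keep = np.argsort(np.abs(b))[-k:]                # prune to k terms
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - A @ x                                    # update residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
    return x

# Toy demo: exact recovery of a 5-sparse vector from 80 Gaussian measurements.
rng = np.random.default_rng(0)
m, n, k = 80, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = cosamp(A, A @ x_true, k)
print("max recovery error:", np.abs(x_hat - x_true).max())
```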

James Lee has a new entry in his not-a-blog blog entitled Kernels of random sign matrices.

The Rice repository has new entries that I have not covered:
* Compressive Sampling and Lossy Compression by Vivek Goyal, Alyson Fletcher, and Sundeep Rangan (a small numerical sketch of this comparison follows the list below). The beginning of this introductory article reads:
Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal. Information theory provides alternative quantization strategies, but they come at the cost of much greater estimation complexity.
* Denoising by Sparse Approximation: Error Bounds Based on Rate-Distortion by Alyson Fletcher, Sundeep Rangan, Vivek Goyal, and Kannan Ramchandran (a sketch of the single-atom case follows the list). The abstract reads:
If a signal x is known to have a sparse representation with respect to a frame, it can be estimated from a noise-corrupted observation y by finding the best sparse approximation to y. Removing noise in this manner depends on the frame efficiently representing the signal while it inefficiently represents the noise. The mean-squared error (MSE) of this denoising scheme and the probability that the estimate has the same sparsity pattern as the original signal are analyzed. First an MSE bound that depends on a new bound on approximating a Gaussian signal as a linear combination of elements of an overcomplete dictionary is given. Further analyses are for dictionaries generated randomly according to a spherically-symmetric distribution and signals expressible with single dictionary elements. Easily-computed approximations for the probability of selecting the correct dictionary element and the MSE are given. Asymptotic expressions reveal a critical input signal-to-noise ratio for signal recovery.
* On the Rate-Distortion Performance of Compressed Sensing by Alyson Fletcher, Sundeep Rangan, and Vivek Goyal (a genie-aided versus ML sketch follows the list). The abstract reads:
Encouraging recent results in compressed sensing or compressive sampling suggest that a set of inner products with random measurement vectors forms a good representation of a source vector that is known to be sparse in some fixed basis. With quantization of these inner products, the encoding can be considered universal for sparse signals with known sparsity level. We analyze the operational rate-distortion performance of such source coding both with genie-aided knowledge of the sparsity pattern and maximum likelihood estimation of the sparsity pattern. We show that random measurements induce an additive logarithmic rate penalty, i.e., at high rates the performance with rate R + O(log R) and random measurements is equal to the performance with rate R and deterministic measurements matched to the source.
* Rate-Distortion Bounds for Sparse Approximation by Alyson Fletcher, Sundeep Rangan, and Vivek Goyal (an SNR-threshold sketch follows the list). The abstract reads:
Sparse signal models arise commonly in audio and image processing. Recent work in the area of compressed sensing has provided estimates of the performance of certain widely-used sparse signal processing techniques such as basis pursuit and matching pursuit. However, the optimal achievable performance with sparse signal approximation remains unknown. This paper provides bounds on the ability to estimate a sparse signal in noise. Specifically, we show that there is a critical minimum signal-to-noise ratio (SNR) that is required for reliable detection of the sparsity pattern of the signal. We furthermore relate this critical SNR to the asymptotic mean squared error of the maximum likelihood estimate of a sparse signal in additive Gaussian noise. The critical SNR is a simple function of the problem dimensions.
* Multichannel Sampling of Parametric Signals with a Successive Approximation Property by Julius Kusuma and Vivek Goyal (a background sketch of annihilating-filter recovery follows the list). The abstract reads:
Recently the sampling theory for certain parametric signals based on rate of innovation has been extended to all sampling kernels that satisfy the Strang-Fix conditions, thus including many attractive choices with finite support. We propose a new sampling scheme in which samples are taken simultaneously at the outputs of multiple channels. This new scheme is closely related to previously known cases, but provides a successive approximation property that can be used for detecting undermodeling. We also draw connections to splines and multi-scale sampling of signals.
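
A minimal numerical sketch (assuming NumPy) of the comparison studied in Compressive Sampling and Lossy Compression: uniformly scalar-quantize random measurements of a k-sparse signal, reconstruct with oracle knowledge of the support in place of a full sparse solver, and compare against directly quantizing the nonzero coefficients at the same per-value rate. The quantizer, step size, and dimensions are arbitrary illustrative choices of mine, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m, bits = 128, 4, 40, 6            # dimensions and per-value rate

def quantize(v, bits, vmax):
    """Uniform scalar quantizer with 2**bits levels on [-vmax, vmax]."""
    step = 2 * vmax / (2 ** bits)
    return np.clip(np.round(v / step) * step, -vmax, vmax)

# k-sparse source vector
x = np.zeros(n)
supp = rng.choice(n, k, replace=False)
x[supp] = rng.standard_normal(k)

# (a) quantize m random measurements, then oracle-support least squares
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x
yq = quantize(y, bits, np.abs(y).max())
x_cs = np.zeros(n)
x_cs[supp] = np.linalg.lstsq(A[:, supp], yq, rcond=None)[0]

# (b) direct/adaptive coding: quantize the k nonzero values themselves
# (route (a) spends bits on m values, route (b) on only k values plus the
# support indices; even so, (a) typically shows the penalty analyzed here)
x_dir = np.zeros(n)
x_dir[supp] = quantize(x[supp], bits, np.abs(x[supp]).max())

print("MSE, quantized random measurements:", np.mean((x - x_cs) ** 2))
print("MSE, direct coding of coefficients:", np.mean((x - x_dir) ** 2))
```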
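
A sketch of the single-dictionary-element case analyzed in the denoising paper, again assuming NumPy: the dictionary is spherically symmetric (Gaussian columns normalized to unit norm), the signal is a single atom, and denoising picks the atom best correlated with the noisy observation. The dimensions and SNR are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, natoms, trials, snr_db = 32, 256, 2000, 5

# Spherically symmetric random dictionary with unit-norm columns.
D = rng.standard_normal((dim, natoms))
D /= np.linalg.norm(D, axis=0)

# Noise level chosen so that E||noise||^2 = 10**(-SNR/10) * ||signal||^2.
sigma = 10 ** (-snr_db / 20) / np.sqrt(dim)
correct, mse = 0, 0.0
for _ in range(trials):
    j = rng.integers(natoms)
    xsig = D[:, j]                       # unit-energy single-atom signal
    y = xsig + sigma * rng.standard_normal(dim)
    jhat = np.argmax(np.abs(D.T @ y))    # best single-atom approximation
    xhat = (D[:, jhat] @ y) * D[:, jhat]
    correct += (jhat == j)
    mse += np.mean((xhat - xsig) ** 2)

print("P(correct atom):", correct / trials)
print("average MSE    :", mse / trials)
```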
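
A sketch contrasting the two regimes analyzed in On the Rate-Distortion Performance of Compressed Sensing, assuming NumPy: reconstruction from quantized random measurements with genie-aided knowledge of the sparsity pattern versus maximum-likelihood estimation of the pattern by exhaustive search over supports (feasible only at these toy dimensions).

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, k, m, bits = 12, 2, 8, 5

def quantize(v, bits):
    """Uniform scalar quantizer with 2**bits levels spanning the data range."""
    step = 2 * np.abs(v).max() / (2 ** bits)
    return np.round(v / step) * step

x = np.zeros(n)
supp = sorted(rng.choice(n, k, replace=False))
x[supp] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)
yq = quantize(A @ x, bits)

def ls_on(S):
    """Least-squares reconstruction restricted to the candidate support S."""
    xh = np.zeros(n)
    xh[list(S)] = np.linalg.lstsq(A[:, list(S)], yq, rcond=None)[0]
    return xh

# Genie-aided: the true support is given.
x_genie = ls_on(supp)
# ML: pick the support whose least-squares fit best explains yq.
x_ml = min((ls_on(S) for S in combinations(range(n), k)),
           key=lambda xh: np.linalg.norm(yq - A @ xh))

print("distortion, genie-aided:", np.mean((x - x_genie) ** 2))
print("distortion, ML support :", np.mean((x - x_ml) ** 2))
```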
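
A sketch of the threshold phenomenon described in the Rate-Distortion Bounds for Sparse Approximation abstract, assuming NumPy: sweep the SNR and estimate the probability that exhaustive maximum-likelihood search recovers the exact sparsity pattern of a k-sparse signal in additive Gaussian noise. Dimensions are kept tiny so the exhaustive search stays cheap.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
dim, natoms, k, trials = 16, 24, 2, 200

# Random unit-norm dictionary and all candidate size-k supports.
D = rng.standard_normal((dim, natoms))
D /= np.linalg.norm(D, axis=0)
supports = list(combinations(range(natoms), k))
# Orthogonal projector onto the span of each candidate support.
projs = [D[:, list(S)] @ np.linalg.pinv(D[:, list(S)]) for S in supports]

for snr_db in (0, 5, 10, 15, 20):
    # Noise level so that E||noise||^2 = 10**(-SNR/10) for a unit-energy signal.
    sigma = 10 ** (-snr_db / 20) / np.sqrt(dim)
    hits = 0
    for _ in range(trials):
        S = supports[rng.integers(len(supports))]
        xsig = D[:, list(S)] @ rng.standard_normal(k)
        xsig /= np.linalg.norm(xsig)              # unit-energy sparse signal
        y = xsig + sigma * rng.standard_normal(dim)
        # ML pattern estimate: support whose span best explains y.
        rnorms = [np.linalg.norm(y - P @ y) for P in projs]
        hits += supports[int(np.argmin(rnorms))] == S
    print(f"SNR {snr_db:2d} dB: P(exact pattern) = {hits / trials:.2f}")
```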
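
The multichannel paper builds on finite-rate-of-innovation sampling; as background only (this is not the paper's multichannel scheme), here is a minimal sketch, assuming NumPy, of the classical annihilating-filter step: recovering the locations and amplitudes of K Diracs on [0, 1) from 2K+1 consecutive Fourier coefficients.

```python
import numpy as np

K = 3
t_true = np.array([0.12, 0.45, 0.78])       # Dirac locations in [0, 1)
a_true = np.array([1.0, -0.5, 2.0])         # Dirac amplitudes

# Fourier coefficients X[m] = sum_k a_k exp(-2j*pi*m*t_k), m = 0..2K.
m = np.arange(2 * K + 1)
X = (a_true[None, :] * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)

# Annihilating filter h of length K+1: row i encodes
# sum_{l=0}^{K} h[l] * X[K + i - l] = 0, solved via the null space of T.
T = np.array([[X[K + i - l] for l in range(K + 1)] for i in range(K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()                            # null-space vector of T

# Dirac locations from the roots u_k = exp(-2j*pi*t_k) of the filter.
roots = np.roots(h)
t_hat = np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))

# Amplitudes from a Vandermonde least-squares fit.
V = np.exp(-2j * np.pi * np.outer(np.arange(2 * K + 1), t_hat))
a_hat = np.linalg.lstsq(V, X, rcond=None)[0].real

print("locations :", t_hat)
print("amplitudes:", a_hat)
```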

Credit: NASA/JPL/Space Science Institute, Dark Boundary
