Correcting Errors in Linear Measurements and Compressed Sensing of Multiple Sources by Alexander Petukhov and Inna Kozlov. The abstract reads:
We present an algorithm for finding sparse solutions of the system of linear equations $\Phi\mathbf{x}=\mathbf{y}$ with rectangular matrices $\Phi$ of size $n\times N$, where $n < N$, when the measurement vector $\mathbf{y}$ is corrupted by a sparse vector of errors $\mathbf{e}$. We call our algorithm the $\ell_1$-greedy-generous algorithm (LGGA), since it combines both greedy and generous strategies in decoding. The main advantage of LGGA over traditional error-correcting methods is its ability to work efficiently and directly on linear data measurements. It uses the natural residual redundancy of the measurements and does not require any additional redundant channel encoding. We show how to use this algorithm for encoding and decoding multichannel sources. This algorithm has a significant advantage over existing straightforward decoders when the encoded sources have different density/sparsity of information content. That property can be exploited for very efficient blockwise encoding of data sets with a non-uniform distribution of information; images are the most typical example of such sources.
An important feature of LGGA is its separation from the encoder. The decoder does not need any additional side information from the encoder beyond the linear measurements themselves and the knowledge that those measurements were created as a linear combination of different sources.
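The setting above can be illustrated with a small sketch. The standard trick for sparse error correction is to treat the corruption $\mathbf{e}$ as extra sparse unknowns, i.e. to solve the augmented system $[\Phi \mid I]\,[\mathbf{x};\mathbf{e}] = \mathbf{y}$ for a jointly sparse solution. The sketch below uses plain Orthogonal Matching Pursuit on that augmented system as a stand-in decoder; it is *not* the authors' LGGA (which interleaves greedy and generous steps), just a minimal demonstration of the problem setup, with all dimensions and sparsity levels chosen arbitrarily.

```python
import numpy as np

def omp(A, y, max_iter, tol=1e-8):
    """Orthogonal Matching Pursuit: greedily pick columns of A to fit y."""
    n, m = A.shape
    residual = y.copy()
    support = []
    coef = np.zeros(m)
    sol = np.zeros(0)
    for _ in range(max_iter):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected support
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
        if np.linalg.norm(residual) < tol:
            break
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
n, N = 80, 200
Phi = rng.standard_normal((n, N)) / np.sqrt(n)

# sparse signal x and sparse measurement corruption e (toy amplitudes)
x = np.zeros(N)
x[rng.choice(N, 5, replace=False)] = rng.choice([-1, 1], 5) * rng.uniform(1, 2, 5)
e = np.zeros(n)
e[rng.choice(n, 3, replace=False)] = rng.choice([-1, 1], 3) * rng.uniform(1, 2, 3)
y = Phi @ x + e

# augmented system: recover [x; e] jointly as one sparse vector
A = np.hstack([Phi, np.eye(n)])
z = omp(A, y, max_iter=15)
x_hat, e_hat = z[:N], z[N:]
print("signal error:", np.linalg.norm(x_hat - x))
print("corruption error:", np.linalg.norm(e_hat - e))
```

Because the identity columns appended to $\Phi$ let the decoder "absorb" the sparse corruption, no side information about which measurements were hit is needed, which is the same decoder-side independence the paper emphasizes for LGGA.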
The attendant code implementation is here. Thanks, Alex!
Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.