## Friday, April 26, 2013

### Correcting Errors in Linear Measurements and Compressed Sensing of Multiple Sources - implementation -

Here is a cross-and-bouquet approach, this time with a greedy algorithm. Of note, this is the second paper on error correction in two days:

Correcting Errors in Linear Measurements and Compressed Sensing of Multiple Sources by Alexander Petukhov and Inna Kozlov. The abstract reads:
We present an algorithm for finding sparse solutions of the system of linear equations $\Phi\mathbf{x}=\mathbf{y}$ with rectangular matrices $\Phi$ of size $n\times N$, where $n < N$, when the measurement vector $\mathbf{y}$ is corrupted by a sparse vector of errors $\mathbf{e}$. We call our algorithm the ℓ1-greedy-generous algorithm (LGGA) since it combines both greedy and generous strategies in decoding. The main advantage of LGGA over traditional error-correcting methods is its ability to work efficiently directly on linear data measurements. It uses the natural residual redundancy of the measurements and does not require any additional redundant channel encoding. We show how to use this algorithm for encoding and decoding multichannel sources. This algorithm has a significant advantage over existing straightforward decoders when the encoded sources have different density/sparsity of the information content. That nice property can be used for very efficient blockwise encoding of sets of data with a non-uniform distribution of the information. Images are the most typical example of such sources.
The important feature of LGGA is its separation from the encoder. The decoder does not need any additional side information from the encoder except for the linear measurements and the knowledge that those measurements were created as a linear combination of different sources.
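To make the cross-and-bouquet setup concrete, here is a minimal sketch (not the authors' LGGA): when $\mathbf{y} = \Phi\mathbf{x} + \mathbf{e}$ with both $\mathbf{x}$ and $\mathbf{e}$ sparse, one can recover them jointly by running a sparse solver on the augmented matrix $[\Phi\;\; I]$. The matrix sizes, sparsity levels, and the plain orthogonal matching pursuit routine below are illustrative assumptions, not the paper's decoder:

```python
import numpy as np

def omp(A, y, k, tol=1e-8):
    """Orthogonal Matching Pursuit: greedily pick up to k columns of A to explain y."""
    n, N = A.shape
    An = A / np.linalg.norm(A, axis=0)   # normalized columns for correlation tests
    residual = y.copy()
    support = []
    x = np.zeros(N)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(An.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, N = 128, 256
Phi = rng.standard_normal((n, N)) / np.sqrt(n)

# sparse signal x and sparse measurement corruption e (sizes chosen for illustration)
x_true = np.zeros(N)
x_true[rng.choice(N, 5, replace=False)] = rng.standard_normal(5)
e_true = np.zeros(n)
e_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
y = Phi @ x_true + e_true

# cross-and-bouquet: augment Phi with the identity and recover [x; e] jointly
A = np.hstack([Phi, np.eye(n)])
z = omp(A, y, k=10)
x_hat, e_hat = z[:N], z[N:]
print("relative residual:", np.linalg.norm(y - A @ z) / np.linalg.norm(y))
```

The "bouquet" is the cloud of nearly orthogonal random columns of $\Phi$ and the "cross" is the identity block modeling gross errors; any decoder that exploits joint sparsity over this augmented dictionary can then separate signal from corruption.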

The attendant code implementation is here. Thanks, Alex!

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.