
## Thursday, June 11, 2009

### CS: The Two Stage $l_1$ Approach to the Compressed Sensing Problem, with code, Sparse Sampling

Stephane Chretien, who just released his paper on the two-stage $\ell_1$ method on arXiv, also let me know that he has released a webpage for the accompanying software. The paper is The Two Stage $l_1$ Approach to the Compressed Sensing Problem by Stephane Chretien. The abstract reads:

This paper gives new results on the recovery of sparse signals using $\ell_1$-norm minimization. We introduce a two-stage $\ell_1$ algorithm. The first step consists of the standard $\ell_1$ relaxation. The second step consists of optimizing the $\ell_1$ norm of a subvector whose components are indexed by the $\rho m$ largest components in the first stage. If $\rho$ is set to $\frac{1}{4}$, an intuitive choice motivated by the fact that $\frac{m}{4}$ is an empirical breakdown point for the plain $\ell_1$ exact recovery probability curve, Monte Carlo simulations show that the two-stage $\ell_1$ method outperforms the plain $\ell_1$.
As described on the page featuring the code:

This method is a special case of the Alternating $\ell_1$ algorithm.

The main advantage is that only two $\ell_1$-type relaxations are performed, resulting in a low computational cost compared to the original Alternating $\ell_1$ algorithm and the Reweighted $\ell_1$ algorithm of Candès, Wakin and Boyd. Moreover, simulations show that the percentage of exact recovery is almost as high as the best achieved by Reweighted $\ell_1$, without the need to tune any unknown parameter.
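To make the two-stage idea concrete, here is a minimal sketch (not Stephane Chretien's released code) using NumPy and SciPy. Each stage is the standard linear-programming lift of weighted $\ell_1$ minimization subject to $Ax = y$; the second stage zeroes the weights on the $\rho m$ largest entries found in the first stage, which amounts to minimizing the $\ell_1$ norm of the complementary subvector. The function names and this truncated-$\ell_1$ reading of the second stage are my own assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, y, w):
    """Solve min sum_i w_i |x_i| s.t. A x = y via the standard LP lift:
    variables [x, t] with |x_i| <= t_i, minimize w . t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])
    I = np.eye(n)
    # Encode  x_i - t_i <= 0  and  -x_i - t_i <= 0.
    A_ub = np.block([[I, -I], [-I, -I]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

def two_stage_l1(A, y, rho=0.25):
    """Two-stage l1 sketch: plain l1, then l1 off the estimated support."""
    m, n = A.shape
    x1 = weighted_l1_min(A, y, np.ones(n))        # stage 1: plain l1
    T = np.argsort(np.abs(x1))[-int(rho * m):]    # rho*m largest entries
    w = np.ones(n)
    w[T] = 0.0                                    # free the estimated support
    return weighted_l1_min(A, y, w)               # stage 2: truncated l1
```

On a typical compressed-sensing instance (random Gaussian $A$, a vector sparse enough for plain $\ell_1$ to succeed), both stages recover the true signal; the interesting regime in the paper is near the plain-$\ell_1$ breakdown point, where the second stage helps.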

The paper was presented at SPARS'09, but I had not mentioned it before. It is included in the section of the big picture on algorithms performing reconstruction. The code is located on this page. Stephane tells me that it will soon be translated into Matlab; it is currently written in Python.

Finally, here is a 172-page tutorial to be shown at ICASSP 2009, entitled Sparse Sampling: Theory, Algorithms and Applications, by Thierry Blu, Pier-Luigi Dragotti, Pina Marziliano, and Martin Vetterli, which features the Finite Rate of Innovation concept. If you are interested in trying it out, you may want to check Vincent Y. F. Tan and Vivek Goyal's paper entitled Estimating signals with finite rate of innovation from noisy samples: A stochastic algorithm, whose attendant Matlab code is now available here.

#### 1 comment:

Anonymous said...

Yes, it is nice to see Python in the field, especially for those who don't have Matlab. Thanks.