
## Wednesday, April 15, 2009

### CS: Analysis for Iterative Reweighted l_1, Projection proximal-point algorithm, Subsampling Algorithms, Imaging of moving targets with multi-static SAR

Weiyu Xu just sent me the following very interesting news on the re-weighted L1 mentioned a while ago:
...Please see [below] our recent paper on the analysis of the iterative reweighted l_1 minimization algorithm. This algorithm was proposed by Candes, Wakin, and Boyd in 2007, but there has not been a rigorous proof for it even though it has been observed to outperform l_1 minimization in experiments.

In this paper, we rely on new developments of the Grassmann angle analytical framework for compressive sensing and prove rigorously that this iterative reweighted l_1 minimization algorithm can indeed provably boost the signal recovery sparsity thresholds....

It is now well understood that $\ell_1$ minimization algorithm is able to recover sparse signals from incomplete measurements [2], [1], [3] and sharp recoverable sparsity thresholds have also been obtained for the $\ell_1$ minimization algorithm. However, even though iterative reweighted $\ell_1$ minimization algorithms or related algorithms have been empirically observed to boost the recoverable sparsity thresholds for certain types of signals, no rigorous theoretical results have been established to prove this fact. In this paper, we try to provide a theoretical foundation for analyzing the iterative reweighted $\ell_1$ algorithms. In particular, we show that for a nontrivial class of signals, the iterative reweighted $\ell_1$ minimization can indeed deliver recoverable sparsity thresholds larger than that given in [1], [3]. Our results are based on a high-dimensional geometrical analysis (Grassmann angle analysis) of the null-space characterization for $\ell_1$ minimization and weighted $\ell_1$ minimization algorithms.
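For readers who want to experiment, here is a minimal sketch of the Candes-Wakin-Boyd iterative reweighted $\ell_1$ scheme the paper analyzes. This is my own illustration, not the authors' code: each weighted $\ell_1$ problem is recast as a linear program and solved with `scipy.optimize.linprog`, and the stabilization constant `eps` and iteration count are arbitrary choices.

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, y, iters=4, eps=0.1):
    """Iteratively reweighted l1: min sum_i w_i |x_i| s.t. A x = y."""
    m, n = A.shape
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(iters):
        # LP over (x, t): minimize w.t  s.t.  A x = y,  -t <= x <= t
        c = np.concatenate([np.zeros(n), w])
        A_eq = np.hstack([A, np.zeros((m, n))])
        I = np.eye(n)
        A_ub = np.vstack([np.hstack([I, -I]),    #  x - t <= 0
                          np.hstack([-I, -I])])  # -x - t <= 0
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                      A_eq=A_eq, b_eq=y,
                      bounds=[(None, None)] * n + [(0, None)] * n)
        x = res.x[:n]
        # small entries get large weights, pushing them toward zero
        w = 1.0 / (np.abs(x) + eps)
    return x
```

The first pass (uniform weights) is plain $\ell_1$ minimization; subsequent passes penalize small entries more heavily, which is exactly the mechanism whose sparsity-threshold gain the paper quantifies.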
I am sure glad that this Grassmann angle framework yields more than just another of the numerous NP-hard characterizations of sparse signal recovery :-). Thank you, Weiyu.

Here is another paper investigating the proximal-point method: A projection proximal-point algorithm for l^1-minimization by Dirk A. Lorenz. The abstract reads:
The problem of the minimization of least squares functionals with $\ell^1$ penalties is considered in an infinite dimensional Hilbert space setting. While there are several algorithms available in the finite dimensional setting there are only a few of them which come with a proper convergence analysis in the infinite dimensional setting. In this work we provide an algorithm from a class which have not been considered for $\ell^1$ minimization before, namely a proximal-point method in combination with a projection step. We show that this idea gives a simple and easy to implement algorithm. We present experiments which indicate that the algorithm may perform better than other algorithms if we employ them without any special tricks. Hence, we may conclude that the projection proximal-point idea is a promising idea in the context of $\ell^1$-minimization.

I note the word "zoo" used to characterize the plentiful solvers available for CS reconstruction. That is fair to say :-) I also noted the following:

...The contribution of the paper is hence twofold: on the one hand, we add another class of algorithms, namely a proximal-point-like algorithm, to the zoo of available methods and on the other hand we provide an algorithm which is globally and strongly convergent in the infinite dimensional setting. We stress, that the purpose of this paper is not to develop an algorithm which outperforms other existing methods....

The conclusion is itself very interesting:

The projection proximal point algorithm as proposed in [20] is applicable for the problem of ℓ1-minimization. It could be shown that under the FBI assumption the algorithm even converges linearly. Both the iterated soft-thresholding and the generalized conditional gradient method are applicable methods to solve the regularized subproblems. For the iterated soft-thresholding the projection proximal-point algorithm does not lead to a significant improvement. On the theoretical side, the iterated soft-thresholding itself converges linearly [4, 14] and on the practical side, both methods behave comparable (see Figure 3). However, for the generalized conditional gradient method the situation is different. For the algorithm itself as stated in [3] the distance to the minimizer only converges like O(n−1/2) but the projection proximal-point extension gives linear convergence. Moreover, also the practical behavior is better, see Figure 3. In total, it seems that the generalized conditional gradient method combines well with the projection proximal point idea. However, the aim of this paper was not to develop the fastest available method for ℓ1-minimization but to provide an algorithm from a different class of methods. Hence, it may be expected that after considerable fine tuning (parameter choice rules, more efficient solvers for the subproblems, backtracking. . . ) the projection proximal-point algorithm will become comparable to state-of-the-art methods.

Finally we remark, that Algorithm 4 does not rest upon linearity of the operator and may also be applied to non-linear operators. Also for the solution of the subproblems either iterated soft-thresholding [1] or the generalized conditional gradient...
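The iterated soft-thresholding baseline that the conclusion compares against is simple enough to sketch in a few lines. This is a generic ISTA sketch under my own step-size choice (the inverse of the squared spectral norm of A), not code from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Componentwise shrinkage: the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, alpha, iters=500):
    """Iterated soft-thresholding for min 0.5*||Ax - y||^2 + alpha*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the quadratic part, then shrinkage
        x = soft_threshold(x + A.T @ (y - A @ x) / L, alpha / L)
    return x
```

Each iteration alternates a gradient step on the least-squares term with the $\ell^1$ proximal map; the linear convergence cited from [4, 14] refers to this plain scheme under suitable conditions.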

I think I mentioned this paper before, but now there is code associated with it: Subsampling Algorithms for Semidefinite Programming by Alexandre d'Aspremont. The abstract reads:

We derive two stochastic gradient algorithms for semidefinite optimization using randomization techniques. One is based on the robust stochastic approximation method and uses random sparsifications of the current iterate to both accelerate eigenvalue computations and reduce memory requirements. The other relies on gradient sampling techniques to reduce the per iteration cost of smooth semidefinite optimization methods.
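The "random sparsification" ingredient can be illustrated with the simplest unbiased scheme: keep each entry of the iterate with some probability and rescale the survivors. Note this uniform-probability version is only a sketch of the general idea; the paper's actual sampling probabilities are more refined.

```python
import numpy as np

def sparsify(X, p, rng):
    """Keep each entry of X independently with probability p, rescaled by 1/p.

    The result is sparse (cheaper eigenvalue computations, less memory)
    yet an unbiased estimate of X: E[sparsify(X, p)] = X.
    """
    mask = rng.random(X.shape) < p
    return np.where(mask, X / p, 0.0)
```

Because the estimate is unbiased, it can stand in for the true iterate inside a stochastic gradient scheme while only a fraction p of the entries ever need to be stored or touched.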

The source code is here.

Finally, found on arxiv: Imaging of moving targets with multi-static SAR using an overcomplete dictionary by Ivana Stojanovic, William C. Karl. The abstract reads:
This paper presents a method for imaging of moving targets using multi-static SAR by treating the problem as one of spatial reflectivity signal inversion over an overcomplete dictionary of target velocities. Since SAR sensor returns can be related to the spatial frequency domain projections of the scattering field, we exploit insights from compressed sensing theory to show that moving targets can be effectively imaged with transmitters and receivers randomly dispersed in a multi-static geometry within a narrow forward cone around the scene of interest. Existing approaches to dealing with moving targets in SAR solve a coupled non-linear problem of target scattering and motion estimation typically through matched filtering. In contrast, by using an overcomplete dictionary approach we effectively linearize the forward model and solve the moving target problem as a larger, unified regularized inversion problem subject to sparsity constraints.
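The linearization trick is easy to picture: stack one forward operator per candidate velocity into a single wide dictionary, so that a sparse solution simultaneously localizes the scatterer and selects its velocity. The sketch below uses random matrices as hypothetical stand-ins for the actual SAR projection operators, and a single matching-pursuit correlation step instead of the paper's regularized inversion:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 30
velocities = [0.0, 1.0, 2.0]

# hypothetical forward operator for each candidate target velocity
blocks = [rng.standard_normal((m, n)) for _ in velocities]
D = np.hstack(blocks)  # overcomplete dictionary, one block per velocity

# sparse scene: a single scatterer moving at velocities[1]
x_true = np.zeros(3 * n)
x_true[n + 5] = 1.0
y = D @ x_true

# one matching-pursuit step: the strongest correlation pins down both
# the scatterer location and its velocity block
idx = int(np.argmax(np.abs(D.T @ y)))
v_hat = velocities[idx // n]
```

The point is that motion estimation, normally a coupled non-linear problem, becomes block selection in a linear sparse-recovery problem once every velocity hypothesis has its own dictionary block.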

Image Credit: NASA/JPL/Space Science Institute, Saturn's rings taken 3 days ago from Cassini.