Thursday, February 28, 2013

General Matching Pursuit (GMP): Is Matching Pursuit Solving Convex Problems? - implementation -




Is Matching Pursuit Solving Convex Problems? by Mingkui Tan, Ivor W. Tsang, Li Wang. The abstract reads:
Sparse recovery (SR) has emerged as a very powerful tool for signal processing, data mining and pattern recognition. To solve SR, many efficient matching pursuit (MP) algorithms have been proposed. However, it is still not clear whether SR can be formulated as a convex problem that is solvable using MP algorithms. To answer this, in this paper, a novel convex relaxation model is presented, which is solved by a general matching pursuit (GMP) algorithm under the convex programming framework. GMP has several advantages over existing methods. First, it solves a convex problem and is guaranteed to converge to an optimum. In addition, with ℓ1-regularization, it can recover any k-sparse signal if the restricted isometry constant σ_k ≤ 0.307 − ν, where ν can be arbitrarily close to 0. Finally, when dealing with a batch of signals, the computational burden can be much reduced using a batch-mode GMP. Comprehensive numerical experiments show that GMP achieves better performance than other methods in terms of sparse recovery ability and efficiency. We also apply GMP to face recognition tasks on two well-known face databases, namely Extended Yale B and AR. Experimental results demonstrate that GMP can achieve better recognition performance than the considered state-of-the-art methods within acceptable time. In particular, the batch-mode GMP can be up to 500 times faster than the considered ℓ1 methods.
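The paper's own implementation is linked at the end of this post. As a rough, generic illustration of what a matching pursuit algorithm does for sparse recovery (this is plain orthogonal matching pursuit, not GMP itself, and the dictionary sizes here are toy-scale), a minimal NumPy sketch:

```python
import numpy as np

def omp(D, y, k, tol=1e-10):
    """Orthogonal Matching Pursuit: greedily pick k atoms of D to approximate y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Select the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# Recover a 2-sparse vector from 64 random Gaussian measurements.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
x_true = np.zeros(256)
x_true[[10, 50]] = [1.5, -2.0]
y = D @ x_true
x_hat = omp(D, y, k=2)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

The greedy atom selection plus least-squares refit is the basic template shared by the MP-family methods (OMP, CoSaMP, SP) that the paper compares against.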

Here is what Ivor states on his webpage:

- A General Matching Pursuit (GMP) is proposed for very large-scale sparse recovery problems (e.g. 100k dictionary entries).
- It can recover any k-sparse signal if the restricted isometry constant σ_k < 0.307 − ν, where ν can be arbitrarily close to 0.
- When dealing with a batch of signals, the computational burden can be much reduced using a batch-mode matching pursuit, which can be up to 500 times faster than ℓ1 methods.
- Its decoding speed is much faster than many recently developed methods, including PGH, OMPRA, FISTA, ADM, SP, CoSaMP, AIHT, OMP, and Shotgun (parallel).
- The decoding speed of GMP on face recognition is as fast as ℓ2 methods, yet it achieves more robust and better recognition performance than ℓ2 methods.
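The batch-mode speedup claimed above comes, generically, from amortizing work across many signals that share one dictionary. The sketch below is not the paper's actual batch-mode GMP; it just illustrates the idea with plain (non-orthogonal) matching pursuit: the expensive products DᵀD and DᵀY are formed once for the whole batch, and every greedy update then stays in the coefficient domain with no per-signal matrix–vector products:

```python
import numpy as np

def batch_mp(D, Y, k):
    """Plain matching pursuit run on a batch of signals (columns of Y) at once."""
    n_atoms, n_signals = D.shape[1], Y.shape[1]
    G = D.T @ D                  # Gram matrix, computed once for the whole batch
    C = D.T @ Y                  # correlations of every atom with every signal
    X = np.zeros((n_atoms, n_signals))
    cols = np.arange(n_signals)
    for _ in range(k):
        idx = np.argmax(np.abs(C), axis=0)   # best atom for each signal
        vals = C[idx, cols]                  # its correlation value
        X[idx, cols] += vals
        # Residual update in the coefficient domain: r -= val * d_idx
        # implies C -= val * G[:, idx], so no products with D are needed.
        C -= G[:, idx] * vals
    return X

# Demo with an orthonormal dictionary, where plain MP recovers exactly.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((128, 128)))
X_true = np.zeros((128, 20))
for s in range(20):
    supp = rng.choice(128, size=3, replace=False)
    X_true[supp, s] = rng.standard_normal(3)
Y = Q @ X_true
X_hat = batch_mp(Q, Y, k=3)
print(np.allclose(X_hat, X_true, atol=1e-6))
```

Because the per-iteration work is a handful of fancy-indexed array operations over all signals simultaneously, the cost per signal drops sharply as the batch grows, which is the flavor of saving the batch-mode claim refers to.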

An implementation of GMP is available here.


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
