Friday, April 27, 2012

Generalizing Compressed Sensing: GraSP implementation available




Through his Twitter feed, Petros Boufounos let us know that:

GraSP code is out: http://t.co/QnVvUfZo Go play! Here is the paper:http://t.co/dsC919S2
In plain English, the site is here, and it starts with:


Gradient Support Pursuit (GraSP)
This algorithm can be used as an approximate solver for sparsity constrained optimization problems. The algorithm generalizes the CoSaMP algorithm used in Compressed Sensing, and can handle cost functions that are non-quadratic. 
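For readers who want a feel for the iteration before opening the released code, below is a minimal NumPy sketch of GraSP as described in the paper. This is a sketch under stated assumptions, not the authors' implementation: the callback names (f_grad, f_restricted) and the fixed iteration count are illustrative choices. Each pass uses the gradient as the identification proxy (the role A^T(y - Ax) plays in CoSaMP), merges the 2s largest-magnitude gradient coordinates with the current support, minimizes the cost restricted to the merged support, and prunes the result back to s entries.

import numpy as np

def grasp(f_grad, f_restricted, n, s, iters=20):
    # Minimal sketch of Gradient Support Pursuit (GraSP).
    #   f_grad(x)       -- gradient of the cost at x (length-n array)
    #   f_restricted(T) -- hypothetical callback: minimizes the cost over
    #                      vectors supported on index set T (length-n array)
    #   n, s            -- ambient dimension and target sparsity
    x = np.zeros(n)
    for _ in range(iters):
        z = f_grad(x)                            # gradient plays the proxy role
        Z = np.argsort(np.abs(z))[-2 * s:]       # 2s largest gradient entries
        T = np.union1d(Z, np.flatnonzero(x))     # merge with current support
        b = f_restricted(T)                      # minimize cost restricted to T
        keep = np.argsort(np.abs(b))[-s:]        # prune to the s largest entries
        x = np.zeros(n)
        x[keep] = b[keep]
    return x

With a quadratic cost, the gradient step reduces to the usual CoSaMP proxy and the restricted minimization becomes a least-squares solve, which is the sense in which GraSP generalizes CoSaMP.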


The attendant supporting presentation is here. As a reminder, the paper is: Greedy Sparsity-Constrained Optimization by Sohail Bahmani, Petros Boufounos, and Bhiksha Raj. The abstract reads:
Sparsity-constrained optimization has wide applicability in machine learning, statistics, and signal processing problems such as feature selection and Compressive Sensing. A vast body of work has studied the sparsity-constrained optimization from theoretical, algorithmic, and application aspects in the context of sparse estimation in linear models where the fidelity of the estimate is measured by the squared error. In contrast, relatively less effort has been made in the study of sparsity-constrained optimization in cases where nonlinear models are involved or the cost function is not quadratic. In this paper we propose a greedy algorithm, Gradient Support Pursuit (GraSP), to approximate sparse minima of cost functions of arbitrary form. Should a cost function have a Stable Restricted Hessian (SRH) or a Stable Restricted Linearization (SRL), both of which are introduced in this paper, our algorithm is guaranteed to produce a sparse vector within a bounded distance from the true sparse optimum. Our approach generalizes known results for quadratic cost functions that arise in sparse linear regression and Compressive Sensing. We also evaluate the performance of GraSP through numerical simulations on synthetic data, where the algorithm is employed for sparse logistic regression with and without $\ell_2$-regularization.
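Since the paper's simulations use sparse logistic regression, here is a rough illustration of how such a non-quadratic cost plugs into the sketch above: the $\ell_2$-regularized logistic loss, its gradient, and a restricted minimizer built on scipy.optimize.minimize (L-BFGS over the coordinates in the merged support only). The names logistic_cost_and_grad and restricted_minimizer are hypothetical; this is not the released code.

import numpy as np
from scipy.optimize import minimize

def logistic_cost_and_grad(A, y, lam=0.0):
    # l2-regularized logistic loss for labels y in {-1, +1}:
    #   f(x) = sum_i log(1 + exp(-y_i <a_i, x>)) + (lam/2) ||x||^2
    def f(x):
        return np.sum(np.logaddexp(0.0, -y * (A @ x))) + 0.5 * lam * (x @ x)
    def grad(x):
        # sigmoid(-y_i <a_i, x>), computed in a numerically stable way
        p = np.exp(-np.logaddexp(0.0, y * (A @ x)))
        return A.T @ (-y * p) + lam * x
    return f, grad

def restricted_minimizer(f, grad, n):
    # Minimizes f over vectors supported on index set T by running
    # L-BFGS on the coordinates in T only (GraSP's inner step).
    def solve(T):
        T = np.asarray(T, dtype=int)
        def lift(v):
            x = np.zeros(n)
            x[T] = v
            return x
        res = minimize(lambda v: f(lift(v)), np.zeros(T.size),
                       jac=lambda v: grad(lift(v))[T], method="L-BFGS-B")
        return lift(res.x)
    return solve

# Toy usage: recover a 5-sparse logistic model from synthetic data.
rng = np.random.default_rng(0)
m, n, s = 200, 50, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:s] = rng.standard_normal(s)
y = np.sign(A @ x_true)
f, g = logistic_cost_and_grad(A, y, lam=0.1)
x_hat = grasp(g, restricted_minimizer(f, g, n), n, s)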



Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
