Tuesday, April 16, 2013

GAGA: GPU Accelerated Greedy Algorithms - implementation -


Jared Tanner just sent me the following:

Dear Igor, 
Hope this reaches you well. Jeff Blanchard and I have been developing GPU code for greedy compressed sensing algorithms, and it has now reached a sufficient stage of polish to be released online.
It would be great if you could mention it on Nuit Blanche. There are two accompanying papers, one on the software: 
and another on using the code to generate large amounts of data comparing the performance of algorithms, and to develop "algorithm selection maps" which indicate the quickest algorithm for a given type of matrix and sparse vector: 
This second paper was just recently posted. It is amazing how much data one can get with code this quick…
All the best,
Jared
-------
Jared Tanner
Professor of the Mathematics of Information
Mathematics Institute
University of Oxford
http://people.maths.ox.ac.uk/tanner
Thanks, Jared! From the GAGA page:

GPU Accelerated Greedy Algorithms for Compressed Sensing

Welcome to GAGA, a software package for solving large compressed sensing problems with millions of unknowns in fractions of a second by exploiting the power of graphics processing units. The current release consists of five greedy algorithms using five matrix ensembles. This release is set to compile as Matlab executables to enhance your compressed sensing research and applications. A user guide is available for download detailing the capabilities, including simple implementations for large-scale testing at problem sizes previously too computationally expensive for extensive testing.
The current version, GAGA 1.0.0, contains five algorithms and is equipped with three classes of matrix multiplication: generic dense matrices, sparse matrices, and the subsampled discrete cosine transform. For large-scale testing, there are a total of five randomly generated matrix ensembles and three randomly generated sparse vector ensembles. For applications, the algorithms are equipped to employ any dense matrix and any sparse matrix in COO format (the default in Matlab). GAGA provides massive acceleration, with up to 70x speed-ups in the algorithms' subroutines over a CPU-based Matlab implementation. For large-scale testing, the GPU-based random problem generation can offer up to 1600x acceleration.
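As a quick illustration of the matrix-free flavor of the subsampled DCT ensemble mentioned above, here is a minimal Matlab sketch of such an implicit measurement operator. This is not GAGA's code: the function names are mine, and it assumes Matlab's dct/idct (Signal Processing Toolbox) for the orthonormal transform.

function demo_subsampled_dct
% Minimal sketch: apply an m-by-n subsampled DCT without forming the matrix.
n = 2^20; m = 2^17;                   % signal length and number of measurements
rows = sort(randperm(n, m));          % random rows of the DCT kept as measurements
k = 100;
x = zeros(n, 1);
x(randperm(n, k)) = randn(k, 1);      % a k-sparse test signal
y = forward(x, rows);                 % y = A*x, at O(n log n) cost
z = adjoint(y, rows, n);              % z = A'*y, same cost
end

function y = forward(x, rows)
t = dct(x);                           % full orthonormal DCT
y = t(rows);                          % keep only the sampled rows
end

function z = adjoint(y, rows, n)
t = zeros(n, 1);
t(rows) = y;                          % scatter measurements back to full length
z = idct(t);                          % inverse = transpose for an orthonormal DCT
end

Because the transform costs O(n log n) and no m-by-n matrix is ever stored, operators of this kind are what make problem sizes in the millions tractable.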

GAGA can be downloaded from here.

The attendant papers are:


For appropriate matrix ensembles, greedy algorithms have proven to be an efficient means of solving the combinatorial optimization problem associated with compressed sensing. This paper describes an implementation for graphics processing units (GPU) of hard thresholding, iterative hard thresholding, normalized iterative hard thresholding, hard thresholding pursuit, and a two-stage thresholding algorithm based on compressive sampling matching pursuit and subspace pursuit. The GPU acceleration of the former bottleneck, namely the matrix-vector multiplications, transfers a significant portion of the computational burden to the identification of the support set. The software solves high-dimensional problems in fractions of a second, which permits large-scale testing at dimensions currently unavailable in the literature. The GPU implementations exhibit up to 70x acceleration over standard Matlab central processing unit implementations using automatic multi-threading.
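For readers unfamiliar with these methods, below is a minimal CPU-only Matlab sketch of normalized iterative hard thresholding (NIHT), following the standard formulation from the literature; it illustrates the algorithm class and is not GAGA's GPU implementation.

function x = niht(A, y, k, iters)
% Minimal sketch of Normalized Iterative Hard Thresholding (NIHT).
% A: m-by-n sensing matrix, y: m-by-1 measurements, k: target sparsity.
n = size(A, 2);
x = zeros(n, 1);
for t = 1:iters
    g = A' * (y - A*x);               % gradient of 0.5*||y - A*x||^2
    supp = find(x);                   % current support set
    if isempty(supp)                  % first pass: use top-k of the gradient
        [~, ord] = sort(abs(g), 'descend');
        supp = ord(1:k);
    end
    gs = g(supp);
    mu = (gs' * gs) / norm(A(:, supp) * gs)^2;  % normalized step size
    w = x + mu * g;                   % gradient step
    [~, ord] = sort(abs(w), 'descend');
    x = zeros(n, 1);
    x(ord(1:k)) = w(ord(1:k));        % hard threshold to the k largest entries
end
end

The hard thresholding step is what shifts the cost profile described in the abstract: once the matrix-vector products are accelerated on the GPU, identifying the k largest entries (the support set) becomes a significant share of the work.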


Compressed sensing has motivated the development of numerous sparse approximation algorithms designed to return a solution to an underdetermined system of linear equations where the solution has the fewest number of nonzeros possible, referred to as the sparsest solution. In the compressed sensing setting, greedy sparse approximation algorithms have been observed both to be able to recover the sparsest solution for similar problem sizes as other algorithms and to be computationally efficient; however, little theory is known for their average case behavior. We conduct a large-scale empirical investigation into the behavior of three of the state-of-the-art greedy algorithms: NIHT, HTP, and CSMPSP. The investigation considers a variety of random classes of linear systems. The regions of the problem size in which each algorithm is able to reliably recover the sparsest solution are accurately determined, and throughout these regions additional performance characteristics are presented. Contrasting the recovery regions and average computational time for each algorithm, we present algorithm selection maps which indicate, for each problem size, which algorithm is able to reliably recover the sparsest vector in the least amount of time. Though no one algorithm is observed to be uniformly superior, NIHT is observed to have an advantageous balance of large recovery region, absolute recovery time, and robustness of these properties to additive noise and for a variety of problem classes. The algorithm selection maps presented here are the first of their kind for compressed sensing.
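To give a sense of what a single trial behind such recovery and selection maps involves, here is a hedged Matlab sketch of one random test at a fixed problem size (k, m, n); the sizes, tolerance, and iteration count are arbitrary illustrations, and it calls the niht sketch above rather than GAGA.

% One random trial at problem size (k, m, n): draw a Gaussian matrix and a
% k-sparse vector, attempt recovery, and declare success on small relative error.
n = 1024; m = 256; k = 40;
A = randn(m, n) / sqrt(m);            % Gaussian matrix ensemble
x0 = zeros(n, 1);
x0(randperm(n, k)) = randn(k, 1);     % sparse vector with Gaussian nonzeros
y = A * x0;

tic;
xhat = niht(A, y, k, 300);            % the NIHT sketch from above
t = toc;

success = norm(xhat - x0) / norm(x0) < 1e-3;
fprintf('recovered: %d   time: %.3f s\n', success, t);

In a large-scale study, trials like this one are repeated many times over a grid of (k, m, n) values to estimate each algorithm's recovery region and average timing, which is exactly the kind of testing the GPU acceleration makes affordable.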
