Thursday, March 01, 2012

AMP on fire! An ASPICS Matlab Implementation and a VLSI/FPGA implementation

Florent Krzakala just sent me the following:

Dear Igor
We now have the long-awaited MATLAB implementation of our compressed sensing solver. It works fine and is fast (i.e. way better than L1 in most cases) with random matrices in general, and it is therefore a very good solver for compressed sensing problems (i.e. robust to noise, etc...) Of course, it can also be used with our 'magic' matrices that allow one to perform optimal compressed sensing. The distribution is at: http://aspics.krzakala.org/
Best
Florent
Thanks Florent! If you recall, ASPICS is the AMP solver built by Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, and Lenka Zdeborová which, when coupled with specific sensing matrices, brought about a stunning development in breaking the Donoho-Tanner phase transition.

As it turns out, Christoph Studer sent me the following:

Dear Igor,
we just submitted our paper "VLSI Implementation of Approximate Message Passing for Signal Restoration and Compressive Sensing," where we develop two architectures and corresponding ASIC/FPGA implementations of the AMP algorithm.
We would really appreciate it if you can feature it on Nuit Blanche.
Thank you and best regards,
Christoph


Sparse signal recovery finds use in a variety of practical applications, such as signal and image restoration and the recovery of signals acquired by compressive sensing. In this paper, we present two generic VLSI architectures that implement the approximate message passing (AMP) algorithm for sparse signal recovery. The first architecture, referred to as AMP-M, employs parallel multiply-accumulate units, which are suitable for recovery problems based on unstructured (or random) matrices. The second architecture, referred to as AMP-T, takes advantage of fast linear transforms, which arise in many real-world applications. To demonstrate the effectiveness of both architectures, we present corresponding ASIC and FPGA implementation results for an audio restoration application. We show that AMP-T is superior to AMP-M with respect to silicon area, throughput, and power consumption, whereas AMP-M offers more flexibility and can be reconfigured at run time.
Looks like the de-clicking AMP solver has already been implemented.
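For readers wondering where the two architectures spend their effort, here is a minimal, purely illustrative Python sketch of the textbook soft-thresholding AMP iteration (this is not the ASPICS code nor the fixed-point design of the paper; the threshold rule and iteration count are simple heuristics I picked for the example). The point is that each iteration is dominated by two matrix-vector products, which AMP-M maps onto parallel MAC units and AMP-T replaces with fast transforms when the matrix has that structure.

```python
import numpy as np

def soft_threshold(v, tau):
    """Componentwise soft threshold, the scalar denoiser in basic AMP."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp_recover(y, A, iters=30):
    """Textbook soft-thresholding AMP; illustrative only, not ASPICS and
    not the fixed-point AMP-M / AMP-T designs from the paper."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.astype(float).copy()
    for _ in range(iters):
        # The two matrix-vector products below dominate each iteration:
        # AMP-M streams the explicit matrix through parallel MAC units,
        # AMP-T swaps the products for fast transforms where available.
        r = x + A.T @ z
        tau = np.linalg.norm(z) / np.sqrt(m)   # simple threshold heuristic
        x = soft_threshold(r, tau)
        # Onsager correction term, which distinguishes AMP from plain IST
        b = np.count_nonzero(x) / m
        z = y - A @ x + b * z
    return x
```

With a random Gaussian matrix and a sparse unknown, a loop like this typically settles within a few tens of iterations, which is part of what makes a fixed-iteration hardware pipeline attractive.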


I note from the paper:

"...The main bottleneck of the AMP-M architecture is the memory bandwidth required to deliver the entries of D to the parallel MAC units. For unstructured (e.g., random) matrices, (multi-port) LUTs, ROMs, or on-chip S-RAMs can be used for small dimensions (in the order of a few hundred kbit). For large-dimensional problems, external memories (e.g., offchip D-RAMs) become necessary, which shifts the memory bottleneck to the bandwidth of the external memory interface. Fortunately, in many real-world applications, the matrix D is highly structured; hence, it is often possible to generate its coefficients on the fly at very high throughput. The following paragraph discusses a corresponding solution for audio restoration.."
If so, the 'magic' matrices that Florent alluded to may not need external memories for larger problems, since these measurement matrices are sparse. The same goes for the random Count-Min sparse measurement matrices of Piotr Indyk, Radu Berinde, and Milan Ružić.
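To put back-of-the-envelope numbers on that remark, here is a small Python sketch comparing the storage a dense unstructured matrix would need with that of a sparse matrix carrying a handful of nonzeros per column; the dimensions, bit widths, and nonzero count are hypothetical, picked only for illustration.

```python
import math

m, n = 1024, 4096   # hypothetical problem size, chosen only for illustration

# Dense unstructured matrix: every entry must be stored on chip or streamed
# from external memory; this is the bandwidth bottleneck quoted above.
dense_kib = m * n * 32 / 8 / 1024                 # 32-bit entries
print(f"dense matrix:  {dense_kib:.0f} KiB")      # 16384 KiB

# Sparse measurement matrix with d nonzeros per column (Count-Min style):
# only d values and their row indices per column need to be kept
# (for +/-1 entries, even the values could be dropped).
d = 8
bits_per_entry = 32 + math.ceil(math.log2(m))     # value + row index
sparse_kib = n * d * bits_per_entry / 8 / 1024
print(f"sparse matrix: {sparse_kib:.0f} KiB")     # ~168 KiB
```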

From the Big Picture in Compressive Sensing, some of the other AMP algorithms out there include:

