
Sunday, January 22, 2012

sFFT: Sparse Fast Fourier Transform

Remember the new FFT algorithm out of MIT that uses the sparsity of the signal to perform a faster FFT? Well, the authors (Dina Katabi, Piotr Indyk, Haitham Hassanieh, and Eric Price) just set up a website that will eventually host an implementation of it (Piotr tells me it should be up after the news storm has settled down). The KIHP sFFT page currently starts with:

Introduction

We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x. The problem is of key interest in several areas, including signal processing, audio/image/video compression, and learning theory. We propose a new algorithm for this problem. The algorithm leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters. The resulting algorithm is structurally simpler than its predecessors. As a consequence, we are able to extend considerably the range of sparsity, k, for which the algorithm is faster than FFT, both in theory and practice.

Algorithm

The page then gives, for each variant, its complexity and sparsity range:

sFFT 1.0 (Non-Iterative Algorithm)
sFFT 2.0 (sFFT 1.0 + Heuristic)
sFFT 3.0 (Exact k-Sparse Algorithm)
sFFT 4.0 (General k-Sparse Algorithm)
Lower Bound
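
To make the problem statement above concrete, here is a minimal numpy sketch of mine (not the authors' code, which is not out yet). It produces the reference answer the expensive way, by computing the full O(n log n) FFT and keeping the k largest-magnitude coefficients; the whole point of a sparse Fourier transform is to get essentially the same answer in time that scales with k rather than n.

    import numpy as np

    def topk_fourier_coeffs(x, k):
        # Reference answer: the k largest-magnitude Fourier coefficients of x,
        # computed the expensive way with a full O(n log n) FFT.
        X = np.fft.fft(x)
        idx = np.argsort(np.abs(X))[-k:]        # indices of the k largest |X[f]|
        return dict(zip(idx.tolist(), X[idx]))  # frequency -> coefficient

    # A length-n test signal that is exactly k-sparse in the frequency domain.
    n, k = 4096, 8
    rng = np.random.default_rng(0)
    freqs = rng.choice(n, size=k, replace=False)
    spectrum = np.zeros(n, dtype=complex)
    spectrum[freqs] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    x = np.fft.ifft(spectrum)

    print(sorted(topk_fourier_coeffs(x, k)))    # recovers the k planted frequencies
    print(sorted(freqs.tolist()))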

The figure of deep interest here, I think, is the comparison with the Fastest Fourier Transform in the West (FFTW).




The paper presented at SODA is Simple and Practical Algorithm for Sparse Fourier Transform by Haitham Hassanieh, Piotr Indyk, Dina Katabi, and Eric Price. The abstract reads:

We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x. The problem is of key interest in several areas, including signal processing, audio/image/video compression, and learning theory. We propose a new algorithm for this problem. The algorithm leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters. Unlike the typical approach to this problem, our algorithm is not iterative. That is, instead of estimating “large” coefficients, subtracting them and recursing on the remainder, it identifies and estimates the k largest coefficients in “one shot”, in a manner akin to sketching/streaming algorithms. The resulting algorithm is structurally simpler than its predecessors. As a consequence, we are able to extend considerably the range of sparsity, k, for which the algorithm is faster than FFT, both in theory and practice.
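
The paper hashes frequencies into buckets using Gaussian and Dolph-Chebyshev filters; the toy numpy sketch below (mine, not the authors' algorithm) illustrates only the general bucketization flavor with a cruder, well-known device: plain time-domain subsampling, i.e. aliasing. Subsampling x by a factor B folds every frequency f into bucket f mod (n/B), so when the spectrum is sparse most buckets receive at most one large coefficient and the heavy buckets stand out immediately. Deciding which frequency inside a heavy bucket is responsible, and doing so robustly, is where the real work (and the filters) come in.

    import numpy as np

    def alias_buckets(x, B):
        # FFT of every B-th sample: frequency f of x lands in bucket f mod (n/B),
        # scaled by 1/B, so a sparse spectrum is hashed into n/B buckets.
        return np.fft.fft(x[::B])

    n, B, k = 1024, 16, 3
    L = n // B                                  # number of buckets
    rng = np.random.default_rng(1)
    freqs = rng.choice(n, size=k, replace=False)
    spectrum = np.zeros(n, dtype=complex)
    spectrum[freqs] = 100.0
    x = np.fft.ifft(spectrum)

    Y = alias_buckets(x, B)
    heavy = np.nonzero(np.abs(Y) > 1.0)[0]      # buckets holding a large coefficient
    print(sorted(heavy.tolist()))
    print(sorted({int(f) % L for f in freqs}))  # the same buckets: f mod L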


Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

13 comments:

  1. Interesting post! I would also like to draw your attention to the previously existing code at:

    http://sourceforge.net/projects/gopherfft/

    It has been shown to be several times faster, and is significantly simpler, than AAFFT. See

    http://www.math.duke.edu/~markiwen/DukePage/Papers/JVerSub.pdf

  2. Mark,

    I don't understand; the gopher FFT is in fact an implementation of AAFFT, right?

    Igor.

  3. No -- The gopher FFT is based on a different algorithm than AAFFT, and is substantially simpler than AAFFT. The algorithm is described here in a paper which was submitted to a journal in Sept. of 2010 (and has been "under review" ever since, but that's a different story...):

    http://www.math.duke.edu/~markiwen/DukePage/Papers/Iwen_Improved.pdf

    The fastest variant has a guaranteed O(k log^4 N) runtime. If parallelized, it "should be" O(k log^2 N / log log N) time.

    Here are some of the algorithm's tradeoffs:

    Bad Things:
    It uses samples which correspond to those utilized by several smaller FFTs. This means that it does not (at least as currently implemented) take the sort of standard input data one expects an FFT to take. Thus, one must be able to sample (or choose grid points) in a nonstandard way.

    Good Things:
    1.) The fastest variant has exactly one non-sparsity parameter.
    2.) The algorithm only involves smaller FFTs, sorting, and standard Chinese remaindering techniques as subroutines (a toy sketch of the remaindering step follows this list). If carefully implemented by a professional -- the version on the website I posted previously was implemented by an undergraduate student whom I (closely) supervised -- it should be very fast.
    3.) The algorithm is highly parallelizable. In theory, all of the smaller FFTs and chinese remaindering steps mentioned above can be run independently of one another.
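
    To illustrate the remaindering step in point 2.), here is a toy numpy sketch of the general idea (a simplified illustration, not the GFFT code): if a single dominant frequency f is observed modulo several pairwise coprime small FFT lengths, the Chinese remainder theorem pins down f exactly once the product of the lengths exceeds the bandwidth n.

        import numpy as np

        def crt(residues, moduli):
            # Chinese remaindering: recover f mod prod(moduli) from the residues
            # f mod m, for pairwise coprime moduli m.
            M = 1
            for m in moduli:
                M *= m
            f = 0
            for r, m in zip(residues, moduli):
                Mi = M // m
                f += r * Mi * pow(Mi, -1, m)    # modular inverse (Python 3.8+)
            return f % M

        n, f_true = 4096, 2741                  # bandwidth and the (unknown) dominant frequency
        lengths = [7, 11, 13, 17]               # pairwise coprime; 7*11*13*17 = 17017 > n

        residues = []
        for p in lengths:
            # Sample the 1-sparse signal exp(2*pi*i*f_true*t) on a nonstandard grid of
            # p points; its length-p FFT spikes at f_true mod p.
            s = np.exp(2j * np.pi * f_true * np.arange(p) / p)
            residues.append(int(np.argmax(np.abs(np.fft.fft(s)))))

        print(residues, [f_true % p for p in lengths])
        print(crt(residues, lengths))           # -> 2741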

    Your confusion about GFFT being the same as AAFFT might be due to the fact that a slightly improved version of AAFFT also appears along with the GFFT code on the GFFT website I posted above. This is due to my difficulties organizing things, for which I apologize!

  4. Thanks Mark.

    It looks like the SODA paper mentions the improved paper. Obviously, for the comparison they must have used AAFFT. Piotr Indyk tells me that what will be released is a prototype, which I am sure can be much improved in speed once some specialists take over that task.

    I'll mention GopherFFT in the blog. Thanks for the heads-up.

    Cheers,

    Igor.

  5. Hello
    It sounds very interesting. Is there any actual implementation in Matlab or C/C++?

    Thanks

  6. Not yet, Anonymous.

    I'll mention it in the blog when it comes out.

    Cheers,

    Igor.

  7. It looks like sFFT is faster than FFTW only for large signals.
    I am a newbie to the image compression field. I believe current video compression standards apply a DCT at the macro-block (MB) level, not on the complete frame. Typically, MBs are small (16x16), so the signal size is small. Can sFFT be used to speed up video compression for applications used in practice today?

  8. Hi!
    Is there any actual implementation in Matlab at this time? I am trying to implement this algorithm in Matlab, but I still can't do it. I have spent two weeks on it ))
    I think the fft in Matlab is rather different from the fft they use. Correct me if I'm wrong.
    Thanks,
    Alexey.

  10. Alexei,

    What about: http://groups.csail.mit.edu/netmit/sFFT/code.html

    Cheers,

    Igor.

  11. Alexei,

    Yes, the MIT sFFT deals with signals whose spectrum is sparse in the first place. The FFT implementation in Matlab does not care whether the signal's spectrum is sparse or not.
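
    Put differently, a dense FFT (Matlab's fft, FFTW, numpy's) does the same O(n log n) work whether the spectrum holds 10 or 10,000 significant coefficients; that is exactly the regime a sparse FFT tries to exploit. A quick numpy check of the dense side of that statement:

        import time
        import numpy as np

        n = 2**20
        rng = np.random.default_rng(2)
        for k in (10, 10000):
            spectrum = np.zeros(n, dtype=complex)
            spectrum[rng.choice(n, size=k, replace=False)] = 1.0
            x = np.fft.ifft(spectrum)           # signal that is exactly k-sparse in frequency
            t0 = time.perf_counter()
            np.fft.fft(x)                       # dense FFT: its cost does not depend on k
            print(k, round(time.perf_counter() - t0, 4))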

    Cheers,

    Igor.

  12. Is there any Matlab code for sFFT? I am working on it.
