
## Sunday, May 26, 2013

### Sunday Morning Insight: The 200 Year Gap.

[Please see the updates at the end of this entry.]

If one takes an algorithm for finding the sparsest solution to an underdetermined linear system of equations as the starting point, then Yoram Bresler is right [1] when he points out that Gaspard de Prony may have been the first to put a stake in what we now call compressive sensing. What is so exceptional here is that the algorithm was found in 1795. Considering that the theoretical justification for some cases of compressive sensing was published in 2004-2006 (Donoho, Tao, Candès, Romberg), this is probably one of the largest time gaps between the use of an algorithm within an island of knowledge [6-8] and some theoretical justification for it. The claim is to be taken in a wider sense: is there an algorithm, together with a theoretical justification, that connects the sparsity of the solution of an underdetermined system of linear equations to the size of said system? Indeed, Larry Wasserman thinks it might win the biggest gap award. Interestingly, it might even precede the Least Squares technique [7] historically, depending on whether you are a Gauss (1795) or a Legendre (1805) fan.
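As a reminder of what that modern formulation looks like in practice, here is a minimal sketch (dimensions, seeds and variable names are illustrative, not from any of the cited papers) of basis pursuit: finding the sparsest solution of Ax = y by minimizing the ℓ1 norm, cast as a linear program via the standard splitting x = u - v with u, v ≥ 0.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, m, s = 60, 24, 4               # ambient dimension, measurements, sparsity

# s-sparse ground truth and a random Gaussian sensing matrix
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.standard_normal(s)
A = rng.standard_normal((m, N)) / np.sqrt(m)
y = A @ x

# basis pursuit: min ||x||_1 s.t. Ax = y, as an LP with x = u - v, u, v >= 0
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * N), method="highs")
x_hat = res.x[:N] - res.x[N:]
print("max recovery error:", np.max(np.abs(x_hat - x)))
```

At these parameters (s/m well below the Donoho-Tanner phase transition), the ℓ1 minimizer generically coincides with the sparse ground truth.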

Let us note what Ben mentions at the very end of that lecture:

The main drawback of Prony's method is that it is not numerically stable for large D. Moreover, it is not stable if x is nearly sparse. Also, the algorithm is very specific to the structure of the Fourier transform and does not generalize to other linear encodings.
Indeed, we are now a little beyond m = 2s thanks to the Donoho-Tanner phase transition [6], and we can deal with noise thanks to the 2004 results [2,3].
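For concreteness, Prony's observation — that an s-sparse vector can be recovered exactly from m = 2s Fourier-type samples — fits in a few lines. This is a bare-bones sketch (sizes and names are illustrative): the samples obey an order-s linear recurrence, the roots of its characteristic polynomial identify the support, and a Vandermonde solve yields the amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 64, 3                      # ambient dimension and sparsity

# build an s-sparse complex vector x
support = np.sort(rng.choice(N, size=s, replace=False))
x = np.zeros(N, dtype=complex)
x[support] = rng.standard_normal(s) + 1j * rng.standard_normal(s)

# m = 2s measurements y_k = sum_j x[n_j] z_j^k, with z_j = exp(-2*pi*i*n_j/N)
m = 2 * s
k = np.arange(m)
z = np.exp(-2j * np.pi * support / N)
y = (z[None, :] ** k[:, None]) @ x[support]

# step 1: the y_k satisfy an order-s recurrence
#   y[k+s] + a[s-1]*y[k+s-1] + ... + a[0]*y[k] = 0
A = np.array([y[i:i + s] for i in range(s)])   # s x s Hankel block
a = np.linalg.solve(A, -y[s:2 * s])

# step 2: roots of the characteristic polynomial reveal the support
roots = np.roots(np.concatenate(([1.0], a[::-1])))
support_rec = np.sort(np.round(-np.angle(roots) * N / (2 * np.pi)).astype(int) % N)

# step 3: solve a Vandermonde system for the amplitudes
z_rec = np.exp(-2j * np.pi * support_rec / N)
amp = np.linalg.lstsq(z_rec[None, :] ** k[:, None], y, rcond=None)[0]

x_rec = np.zeros(N, dtype=complex)
x_rec[support_rec] = amp
print("true support:", support, "recovered:", support_rec)
```

The instability Ben mentions is visible here: the Hankel solve and the polynomial rooting both degrade quickly with noise or growing s, which is exactly what the modern convex approaches fix.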

But, like a little tune that won't stop playing in the back of one's mind, one just wonders: what if an algorithm had been found in 1795 that was numerically stable? Would it have changed the world earlier, the way Least Squares did in the 1800s and the FFT did in the 1960s? [7]

The webpage for Ben's class is here [5].

Update 1:

Peyman Milanfar commented:

While we are minding the gap, I will point out my dissertation work in which I proved (using Prony!) that one could reconstruct a plane polygon from a finite number of its Radon transform projections. At the time I referred to such objects as being "finitely parameterized" which in today's lingo would be "sparsely represented". Not long after, the notion of signals with finite rate of innovation came out of EPFL, followed later by compressed sensing etc. and the rest is history. So I am claiming my rightful place in history as having slightly shortened said gap. :-)

P. Milanfar, G. C. Verghese, W. C. Karl, A. S. Willsky, "Reconstructing Polygons from Moments with Connections to Array Processing", IEEE Transactions on Signal Processing, vol. 43, no. 2, pp. 432-443, February 1995. http://users.soe.ucsc.edu/~milanfar/publications/journal/moment_SP.pdf
Update 2:

"I would like to add that the first appearance of Prony's method in compressed sensing I am aware of is in talks given by Daniel Potts (TU Chemnitz) and Stefan Kunis (U Osnabrück), probably in 2009. There they explicitly compared the thresholds and recovery guarantees of Prony and Basis Pursuit. However, I could not find slides or preprints from those days (although they have more recent papers on that topic)... They have a recent preprint "A sparse Prony FFT" here http://www.tu-chemnitz.de/mathematik/preprint/2013/PREPRINT.php?year=2013&num=03"

A sparse Prony FFT by Sabine Heider, Stefan Kunis, Daniel Potts, Michael Veit
We describe the application of Prony-like reconstruction methods to the problem of the sparse Fast Fourier transform (sFFT) [5]. In particular, we adapt both important parts of the sFFT, quasi random sampling and filtering techniques, to Prony-like methods.
I note from the paper that the sparse Prony FFT might fare better than some other sFFT implementations, as its runtime scales only as K^(3/2).

Of related interest, there is also:

[2] E. J. Candès, J. Romberg and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 489-509, 2006.
[3] D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory, vol. 52, no. 4, pp. 1289-1306, April 2006.
[4] CS838 Topics in Optimization: Convex Geometry in High-Dimensional Data Analysis, Lecturer: Benjamin Recht, February 4, 2010. Scribes: Matt Malloy, Zhiting Xu.
