I am slowly getting back online, but I have noticed a definite upsurge of papers now integrating the Donoho-Tanner phase transition (or something similar) in their analysis. So today is not a Deuterium-Tritium fest but a Donoho-Tanner phase transition fest, enjoy:
Universal and Efficient Compressed Sensing Strategy through Spread Spectrum Modulation by Gilles Puy, Pierre Vandergheynst, Remi Gribonval, and Yves Wiaux. The abstract reads:
We propose a universal and efficient compressed sensing strategy based on the use of a spread spectrum technique. The method essentially consists in a random pre-modulation of the signal of interest followed by projections onto randomly selected vectors of an orthonormal basis. Firstly, we show that the effectiveness of the technique is induced by a decrease of coherence between the sparsity and the sensing bases. Secondly, we prove that the sensing scheme is universal for a family of sensing bases, for which the number of measurements needed for accurate recovery is optimal and independent of the sparsity matrix. It is also efficient as sensing matrices with fast matrix multiplication algorithms can be used. These results are confirmed experimentally through analyses of the phase transition of the ℓ1-minimization problem. Finally, we illustrate the effectiveness of the technique by a study of its application to radio interferometry and magnetic resonance imaging.
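To see the coherence argument concretely, here is a minimal numpy sketch (my own illustration, not the authors' code): for the worst-case Fourier-Fourier sparsity-sensing pair, the mutual coherence is maximal at √N, and a random ±1 pre-modulation of the sparsity basis brings it down to the near-optimal O(√log N) regime.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
F = np.fft.fft(np.eye(N)) / np.sqrt(N)  # unitary Fourier basis

def coherence(Phi, Psi):
    # mutual coherence: sqrt(N) * max |<phi_k, psi_j>| for unit-norm columns
    return np.sqrt(N) * np.max(np.abs(Phi.conj().T @ Psi))

# Fourier sensing of a Fourier-sparse signal: maximal coherence sqrt(N)
mu_plain = coherence(F, F)

# random +-1 pre-modulation (a diagonal matrix) applied before sensing
d = rng.choice([-1.0, 1.0], size=N)
mu_mod = coherence(F, np.diag(d) @ F)

print(mu_plain)  # sqrt(256) = 16, the worst possible value
print(mu_mod)    # much smaller, on the order of sqrt(log N)
```

The drop from 16 to a small constant is exactly the mechanism the abstract invokes: lower coherence means fewer measurements are needed for ℓ1 recovery.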
I asked the authors the following:
Do you think the use of the DT phase transition is central in your argument/paper?
For me yes: it is the argument we are somehow trying to optimize by getting as close as possible to the phase transition of the Dirac-Fourier pair.
Yves followed through with:
... and, as a sparkling example, it is nice to see that in the presence of a random pre-modulation the DT phase transition is reached even for the Fourier-Fourier pair (we mean the "sparsity-sensing" bases pair).
Remi concluded with:
From a strict theoretical perspective, as far as I know the DT phase transition is only proved for Gaussian matrices. Here, it serves as a benchmark. Empirically we get very close to it using pre-modulation, even in the Fourier-Fourier pair. This is appealing in practice and also raises once again the theoretical question of the universality of the DT phase transition for L1 with a wide range of random systems.
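Remi's point about the DT curve serving as an empirical benchmark is easy to reproduce. The sketch below (my own hypothetical setup, not from the paper) estimates the ℓ1 recovery success rate for Gaussian matrices at undersampling ratio δ = m/n = 0.5, on either side of the predicted transition ρ ≈ 0.39, by solving basis pursuit as a linear program with scipy.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

def l1_recover(A, b):
    # basis pursuit min ||x||_1 s.t. Ax = b, as an LP in (u, v) >= 0, x = u - v
    m, n = A.shape
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

def success_rate(n, m, k, trials=20):
    ok = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        xhat = l1_recover(A, A @ x)
        ok += np.linalg.norm(xhat - x) < 1e-4 * max(1.0, np.linalg.norm(x))
    return ok / trials

# delta = m/n = 0.5; the DT curve predicts recovery up to rho = k/m ~ 0.39
rate_below = success_rate(80, 40, 4)   # rho = 0.1, well below the transition
rate_above = success_rate(80, 40, 28)  # rho = 0.7, well above it
print(rate_below, rate_above)
```

Sweeping k between these extremes traces out the sharp success-to-failure transition; the experiments in the paper do the same with pre-modulated structured matrices in place of the Gaussian A.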
That last fact hit me in the face three years ago when Jared presented his findings at Texas A&M University. In one of his slides, he had included the sparse measurement matrices of Piotr et al. (mentioned here as well). I recall my jaw dropping while I was thinking: Whaaaat the... Since then, the transition seems to have been more universal than one could think, but I can't talk about it for the moment.
Thanks Yves, Remi and Pierre.
Cosparse Analysis Modeling - Uniqueness and Algorithms by Sangnam Nam, Michael E. Davies, Michael Elad, Remi Gribonval. The abstract reads:
In the past decade there has been a great interest in a synthesis-based model for signals, based on sparse and redundant representations. Such a model assumes that the signal of interest can be composed as a linear combination of few columns from a given matrix (the dictionary). An alternative analysis-based model can be envisioned, where an analysis operator multiplies the signal, leading to a cosparse outcome. In this paper, we consider this analysis model, in the context of a generic missing data problem (e.g., compressed sensing, inpainting, source separation, etc.). Our work proposes a uniqueness result for the solution of this problem, based on properties of the analysis operator and the measurement matrix. This paper also considers two pursuit algorithms for solving the missing data problem, an L1-based and a new greedy method. Our simulations demonstrate the appeal of the analysis model, and the success of the pursuit techniques presented.
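For readers new to the cosparse viewpoint, here is a tiny numpy illustration (mine, not from the paper): a piecewise-constant signal is dense in the canonical basis, so the synthesis model does not fit it directly, yet its analysis coefficients under a finite-difference operator are sparse. The cosparsity ℓ counts the analysis rows that annihilate the signal.

```python
import numpy as np

n = 100
# analysis operator: first-order finite differences, (Omega @ x)[i] = x[i+1] - x[i]
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

# piecewise-constant signal with two jumps: dense as a vector,
# but its analysis coefficients Omega @ x vanish away from the jumps
x = np.concatenate([np.full(40, 1.5), np.full(35, -0.5), np.full(25, 2.0)])

alpha = Omega @ x
cosupport = np.abs(alpha) < 1e-12   # rows of Omega orthogonal to x
cosparsity = int(cosupport.sum())   # l = p - ||Omega x||_0
print(cosparsity)  # 97: only 2 of the 99 difference rows are nonzero
```

It is the size of this cosupport, rather than the number of nonzero synthesis coefficients, that drives the uniqueness result in the paper.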
Sparse recovery for spherical harmonic expansions by Holger Rauhut and Rachel Ward. The abstract reads:
We show that sparse spherical harmonic expansions can be efficiently recovered from a small number of randomly chosen samples on the sphere. To establish the main result, we verify the restricted isometry property of an associated preconditioned random measurement matrix using recent estimates on the uniform growth of Jacobi polynomials.
Robust Matrix Completion with Corrupted Columns by Yudong Chen, Huan Xu, Constantine Caramanis, and Sujay Sanghavi. The abstract reads:
This paper considers the problem of matrix completion, when some number of the columns are arbitrarily corrupted, potentially by a malicious adversary. It is well-known that standard algorithms for matrix completion can return arbitrarily poor results, if even a single column is corrupted. What can be done if a large number, or even a constant fraction of columns are corrupted? In this paper, we study this very problem, and develop an efficient algorithm for its solution. Our results show that with a vanishing fraction of observed entries, it is nevertheless possible to succeed in performing matrix completion, even when the number of corrupted columns grows. When the number of corruptions is as high as a constant fraction of the total number of columns, we show that again exact matrix completion is possible, but in this case our algorithm requires many more – a constant fraction – of observations. One direct application comes from robust collaborative filtering. Here, some number of users are so-called manipulators, and try to skew the predictions of the algorithm. Significantly, our results hold without any assumptions on the number, locations or values of the observed entries of the manipulated columns. In particular, this means that manipulators can act in a completely adversarial manner.
Discovery's last launch is scheduled for 15h30 Texas time, ending a series of launches that started thirty years ago. You can watch this launch here.