Looks like we are going to have a formula for one of the Donoho-Tanner phase transitions: Nonuniform Sparse Recovery with Gaussian Matrices by Ulaş Ayaz and Holger Rauhut. The abstract reads:
Compressive sensing predicts that sufficiently sparse vectors can be recovered from highly incomplete information. Efficient recovery methods such as $\ell_1$-minimization find the sparsest solution to certain systems of equations. Random matrices have become a popular choice for the measurement matrix. Indeed, near-optimal uniform recovery results have been shown for such matrices. In this note we focus on nonuniform recovery using Gaussian random matrices and $\ell_1$-minimization. We provide a condition on the number of samples in terms of the sparsity and the signal length which guarantees that a fixed sparse signal can be recovered with a random draw of the matrix using $\ell_1$-minimization. The constant $2$ in the condition is optimal, and the proof is rather short compared to a similar result due to Donoho and Tanner.
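The nonuniform setting in the abstract — a fixed sparse signal, a random Gaussian draw of the matrix, recovery by $\ell_1$-minimization — is easy to try numerically. Below is a minimal sketch (not the paper's code; the dimensions and the generous sample count are my own choices) that solves basis pursuit, $\min \|x\|_1$ s.t. $Ax = b$, as a linear program via the standard $x = u - v$ splitting:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, s, m = 60, 3, 32          # signal length, sparsity, number of Gaussian samples (hypothetical values)

A = rng.standard_normal((m, N))            # Gaussian measurement matrix
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.standard_normal(s)        # fixed s-sparse signal
b = A @ x                                  # measurements

# Basis pursuit as an LP: write x = u - v with u, v >= 0 and
# minimize 1^T(u + v) subject to A(u - v) = b.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]

print("max recovery error:", np.max(np.abs(x_hat - x)))
```

With $m$ comfortably above the phase-transition curve, as here, the random draw recovers the fixed signal exactly up to solver tolerance.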
On MathOverflow the following question was asked:
I'm wondering if compressed sensing can be applied to a problem I have in the way I describe, and also whether it should be applied to this problem (or whether it's simply the wrong tool).
I have a large network and plenty of data on each node. I want to find a set of communities that explain most data similarity and track these communities over time. In a compressed sensing formulation this amounts to the following:
- My graph's representation basis is a weighted set of communities, where each community is a subset of the set of all nodes (candidate communities can be narrowed down to a tractable number rather easily).
- Different feature measures (e.g. bigrams, topic profiles) serve as my sensing basis, with correlations between community membership and features serving as the coefficients of my measurement matrix. The big assumptions — that my feature measurements have the Restricted Isometry Property, that similarity is incoherent with community, and that similarity is a linear combination of community — are almost certainly incorrect; however, they seem plausible approximations to within (possibly significant) noise.
Ideally, I can use this strategy to describe my network as a collection of communities and to track the prominence of these communities over time. I wonder, however, if there isn't some straightforward Bayesian method that I'm overlooking.
Misc Questions about Compressed Sensing:
i) If my measurements are not linear combinations of my representation basis, but at least convex, then can I still usefully use compressed sensing?
ii) What is the meaning of the dual of basis pursuit?
iii) How does one avoid basis mismatch in general? You choose your representation basis elements beforehand, so how do you make sure they're capable of representing the signal?
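On question ii), a short sketch from standard Lagrangian duality (my own aside, not part of the quoted question): for the primal $\min_x \|x\|_1$ subject to $Ax = b$, the Lagrangian is $L(x, y) = \|x\|_1 + \langle y, b - Ax \rangle$. Minimizing over $x$ yields $\langle b, y \rangle$ when $\|A^T y\|_\infty \le 1$ and $-\infty$ otherwise, so the dual is
$$\max_y \ \langle b, y \rangle \quad \text{s.t.} \quad \|A^T y\|_\infty \le 1.$$
A dual optimal $y$ provides a "dual certificate": $A^T y$ is a subgradient of $\|\cdot\|_1$ at the solution, equal to $\mathrm{sign}(x_i)$ on the support and bounded by $1$ in magnitude off it, which is exactly the object used in many recovery proofs.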
Perhaps the answer to this question is obvious. Nonetheless, I would like to pose the following question:
Suppose we are presented with an image that has been corrupted by the addition of some small amount of white Gaussian noise. Further suppose that we also have available a uniformly random sub-sampling of the original uncorrupted image. The problem is to recover the original image, in its entirety.
Ignoring the trivial boundary cases, does a solution exist, and if so, does it map directly to a known method?
At first glance it looks like a variation on the Matrix Completion problem, but the circumstances seem to suggest the existence of a more computationally efficient approach. On the other hand, it occurs to me that we are, effectively, asking to complete the additive noise matrix, which is by no means sparse.
Can anyone enlighten me in this regard?
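One observation worth making explicit (my own toy sketch, not the questioner's setup; the 1-D "image" and noise level are hypothetical): at the sub-sampled pixels the clean values are known exactly, so those entries need no completion at all — the residual problem is denoising only the unobserved entries.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
clean = np.sin(np.linspace(0, 4 * np.pi, n))     # toy stand-in for the uncorrupted image
noisy = clean + 0.1 * rng.standard_normal(n)     # full image corrupted by white Gaussian noise
mask = rng.random(n) < 0.3                       # uniformly random sub-sample of clean pixels

# Where a clean sample is available, use it; elsewhere fall back to the noisy value.
recovered = np.where(mask, clean, noisy)

# The error is exactly zero on the sampled set and just the noise elsewhere.
print(np.abs(recovered - clean)[mask].max())   # 0.0
```

Any denoiser then only has to improve the entries where `mask` is false, with the exact samples acting as hard constraints — which is why the problem feels cheaper than generic matrix completion.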