Monday, March 30, 2009

CS: An answer to a question, Statistical RIP and Semi-Circle Distribution of Incoherent Dictionaries, Learning with Structured Sparsity


In response to a previous query in this entry, Matthew Herman wrote:

You posted a question recently from Deming Yuan (Vincent) about an upper bound on the number of missing entries in a measurement matrix. If that matrix is amenable to analysis with the RIP, then it seems that my new paper with Thomas Strohmer can be used to model this problem. Let

\hat{A} = A + E

be a complete measurement matrix (without any entries missing), where A is an incomplete matrix (with a few missing entries), and E is a perturbation to A consisting mostly of zeros except for a few nonzero entries corresponding to the locations of A which are "missing."

It would take some extra work, but in theory, one could then use this model and our results on perturbations to a measurement matrix to find an upper bound on the number of missing entries. In particular, further assumptions would have to be made on the nature of these entries (e.g., how they are grouped, if they have some kind of special structure, etc.). This is discussed briefly in the appendix of our paper "General Deviants: An Analysis of Perturbations in Compressed Sensing."


Thanks, Matthew!
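To make the model above concrete, here is a minimal numerical sketch of it. The Gaussian matrix, the sizes, and the number of missing entries are all illustrative choices of mine, not part of Matthew's analysis; a relative perturbation size of the kind computed at the end is the sort of quantity the perturbation bounds take as input.

import numpy as np

rng = np.random.default_rng(0)
m, n, n_missing = 20, 50, 5

# A complete Gaussian measurement matrix standing in for \hat{A}.
A_hat = rng.standard_normal((m, n)) / np.sqrt(m)

# Zero out a few entries to obtain the incomplete matrix A.
rows = rng.integers(0, m, size=n_missing)
cols = rng.integers(0, n, size=n_missing)
A = A_hat.copy()
A[rows, cols] = 0.0

# E is supported exactly on the missing locations, so \hat{A} = A + E.
E = A_hat - A

# More (or larger) missing entries make this ratio grow.
eps = np.linalg.norm(E, 2) / np.linalg.norm(A_hat, 2)
print(f"relative spectral perturbation: {eps:.3f}")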

Next up is Statistical RIP and Semi-Circle Distribution of Incoherent Dictionaries. The abstract reads:

In this paper we formulate and prove a statistical version of the Candes-Tao restricted isometry property (SRIP for short), which holds in general for any incoherent dictionary that is a disjoint union of orthonormal bases. In addition, we prove that, under appropriate normalization, the eigenvalues of the associated Gram matrix fluctuate around 1 according to the Wigner semicircle distribution. The result is then applied to various dictionaries that arise naturally in the setting of finite harmonic analysis, giving, in particular, a better understanding of a remark of Applebaum-Howard-Searle-Calderbank concerning RIP for the Heisenberg dictionary of chirp-like functions.
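A quick way to see the phenomenon the abstract describes is to build an incoherent dictionary as a disjoint union of two orthonormal bases and look at the spectrum of a random sub-dictionary's Gram matrix. The identity/DFT pair and the dimensions below are my own illustrative choices (this is not the Heisenberg chirp dictionary from the paper):

import numpy as np

rng = np.random.default_rng(1)
n, k = 256, 64  # ambient dimension, number of atoms sampled

# Disjoint union of two orthonormal bases: identity and unitary DFT.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
D = np.hstack([np.eye(n), F])  # n x 2n dictionary with unit-norm columns

# Gram matrix of a uniformly random sub-dictionary.
S = rng.choice(D.shape[1], size=k, replace=False)
G = D[:, S].conj().T @ D[:, S]

# The eigenvalues concentrate around 1; the SRIP statement quantifies
# how their (suitably normalized) fluctuations follow the semicircle law.
eigs = np.linalg.eigvalsh(G)
print(f"eigenvalues lie in [{eigs.min():.3f}, {eigs.max():.3f}]")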

Finally, here is Learning with Structured Sparsity by Junzhou Huang, Tong Zhang, and Dimitris Metaxas. The abstract reads:
This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea that has become popular in recent years. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. It is shown that if the coding complexity of the target signal is small, then one can achieve improved performance by using coding complexity regularization methods, which generalize the standard sparse regularization. Moreover, a structured greedy algorithm is proposed to efficiently solve the structured sparsity problem. It is shown that the greedy algorithm approximately solves the coding complexity optimization problem under appropriate conditions. Experiments are included to demonstrate the advantage of structured sparsity over standard sparsity on some real applications.
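The structured greedy algorithm mentioned in the abstract selects feature blocks while accounting for the coding complexity of the structure. The sketch below is only a simplified group-greedy pursuit on contiguous blocks, with all names and parameters invented for illustration; it conveys the flavor of structure-aware greedy selection rather than the authors' actual algorithm:

import numpy as np

def block_greedy(X, y, groups, n_groups):
    """Simplified group-greedy pursuit: at each step, pick the group of
    features most correlated with the residual, then refit by least
    squares on the selected support. A toy stand-in for the authors'
    coding-complexity-driven structured greedy algorithm."""
    residual, support = y.copy(), []
    for _ in range(n_groups):
        corr = X.T @ residual
        # Score each candidate group by the energy of its correlations.
        scores = [np.linalg.norm(corr[g]) for g in groups]
        best = int(np.argmax(scores))
        support = sorted(set(support) | set(groups[best]))
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    beta = np.zeros(X.shape[1])
    beta[support] = coef
    return beta

# Toy example: 4 contiguous groups of 5 features, 2 of them active.
rng = np.random.default_rng(2)
X = rng.standard_normal((30, 20))
groups = [list(range(i, i + 5)) for i in range(0, 20, 5)]
beta_true = np.zeros(20)
beta_true[0:5] = 1.0
beta_true[10:15] = -1.0
y = X @ beta_true
print(np.round(block_greedy(X, y, groups, n_groups=2), 2))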


Credit photo: NASA, ISS as seen from STS-119

2 comments:

  1. Learning with structured sparsity is missing a huge chunk of prior work.

  2. Dear ...V,

Are you thinking in particular of the work at Rice on block sparsity? Anything else?

If you send me additional information (through the blog or by e-mail), I'll send it to the authors, who I am sure will be delighted to augment their current arXiv preprint. I can remove your name from that process if you wish.

    Cheers,

    Igor.
