Monday, July 07, 2008

CS: A Necessary and Sufficient Condition for Measurement Matrices in CS, an Italian translation, a connection to EPH?, a course, three meetings.

Here is another presentation from the Rice group by Richard Baraniuk. It is pretty much standard, except that one slide reminded me of the set-up of a recent video (the counter is now at 251, from 52 initially!). The Rice repository also had a new addition this weekend.

Weiyu Xu, Babak Hassibi, Compressed sensing over the Grassmann manifold: A unified analytical framework. The abstract reads:

It is well known that compressed sensing problems reduce to finding the sparse solutions for large under-determined systems of equations. Although finding the sparse solutions in general may be computationally difficult, starting with the seminal work of [2], it has been shown that linear programming techniques, obtained from an l1-norm relaxation of the original non-convex problem, can provably find the unknown vector in certain instances. In particular, using a certain restricted isometry property, [2] shows that for measurement matrices chosen from a random Gaussian ensemble, l1 optimization can find the correct solution with overwhelming probability even when the support size of the unknown vector is proportional to its dimension. The paper [1] uses results on neighborly polytopes from [6] to give a “sharp” bound on what this proportionality should be in the Gaussian case. In this paper we shall focus on finding sharp bounds on the recovery of “approximately sparse” signals (also possibly for noisy measurements). While the restricted isometry property can be used to study the recovery of approximately sparse signals (and also in the presence of noisy measurements), the obtained bounds can be quite loose. On the other hand, the neighborly polytopes technique which yields sharp bounds in the ideal case cannot be generalized to approximately sparse signals. In this paper, starting from a necessary and sufficient condition for achieving a certain signal recovery accuracy, using high-dimensional geometry, we give a unified null-space Grassmannian angle-based analytical framework for compressive sensing. This new framework gives sharp quantitative tradeoffs between the signal sparsity and the recovery robustness of the l1 optimization for approximately sparse signals. As it will turn out, the neighborly polytopes result of [1] for ideally sparse signals can be viewed as a special case of ours. Our result concerns fundamental properties of linear subspaces and so may be of independent mathematical interest.


This is an important paper, as it produces a necessary and sufficient condition for a measurement matrix to enable the recovery of a sparse solution with an l1 recovery method. We talked before about how RIP is only a sufficient condition and how it is not optimal in guiding the design of certain systems. Let us hope there is an easy way to check this new condition.
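To make this concrete, here is a minimal sketch (mine, not the authors') of the classical null space property, the kind of necessary and sufficient null-space condition this line of work builds on. Random sampling of the null space can only falsify the property, not certify it, and the matrix sizes below are made up for illustration:

```python
import numpy as np
from scipy.linalg import null_space

def nsp_violated(A, k, trials=2000, seed=0):
    """Sampled check of the null space property (NSP) of order k:
    l1 minimization recovers every k-sparse vector if and only if every
    nonzero v in null(A) puts less than half of its l1 mass on any k
    coordinates. Sampling can only exhibit a violation; passing the
    check is evidence that the NSP holds, not a proof."""
    N = null_space(A)                           # orthonormal null-space basis
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        v = N @ rng.standard_normal(N.shape[1]) # random null-space vector
        mags = np.sort(np.abs(v))[::-1]
        # Worst-case support of size k = the k largest entries of |v|.
        if mags[:k].sum() >= mags[k:].sum():
            return True                         # violating direction found
    return False

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 10))                # made-up toy sizes
print("NSP falsified for k=1:", nsp_violated(A, 1))
```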


After reading this entry, an Italian reader sent me a translation for CS in Italian:

so I'm currently putting down on paper some ideas on translations of the keywords: Compressive Sampling, "Campionamento Compressivo"; Compressed/Compressive Sensing, "Acquisizione Compressiva". I think the best translation is actually "Acquisizione Compressiva".

They talk about Nuit Blanche, The Big Picture, and the List.

A course is offered in the statistics department at Oxford entitled "Compressive sampling for variable selection," taught by Nicolai Meinshausen. The course description is as follows:
In recent years, compressive sampling (or compressed sensing), invented by Emmanuel Candès, Terence Tao and David Donoho, has attracted a lot of attention, mostly in the signal processing community. The basic idea is that a set of n measurements/samples of a response variable (e.g. n pixels of a picture) can be projected into a lower-dimensional space of dimension m much less than n, for example by taking random Gaussian projections of the original samples, while the original signal can still be reconstructed from these m much less than n measurements if and only if the original signal (e.g. the image) has a sparse representation in a suitable basis (e.g. a wavelet basis for images).
In this project, it is of interest to apply the idea of compressive sampling to variable selection in regression problems with many predictor variables, using the standard framework of compressive sampling, but examining the effect of the projected dimension m (the regularisation) on predictive performance of the chosen model and on the reliability with which a correct sparse regression model can be identified. Data from biological and physical problems are available.
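To fix ideas, here is a hedged sketch of the kind of experiment the project describes: compress the n regression samples with a random Gaussian projection down to m rows, then run a sparse regression on the projected problem. The sizes, the noise level, and the choice of scikit-learn's Lasso are all my own stand-ins, not the course's:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, m = 500, 50, 100               # samples, predictors, projected dimension
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]          # only 3 of the 50 predictors are active
y = X @ beta + 0.1 * rng.standard_normal(n)

# Random Gaussian projection of the whole regression problem: m << n rows.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
X_c, y_c = Phi @ X, Phi @ y

# Sparse regression on the compressed data; alpha plays the role of the
# regularisation whose effect the project proposes to study.
model = Lasso(alpha=0.05).fit(X_c, y_c)
print("selected variables:", np.flatnonzero(model.coef_))
```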


This is interesting: Jérémie Bigot gave a presentation (in French) on "Problèmes Sparses en statistique" (sparse problems in statistics). As I went through the goals of the Research Group (GdR) called "Méthodes d'Analyse Stochastique pour les COdes et Traitements NUMériques" (stochastic analysis methods for codes and numerical simulations), I could not help but notice the connection to another subject I mentioned earlier: the Experimental Probabilistic Hypersurface. More on that later.

Three meetings, two upcoming and one past:

This workshop, sponsored by AIM and the NSF, will be devoted to frame theory in finite dimensions with an emphasis on compressive sensing and quantization theory for frames. Recent years have seen significant advances in a number of subjects in signal processing and information theory in which frame theory has played a central role. Some of the most important of these "finite world" applications are analog-to-digital (A/D) conversion, coding theory, and compressive sensing. While these subjects fit well within electrical engineering, several key contributions have been made by mathematicians who have strong interest in real-world applications, resulting in fruitful interdisciplinary research collaborations.

A frame is a collection of vectors in a Hilbert space allowing for a stable linear decomposition of any element in the space, similar to a basis expansion. However, a frame is not required to be linearly independent, and consequently, expansion coefficients can be highly non-unique. This redundancy is advantageous for applications in which additional constraints need to be imposed, such as the set of coefficients being sparse or quantized. The nature of such constraints also establishes a venue for the design of specific frames. This workshop will be concerned with the construction, properties, and applications of frames in finite dimensions.
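As a tiny numerical illustration of that redundancy, consider the Mercedes-Benz frame: three unit vectors in R^2 whose analysis coefficients over-determine any vector, which can then be recovered with the canonical dual frame (here computed via the pseudoinverse). The numbers below are mine, not the workshop's:

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors in R^2, 120 degrees apart.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # rows are frame vectors

x = np.array([0.7, -1.2])
c = F @ x                        # analysis: 3 redundant coefficients for a 2-vector

# Synthesis with the canonical dual frame, i.e. the pseudoinverse of F.
x_hat = np.linalg.pinv(F) @ c
print(np.allclose(x, x_hat))     # True: stable reconstruction despite redundancy
```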

The workshop will focus on the following specific topics:

  • Construction of good deterministic frames (measurement matrices) for compressive sensing.
  • Quantization theory for frames: rate-distortion theory, coarse quantization, redundancy vs. robustness, A/D conversion with compressive sensing.

The workshop will differ from typical conferences in some regards. Participants will be invited to suggest open problems and questions before the workshop begins, and these will be posted on the workshop website. These include specific problems on which there is hope of making some progress during the workshop, as well as more ambitious problems which may influence the future activity of the field. Lectures at the workshop will be focused on familiarizing the participants with the background material leading up to specific problems, and the schedule will include discussion and parallel working sessions.

The deadline to apply for support to participate in this workshop has passed.

For more information, email workshops@aimath.org.
International Symposium on Low Power Electronics and Design 2008, National Science Seminar Complex, Indian Institute of Science, Bangalore, India, August 11-13, 2008. A presentation entitled "Noninvasive Leakage Power Tomography of Integrated Circuits by Compressive Sensing" will be featured.

International Conference on Interdisciplinary Mathematical and Statistical Techniques - IMST 2008 / FIM XVI, May 16-18, 2008, featured a presentation entitled "Probabilistic existence theorems for group testing with lies" by Anatoly Zhigljavsky. The abstract reads:

For a wide range of search problems with lies, upper bounds on the length of optimal non-adaptive algorithms are derived. The method of deriving these bounds is probabilistic and therefore not constructive; it does not provide a way to construct the optimal algorithms. The class of search problems considered in the presentation includes many known combinatorial group testing problems, such as testing in binary, additive and multiaccess channel models.

For several general models we show that the leading asymptotic term for the number of tests does not depend on the number of lies and is thus the same as for the zero-lie case. However, the other terms in the asymptotic upper bounds depend on the number of lies and substantially influence the upper bounds in the nonasymptotic situation.

At the end of the talk, a connection will be established to the so-called 'compressed sensing', where non-adaptive random designs are used for compressing information.
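For readers unfamiliar with the setting, here is a minimal simulation of non-adaptive group testing with a random design and the simple COMP decoder, in the zero-lie case only; the sizes are invented, and the talk's probabilistic bounds cover far more general models:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 100, 60, 2                     # items, tests, defectives (made up)
truth = np.zeros(n, dtype=bool)
truth[rng.choice(n, size=d, replace=False)] = True

# Non-adaptive random design: each test pools each item with probability 0.3.
tests = rng.random((m, n)) < 0.3
outcomes = tests[:, truth].any(axis=1)   # a test is positive iff it hits a defective

# COMP decoding: any item appearing in a negative test must be clean;
# everything that survives is declared defective.
eliminated = tests[~outcomes].any(axis=0)
declared = np.flatnonzero(~eliminated)
print("true:", np.flatnonzero(truth), "declared:", declared)
```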
