
Thursday, April 24, 2008

Compressed Sensing: Nuclear Norm for Rank Minimization, Random Matrix Theory, Fundamental Limits on Sensing Capacity for Sensor Networks and CS


We have previously mentioned work that draws a strong parallel between sparse approximation and rank minimization (in CS is not just Compressed Sampling nor Compressed Sensing and in Trace: What is it good for ? - How Compressed Sensing relates to Dimensionality Reduction). Babak Hassibi, Benjamin Recht, and Weiyu Xu now go a little further in Necessary and Sufficient Conditions for Success of the Nuclear Norm Heuristic for Rank Minimization. The abstract reads:

Rank minimization–minimizing the rank of a matrix subject to constraints–is a challenging problem that arises in many control applications including controller design, realization theory and model reduction. The general formulation of rank minimization subject to convex constraints is NP-hard, and for most practical problems there are no efficient exact algorithms. A popular heuristic algorithm is to replace the rank function with the sum of the singular values of the decision variable. In this paper, we provide a necessary and sufficient condition that quantifies when this heuristic successfully finds the minimum rank solution of a linear constraint set. We further show that most of the problems of interest in control can be formulated as rank minimization subject to such linear constraints. We additionally provide a probability distribution over instances of the affine rank minimization problem such that instances sampled from this distribution satisfy our necessary conditions for success with overwhelming probability provided the number of constraints is appropriately large. Finally we give empirical evidence that these probabilistic bounds are realistic in numerical simulations.
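To make the heuristic concrete, here is a minimal sketch (my own illustration, not the authors' code) of the nuclear norm relaxation, assuming the Python package CVXPY is available: a low-rank matrix is recovered from random affine measurements by minimizing the sum of its singular values. The problem sizes below are illustrative assumptions only.

```python
# Minimal sketch of the nuclear norm heuristic for affine rank minimization, assuming CVXPY.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, m = 10, 2, 80                                   # matrix size, rank, number of constraints

X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
A = rng.standard_normal((m, n, n))                    # m random sensing matrices
b = np.array([np.sum(Ai * X_true) for Ai in A])       # affine constraints <A_i, X> = b_i

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(A[i], X)) == b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)   # minimize the sum of singular values
prob.solve()

print("relative recovery error:",
      np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```

As the abstract notes, recovery can only be expected when the number of constraints is appropriately large relative to the rank of the target.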

There is also now a second version of the CoSaMP paper by Deanna Needell and Joel Tropp.
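For readers who want to experiment, here is a minimal sketch of the CoSaMP iteration described in that paper (signal proxy, support merging, least squares estimate, pruning). The sparsity level and the Gaussian measurement matrix below are my own illustrative assumptions, not the authors' reference implementation.

```python
# Minimal CoSaMP sketch: identify, merge, estimate by least squares, prune.
import numpy as np

def cosamp(Phi, y, s, n_iter=20, tol=1e-10):
    """Recover an (approximately) s-sparse x from y = Phi @ x."""
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        residual = y - Phi @ x
        if np.linalg.norm(residual) < tol:
            break
        proxy = Phi.T @ residual                             # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]           # 2s largest proxy entries
        T = np.union1d(omega, np.flatnonzero(x))             # merge with current support
        sol, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)  # least squares on merged support
        b = np.zeros(n)
        b[T] = sol
        x = np.zeros(n)                                      # prune to the s largest entries
        keep = np.argsort(np.abs(b))[-s:]
        x[keep] = b[keep]
    return x

# Illustrative usage with a random Gaussian measurement matrix.
rng = np.random.default_rng(0)
m, n, s = 80, 256, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = cosamp(Phi, Phi @ x_true, s)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```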

The subject of Random Matrices is central to several aspects of the measurement problem in Compressed Sensing. For the theoretically inclined, here is a small compilation of what is happening in the field:

Let us first note two entries in Terry Tao's blog: Lewis lectures and On the permanent of a random Bernoulli matrix.

Following ChapterZero's links, I found these lecture notes by Roman Vershynin on Deanna Needell's page. The course is entitled Non-Asymptotic Random Matrix Theory; a quick numerical illustration of one of its themes appears right after the lecture list below.
(updated thanks to an anonymous commenter)

Lecture Notes

Attention: this new tutorial partially supersedes the old lecture notes below, see the publications page. However, geometric topics (Lectures 12-14) are not covered in the new tutorial. They are going to be included in the Lecture notes in geometric functional analysis which are being written.
These notes were taken by students, and they have not been edited.
Lecture 1. Background, Techniques, Methods. 
Lecture 2. Concentration of Measure. 
Lecture 3. Concentration of Measure (cont'd). 
Lecture 4. Dimension Reduction. 
Lecture 5. Subgaussian Random Variables. 
Lecture 6. Norm of a Random Matrix. 
Lecture 7. Largest, Smallest, Singular Values of Random Rectangular Matrices. 
Lecture 8. Dudley's Integral Inequality. 
Lecture 9. Applications of Dudley's Inequality -- Sharper Bounds for Random Matrices. 
Lecture 10. Slepian's Inequality - Sharpness Bounds for Gaussian Matrices. 
Lecture 11. Gordon's Inequality. 
Lecture 12. Sudakov's Minoration. 
Lecture 13. Sections of Convex Sets via Entropy and Volume. 
Lecture 14. Sections of Convex Sets via Entropy and Volume (cont'd). 
Lecture 15. Invertibility of Square Gaussian Matrices, Sparse Vectors. 
Lecture 16. Invertibility of Gaussian Matrices and Compressible/Incompressible Vectors. 
Lecture 17. Invertibility of Subgaussian Matrices -- Small Ball Probability via the Central Limit Theorem. 
Lecture 18. Strong Invertibility of Subgaussian Matrices and Small Ball Probability via Arithmetic Progressions. 
Lecture 19. Small Ball Probability via Sum-Sets. 
Lecture 20. The Recurrence Set (Ergodic Approach). 
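
As announced above, here is a quick numerical illustration (mine, not part of the course notes) of the non-asymptotic estimates covered around Lectures 6 and 7: the extreme singular values of a tall m x n Gaussian matrix concentrate near sqrt(m) - sqrt(n) and sqrt(m) + sqrt(n).

```python
# Extreme singular values of a tall Gaussian matrix vs. the sqrt(m) +/- sqrt(n) prediction.
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 2000, 200, 20
smin, smax = [], []
for _ in range(trials):
    G = rng.standard_normal((m, n))
    s = np.linalg.svd(G, compute_uv=False)
    smax.append(s[0])
    smin.append(s[-1])

print("predicted range:", np.sqrt(m) - np.sqrt(n), "to", np.sqrt(m) + np.sqrt(n))
print("observed smin  :", np.mean(smin))
print("observed smax  :", np.mean(smax))
```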

Other sites of interest include Jiri Matousek's page and this post by Suresh Venkatasubramanian and Piotr Indyk on the concentration of measure.

Following up on their work in sensor networks and compressed sensing (Compressed Sensing: Sensor Networks, Learning Compressed Sensing by Uncertain Component Analysis, Sparsity or Positivity ?, New CVX build), Shuchin Aeron, Manqi Zhao, and Venkatesh Saligrama have just released the following preprint on arxiv.org: Fundamental Limits on Sensing Capacity for Sensor Networks and Compressed Sensing. The abstract reads:
Modern applications of signal processing problems arising in sensor networks require efficient sensing of multi-dimensional data or phenomena. In this context it becomes important to understand fundamental performance limits of both sensing and communication between sensors. In this paper we focus primarily on sensing aspects. We propose the notion of sensing capacity to characterize the performance and effectiveness of various sensing configurations. We define Sensing Capacity as the maximal number of signal dimensions reliably identified per sensor deployed. The inverse of sensing capacity is the compression rate, i.e., the number of measurements required per signal dimension for accurate reconstruction, a concept that is of interest in compressed sensing (CS). Using information theoretic arguments we quantify sensing capacity (compression rate) as a function of the information rate of the event domain, the SNR of the observations, the desired distortion in the reconstruction and the diversity of sensors. In this paper we consider fixed SNR linear observation models for sensor network (SNET) and CS scenarios for different types of distortions motivated by detection, localization and field estimation problems. The principal difference between the SNET and CS scenarios is the way in which the signal-to-noise ratio (SNR) is accounted for. For SNET scenarios composed of many sensors, it makes sense to impose a fixed SNR for each individual sensor, which leads to row-wise normalization of the observation matrix. On the other hand, CS has dealt with column-wise normalization, i.e., the case where the vector valued observation corresponding to each signal component is normalized. The two cases lead to qualitatively different results. In the SNET scenario sensing capacity is generally small and correspondingly the number of sensors required does not scale linearly with the target density (sparsity), in contrast to the CS setting. Specifically, the number of measurements is generally proportional to the signal dimension and is weakly dependent on target density or sparsity. This raises questions about compressive gains in power-limited sensor network applications based on sparsity of the underlying source domain. For the compressed sensing scenario, we also illustrate gaps between existing state-of-the-art convex programming methods and optimal estimators.
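
The row-wise versus column-wise normalization that the abstract contrasts is easy to state in a few lines. The sketch below (my own illustration, not taken from the paper) scales the same random observation matrix in the SNET way (fixed per-sensor SNR, unit-norm rows) and in the usual CS way (unit-norm columns).

```python
# Contrast of row-wise (SNET, fixed per-sensor SNR) and column-wise (CS) normalization.
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 200                                           # sensors (measurements) x signal dimension
G = rng.standard_normal((m, n))

A_snet = G / np.linalg.norm(G, axis=1, keepdims=True)    # each sensor (row) has unit norm
A_cs   = G / np.linalg.norm(G, axis=0, keepdims=True)    # each signal component (column) has unit norm

print("SNET row norms :", np.linalg.norm(A_snet, axis=1)[:3])
print("CS column norms:", np.linalg.norm(A_cs, axis=0)[:3])
```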

Image Credit: NASA/JPL/Space Science Institute, Saturn F-ring as seen by Cassini on March 31, 2007, at a distance of about 2 million kilometers (1.2 million miles) from Saturn.

4 comments:

  1. Hi, Your links don't exist. Please review this, it's very important to get this information. Thanks

  2. Hello anonymous,

    if you had cared to use Google, you'd know that these lectures are now at:

    http://www-stat.stanford.edu/~dneedell/280.html

    Cheers,

    Igor.

  3. These have moved again. I found them on this site:
    http://www-personal.umich.edu/~romanv/teaching/2006-07/280/course.html
