Tuesday, December 03, 2013

Instance Optimality Property Extended: Fundamental performance limits for ideal decoders in high-dimensional linear inverse problems

Anthony Bourrier just sent me the following:

Hello Igor,
I submitted recently, along with four coauthors, a paper to IEEE Trans-IT about fundamental performance limits of decoders in linear inverse problems on a much more general setting than the usual sparse vectors case. In particular, among other things, it extends relationships between Instance Optimality, Null Space Property and Restricted Isometry Property for any model of vectors, in noiseless and noisy settings.
It is available on arXiv: http://arxiv.org/abs/1311.6239
Is it possible to feature it on your blog Nuit Blanche?
Thanks in advance and best regards,
I sure can feature it. But first, a small word to put this paper in context. If you recall the Sunday Morning Insight entry on A Quick Panorama of Sensing from Direct Imaging to Machine Learning, the continuum of approaches used in imaging, from direct imaging to neural-network-based classification tasks, could only come from the common vision that what you measure is what you get. From the very beginning of compressive sensing, people have modeled this "x = x" idea with what is called Instance Optimality Decoding. The following paper goes beyond the traditional CS framework and addresses this same Instance Optimality Decoding in a larger context that includes not only data on manifolds but also classification tasks (where A is not I, for instance). Without further ado, here it is: Fundamental performance limits for ideal decoders in high-dimensional linear inverse problems by Anthony Bourrier, Mike E. Davies, Tomer Peleg, Patrick Pérez, Rémi Gribonval

This paper focuses on characterizing the fundamental performance limits that can be expected from an ideal decoder given a general model, i.e., a general subset of "simple" vectors of interest. First, we extend the so-called notion of instance optimality of a decoder to settings where one only wishes to reconstruct some part of the original high-dimensional vector from a low-dimensional observation. This covers practical settings such as medical imaging of a region of interest, or audio source separation when one is only interested in estimating the contribution of a specific instrument to a musical recording. We define instance optimality relative to a model far beyond the traditional framework of sparse recovery, and characterize the existence of an instance optimal decoder in terms of joint properties of the model and the considered linear operator. Noiseless and noise-robust settings are both considered. We show, somewhat surprisingly, that the existence of noise-aware instance optimal decoders for all noise levels implies the existence of a noise-blind decoder. A consequence of our results is that, for models rich enough to contain an orthonormal basis, the existence of an L2/L2 instance optimal decoder is only possible when the linear operator is not substantially dimension-reducing. This covers well-known cases (sparse vectors, low-rank matrices) as well as a number of seemingly new situations (structured sparsity and sparse inverse covariance matrices, for instance). We exhibit an operator-dependent norm which, under a model-specific generalization of the Restricted Isometry Property (RIP), always yields a feasible instance optimality property and implies instance optimality with certain familiar atomic norms such as the L1 norm.
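For readers less familiar with the jargon, here is a rough sketch of the central notion, in my own notation rather than necessarily the authors': a decoder Delta is instance optimal for the measurement operator M and the model set Sigma if, for every vector x and every noise term e,

$$\|\Delta(Mx + e) - x\| \;\le\; A\, d(x, \Sigma) \;+\; B\, \|e\|$$

where d(x, Sigma) is the distance from x to the model set (zero when x follows the model exactly) and A, B are constants independent of x and e. In the classical compressive sensing setting, Sigma is the set of k-sparse vectors and this inequality reduces to the familiar sparse recovery guarantees; the paper characterizes when such a decoder can exist for much more general Sigma, including the case where one only wants to recover part of x.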


Image Credit: NASA/JPL/Space Science Institute
N00218248.jpg was taken on November 30, 2013 and received on Earth December 02, 2013. The camera was pointing toward TITAN at approximately 42,845 miles (68,952 kilometers) away, and the image was taken using the CL1 and CB3 filters.




Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
