It was only on his fourth voyage that Christopher Columbus actually set foot on what I would call the real American continent (not far from where the Panama Canal would stand some 400 years later).
Map of Juan de la Cosa (1500)
If one looks at last Sunday Morning Insight's entry (The Map Makers) and this week's entries, one can witness how these navigational maps are being used:
- scaling of a sensing system: Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions
- influence of noise in an algorithm capability: Compressed Sensing with Linear Correlation Between Signal and Measurement Noise
and how they potentially constrain technologies, and how they should be compared to previous work (see [2], Compressive Phase Retrieval via Generalized Approximate Message Passing).
In all these cases, the issue generally revolves around the performance of the solver, or around how the sharp phase transitions can serve as a rule of thumb for dimensionality studies. One of the most important elements of these sharp phase transitions can be thought of along a different line of investigation: ever since 2004, the whole reasoning about admissibility conditions for measurement matrices, or about the size of an underdetermined system of linear equations, has revolved around conditions like the RIP, the NSP, etc. This is a real problem when one tries to map the discretization of an inverse problem and find out whether it fits a compressive approach: none of these conditions provides any insight. In particular, they do not tell you where to locate sensors or even how to operate them.
If a matrix doesn't satisfy any of these conditions, is there still some hope of extracting the sparsest possible solution? It turns out the only way to quantify this is through the study of the sharp phase transitions. In the past two weeks, two arXiv preprints have tried to address this issue of overly coherent measurement matrices and of how phase transitions establish the performance of the actual sensing.
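To make the idea concrete, here is a minimal sketch (mine, not from either paper) of how one point of an empirical phase diagram is computed: draw random k-sparse signals, measure them with a random matrix, attempt recovery by ℓ1 minimization (basis pursuit, cast as a linear program), and record the fraction of exact recoveries. Sweeping the sparsity k and the number of measurements m traces out the sharp transition. The function names and default parameters are illustrative choices only.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b as a linear program.
    The variable is split as x = u - v with u, v >= 0, so that
    ||x||_1 = sum(u) + sum(v) at the optimum."""
    m, n = A.shape
    c = np.ones(2 * n)                # minimize sum(u) + sum(v)
    A_eq = np.hstack([A, -A])         # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

def empirical_success_rate(n=40, m=20, k=5, trials=20, tol=1e-4, seed=0):
    """Fraction of random k-sparse signals exactly recovered from m
    Gaussian measurements -- one point of an empirical phase diagram."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x[support] = rng.standard_normal(k)
        x_hat = basis_pursuit(A, A @ x)
        successes += np.linalg.norm(x_hat - x) <= tol * np.linalg.norm(x)
    return successes / trials
```

For Gaussian matrices the success rate jumps sharply from 1 to 0 as k grows past the Donoho-Tanner curve; for a structured, coherent matrix one simply substitutes it for `A` and watches where (and whether) the transition moves.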
The first paper uses tomography as defined in [1], but instead of going the traditional route of the synthesis approach, it studies the phase transition of the co-sparse problem.
Back in 2010, I wondered (and still do) whether, in order to map tomographic problems, we shouldn't use Boogie-Woogie grids to bring back some randomization into the problem. The first paper seems to have the beginning of an answer to this question, and it looks like it is mildly negative (see the result for perturbed matrices). It certainly is only the beginning of an answer, but I wonder whether it holds for CT-like tomography, which becomes nonlinear as soon as you make it a source coding system (think about it: a linear problem becomes nonlinear as soon as the excitation sources are used simultaneously, mmmmhhhhh).
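As a reminder, the co-sparse (analysis) model penalizes the sparsity of Bu, with B the discrete gradient, rather than the sparsity of u itself. Here is a hedged 1-D sketch of that recovery, with a generic Gaussian matrix standing in for the paper's tomographic projection matrix; the construction is mine, for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

def cosparse_recovery(A, b):
    """Analysis-l1 recovery: min ||D x||_1 s.t. A x = b, where D is the
    1-D forward-difference operator. Cast as an LP with auxiliary
    variables t >= |D x|."""
    m, n = A.shape
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]    # (n-1) x n differences
    p = n - 1
    # variables z = [x (free), t (>= 0)]; minimize sum(t)
    c = np.concatenate([np.zeros(n), np.ones(p)])
    A_ub = np.block([[D, -np.eye(p)], [-D, -np.eye(p)]])   # |Dx| <= t
    b_ub = np.zeros(2 * p)
    A_eq = np.hstack([A, np.zeros((m, p))])
    bounds = [(None, None)] * n + [(0, None)] * p
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=bounds, method="highs")
    return res.x[:n]

# A piecewise-constant signal: gradient-sparse (2 jumps) but not sparse
# itself, so synthesis-l1 on x would fail while analysis-l1 succeeds.
rng = np.random.default_rng(1)
n, m = 50, 35
x = np.concatenate([np.full(20, 3.0), np.full(15, -1.0), np.full(15, 2.0)])
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = cosparse_recovery(A, A @ x)
```

The interesting question the paper answers empirically is where this succeeds when A is a tomographic projection matrix, with its poor coherence properties, rather than the friendly Gaussian used here.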
Here are the papers:
Phase Transitions and Cosparse Tomographic Recovery of Compound Solid Bodies from Few Projections by Andreea Deniţiu, Stefania Petra, Claudius Schnörr, Christoph Schnörr
We study unique recovery of cosparse signals from limited-angle tomographic measurements of two- and three-dimensional domains. Admissible signals belong to the union of subspaces defined by all cosupports of maximal cardinality ℓ with respect to the discrete gradient operator. We relate ℓ both to the number of measurements and to a nullspace condition with respect to the measurement matrix, so as to achieve unique recovery by linear programming. These results are supported by comprehensive numerical experiments that show a high correlation of performance in practice and theoretical predictions. Despite poor properties of the measurement matrix from the viewpoint of compressed sensing, the class of uniquely recoverable signals basically seems large enough to cover practical applications, like contactless quality inspection of compound solid bodies composed of few materials.
From the paper:
Several observations are in order.
- Perturbation of projection matrices brings no significant advantage in the practically relevant case of unknown co-support. The empirical transitions will remain the same for perturbed and unperturbed matrices. This is very different to the ℓ1-minimization problem (1.1), where perturbation boosts the recovery performance significantly as shown in [PS13].
- In the case of known co-support, when Bu = 0 is added as additional constraint, unperturbed matrices perform better. We notice that the empirical phase transition is above the red curve, and deduce that linear dependencies might be beneficial when the co-support is known.
- When increasing the number of projecting directions (4,5,6 or more) the differences between estimated (dashed) and theoretical (continuous line) phase transition become smaller. This might be due to the fact that linear dependencies between the columns (and rows) of A become “rare”, and the assumptions of Propositions 4.1 and 4.2 are more likely to be satisfied.
- In 3D the difference between empirical phase transitions for 3 and 4 projecting directions is very small, i.e. relative phase transitions are almost equal. This is different to the 2D case above. We currently do not have an explanation for this phenomenon.
- The log-log plot in Figure 15 shows that phase transitions in 3D exhibit a power law behavior, similar to the theoretical phase transitions for ℓ1-recovery from [PS13], [PSS13]. Moreover, the plot also shows the scaling exponent of the green and red curves is higher, which results in significantly higher sparsity levels of the image gradient than image sparsity which allow exact recovery for big volumes and large d.
The second paper maps how a new solver performs with these highly coherent matrices.
Guaranteed sparse signal recovery with highly coherent sensing matrices by Guangliang Chen, Atul Divekar, Deanna Needell
Compressive sensing is a methodology for the reconstruction of sparse or compressible signals using far fewer samples than required by the Nyquist criterion. However, many of the results in compressive sensing concern random sampling matrices such as Gaussian and Bernoulli matrices. In common physically feasible signal acquisition and reconstruction scenarios such as super-resolution of images, the sensing matrix has a non-random structure with highly correlated columns. Here we present a compressive sensing type recovery algorithm, called Partial Inversion (PartInv), that overcomes the correlations among the columns. We provide theoretical justification as well as empirical comparisons.
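The PartInv algorithm itself is best read from the preprint; the sketch below only illustrates the quantity that makes these problems hard, the mutual coherence of the sensing matrix. An oversampled-DFT-style matrix (a common stand-in for the super-resolution setting; this particular construction is mine, not the paper's) has near-unit coherence, while a Gaussian matrix of the same size stays well below it:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    G = A / np.linalg.norm(A, axis=0)   # column-normalize
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)         # ignore self-correlations
    return gram.max()

rng = np.random.default_rng(0)
m, n = 32, 128
gaussian = rng.standard_normal((m, n))

# Cosines on a frequency grid 4x finer than the signal length: adjacent
# columns are nearly collinear, as in super-resolution sensing matrices.
t = np.arange(m)[:, None]
f = np.arange(n)[None, :] / (4 * n)
oversampled = np.cos(2 * np.pi * t * f)

print(mutual_coherence(gaussian))     # well below 1
print(mutual_coherence(oversampled))  # close to 1: nearly collinear columns
```

Standard coherence-based recovery guarantees are vacuous in the second case, which is precisely the regime the paper targets.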
[1] Average Case Recovery Analysis of Tomographic Compressive Sensing by Stefania Petra, Christoph Schnörr
The reconstruction of three-dimensional sparse volume functions from few tomographic projections constitutes a challenging problem in image reconstruction and turns out to be a particular instance of compressive sensing. The tomographic measurement matrix encodes the incidence relation of the imaging process, and therefore is not subject to design up to small perturbations of non-zero entries. We present an average case analysis of the recovery properties and a corresponding tail bound to establish weak thresholds, in excellent agreement with numerical experiments. Our results improve the state-of-the-art of tomographic imaging in experimental fluid dynamics by a factor of three.
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.