Within the past week, we saw two different hardware implementations of compressive sensing in the image/video realm (Compressive Light Field Photography Using Overcomplete Dictionaries And Optimized Projections; Compressive Sensing by Larry Carin, covering a compressive hyperspectral camera and compressive video; and Coded Aperture Compressive Temporal Imaging, also featured in this OSA Spotlight on Optics). All this is fine, but deep down, one of the most important issues when building hardware revolves around calibration. Thanks to the Donoho-Tanner phase transition, established in the noiseless case, we now have some real visibility on how future compressive sensors will behave. That instrument (the DT phase transition) is one of the most useful ideas since the original papers on compressive sensing in 2004, because it provides a clearer connection between hardware makers and deep mathematics without having to argue about RIP or other such nonsense sufficient admissibility conditions (I mean "nonsense" in the sense that, most of the time, they are not a directly useful metric for hardware designers).
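For readers who have not played with it, the DT phase transition is something you can probe empirically: pick an undersampling ratio δ = m/n and a sparsity ratio ρ = k/m, draw random Gaussian problems, solve the noiseless ℓ1 problem, and record how often recovery is exact. Below is a minimal sketch of one cell of such an experiment (the parameter choices and helper names are mine, not from any of the papers above); basis pursuit is cast as a linear program and solved with off-the-shelf scipy:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b as a linear program.

    Standard reformulation: write x = u - v with u, v >= 0, so that
    ||x||_1 = 1^T (u + v) and the constraint becomes [A, -A][u; v] = b.
    """
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

def empirical_success_rate(n=40, m=20, k=3, trials=20, tol=1e-4, seed=0):
    """Fraction of random trials where l1 minimization exactly recovers
    a k-sparse signal from m Gaussian measurements (one point of the
    Donoho-Tanner diagram, at delta = m/n and rho = k/m)."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x[support] = rng.standard_normal(k)
        x_hat = basis_pursuit(A, A @ x)
        if np.linalg.norm(x_hat - x) < tol * max(1.0, np.linalg.norm(x)):
            successes += 1
    return successes / trials
```

Sweeping (δ, ρ) over a grid and plotting the success rate reproduces the familiar sharp boundary; the point of the figure discussed next is what happens to that boundary once noise and miscalibration enter.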
That diagram is difficult to read because of the choice of axes, and most of the literature stops at x = 1 (the boundary between underdetermined and overdetermined systems).
This is why I like the following figure very much. Not because it features the usual Donoho-Tanner transitions, but because it shows that, given noise and calibration issues, the noisy phase transition can move past the traditional x = 1 limit, as shown in the top right diagram. It's just beautiful. One then wonders whether convex optimization does better than traditional least squares beyond x = 1 (in the overdetermined region). The figure is from version 2 of Blind Sensor Calibration in Sparse Recovery Using Convex Optimization by Cagdas Bilen, Gilles Puy, Remi Gribonval and Laurent Daudet:
Abstract—We investigate a compressive sensing system in which the sensors introduce a distortion to the measurements in the form of unknown gains. We focus on blind calibration, using measures performed on a few unknown (but sparse) signals. We extend our earlier study on real positive gains to two generalized cases (signed real-valued gains; complex-valued gains), and show that the recovery of unknown gains together with the sparse signals is possible in a wide variety of scenarios. The simultaneous recovery of the gains and the sparse signals is formulated as a convex optimization problem which can be solved easily using off-the-shelf algorithms. Numerical simulations demonstrate that the proposed approach is effective provided that sufficiently many (unknown, but sparse) calibrating signals are provided, especially when the sign or phase of the unknown gains are not completely random.
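To get a feel for why this joint recovery can be convex at all, here is a sketch for the simplest case of positive gains (my own toy reconstruction of the idea, not the authors' code; parameter choices are illustrative). With measurements y_l = diag(d) A x_l and the change of variables δ_i = 1/d_i, the constraints diag(δ) y_l = A x_l become linear jointly in (δ, x_l); adding Σ δ_i = m rules out the trivial zero solution and fixes the global scale, and minimizing Σ_l ||x_l||_1 turns the whole thing into a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def blind_calibrate(A, Y):
    """Toy blind calibration for positive gains: Y[:, l] = diag(d) @ A @ X[:, l].

    With delta = 1/d, solve the linear program
        min sum_l ||X[:, l]||_1
        s.t. diag(delta) Y[:, l] = A X[:, l] for all l,  sum(delta) = m,
    splitting X = U - V with U, V >= 0. Recovery is up to a global scale.
    """
    m, n = A.shape
    L = Y.shape[1]
    nvar = 2 * n * L + m                       # variables: U, V, delta
    c = np.ones(nvar)
    c[2 * n * L:] = 0.0                        # no cost on the gains
    A_eq = np.zeros((m * L + 1, nvar))
    b_eq = np.zeros(m * L + 1)
    for l in range(L):
        rows = slice(l * m, (l + 1) * m)
        A_eq[rows, l * n:(l + 1) * n] = A                      # + A @ u_l
        A_eq[rows, n * L + l * n:n * L + (l + 1) * n] = -A     # - A @ v_l
        A_eq[rows, 2 * n * L:] = -np.diag(Y[:, l])             # - diag(y_l) @ delta
    A_eq[-1, 2 * n * L:] = 1.0                 # sum(delta) = m fixes the scale
    b_eq[-1] = m
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    delta = res.x[2 * n * L:]
    X_hat = (res.x[:n * L] - res.x[n * L:2 * n * L]).reshape(L, n).T
    return 1.0 / delta, X_hat                  # gains (up to scale) and signals
```

The signed and complex gain cases treated in the paper need more care (the sign/phase ambiguity is the whole difficulty), but this positive-gain toy already shows the appeal: one LP, off-the-shelf solver, no alternating minimization.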
Let us note, in a different realm, version 2 of this paper:
In this work we address the problem of blindly reconstructing compressively sensed signals by exploiting the co-sparse analysis model. In the analysis model it is assumed that a signal multiplied by an analysis operator results in a sparse vector. We propose an algorithm that learns the operator adaptively during the reconstruction process. The arising optimization problem is tackled via a geometric conjugate gradient approach. Different types of sampling noise are handled by simply exchanging the data fidelity term. Numerical experiments are performed for measurements corrupted with Gaussian as well as impulsive noise to show the effectiveness of our method.
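For those unfamiliar with the co-sparse analysis model mentioned in this abstract, a minimal illustration (my own, with made-up numbers): a piecewise-constant signal is dense in the usual sense, but applying an analysis operator — here the first-order finite-difference operator familiar from total variation — yields a very sparse vector:

```python
import numpy as np

n = 100
# A piecewise-constant signal: dense, but with only two jumps
x = np.concatenate([np.full(40, 2.0), np.full(35, -1.0), np.full(25, 0.5)])

# Analysis operator Omega: first-order finite differences, shape (n-1, n),
# so that (Omega @ x)[i] = x[i+1] - x[i]
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

z = Omega @ x                 # analysis coefficients
print(np.count_nonzero(x))    # 100 -> the signal itself is dense
print(np.count_nonzero(z))    # 2   -> co-sparse: only the two jumps survive
```

The synthesis model would instead posit x = D a with a sparse coefficient vector a; the analysis model asks that Omega x be sparse, and the paper above learns Omega adaptively during reconstruction rather than fixing it in advance.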
I wonder how the Donoho-Tanner phase transition diagram changes when using analysis-based methods?
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.