First, we have two papers from Duke focused on implementing compressive sensing in new kinds of hardware. The first paper, entitled Fast disambiguation of superimposed images for increased field of view, by Roummel Marcia, Changsoon Kim, Jungsang Kim, David Brady, and Rebecca Willett, evaluates reconstruction techniques for multiplexed images.
The abstract reads:
Many infrared optical systems in wide-ranging applications such as surveillance and security frequently require large fields of view. Often this necessitates a focal plane array (FPA) with a large number of pixels, which, in general, is very expensive. In this paper, we propose a method for increasing the field of view without increasing the pixel resolution of the FPA by superimposing the multiple subimages within a scene and disambiguating the observed data to reconstruct the original scene. This technique, in effect, allows each subimage of the scene to share a single FPA, thereby increasing the field of view without compromising resolution. To disambiguate the subimages, we develop wavelet regularized reconstruction methods which encourage sparsity in the solution. We present results from numerical experiments that demonstrate the effectiveness of this approach.
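To make the superposition idea concrete, here is a toy 1-D sketch (not the authors' code) of the measurement model and a sparsity-regularized reconstruction: two "subimages" that are sparse in an orthonormal basis are each modulated by a random mask, summed on a shared detector, and then disambiguated by ISTA, a simple proximal-gradient solver. The random orthonormal basis stands in for the wavelet basis of the paper, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 128, 4

# A random orthonormal basis Psi stands in for the paper's wavelet basis.
Psi, _ = np.linalg.qr(rng.normal(size=(n, n)))

# Two subimages (1-D toy version), each k-sparse in that basis.
c1 = np.zeros(n); c1[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
c2 = np.zeros(n); c2[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
x1, x2 = Psi @ c1, Psi @ c2

# Each optical path gets its own random +/-1 modulation before both subimages
# land on the shared focal plane: the FPA records only the superposition y.
d1 = rng.choice([-1.0, 1.0], size=n)
d2 = rng.choice([-1.0, 1.0], size=n)
y = d1 * x1 + d2 * x2                      # n measurements, 2n unknowns

# Disambiguate via sparsity-regularized least squares on the stacked
# coefficients c = [c1; c2], solved by ISTA (soft-thresholded gradient steps).
B = np.hstack([d1[:, None] * Psi, d2[:, None] * Psi])   # y = B @ c
lam, step = 1e-2, 0.5                      # B @ B.T = 2 I, so step = 1/2 is safe
c = np.zeros(2 * n)
for _ in range(2000):
    c = c - step * B.T @ (B @ c - y)
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)

# Both subimages are recovered from the single superimposed observation.
x1_hat, x2_hat = Psi @ c[:n], Psi @ c[n:]
rel_err1 = np.linalg.norm(x1_hat - x1) / np.linalg.norm(x1)
rel_err2 = np.linalg.norm(x2_hat - x2) / np.linalg.norm(x2)
print(rel_err1, rel_err2)
```

The key point of the sketch is that without the two distinct masks the coordinate-wise sum y = x1 + x2 would be hopelessly ambiguous; the masks plus sparsity are what make disambiguation possible.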
I like it because I think this disambiguation technique can be used for other purposes. A follow-up to a paper I refer to often is Compressive coded aperture video reconstruction by Roummel Marcia and Rebecca Willett. The abstract reads:
This paper concerns compressive sensing methods for overcoming the pixel-limited resolution of digital video imaging systems. Recent developments in coded aperture mask designs have led to the reconstruction of static images from a single, low-resolution, noisy observation image. Our methods apply these coded mask designs to each video frame and use compressive sensing optimization techniques for enhanced resolution digital video recovery. We demonstrate that further improvements can be attained by solving for multiple frames simultaneously, even when the total computation time budget is held fixed.
MSE values and the related videos for each method used in the paper can be found here.
Here is the arrival of a new and potentially very fast solver, the Expander Matching Pursuit algorithm, featured and explained in Near-Optimal Sparse Recovery in the L1 norm by Piotr Indyk and Milan Ruzic. The abstract reads:
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x ∈ R^n from its lower-dimensional sketch Ax ∈ R^m. Specifically, we focus on the sparse recovery problem in the L1 norm: for a parameter k, given the sketch Ax, compute an approximation x̂ of x such that the L1 approximation error ||x - x̂||_1 is close to min_x' ||x - x'||_1, where x' ranges over all vectors with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years. Many solutions to this problem have been discovered, achieving different trade-offs between various attributes, such as the sketch length, encoding and recovery times. In this paper we provide a sparse recovery scheme which achieves close to optimal performance on virtually all attributes (see Figure 1). In particular, this is the first scheme that guarantees O(k log(n/k)) sketch length and near-linear O(n log(n/k)) recovery time simultaneously. It also features low encoding and update times, and is noise-resilient.
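The benchmark in that error bound, min over k-sparse x' of ||x - x'||_1, has a simple closed form: it is the ℓ1 mass of everything outside the k largest-magnitude entries. A small illustrative helper (my own, not from the paper):

```python
import numpy as np

def best_k_term_error_l1(x, k):
    """min over k-sparse x' of ||x - x'||_1: keep the k largest magnitudes
    (incurring zero error there) and sum the magnitudes of the rest."""
    mags = np.sort(np.abs(np.asarray(x, dtype=float)))  # ascending order
    return mags[:-k].sum() if k > 0 else mags.sum()

# The two dominant entries (5 and -3) are matched exactly; the tail remains.
print(best_k_term_error_l1([5.0, -3.0, 0.5, 0.25, -0.25], 2))  # → 1.0
```

For exactly k-sparse x this benchmark is zero, which is why such guarantees promise exact recovery in the noiseless sparse case.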
Also found in the Rice repository a paper by Rayan Saab and Ozgur Yilmaz entitled Sparse recovery by non-convex optimization - instance optimality. The abstract reads:
In this note, we address the theoretical properties of Δ_p, a class of compressed sensing decoders that rely on ℓp minimization with p ∈ (0, 1) to recover estimates of sparse and compressible signals from incomplete and inaccurate measurements. In particular, we extend the results of Candes, Romberg and Tao [3] and Wojtaszczyk [30] regarding the decoder Δ_1, based on ℓ1 minimization, to Δ_p with p ∈ (0, 1). Our results are two-fold. First, we show that under certain sufficient conditions that are weaker than the analogous sufficient conditions for Δ_1, the decoders Δ_p are robust to noise and stable in the sense that they are (2, p) instance optimal. Second, we extend the results of Wojtaszczyk to show that, like Δ_1, the decoders Δ_p are (2, 2) instance optimal in probability provided the measurement matrix is drawn from an appropriate distribution. While the extension of the results of [3] to the setting where p ∈ (0, 1) is straightforward, the extension of the instance optimality in probability result of [30] is non-trivial. In particular, we need to prove that the LQ1 property, introduced in [30] and shown to hold for Gaussian matrices and matrices whose columns are drawn uniformly from the sphere, generalizes to an LQp property for the same classes of matrices. Our proof is based on a result by Gordon and Kalton [18] about the Banach-Mazur distances of p-convex bodies to their convex hulls.
Beyond the good results of the paper, I note the attempt at determining the Restricted Isometry Constants through a Monte Carlo method, and the fact that the graph above seems to show how insufficient the Restricted Isometry Property really is.
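For readers curious what an ℓp decoder with p < 1 might look like in practice, here is a toy sketch (my own, not the authors' decoder) using iteratively reweighted least squares in the style of Chartrand and Yin: each iterate solves a weighted least-squares problem whose weights come from the previous iterate, with the smoothing parameter eps annealed toward zero. All sizes and schedules are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, p = 40, 100, 8, 0.5

# Sparse ground truth and a Gaussian measurement matrix; sketch y = A @ x_true.
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true

# IRLS for min ||x||_p^p subject to Ax = y. Each weighted least-squares
# subproblem has the closed form x = D A^T (A D A^T)^{-1} y, where the
# diagonal D = (x^2 + eps)^{1 - p/2} comes from the previous iterate.
x = np.linalg.lstsq(A, y, rcond=None)[0]   # start from the least-norm solution
eps = 1.0
for _ in range(300):
    d = (x**2 + eps) ** (1.0 - p / 2.0)    # diagonal of D
    x = d * (A.T @ np.linalg.solve((A * d) @ A.T, y))
    eps *= 0.95                            # anneal the smoothing toward zero

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The annealing of eps is what keeps the nonconvex p < 1 iteration from locking onto a poor local minimum too early; this continuation trick is standard in the IRLS literature rather than specific to this paper.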
This note closes a gap in the proof of the main lemma of the paper “Just Relax: Convex Programming Methods for Identifying Sparse Signals in Noise.”
And finally, there is a report on New Directions in Complex Data Analysis for Emerging Applications mentioning compressive sensing.