Monday, December 19, 2011

Compressive Sensing This Week

Today we have a few papers and presentations. First, I just found some slides by Gabriel Peyré on Compressed Sensing.

Next, you'd think the following paper is not about compressive sensing, and you'd be very wrong: 4%! With a Hadamard transform, I wonder if this number can go down?


Abstract. A compressive sensing method combined with decomposition of a matrix formed with image frames of a surveillance video into low rank and sparse matrices is proposed to segment the background and extract moving objects in a surveillance video. The video is acquired by compressive measurements, and the measurements are used to reconstruct the video by a low rank and sparse decomposition of the matrix. The low rank component represents the background, and the sparse component is used to identify moving objects in the surveillance video. The decomposition is performed by an augmented Lagrangian alternating direction method. Experiments are carried out to demonstrate that moving objects can be reliably extracted with a small number of measurements.
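To fix ideas, here is a minimal NumPy sketch of the low rank + sparse split solved with an augmented Lagrangian alternating direction method (robust PCA style). It assumes fully observed frames, so the compressive-measurement front end of the paper is not modeled, and the function name, parameter defaults and iteration count are my own assumptions, not the authors' code:

```python
import numpy as np

def soft(x, tau):
    """Elementwise soft-thresholding (shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def low_rank_sparse_split(D, lam=None, mu=None, n_iter=100):
    """Minimal sketch of the split D ~ L + S by an augmented Lagrangian
    alternating direction method.  D has one vectorized video frame per
    column and is assumed fully observed (no compressive measurements)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))           # usual RPCA weight
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        # L-update: singular value thresholding
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft(s, 1.0 / mu)) @ Vt
        # S-update: elementwise shrinkage (the moving objects)
        S = soft(D - L + Y / mu, lam / mu)
        # dual ascent on the constraint D = L + S
        Y = Y + mu * (D - L - S)
    return L, S   # L: background, S: moving objects
```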




Weighted Measurement Reuse for Compressive Video Sensing Reconstruction by Ivan Lee. The abstract reads:
Compressive sensing is an emerging technique which samples signals with sparsity below the Nyquist sampling rate. For video compression, encoding individual frames independently using compressive sensing is inefficient since inter-frame similarities in the temporal domain are not utilised. This paper investigates the performance of weighted compressive video sensing, which brings the function of predicted frames and bi-directional predicted frames in traditional video codecs to compressive video sensing. The weighted compressive video sensing applies only to the decoder to reconstruct video frames, without increasing the computational complexity for compressive video sensors. This paper examines weighted measurement reuse in different scenarios: with the previous frame only, the subsequent frame only, and both previous and subsequent frames.
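As a rough illustration of the decoder-side idea, here is a hypothetical sketch in which measurements borrowed from the previous and subsequent frames are down-weighted and stacked with the current frame's measurements; a simple ridge solve stands in for the paper's sparsity-based reconstruction, and every name and weight below is an assumption of mine:

```python
import numpy as np

def reconstruct_with_reuse(Phi, y_cur, y_prev=None, y_next=None,
                           w_prev=0.5, w_next=0.5, lam=1e-2):
    """Hypothetical decoder-side sketch of weighted measurement reuse: the
    current frame is reconstructed from its own measurements plus
    down-weighted measurements borrowed from the previous and/or subsequent
    frames (all taken with the same sensing matrix Phi).  A ridge solve
    stands in for the paper's actual reconstruction."""
    rows, rhs = [Phi], [y_cur]
    if y_prev is not None:
        rows.append(w_prev * Phi); rhs.append(w_prev * y_prev)
    if y_next is not None:
        rows.append(w_next * Phi); rhs.append(w_next * y_next)
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    n = Phi.shape[1]
    # regularized least squares; a sparsity-promoting solver could be substituted
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```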


Ori Katz let me know of the following compressive sensing hardware: Compressive laser ranging by Wm. Randall Babbitt, Zeb W. Barber, and Christoffer Renner
Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
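Here is a sketch of what I understand the measurement model to be (the reconstruction would then be any standard sparse solver applied to the equivalent sensing matrix); the pattern lengths, the +/-1 alphabet and all names below are assumptions on my part:

```python
import numpy as np

def ranging_measurements(range_profile, n_meas, seed=0):
    """Sketch of the compressive ranging measurement model only: the return
    is the pseudorandom binary transmit pattern convolved with a sparse range
    profile, and each low-bandwidth detector output integrates that return
    against a different pseudorandom binary receive modulation."""
    rng = np.random.default_rng(seed)
    n = range_profile.size
    tx = rng.choice([-1.0, 1.0], size=n)                  # transmit pattern
    ret = np.convolve(tx, range_profile)[:n]              # return waveform (truncated)
    Rx = rng.choice([-1.0, 1.0], size=(n_meas, n))        # receive modulations
    y = Rx @ ret                                          # integrated detections
    # convolution with tx written as a matrix, so that y = A @ range_profile
    T = np.column_stack([np.concatenate([np.zeros(i), tx[:n - i]]) for i in range(n)])
    A = Rx @ T
    return y, A
```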

Noise Robust Joint Sparse Recovery using Compressive Subspace Fitting by Jong Min Kim, Ok Kyun Lee, Jong Chul Ye. The abstract reads:
We study a multiple measurement vector (MMV) problem where multiple signals share a common sparse support set and are sampled by a common sensing matrix. Although we can expect that joint sparsity can improve the recovery performance over a single measurement vector (SMV) problem, compressive sensing (CS) algorithms for MMV exhibit performance saturation as the number of multiple signals increases. Recently, to overcome these drawbacks of CS approaches, hybrid algorithms that optimally combine CS with sensor array signal processing using a generalized MUSIC criterion have been proposed. While these hybrid algorithms are optimal for critically sampled cases, they are not efficient in exploiting the redundant sampling to improve noise robustness. Hence, in this work, we introduce a novel subspace fitting criterion that extends the generalized MUSIC criterion so that it exhibits near-optimal behaviors for various sampling conditions. In addition, the subspace fitting criterion leads to two alternative forms of compressive subspace fitting (CSF) algorithms with forward and backward support selection, which significantly improve the noise robustness. Numerical simulations show that the proposed algorithms can nearly achieve the optimum.
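For readers who want a concrete baseline for the MMV setting, here is simultaneous OMP in a few lines of NumPy. This is the standard greedy baseline, not the compressive subspace fitting algorithm of the paper:

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: greedily selects a common support of size k shared
    by all measurement vectors (columns of Y) sampled with the same matrix A.
    Rows of X_s follow the order in which atoms were selected."""
    support, R = [], Y.copy()
    for _ in range(k):
        # pick the atom most correlated with the residuals of all channels jointly
        corr = np.linalg.norm(A.T @ R, axis=1)
        corr[support] = -np.inf            # do not reselect atoms
        support.append(int(np.argmax(corr)))
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s
    return support, X_s
```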

This paper proposes a compressed sensing (CS) framework for the acquisition and reconstruction of frequency-sparse signals with chaotic dynamical systems. The sparse signal acts as an excitation term of a discrete-time chaotic system and the compressed measurement is obtained by downsampling the system output. The reconstruction is realized through the estimation of the excitation coefficients with the principle of impulsive chaos synchronization. The l1-norm regularized nonlinear least squares is used to find the estimation. The proposed framework is easily implementable and creates secure measurements. The Henon map is used as an example to illustrate the principle and the performance.
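To make the forward model concrete, here is a small sketch where a sparse excitation drives a Henon map and the output is downsampled; how the excitation enters the map and the downsampling factor are assumptions, and the l1-regularized nonlinear least-squares reconstruction is omitted:

```python
import numpy as np

def henon_cs_measure(u, a=1.4, b=0.3, down=4):
    """Sketch of the forward (measurement) model only: a sparse excitation u
    drives a Henon map and the system output is downsampled to give the
    compressive measurements."""
    x, y = 0.0, 0.0
    out = []
    for un in u:
        x, y = 1.0 - a * x * x + y + un, b * x   # excited Henon iteration
        out.append(x)
    return np.asarray(out)[::down]                # downsampled system output
```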


Prior image constrained compressed sensing: Implementation and performance evaluation by Pascal Thériault Lauzier, Jie Tang, and Guang-Hong Chen. The abstract reads:
Purpose: Prior image constrained compressed sensing (PICCS) is an image reconstruction framework which incorporates an often available prior image into the compressed sensing objective function. The images are reconstructed using an optimization procedure. In this paper, several alternative unconstrained minimization methods are used to implement PICCS. The purpose is to study and compare the performance of each implementation, as well as to evaluate the performance of the PICCS objective function with respect to image quality.
Methods: Six different minimization methods are investigated with respect to convergence speed and reconstruction accuracy. These minimization methods include the steepest descent (SD) method and the conjugate gradient (CG) method. These algorithms require a line search to be performed. Thus, for each minimization algorithm, two line searching algorithms are evaluated: a backtracking (BT) line search and a fast Newton-Raphson (NR) line search. The relative root mean square error is used to evaluate the reconstruction accuracy. The algorithm that offers the best convergence speed is used to study the performance of PICCS with respect to the prior image parameter α and the data consistency parameter λ. PICCS is studied in terms of reconstruction accuracy, low-contrast spatial resolution, and noise characteristics. A numerical phantom was simulated and an animal model was scanned using a multirow detector computed tomography (CT) scanner to yield the projection datasets used in this study.
Results: For λ within a broad range, the CG method with Fletcher-Reeves formula and NR line search offers the fastest convergence for an equal level of reconstruction accuracy. Using this minimization method, the reconstruction accuracy of PICCS was studied with respect to variations in α and λ. When the number of view angles is varied between 107, 80, 64, 40, 20, and 16, the relative root mean square error reaches a minimum value for α ≈ 0.5. For values of α near the optimal value, the spatial resolution of the reconstructed image remains relatively constant and the noise texture is very similar to that of the prior image, which was reconstructed using the filtered backprojection (FBP) algorithm.
Conclusions: Regarding the performance of the minimization methods, the nonlinear CG method with NR line search yields the best convergence speed. Regarding the performance of the PICCS image reconstruction, three main conclusions can be reached. (1) The performance of PICCS is optimal when the weighting parameter of the prior image parameter is selected to be near α = 0.5. (2) The spatial resolution measured for static objects in images reconstructed using PICCS from undersampled datasets is not degraded with respect to the fully-sampled reconstruction for α near its optimal value. (3) The noise texture of PICCS reconstructions is similar to that of the prior image, which was reconstructed using the conventional FBP method.
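For the curious, here is a sketch of the unconstrained PICCS objective as I read it from the abstract, written with a smoothed l1 norm so that steepest descent or conjugate gradients with a line search can be applied; the finite-difference sparsifying transform and the `forward` operator are placeholders of my own, not the authors' implementation:

```python
import numpy as np

def grad2d(x):
    """Finite-difference gradients used here as the sparsifying transform
    (an assumption; PICCS typically uses total variation or similar)."""
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    return gx, gy

def piccs_objective(x, x_prior, forward, y, alpha=0.5, lam=1.0, eps=1e-8):
    """Sketch of the unconstrained PICCS objective
        alpha * ||Psi (x - x_prior)||_1 + (1 - alpha) * ||Psi x||_1
        + lam * ||forward(x) - y||_2^2
    with a smoothed l1 norm.  `forward` is a caller-supplied projection
    operator (hypothetical here); y holds the measured projection data."""
    def smooth_l1(g):
        gx, gy = g
        return np.sum(np.sqrt(gx**2 + gy**2 + eps))
    prior_term  = smooth_l1(grad2d(x - x_prior))
    sparse_term = smooth_l1(grad2d(x))
    data_term   = np.sum((forward(x) - y) ** 2)
    return alpha * prior_term + (1 - alpha) * sparse_term + lam * data_term
```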

Laurent let me know that, just before MIA2012, Justin Romberg will give a course on compressive sensing in Lyon, the capital city of gastronomy:


Lecturer : Justin Romberg (Georgia Institute of Technology, Atlanta)
Outline of the school: “The dogma of signal processing maintains that a signal must be sampled at a rate at least twice its highest frequency in order to be represented without error. However, in practice, we often compress the data soon after sensing, trading off signal representation complexity (bits) for some error (consider JPEG image compression in digital cameras, for example). Clearly, this is wasteful of valuable sensing resources. Over the past few years, a new theory of “compressive sensing” has begun to emerge, in which the signal is sampled (and simultaneously compressed) at a greatly reduced rate.
Compressive sensing is also referred to in the literature by the terms: compressed sensing, compressive sampling, and sketching/heavy-hitters.” [by courtesy of: http://dsp.rice.edu/cs]
• Basis decompositions and frames (3-4 hours)
    - fundamentals of basis and frame decompositions
    - the discrete cosine transform and applications to image/video compression
    - the lapped orthogonal transform
    - wavelets
    - thresholding for noise reduction
• Sparsest decomposition from a dictionary (3-4 hours)
    - OMP and basis pursuit for selecting atoms
    - uncertainty principles and sparsest decomposition
    - the “spikes+sines” dictionary
    - general unions of orthobases
• Introduction to compressive sampling and applications (2 hours)
• Recovering sparse vectors from linear measurements (6 hours / 1 day)
    - review of classical least-squares theory: the SVD, pseudo-inverse, stability analysis, regularization
    - sparse recovery conditions: l1 duality, sparsest decomposition revisited (with random support)
    - the restricted isometry property and sparse recovery: l1 for perfect recovery from noise-free measurements, l1 stability, l2 stability
• Random matrices are restricted isometries (2 hours)
• Optimization (6 hours / 1 day)
    - conjugate gradients
    - log-barrier methods
    - first-order l1 solvers (a small sketch follows this announcement)
    - greedy algorithms and iterative thresholding
    - recursive least-squares
    - the Kalman filter
    - dynamic l1 updating
• Low-rank recovery (2 hours)
Dates: 9-13/01/2012
Local contact: Paulo Goncalves
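Since the Optimization day covers first-order l1 solvers and iterative thresholding, here is a minimal ISTA sketch (my own toy example, not course material) that recovers a sparse vector from random Gaussian measurements:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||A x - y||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - y) / L                   # gradient step on the data fit
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
    return x

# toy usage: recover a 5-sparse vector from 80 random Gaussian measurements
rng = np.random.default_rng(0)
n, m, k = 256, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = ista(A, y, lam=0.02)
```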



I forgot to mention the following presentation last time: Classification With Invariant Scattering by Joan Bruna, Stéphane Mallat, Laurent Sifre, and Irène Waldspurger. In the same meeting there was also







Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
