
Tuesday, January 27, 2009

CS:OLS/ROLS/StOLS, Group sparsity, Poisson CS, CS of parameterized shapes, IT meets free discontinuity, An homotopy algorithm for the Lasso


After yesterday's entry, Angshul Majumdar also provided us with a modified version of the original OLS algorithm proposed in

Their algorithm is implemented in SparseOLS.zip.

Angshul has also added two modified versions, ROLS and StOLS.

These algorithms are also listed in the Reconstruction Section of the Big Picture. Angshul also tells me that Matlab Central now has a link to the Greedy Algorithms promoting Group Sparsity. Thanks Angshul!
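
For readers who want a feel for what the plain OLS selection rule does before looking at the modified versions, here is a minimal NumPy sketch (my own illustration, not Angshul's SparseOLS code): at each step it adds the column that, combined with the current support, leaves the smallest least-squares residual.

```python
import numpy as np

def ols_sparse_recovery(A, y, k):
    """Greedy Orthogonal Least Squares (sketch, not the SparseOLS/ROLS/StOLS code).

    At each step, add the column that, together with the current support,
    gives the smallest least-squares residual; stop after k atoms.
    """
    m, n = A.shape
    support = []
    x = np.zeros(n)
    for _ in range(k):
        best_j, best_res = None, np.inf
        for j in range(n):
            if j in support:
                continue
            trial = support + [j]
            coef = np.linalg.lstsq(A[:, trial], y, rcond=None)[0]
            r = np.linalg.norm(y - A[:, trial] @ coef)
            if r < best_res:
                best_j, best_res = j, r
        support.append(best_j)
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x
```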


Also found on the interwebs:

Limits of Deterministic Compressed Sensing Considering Arbitrary Orthonormal Basis for Sparsity by Arash Amini and Farokh Marvasti. The abstract reads:

It is previously shown that proper random linear samples of a finite discrete signal (vector) which has a sparse representation in an orthonormal basis make it possible (with probability 1) to recover the original signal. Moreover, the choice of the linear samples does not depend on the sparsity domain. In this paper, we will show that the replacement of random linear samples with deterministic functions of the signal (not necessarily linear) will not result in unique reconstruction of k-sparse signals except for k=1. We will show that there exist deterministic nonlinear sampling functions for unique reconstruction of 1-sparse signals while deterministic linear samples fail to do so.
Minimax risk for Poisson compressed sensing by Rebecca Willett and Maxim Raginsky. The abstract reads:
This paper describes performance bounds for compressed sensing in the presence of Poisson noise and shows that, for sparse or compressible signals, they are within a log factor of known lower bounds on the risk. The signal-independent and bounded noise models used in the literature to analyze the performance of compressed sensing do not accurately model the effects of Poisson noise. However, Poisson noise is an appropriate noise model for a variety of applications, including low-light imaging, in which sensing hardware is large or expensive and limiting the number of measurements collected is important. In this paper, we describe how a feasible sensing matrix can be constructed and prove a concentration-of-measure inequality for these matrices. We then show that minimizing an objective function consisting of a negative Poisson log likelihood term and a penalty term which could be used as a measure of signal sparsity results in near-minimax rates of error decay.
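
To make the objective in that last sentence concrete, here is a small sketch of a penalized negative Poisson log-likelihood in Python. Note that the l1 term below is only an illustrative stand-in (Willett and Raginsky analyze a different, complexity-based penalty), and the non-negative sensing matrix and the small epsilon guard are my own assumptions.

```python
import numpy as np

def poisson_cs_objective(y, A, x, tau):
    """Penalized negative Poisson log-likelihood (additive constants dropped).

    y   : observed counts
    A   : sensing matrix (entries assumed non-negative so A @ x is a valid rate)
    x   : candidate signal
    tau : weight on the sparsity penalty (l1 used here purely as illustration)
    """
    rate = A @ x
    # negative log-likelihood of y_i ~ Poisson(rate_i), up to constants in y
    nll = np.sum(rate - y * np.log(rate + 1e-12))
    penalty = tau * np.sum(np.abs(x))
    return nll + penalty
```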


The Benefit of Group Sparsity by Junzhou Huang and Tong Zhang. The abstract reads:
This paper develops a theory for group Lasso using a concept called strong group sparsity. Our result shows that group Lasso is superior to standard Lasso for strongly group-sparse signals. This provides a convincing theoretical justification for using group sparse regularization when the underlying group structure is consistent with the data. Moreover, the theory predicts some limitations of the group Lasso formulation that are confirmed by simulation studies.
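
The workhorse operation behind most group-Lasso solvers is a block-wise soft threshold that either shrinks a whole group toward zero or zeroes it out entirely. A minimal sketch, assuming non-overlapping groups (this is generic, not Huang and Zhang's code):

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the group-Lasso penalty lam * sum_g ||x_g||_2.

    x      : coefficient vector
    groups : list of index arrays, one per (non-overlapping) group
    lam    : threshold level
    """
    out = np.zeros_like(x)
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        if norm_g > lam:
            out[g] = (1.0 - lam / norm_g) * x[g]  # shrink the whole group, keep its direction
    return out
```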

The Rice repository also featured the following preprints and papers over the weekend that I have not covered before:

Compressive image fusion by Tao Wan, Nishan Canagarajah, and Alin Achim. The abstract reads:
Compressive sensing (CS) has received a lot of interest due to its compression capability and lack of complexity on the sensor side. In this paper, we present a study of three sampling patterns and investigate their performance on CS reconstruction. We then propose a new image fusion algorithm in the compressive domain by using an improved sampling pattern. There are few studies regarding the applicability of CS to image fusion. The main purpose of this work is to explore the properties of compressive measurements through different sampling patterns and their potential use in image fusion. The study demonstrates that CS-based image fusion has a number of perceived advantages in comparison with image fusion in the multiresolution (MR) domain. The simulations show that the proposed CS-based image fusion algorithm provides promising results.

Compressive sensing of parameterized shapes in images by Ali Gurbuz, James McClellan, Justin Romberg, and Waymond Scott, Jr. The abstract reads:
Compressive Sensing (CS) uses a relatively small number of non-traditional samples in the form of randomized projections to reconstruct sparse or compressible signals. The Hough transform is often used to find lines and other parameterized shapes in images. This paper shows how CS can be used to find parameterized shapes in images, by exploiting sparseness in the Hough transform domain. The utility of the CS-based method is demonstrated for finding lines and circles in noisy images, and then examples of processing GPR and seismic data for tunnel detection are presented.
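
As a toy illustration of the idea of sparsity in a parameter (Hough-like) domain, one can build a small dictionary of line atoms, take random projections of an image made of a couple of lines, and recover the parameter-domain coefficients by l1 minimization. The sketch below is my own simplification, not the authors' algorithm; the dictionary, image size, and measurement count are arbitrary choices.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||t||_1 subject to A t = y, written as a linear program with t = u - v, u, v >= 0."""
    n = A.shape[1]
    res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

def line_atom(size, angle, offset):
    """Flattened binary image of the line x*cos(angle) + y*sin(angle) = offset."""
    ys, xs = np.mgrid[0:size, 0:size]
    return (np.abs(xs * np.cos(angle) + ys * np.sin(angle) - offset) < 0.5).astype(float).ravel()

# Hypothetical toy setup: a dictionary of line atoms indexed by (angle, offset),
# an image made of two lines, and Gaussian random projections of that image.
size = 16
params = [(a, o) for a in np.linspace(0, np.pi, 12, endpoint=False) for o in range(size)]
D = np.column_stack([line_atom(size, a, o) for a, o in params])   # image = D @ theta
theta_true = np.zeros(D.shape[1])
theta_true[5] = 1.0      # a vertical line
theta_true[100] = 1.0    # a horizontal line
image = D @ theta_true

rng = np.random.default_rng(0)
Phi = rng.standard_normal((60, size * size)) / np.sqrt(60)   # random linear measurements
theta_hat = basis_pursuit(Phi @ D, Phi @ image)               # coefficients sparse in the parameter domain
```

With a coherent dictionary like this one, exact recovery is of course not guaranteed; the point is only to show where the sparsity lives when looking for parameterized shapes.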

Iterative thresholding meets free discontinuity problems by Massimo Fornasier and Rachel Ward. The abstract reads:

Free-discontinuity problems describe situations where the solution of interest is defined by a function and a lower dimensional set consisting of the discontinuities of the function. Hence, the derivative of the solution is assumed to be a ‘small’ function almost everywhere except on sets where it concentrates as a singular measure. This is the case, for instance, in crack detection from fracture mechanics or in certain digital image segmentation problems. If we discretize such situations for numerical purposes, the free-discontinuity problem in the discrete setting can be re-formulated as that of finding a derivative vector with small components at all but a few entries that exceed a certain threshold. This problem is similar to those encountered in the field of ‘sparse recovery’, where vectors with a small number of dominating components in absolute value are recovered from a few given linear measurements via the minimization of related energy functionals. Several iterative thresholding algorithms that intertwine gradient-type iterations with thresholding steps have been designed to recover sparse solutions in this setting. It is natural to wonder if and/or how such algorithms can be used towards solving discrete free-discontinuity problems. The current paper explores this connection, and, by establishing an iterative thresholding algorithm for discrete free-discontinuity problems, provides new insights on properties of minimizing solutions thereof.
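
For readers unfamiliar with the "gradient-type iterations intertwined with thresholding steps" mentioned above, here is the simplest member of that family, iterative hard thresholding, in a few lines of NumPy. It is only meant to illustrate the mechanism; Fornasier and Ward's algorithm for free-discontinuity problems uses a different functional and thresholding rule.

```python
import numpy as np

def iterative_hard_thresholding(y, A, k, n_iter=200, step=None):
    """Gradient step on ||y - Ax||^2 / 2, then keep only the k largest entries."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size (1 / sigma_max^2)
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)           # gradient step
        idx = np.argsort(np.abs(x))[:-k]           # indices of the n - k smallest entries
        x[idx] = 0.0                               # hard threshold
    return x
```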

An homotopy algorithm for the lasso with online observations by Pierre Garrigues and Laurent El Ghaoui. The abstract reads:

It has been shown that the problem of l1-penalized least-square regression commonly referred to as the Lasso or Basis Pursuit DeNoising leads to solutions that are sparse and therefore achieves model selection. We propose in this paper RecLasso, an algorithm to solve the Lasso with online (sequential) observations. We introduce an optimization problem that allows us to compute an homotopy from the current solution to the solution after observing a new data point. We compare our method to Lars and Coordinate Descent, and present an application to compressive sensing with sequential observations. Our approach can easily be extended to compute an homotopy from the current solution to the solution that corresponds to removing a data point, which leads to an efficient algorithm for leave-one-out cross-validation. We also propose an algorithm to automatically update the regularization parameter after observing a new data point.
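
The homotopy idea is to reuse the current solution when a new observation arrives rather than re-solving from scratch. The sketch below conveys that warm-starting intuition with plain coordinate descent instead of the actual RecLasso homotopy path, so it should not be read as the authors' method.

```python
import numpy as np

def lasso_cd(A, y, lam, x0=None, n_iter=100):
    """Lasso (0.5*||y - Ax||^2 + lam*||x||_1) via cyclic coordinate descent (illustration only)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    col_norm2 = np.sum(A ** 2, axis=0)
    for _ in range(n_iter):
        for j in range(n):
            if col_norm2[j] == 0:
                continue
            r_j = y - A @ x + A[:, j] * x[j]       # residual with coordinate j removed
            rho = A[:, j] @ r_j
            x[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norm2[j]  # soft threshold
    return x

# When a new observation (a_new of shape (1, n), y_new scalar) arrives,
# warm-start from the previous solution instead of solving from scratch:
#   A = np.vstack([A, a_new]); y = np.append(y, y_new)
#   x = lasso_cd(A, y, lam, x0=x)   # a few passes usually suffice near the old solution
```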

The poster and attendant Python code are listed on Pierre Garrigues' page. This implementation is also listed in the Reconstruction Section of the Big Picture.

Finally, I missed this BAA for a postdoc proposal on page 48 (BAA # HM1582-09-BAA-0001), but it is interesting to read how one would go from a current hardware system to a CS one:

8.25. LIDAR Compressed Sensing and Denoising
Proposed Research Project:

LIDAR sensors generate significant amounts of data at the acquisition stage, thus making the transmittal, processing, storage, dissemination, and visualization difficult to manage. The goal of this research is to investigate the feasibility of employing compressed sensing techniques for LIDAR sensors at the data acquisition stage. That is, reduce the data on the sensor, thereby easing the demands for transmission and storage, while retaining the ability to reassemble the full point cloud data set with minimal error.

Technical Objectives:

This project seeks the following objectives:
  • Determine the feasibility of collecting LIDAR single point and/or point cloud data using compressed sensing techniques.
  • Determine which representation should be employed both for sampling and for reconstruction. Is it best to use a fixed set of vectors (Fourier, Wavelets, a combination of these, etc.) or can these be learned from the data?
  • Determine to what extent the feasibility, and the optimal method, depends on the application (e.g., in urban vs. rural scenes how do we find the smallest features within those environments?).
  • What are the computational demands of the compressed sensing technique, at the time of collection (on-board processing requirements) and for reconstruction?
  • How robust can a LIDAR compressed sensing technique be with respect to noise, spurious returns, scene clutter, scene content, and features of varying scale?
  • Would the proposed technique introduce compression artifacts that could make further processing difficult?
Goals:
The anticipated outcome of this research includes a technical document with mathematical details describing the above findings. Assuming that compressed sensing of LIDAR is feasible, we also expect recommendations for how this may best be accomplished.


Image Credit: NASA/JPL-Caltech, Looking Back at Victoria Crater, August 2008.
