
Tuesday, March 17, 2009

CS: This week's long post


Andrew McGregor has posted his slides from the Barbados workshop mentioned last week: part1, part2

Andrew Yagle is at it again and wow, just wow. If you recall, it looks like designing the PSF of an instrument so that it is a Kronecker product of 1-D PSFs can tremendously speed up calibration and is also a good way to reduce the memory footprint of the measurement matrix. Andrew Yagle now seems to show that the reconstruction stage does not need any iterative scheme (LP, greedy, IT, ...) in Non-Iterative Computation of Sparsifiable Solutions to Underdetermined Kronecker Product Linear Systems of Equations by Andrew Yagle. The abstract reads:

The problem of computing sparse (mostly zero) or sparsifiable (by linear transformation) solutions to underdetermined linear systems of equations has applications in compressed sensing and minimum exposure medical imaging. We present a simple, noniterative, low-computational-cost algorithm for computing a sparse solution to an underdetermined linear system of equations. The system matrix is the Kronecker (tensor) product of two matrices, as in separable 2D deconvolution and reconstruction from partial 2D Fourier data, where the image is sparsifiable by a separable 2D wavelet or other transform. A numerical example and program illustrates the new algorithm.
The code allowing this reconstruction is simple and given in the paper:
clear;rand('state',0);
X=ceil(rand(256,256)-0.99965);
H=rand(22,256);
[U S V]=svd(H);
Y=H*(H*X)';
Q=V(:,1:22);
%Y(:)=kron(H,H)*X(:). 484x65536 matrix*vector too big.
%GOAL: Get sparse X(length=65536) from Y(length=484).
W=inv(S(:,1:22))*U';
YY=reshape(kron(W,W)*Y(:),22,22);
I=find(abs(Q*null(YY))<0.000001);
%=find(sum(X')>0)'
J=find(abs(Q*null(YY'))<0.000001);
%=find(sum(X)>0)'
Z(256,256)=0;Z(I,:)=1;Z(:,J)=1;%Lines with nonzero X
figure,imagesc(X),colormap(gray),title('SPARSE IMAGE')
figure,imagesc(Y),colormap(gray),title('DATA IMAGE')
figure,imagesc(Z+3*X),colormap(gray),title('INDICATOR')
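The memory advantage of the Kronecker structure comes from the standard identity kron(H,H)*vec(X) = vec(H*X*H'), which the MATLAB comments above allude to. Here is a minimal NumPy sketch of just that identity (not Yagle's algorithm; the sizes are shrunk so the explicit Kronecker product stays cheap):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.random((4, 8))    # small stand-in for the 22x256 H above
X = rng.random((8, 8))    # stand-in for the 256x256 image

# Naive route: form kron(H, H) explicitly -- at full size this is the
# 484x65536 matrix the MATLAB comment above calls "too big".
y_big = np.kron(H, H) @ X.flatten(order="F")   # column-major vec, as in MATLAB

# Kronecker identity: kron(H, H) @ vec(X) == vec(H @ X @ H.T),
# which only ever stores H (4x8) and X (8x8).
y_small = (H @ X @ H.T).flatten(order="F")

print(np.allclose(y_big, y_small))  # True
```

Note the `order="F"` flattening: MATLAB's `X(:)` is column-major, and the identity only holds with that vec convention.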


I mentioned Andrew Yagle's rather surprising papers about a month ago; as a reminder, here they are:

  • A.E. Yagle, "A New Algorithm for the Nearest Singular Toeplitz Matrix to a Given Toeplitz Matrix" .pdf
  • A.E. Yagle, "A Non-Iterative Procedure for Sparse Solutions to Linear Equations with Bandlimited Rows" .pdf
  • A.E. Yagle, "A Non-Iterative Procedure for Computing Sparse and Sparsifiable Solutions to Slightly Underdetermined Linear Systems of Equations" .pdf
Just WOW.

Also of interest:

We extend the alternating minimization algorithm recently proposed in [38, 39] to the case of recovering blurry multichannel (color) images corrupted by impulsive rather than Gaussian noise. The algorithm minimizes the sum of a multichannel extension of total variation (TV), either isotropic or anisotropic, and a data fidelity term measured in the L1-norm. We derive the algorithm by applying the well-known quadratic penalty function technique and prove attractive convergence properties including finite convergence for some variables and global q-linear convergence. Under periodic boundary conditions, the main computational requirements of the algorithm are fast Fourier transforms and a low-complexity Gaussian elimination procedure. Numerical results on images with different blurs and impulsive noise are presented to demonstrate the efficiency of the algorithm. In addition, it is numerically compared to an algorithm recently proposed in [20] that uses a linear program and an interior point method for recovering grayscale images.
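The "well-known quadratic penalty function technique" the abstract invokes is easiest to see on a toy problem. The sketch below does 1-D total-variation denoising with an L2 fidelity term — a deliberate simplification of the paper's multichannel, L1-fidelity model, which needs a second splitting of the same kind — alternating a closed-form shrinkage step with a small linear solve:

```python
import numpy as np

def tv_denoise_qp(f, lam, beta=10.0, iters=200):
    """1-D TV denoising, min_u lam*||Du||_1 + 0.5*||u - f||^2, via the
    quadratic-penalty splitting w ~ Du:
        min_{u,w} lam*||w||_1 + (beta/2)*||w - Du||^2 + 0.5*||u - f||^2
    (Illustrative only; not the authors' multichannel L1-fidelity code.)"""
    n = len(f)
    # System matrix I + beta * D^T D for the forward-difference operator D
    main = 1.0 + beta * np.concatenate(([1.0], np.full(n - 2, 2.0), [1.0]))
    off = -beta * np.ones(n - 1)
    M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)
        # w-step: soft-thresholding (shrinkage), the closed-form minimizer
        w = np.sign(du) * np.maximum(np.abs(du) - lam / beta, 0.0)
        # u-step: solve (I + beta*D^T D) u = f + beta*D^T w
        rhs = f + beta * (np.concatenate(([0.0], w)) - np.concatenate((w, [0.0])))
        u = np.linalg.solve(M, rhs)
    return u

rng = np.random.default_rng(0)
f = np.repeat([0.0, 1.0], 20) + 0.1 * rng.standard_normal(40)  # noisy step
u = tv_denoise_qp(f, lam=0.5)
```

Driving beta upward over the iterations tightens the approximation w ≈ Du; the paper proves convergence properties for the corresponding full scheme.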


We propose a fast algorithm for solving the ℓ1-regularized minimization problem min_{x ∈ R^n} μ||x||_1 + ||Ax − b||_2^2 for recovering sparse solutions to an underdetermined system of linear equations Ax = b. The algorithm is divided into two stages that are performed repeatedly. In the first stage a first-order iterative method called "shrinkage" yields an estimate of the subset of components of x likely to be nonzero in an optimal solution. Restricting the decision variables x to this subset and fixing their signs at their current values reduces the ℓ1-norm ||x||_1 to a linear function of x. The resulting subspace problem, which involves the minimization of a smaller and smooth quadratic function, is solved in the second phase. Our code FPC_AS embeds this basic two-stage algorithm in a continuation (homotopy) approach by assigning a decreasing sequence of values to μ. This code exhibits state-of-the-art performance both in terms of its speed and its ability to recover sparse signals. It can even recover signals that are not as sparse as required by current compressive sensing theory.
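The first-stage "shrinkage" is plain iterative soft-thresholding on that objective. A hedged NumPy sketch of that stage alone (the subspace second stage and the continuation loop of FPC_AS are omitted, and the problem sizes are toy ones):

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Tiny instance of min_x mu*||x||_1 + ||Ax - b||_2^2
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 30))
x_true = np.zeros(30)
x_true[[3, 17]] = [2.0, -1.5]
b = A @ x_true

mu = 0.01
tau = 0.9 / (2.0 * np.linalg.norm(A, 2) ** 2)  # safe step: gradient is 2*A^T(Ax-b)
x = np.zeros(30)
for _ in range(500):
    x = shrink(x - tau * (2.0 * A.T @ (A @ x - b)), tau * mu)

support = np.flatnonzero(np.abs(x) > 1e-3)     # estimated active set
```

FPC_AS would now restrict the variables to `support`, fix their signs, and hand the resulting smooth quadratic to the second stage.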


A Fast TVL1-L2 Minimization Algorithm for Signal Reconstruction from Partial Fourier Data
by Junfeng Yang, Wotao Yin, Yin Zhang. The abstract reads:
Recent compressive sensing results show that it is possible to accurately reconstruct certain compressible signals from relatively few linear measurements via solving nonsmooth convex optimization problems. In this paper, we propose a simple and fast algorithm for signal reconstruction from partial Fourier data. The algorithm minimizes the sum of three terms corresponding to total variation, l_1-norm regularization and least squares data fitting. It uses an alternating minimization scheme in which the main computation involves shrinkage and fast Fourier transforms (FFTs), or alternatively discrete cosine transforms (DCTs) when available data are in the DCT domain. We analyze the convergence properties of this algorithm, and compare its numerical performance with two recently proposed algorithms. Our numerical simulations on recovering magnetic resonance images (MRI) indicate that the proposed algorithm is highly efficient, stable and robust.
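For the isotropic TV term, the per-pixel "shrinkage" such alternating schemes use is a 2-D vector soft-threshold applied to the discrete gradient. A minimal NumPy sketch of that operator (an assumed standard form, not the authors' code):

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundary conditions -- the setting
    in which the scheme's linear step diagonalizes under the FFT."""
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def isotropic_tv_shrink(gx, gy, t):
    """2-D vector soft-thresholding: shrink each pixel's gradient
    vector toward zero by t in magnitude."""
    mag = np.sqrt(gx**2 + gy**2)
    scale = np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12)
    return gx * scale, gy * scale

u = np.zeros((8, 8))
u[2:6, 2:6] = 1.0                  # toy piecewise-constant image
gx, gy = grad(u)
sx, sy = isotropic_tv_shrink(gx, gy, 0.5)
```

The anisotropic variant simply soft-thresholds gx and gy independently instead of jointly by magnitude.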

An Efficient Algorithm for Compressed MR Imaging using Total Variation and Wavelets by Shiqian Ma, Wotao Yin, Yin Zhang and Amit Chakraborty. The abstract reads:

Compressed sensing, an emerging multidisciplinary field involving mathematics, probability, optimization, and signal processing, focuses on reconstructing an unknown signal from a very limited number of samples. Because information such as boundaries of organs is very sparse in most MR images, compressed sensing makes it possible to reconstruct the same MR image from a very limited set of measurements, significantly reducing the MRI scan duration. In order to do that, however, one has to solve the difficult problem of minimizing nonsmooth functions on large data sets. To handle this, we propose an efficient algorithm that jointly minimizes the l_1 norm, total variation, and a least squares measure, one of the most powerful models for compressive MR imaging. Our algorithm is based upon an iterative operator-splitting framework. The calculations are accelerated by continuation and take advantage of fast wavelet and Fourier transforms, enabling our code to process MR images from real-life applications. We show that faithful MR images can be reconstructed from a subset that represents a mere 20 percent of the complete set of measurements.
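"Continuation" here means solving a sequence of easier problems with a decreasing regularization weight, warm-starting each from the last. A hedged sketch of the idea on a plain ℓ1 least-squares problem (not the authors' joint l1+TV model or their splitting):

```python
import numpy as np

def shrink(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist_continuation(A, b, mu_final, n_stages=8, inner=100):
    """Iterative soft-thresholding accelerated by continuation: solve
    min_x mu*||x||_1 + 0.5*||Ax - b||^2 for a decreasing sequence of mu,
    warm-starting each stage from the previous solution."""
    tau = 0.9 / np.linalg.norm(A, 2) ** 2      # safe step size
    x = np.zeros(A.shape[1])
    mu = np.abs(A.T @ b).max()                 # for mu above this, x* = 0
    for _ in range(n_stages):
        mu = max(mu / 4.0, mu_final)
        for _ in range(inner):
            x = shrink(x - tau * (A.T @ (A @ x - b)), tau * mu)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[5, 25, 40]] = [1.0, -2.0, 1.5]
b = A @ x_true
x = ist_continuation(A, b, mu_final=1e-3)
```

Each stage's problem is well-conditioned from the warm start, which is why continuation is so much faster than attacking the small final mu directly.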


Credit: NASA, Spitzer photo via the Bad Astronomy blog
