
Tuesday, December 28, 2010

CS: Some blog entries, Calibrationless Parallel Imaging with Structured Low-Rank Matrix Completion, Accelerated dynamic MRI and low-rank structure: k-t SLR , Sparse Recovery for Earth Mover Distance, CMUX for multi-channel CS, QP and LP in CS, MMDS 2010, A Sparser JLT, Low rank perturbations of large random matrices


Here are some blog entries of interest:

As I was reading Miki Lustig's seminar summary at the Technion this Thursday:
Pixel Club Seminar: Practical Parallel Imaging Compressed Sensing MRI: Summary of Two Years of Experience Accelerating Body MRI of Pediatric Patients
Speaker:
Michael Lustig (Electrical Engineering and Computer Sciences, College of Engineering, UC Berkeley)
Date:
Thursday, 30.12.2010, 11:30
Place:
Room 337-8 Taub Bld.

Magnetic Resonance Imaging has revolutionized diagnostic medicine. It is an excellent tool for disease diagnosis and monitoring, offering superb soft tissue contrast and high anatomic resolution; unlike computed tomography (CT), it involves no ionizing radiation. However, MRI suffers from several shortcomings, one of which is the inherently slow data acquisition. This has limited the penetration of MRI into applications that require sharp images of small, fast-moving body parts, such as angiography, cardiovascular imaging, imaging of small children and fetal imaging.

To overcome this limitation, many methods for fast imaging by reduced sampling have been proposed. These are based on exploiting various redundancies in the data. Parallel imaging is the most noteworthy: it is a well-established accelerated imaging technique based on the spatial sensitivity of array receivers. Compressed sensing is an emerging accelerated imaging technique based on the compressibility of medical images. The synergistic combination of parallel imaging and compressed sensing offers much higher acceleration and better quality imagery.

For the last two years, we have been experimenting with applying compressed sensing parallel imaging for MR body imaging of pediatric patients. It is a joint-effort by teams from UC Berkeley, Stanford University and GE Healthcare. The talk aims to summarize our experience so far. I will start with some background on MR imaging, parallel imaging and compressed sensing MRI. I will then turn to describe our unique approach of data acquisition and reconstruction, our implementation on parallel processors (multi-core and GPGPU), applications and clinical studies. Our approach is implemented and installed at Lucile Packard Children's Hospital at Stanford. So far, it is the first and only place in the world in which compressed sensing is routinely used in clinical practice. We can routinely achieve clinical quality reconstructions of body pediatric MR, at more than 8-fold acceleration. These are reconstructed and displayed in about a minute using our GPU-based parallel processing reconstruction.

Time permitting, I will also describe some recent advances in parallel imaging that lead to interesting low-rank structured matrix completion problems.
The last part of the summary caught my attention, so I looked around and found the following abstract:

A new method for parallel imaging that requires no special autocalibration lines or calibration scans is presented. Instead, the method jointly calibrates, and synthesizes missing data from the entire acquired k-space. The proposed method is based on low-rank matrix completion, which is an extension of the compressed sensing theory to matrices. It is implemented by iterative singular value thresholding. The method can be used to reconstruct undersampled data, to generate calibration data for GRAPPA-like methods, or just to improve calibration when the calibration area is too small.
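The iterative singular value thresholding mentioned in the abstract is easy to sketch: alternately shrink the singular values of the current iterate and take a gradient step on the observed k-space entries. Here is a minimal numpy sketch of generic SVT for matrix completion (not the paper's actual reconstruction; the threshold and step-size choices follow the common Cai-Candès-Shen heuristics, and the function name and test problem are mine):

```python
import numpy as np

def svt_complete(M_obs, mask, tau, step, n_iter=500):
    """Generic singular value thresholding for matrix completion:
    shrink singular values, then correct on the observed entries."""
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        # Soft-threshold the singular values of the dual iterate
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # Gradient step enforcing consistency with the observed entries
        Y += step * mask * (M_obs - X)
    return X

# Toy problem: rank-2 matrix, ~60% of entries observed
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(A.shape) < 0.6
tau = 5 * np.sqrt(A.size)   # threshold heuristic from the SVT literature
A_hat = svt_complete(A * mask, mask, tau, step=1.0)
rel_err = np.linalg.norm(A_hat - A) / np.linalg.norm(A)
print(rel_err)
```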

Mmnuh! Calibrationless! I like this paper already and look forward to it. Others are also looking at matrix completion issues; here is a recent paper:

We introduce a novel algorithm to reconstruct dynamic MRI data from under-sampled k-t space data. In contrast to classical model based cine MRI schemes that rely on the sparsity or banded structure in Fourier space, we use the compact representation of the data in the Karhunen-Loève transform (KLT) domain to exploit the correlations in the dataset. The use of the data-dependent KL transform makes our approach ideally suited to a range of dynamic imaging problems, even when the motion is not periodic. In comparison to current KLT-based methods that rely on a two-step approach to first estimate the basis functions and then use them for reconstruction, we pose the problem as a spectrally regularized matrix recovery problem. By simultaneously determining the temporal basis functions and their spatial weights from the entire measured data, the proposed scheme is capable of providing high quality reconstructions at a range of accelerations. In addition to using the compact representation in the KLT domain, we also exploit the sparsity of the data to further improve the recovery rate. Validations using numerical phantoms and in-vivo cardiac perfusion MRI data demonstrate the significant improvement in performance offered by the proposed scheme over existing methods.
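The "compact representation in the KLT domain" is simply the SVD of the pixels-by-time (Casorati) matrix of the dynamic series: a few temporal singular vectors capture essentially all the signal energy. A toy illustration on synthetic data (the sizes and rank are arbitrary, not the paper's perfusion datasets):

```python
import numpy as np

# Each frame of the series is a combination of r temporal modes,
# so the pixels-by-time (Casorati) matrix is exactly rank r.
rng = np.random.default_rng(1)
n_pix, n_frames, r = 400, 50, 3
spatial = rng.standard_normal((n_pix, r))
temporal = rng.standard_normal((r, n_frames))
casorati = spatial @ temporal

# The KLT temporal basis is the set of right singular vectors;
# the first r components capture the whole energy of the series.
U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
print(energy[r - 1])  # ~1.0 for an exactly rank-r series
```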
Here are some recent papers that I had somehow forgotten to feature:

Sparse Recovery for Earth Mover Distance by Rishi Gupta, Piotr Indyk, Eric Price. The abstract reads:
We initiate the study of sparse recovery problems under the Earth-Mover Distance (EMD). Specifically, we design a distribution over m x n matrices A, for m ≪ n, such that for any x, given Ax, we can recover a k-sparse approximation to x under the EMD. We also provide an empirical evaluation of the method that, in some scenarios, shows its advantages over the "usual" recovery in the $\ell_p$ norms.
The recently developed compressive sensing (CS) framework enables the design of sub-Nyquist analog-to-digital converters. Several architectures have been proposed for the acquisition of sparse signals in large swaths of bandwidth. In this paper we consider a more flexible multi-channel signal model consisting of several discontiguous channels where the occupancy of the combined bandwidth of the channels is sparse. We introduce a new compressive acquisition architecture, the compressive multiplexer (CMUX), to sample such signals. We demonstrate that our architecture is CS-feasible and suggest a simple implementation with numerous practical advantages.

Here we show that the problem of realizing a polytope with specified combinatorics is equivalent to a low rank matrix completion problem. This is comparable to known results reducing realizability to solving systems of multinomial equations and inequalities, but the conditions we give here are more simply stated. We see how this matrix is related to a matrix appearing in a similar result by Díaz.

Relations between $\beta$ and $\delta$ for QP and LP in Compressed Sensing Computations by Jun Zhang, Jun Wang, Guangwu Xu. The abstract reads:
In many compressed sensing applications, linear programming (LP) has been used to reconstruct a sparse signal. When the observation is noisy, the LP formulation is extended to allow an inequality constraint and the solution is dependent on a parameter $\delta$, related to the observation noise level. Recently, some researchers also considered quadratic programming (QP) for compressed sensing signal reconstruction and the solution in this case is dependent on a Lagrange multiplier $\beta$. In this work, we investigated the relation between $\delta$ and $\beta$ and derived an upper and a lower bound on $\beta$ in terms of $\delta$. For a given $\delta$, these bounds can be used to approximate $\beta$. Since $\delta$ is a physically related quantity and easy to determine for an application, while there is no easy way in general to determine $\beta$, our results can be used to set $\beta$ when the QP is used for compressed sensing. Our results and experimental verification also provide some insight into the solutions generated by compressed sensing.
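The correspondence the abstract studies can be probed numerically: solve the QP (Lagrangian) form for a few values of $\beta$ and read off the residual level $\delta = \|Ax(\beta) - y\|$ that each induces. A small sketch using ISTA, a standard solver for the QP form (the problem sizes, noise level and $\beta$ grid are arbitrary choices of mine, not taken from the paper):

```python
import numpy as np

def ista(A, y, beta, n_iter=500):
    """Solve min 0.5*||Ax - y||^2 + beta*||x||_1 by iterative
    soft-thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - beta / L, 0)  # shrinkage
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 128)) / np.sqrt(60)
x0 = np.zeros(128)
x0[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
y = A @ x0 + 0.01 * rng.standard_normal(60)

# Each multiplier beta induces a residual level delta = ||A x(beta) - y||;
# delta grows with beta, the monotone relation the paper's bounds quantify.
deltas = [np.linalg.norm(A @ ista(A, y, b) - y) for b in (1e-3, 1e-2, 1e-1)]
print(deltas)
```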


The 2010 Workshop on Algorithms for Modern Massive Data Sets (MMDS 2010) was held at Stanford University, June 15--18. The goals of MMDS 2010 were (1) to explore novel techniques for modeling and analyzing massive, high-dimensional, and nonlinearly-structured scientific and Internet data sets; and (2) to bring together computer scientists, statisticians, applied mathematicians, and data analysis practitioners to promote cross-fertilization of ideas. MMDS 2010 followed on the heels of two previous MMDS workshops. The first, MMDS 2006, addressed the complementary perspectives brought by the numerical linear algebra and theoretical computer science communities to matrix algorithms in modern informatics applications; and the second, MMDS 2008, explored more generally fundamental algorithmic and statistical challenges in modern large-scale data analysis.

The part of interest to us starts at page 5 of the document. As a reminder, all the presentations made at MMDS 2010 can be found here.

A Sparser Johnson-Lindenstrauss Transform by Daniel M. Kane, Jelani Nelson. The abstract reads:
 We give a Johnson-Lindenstrauss transform with column sparsity s = Theta(eps^{-1}log(1/delta)) into optimal dimension k = O(eps^{-2}log(1/delta)) to achieve distortion 1+eps with success probability 1-delta. This is the first distribution to provide an asymptotic improvement over the Theta(k) sparsity bound for all values of eps,delta. Previous work of [Dasgupta-Kumar-Sarlos, STOC 2010] gave a distribution with s = O~(eps^{-1}log^3(1/delta)), with tighter analyses later in [Kane-Nelson, CoRR abs/1006.3585] and [Braverman-Ostrovsky-Rabani, CoRR abs/1011.2590] showing that their construction achieves s = O~(eps^{-1}log^2(1/delta)). As in the previous work, our scheme only requires limited independence hash functions. In fact, potentially one of our hash functions could be made deterministic given an explicit construction of a sufficiently good error-correcting code.
    Our linear dependence on log(1/delta) in the sparsity allows us to plug our construction into algorithms of [Clarkson-Woodruff, STOC 2009] to achieve the fastest known streaming algorithms for numerical linear algebra problems such as approximate linear regression and best rank-k approximation. Their reductions to the Johnson-Lindenstrauss lemma require exponentially small delta, and thus a superlinear dependence on log(1/delta) in s leads to significantly slower algorithms. 
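The construction is easy to prototype: each column of the k x n sketching matrix gets s nonzero entries of value ±1/√s, one in each of s blocks of k/s rows. A toy version of that block variant (fully random rather than limited-independence, and with parameters picked only for illustration):

```python
import numpy as np

def sparse_jl(k, n, s, rng):
    """Sparse JL sketching matrix: k rows, exactly s nonzeros per
    column, one random +-1/sqrt(s) entry per block of k/s rows."""
    S = np.zeros((k, n))
    block = k // s
    for j in range(n):
        for b in range(s):
            row = b * block + rng.integers(block)
            S[row, j] = rng.choice([-1.0, 1.0]) / np.sqrt(s)
    return S

rng = np.random.default_rng(3)
k, n, s = 64, 1000, 8
S = sparse_jl(k, n, s, rng)
x = rng.standard_normal(n)
ratio = np.linalg.norm(S @ x) / np.linalg.norm(x)
print(ratio)  # typically close to 1: norms are preserved up to 1 +- eps
```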
We consider the eigenvalues and eigenvectors of finite, low rank perturbations of random matrices. Specifically, we prove almost sure convergence of the extreme eigenvalues and appropriate projections of the corresponding eigenvectors of the perturbed matrix for additive and multiplicative perturbation models. The limiting non-random value is shown to depend explicitly on the limiting eigenvalue distribution of the unperturbed random matrix and the assumed perturbation model via integral transforms that correspond to very well known objects in free probability theory that linearize non-commutative free additive and multiplicative convolution. Furthermore, we uncover a phase transition phenomenon whereby the large matrix limit of the extreme eigenvalues of the perturbed matrix differs from that of the original matrix if and only if the eigenvalues of the perturbing matrix are above a certain critical threshold. Square root decay of the eigenvalue density at the edge is sufficient to ensure that this threshold is finite. This critical threshold is intimately related to the same aforementioned integral transforms and our proof techniques bring this connection and the origin of the phase transition into focus. Consequently, our results extend the class of `spiked' random matrix models about which such predictions (called the BBP phase transition) can be made well beyond the Wigner, Wishart and Jacobi random ensembles found in the literature. We examine the impact of this eigenvalue phase transition on the associated eigenvectors and observe an analogous phase transition in the eigenvectors. Various extensions of our results to the problem of non-extreme eigenvalues are discussed.
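The phase transition is easy to see numerically in the simplest spiked case: perturb a Wigner matrix by θvvᵀ and watch the top eigenvalue stick to the bulk edge 2 for θ below the threshold and escape to roughly θ + 1/θ above it. A quick one-instance sketch (GOE only, no averaging; sizes and θ values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)   # Wigner matrix, spectrum fills [-2, 2]
v = np.ones(n) / np.sqrt(n)      # unit-norm spike direction

top = {}
for theta in (0.5, 3.0):
    # Additive rank-one perturbation theta * v v^T; below the critical
    # threshold theta = 1 the top eigenvalue stays at the bulk edge 2,
    # above it the top eigenvalue pops out near theta + 1/theta.
    top[theta] = np.linalg.eigvalsh(W + theta * np.outer(v, v))[-1]
    print(theta, top[theta])
```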


Image Credit: NASA/JPL/Space Science Institute, W00065998.jpg was taken on December 24, 2010 and received on Earth December 27, 2010. The camera was pointing toward SATURN at approximately 1,793,709 kilometers away, and the image was taken using the MT3 and CL2 filters.
