
Sunday, May 16, 2010

CS: The real lengthy post of the week: Courses, Diffusive EDOF, IMRT, CBCT, Coherent Imaging CS, Tensor Factorizations, Coresets and much more.



A day after this entry, the following listing of sites showed up on the first page of this weekly realtime Google search. Coincidence? I think not. Hey Google, instead of paying me, I'll settle for you increasing my PageRank to 7.

Since power laws are a good proxy for sparsity, I liked reading this blog entry on power laws.

Jort Gemmeke sent me the following:

Hi Igor,

I didn't check properly if you mentioned this one on the blog yet:

http://www2.imm.dtu.dk/projects/sparse/

Seems nice provided there are no ash clouds ;)

Jort

This is a link to the Summer School on Sparse Representations and Sparse Coding, which will take place in Iceland. As of last week, we still had problems flying between the U.S. and Europe because of the ash cloud; in fact, this is still a problem within Europe. What's up with summer schools being held in far-away places and on islands? I am thinking this is a ploy so that students cannot run away after the first class :)


Emil Sidky sent me the following:
... [This] reminds me to tell you that we have a paper on another application for CS-style image recon. in X-ray phase-contrast tomography in Optics Express (which offers free downloads):
http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-18-10-10404

It's an interesting application for CS, because the object function is an edge map which should be sparse in pixels. From the practical point of view, reducing the number of projections is interesting because data acquisition is time consuming and the repeated projections can damage biological samples. The imaging application itself is cool, because it uses a coherent X-ray beam at the advanced photon source (APS) at Argonne national lab.

Best regards,
Emil

The paper is Image reconstruction exploiting object sparsity in boundary-enhanced X-ray phase-contrast tomography by Emil Y. Sidky, Mark A. Anastasio, and Xiaochuan Pan. The abstract reads:
Propagation-based X-ray phase-contrast tomography (PCT) seeks to reconstruct information regarding the complex-valued refractive index distribution of an object. In many applications, a boundary-enhanced image is sought that reveals the locations of discontinuities in the real-valued component of the refractive index distribution. We investigate two iterative algorithms for few-view image reconstruction in boundary-enhanced PCT that exploit the fact that a boundary-enhanced PCT image, or its gradient, is often sparse. In order to exploit object sparseness, the reconstruction algorithms seek to minimize the ℓ1-norm or TV-norm of the image, subject to data consistency constraints. We demonstrate that the algorithms can reconstruct accurate boundary-enhanced images from highly incomplete few-view projection data.


Thanks Emil.

muuhhh, compressive sensing with coherent photons..... I can't talk about it for the moment.
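Setting the coherent-photon teaser aside, the ℓ1/TV recovery step described in Emil's abstract is easy to prototype. Here is a toy sketch of mine, not the authors' code: it recovers a piecewise-constant object, whose edge map is sparse, from a handful of linear measurements by gradient descent on a least-squares data term plus a smoothed total-variation penalty. A random Gaussian matrix stands in for the actual few-view projection operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Piecewise-constant "object": its gradient (the edge map) is sparse.
n = 200
x_true = np.zeros(n)
x_true[40:90] = 1.0
x_true[120:160] = -0.5

# Hypothetical stand-in for a few-view projector: a random Gaussian matrix.
m = 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Gradient descent on 0.5*||Ax - y||^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps)
lam, eps = 0.02, 1e-4
L = np.linalg.norm(A, 2) ** 2 + 4 * lam / np.sqrt(eps)   # crude Lipschitz bound
x = np.zeros(n)
for _ in range(5000):
    d = np.diff(x)
    s = d / np.sqrt(d ** 2 + eps)      # derivative of the smoothed absolute value
    grad_tv = np.zeros(n)
    grad_tv[1:] += s
    grad_tv[:-1] -= s
    x -= (A.T @ (A @ x - y) + lam * grad_tv) / L
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Swapping the random matrix for a real projector and the 1D profile for an image is where all the actual work lies, but the structure of the reconstruction stays the same.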


But let's focus on what is already out there. One of the fascinating elements of compressed sensing is the deep connection between the theoretical side of things, as featured for instance in the presentations going on at this workshop in Paris, and the fact that actual hardware is being built that fulfills, almost by default, some of the conditions for random mixing. By just looking at how the mixing takes place, you have the comforting knowledge that, even if the traditional recovery methods were to fail, the whole arms race that has taken place in compressed sensing reconstruction solvers will help you and provide you with a sparse solution. I recently mentioned the Reinterpretable Imager: Towards Variable Post-Capture Space, Angle and Time Resolution in Photography by Amit Agrawal, Ashok Veeraraghavan and Ramesh Raskar; today we have another piece of fantastic hardware that seems to have taken its cue from the Random Lens Imager: the diffuser-aided Extended Depth of Field camera from Columbia. The project page is here. The paper presenting it is: Diffusion Coded Photography for Extended Depth of Field by Oliver Cossairt, Changyin Zhou and Shree Nayar. The abstract reads:
In recent years, several cameras have been introduced which extend depth of field (DOF) by producing a depth-invariant point spread function (PSF). These cameras extend DOF by deblurring a captured image with a single spatially-invariant PSF. For these cameras, the quality of recovered images depends both on the magnitude of the PSF spectrum (MTF) of the camera, and the similarity between PSFs at different depths. While researchers have compared the MTFs of different extended DOF cameras, relatively little attention has been paid to evaluating their depth invariances. In this paper, we compare the depth invariance of several cameras, and introduce a new diffusion coding camera that achieves near identical performance to a focal sweep camera, but without the need for moving parts. Our technique utilizes a novel optical element placed in the pupil plane of an imaging system. Whereas previous approaches use optical elements characterized by their amplitude or phase profile, our approach utilizes one whose behavior is characterized by its scattering properties. Such an element is commonly referred to as an optical diffuser, and thus we refer to our new approach as diffusion coding. We show that diffusion coding can be analyzed in a simple and intuitive way by modeling the effect of a diffuser as a kernel in light field space. We provide detailed analysis of diffusion coded cameras and show results from an implementation using a custom designed diffuser.
Wow, just wow. No need to invoke sparsity. I am pretty sure I'll come back to it eventually. Let us note in the meantime that the kinoform diffuser used in this paper can also be used in X-ray work, yes, with the same photons used at the Advanced Photon Source (APS) at Argonne. And yes, you guessed it right, we ought to be looking at these random diffusers for particle detectors in general.
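Since all of these extended-DOF designs ultimately recover the sharp image by deconvolving one depth-invariant PSF, here is a minimal sketch of that last step. It uses plain Wiener deconvolution with a made-up Gaussian kernel standing in for the diffuser's PSF; the actual diffusion-coded PSF is of course different.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Deblur an image acquired with a single spatially-invariant PSF.
    nsr is an assumed noise-to-signal power ratio (the usual Wiener knob)."""
    H = fft2(psf, s=blurred.shape)              # transfer function of the PSF (|H| is the MTF)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener filter
    return np.real(ifft2(fft2(blurred) * W))

# Toy scene blurred by a hypothetical depth-invariant (here: Gaussian) PSF, plus noise.
rng = np.random.default_rng(1)
scene = np.zeros((128, 128))
scene[32:96, 32:96] = 1.0
yy, xx = np.meshgrid(np.arange(9) - 4, np.arange(9) - 4, indexing="ij")
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
blurred = np.real(ifft2(fft2(scene) * fft2(psf, s=scene.shape)))
blurred += 0.01 * rng.standard_normal(scene.shape)
restored = wiener_deconvolve(blurred, psf)
# The off-center PSF also shifts the blurred image slightly; the Wiener phase undoes that shift.
print("blurred error :", np.linalg.norm(blurred - scene))
print("restored error:", np.linalg.norm(restored - scene))
```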

In other news, we have:

A presentation by Laurent Jacques on "Dequantizing Compressed Sensing: When Oversampling and Non-Gaussian Constraints Combine".

Sparse Multipath Channel Estimation Using Compressive Sampling Matching Pursuit Algorithm by Guan Gui, Qun Wan, Wei Peng, Fumiyuki Adachi. The abstract reads:

Wideband wireless channel is a time dispersive channel and becomes strongly frequency-selective. However, in most cases, the channel is composed of a few dominant taps and a large part of taps is approximately zero or zero. To exploit the sparsity of multi-path channel (MPC), two methods have been proposed. They are, namely, greedy algorithm and convex program. Greedy algorithm is easy to be implemented but not stable; on the other hand, the convex program method is stable but difficult to be implemented as practical channel estimation problems. In this paper, we introduce a novel channel estimation strategy using compressive sampling matching pursuit (CoSaMP) algorithm which was proposed in [1]. This algorithm will combine the greedy algorithm with the convex program method. The effectiveness of the proposed algorithm will be confirmed through comparisons with the existing methods.
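CoSaMP itself (introduced by Needell and Tropp) fits in a dozen lines, so here is a bare-bones sketch for readers who want to experiment. This is my own toy re-implementation, with a random Gaussian pilot matrix standing in for the actual channel-sounding setup.

```python
import numpy as np

def cosamp(A, y, k, n_iter=30):
    """Bare-bones CoSaMP: estimate a k-sparse x from y = A x."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_iter):
        proxy = A.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * k:]        # 2k strongest correlations
        T = np.union1d(omega, np.flatnonzero(x))          # merge with the current support
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)   # least squares on the merged support
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-k:]                 # prune back to the k largest taps
        x[T[keep]] = b[keep]
        r = y - A @ x
        if np.linalg.norm(r) < 1e-10:
            break
    return x

# Toy sparse multipath channel: a few dominant taps, the rest exactly zero.
rng = np.random.default_rng(0)
n, m, k = 256, 80, 6
h = np.zeros(n)
taps = rng.choice(n, k, replace=False)
h[taps] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # hypothetical pilot/measurement matrix
y = A @ h
h_hat = cosamp(A, y, k)
print("support recovered:", set(np.flatnonzero(h_hat)) == set(taps))
```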

Sparse Recovery with Orthogonal Matching Pursuit under RIP by Tong Zhang. The abstract reads:
This paper presents a new analysis for the orthogonal matching pursuit (OMP) algorithm. It is shown that if the restricted isometry property (RIP) is satisfied at sparsity level $O(\bar{k})$, then OMP can recover a $\bar{k}$-sparse signal in 2-norm. For compressed sensing applications, this result implies that in order to uniformly recover a $\bar{k}$-sparse signal in $\mathbb{R}^d$, only $O(\bar{k} \ln d)$ random projections are needed. This analysis improves earlier results on OMP that depend on stronger conditions such as mutual incoherence that can only be satisfied with $\Omega(\bar{k}^2 \ln d)$ random projections.
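For reference, the OMP loop that this analysis covers is itself only a few lines. A generic sketch, not tied to the paper's constants or proof:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns, re-fitting each time."""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))          # column most correlated with the residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                 # residual is orthogonal to chosen columns
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Quick check on a random instance with on the order of k*ln(d) measurements.
rng = np.random.default_rng(2)
d, m, k = 512, 100, 10
x0 = np.zeros(d)
x0[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, d)) / np.sqrt(m)
print("max recovery error:", np.abs(omp(A, A @ x0, k) - x0).max())
```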

This is the end of the semester for most universities. Several classes featured Compressed Sensing one way or another; here is a list of courses thus far:


Some folks graduated; Rayan Saab's thesis is out. It is entitled Compressed sensing: decoding and quantization. The abstract reads:
Recently, great strides in sparse approximation theory and its application have been made. Many of these developments were spurred by the emerging area of compressed sensing. Compressed sensing is a signal acquisition paradigm that entails recovering estimates of sparse and compressible signals from n linear measurements, many fewer than the signal ambient dimension N. In general, these measurements are assumed to be imperfect. For example, they may be noisy or quantized (or both). Thus, the associated sparse recovery problem requires the existence of stable and robust “decoders” to reconstruct the signal. In this thesis, we first address the theoretical properties of ∆p, a class of compressed sensing decoders that rely on ℓp minimization with p ∈ (0, 1). In particular, we extend the known results regarding the decoder ∆₁, based on ℓ₁ minimization, to ∆p with p ∈ (0, 1). Our results are two-fold. We show that under sufficient conditions that are weaker than the analogous sufficient conditions for ∆₁, the decoders ∆p are robust to noise and stable. In particular, they are (2, p) instance optimal. We also show that, like ∆₁, the decoders ∆p are (2, 2) instance optimal in probability provided the measurement matrix is drawn from an appropriate distribution. Second, we address quantization of compressed sensing measurements. Typically, the most commonly assumed approach (called PCM) entails quantizing each measurement independently, using a quantization alphabet with step size δ. Subsequently, by using a stable and robust decoder one can guarantee that the estimate produced by the decoder is within O(δ) of the signal. As a more effective alternative, we propose using sigma-delta schemes to quantize compressed sensing measurements of a k-sparse signal. We show that there is an associated two-stage recovery method which guarantees a reduction of the approximation error by a factor of (n/k)^{α(r−1/2)}, for any α < 1 if n ≳ k(log N)^{1/(1−α)} (with high probability on the initial draw of the measurement matrix). In particular, the first recovery stage employs a stable decoder to estimate the support of the signal, while the second stage employs Sobolev dual frames to approximate the signal.
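The quantization half of the thesis is easy to illustrate on its own. The sketch below contrasts PCM (rounding each measurement independently) with a first-order sigma-delta quantizer, whose error feedback keeps the running quantization error bounded by δ/2; that bounded running sum is what the Sobolev-dual reconstruction exploits. The two-stage decoder itself (support estimation plus Sobolev dual frames) is not reproduced here.

```python
import numpy as np

def pcm_quantize(y, delta):
    """Memoryless quantization: round each measurement independently."""
    return delta * np.round(y / delta)

def sigma_delta_quantize(y, delta):
    """First-order sigma-delta: quantize sequentially, feeding the running error
    u_i = u_{i-1} + y_i - q_i back into the next step (noise shaping)."""
    q = np.zeros_like(y)
    u = 0.0
    for i, yi in enumerate(y):
        q[i] = delta * np.round((yi + u) / delta)
        u = u + yi - q[i]
    return q

# Toy check: the sigma-delta running error stays below delta/2, PCM's does not.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)          # stand-in for a vector of CS measurements
delta = 0.25
q_pcm = pcm_quantize(y, delta)
q_sd = sigma_delta_quantize(y, delta)
print("PCM         max |cumulative error|:", np.abs(np.cumsum(y - q_pcm)).max())
print("sigma-delta max |cumulative error|:", np.abs(np.cumsum(y - q_sd)).max())
```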

Linear prediction of speech based on 1-norm minimization has already proved to be an interesting alternative to 2-norm minimization. In particular, choosing the 1-norm as a convex relaxation of the 0-norm, the corresponding linear prediction model offers a sparser residual better suited for coding applications. In this paper, we propose a new speech modeling technique based on reweighted 1-norm minimization. The purpose of the reweighted scheme is to overcome the mismatch between 0-norm minimization and 1-norm minimization while keeping the problem solvable with convex estimation tools. Experimental results prove the effectiveness of the reweighted 1-norm minimization, offering better coding properties compared to 1-norm minimization.
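The reweighting idea is generic, so here is a sketch of the loop on a synthetic sparse-recovery problem; the real speech setting would require actual audio and the linear-prediction matrix, which I am not reproducing here. Each weighted ℓ1 step is solved as a small linear program with SciPy.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_basis_pursuit(A, y, w):
    """Solve min sum_i w_i |x_i|  subject to  A x = y, as an LP with x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.concatenate([w, w])
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
    u = res.x
    return u[:n] - u[n:]

def reweighted_l1(A, y, n_rounds=4, eps=1e-3):
    """Iteratively reweighted l1: each round down-weights coefficients that are already
    large, nudging the l1 solution toward the sparser l0-like one."""
    w = np.ones(A.shape[1])
    for _ in range(n_rounds):
        x = weighted_basis_pursuit(A, y, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

# Synthetic stand-in for a sparse prediction residual.
rng = np.random.default_rng(0)
n, m, k = 128, 50, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = reweighted_l1(A, y)
print("max error:", np.abs(x_hat - x_true).max())
```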

Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images by Mingyuan Zhou, Haojun Chen, John Paisley, Lu Ren, Lingbo Li, Zhengming Xing, David Dunson, Guillermo Sapiro, and Lawrence Carin. The abstract reads:
We present a nonparametric Bayesian model for completing low-rank, positive semidefinite matrices. Given an N × N matrix with underlying rank r, and noisy measured values and missing values with a symmetric pattern, the proposed Bayesian hierarchical model nonparametrically uncovers the underlying rank from all positive semidefinite matrices, and completes the matrix by approximating the missing values. We analytically derive all posterior distributions for the fully conjugate model hierarchy and discuss variational Bayes and MCMC Gibbs sampling for inference, as well as an efficient measurement selection procedure. We present results on a toy problem, and a music recommendation problem, where we complete the kernel matrix of 2,250 pieces of music.
The poster is here.

Undersampling the k-space is an efficient way to speed up the magnetic resonance imaging (MRI). Recently emerged compressed sensing MRI shows promising results. However, most of them only enforce the sparsity of images in single transform, e.g. total variation, wavelet, etc. In this paper, based on the principle of basis pursuit, we propose a new framework to combine sparsifying transforms in compressed sensing MRI. Each transform can efficiently represent specific feature that the other can not. This framework is implemented via the state-of-art smoothed ℓ0 norm in overcomplete sparse decomposition. Simulation results demonstrate that the proposed method can improve image quality when comparing to single sparsifying transform.
The poster is here.

Iterative thresholding compressed sensing MRI based on contourlet transform by Xiaobo Qu, Weiru Zhang, Di Guo, Congbo Cai, Shuhui Cai, Zhong Chen. The abstract reads:
Reducing the acquisition time is important for clinical magnetic resonance imaging (MRI). Compressed sensing has recently emerged as a theoretical foundation for the reconstruction of magnetic resonance (MR) images from undersampled k-space measurements, assuming those images are sparse in a certain transform domain. However, most real-world signals are compressible rather than exactly sparse. For example, the commonly used 2D wavelet for compressed sensing MRI (CS-MRI) does not sparsely represent curves and edges. In this paper, we introduce a geometric image transform, the contourlet, to overcome this shortage. In addition, the improved redundancy provided by the contourlet can successfully suppress the pseudo-Gibbs phenomenon, a tiresome artifact produced by undersampling of k-space, around the singularities of images. For numerical calculation, a simple but effective iterative thresholding algorithm is employed to solve the ℓ1 norm optimization for CS-MRI. Considering the recovered information and image features, we introduce three objective criteria, which are the peak signal-to-noise ratio (PSNR), mutual information (MI) and transferred edge information (TEI), to evaluate the performance of different image transforms. Simulation results demonstrate that contourlet-based CS-MRI can better reconstruct the curves and edges than traditional wavelet-based methods, especially at low k-space sampling rate.
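The iterative thresholding recipe itself is straightforward to sketch. The toy below substitutes an orthonormal 2D DCT for the contourlet, purely because it ships with SciPy, and recovers a blocky phantom from randomly undersampled Fourier ("k-space") data by soft thresholding in the transform domain.

```python
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.fft import dctn, idctn

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_csmri(y, mask, lam=0.01, n_iter=300):
    """Iterative soft thresholding for undersampled-Fourier ("k-space") data, with an
    orthonormal 2D DCT as the sparsifying transform (a stand-in for the contourlet)."""
    alpha = np.zeros(mask.shape)                        # transform-domain coefficients
    for _ in range(n_iter):
        x = idctn(alpha, norm="ortho")                  # back to the image domain
        resid = mask * fft2(x, norm="ortho") - y        # k-space data mismatch
        grad = dctn(np.real(ifft2(mask * resid, norm="ortho")), norm="ortho")
        alpha = soft(alpha - grad, lam)                 # gradient step, then thresholding
    return idctn(alpha, norm="ortho")

# Toy "MR image" and a random k-space mask keeping roughly 35% of the samples.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img[24:40, 24:40] = 0.5
mask = rng.random((64, 64)) < 0.35
y = mask * fft2(img, norm="ortho")
rec = ista_csmri(y, mask)
print("relative error:", np.linalg.norm(rec - img) / np.linalg.norm(img))
```

A real CS-MRI experiment would use a variable-density sampling pattern and a redundant transform, but the loop has the same structure.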

Combined sparsifying transforms for compressed sensing MRI by Xiaobo Qu, X. Cao, Di Guo, C. Hu and Zhong Chen. The abstract reads:
In traditional compressed sensing MRI methods, single sparsifying transform limits the reconstruction quality because it cannot sparsely represent all types of image features. Based on the principle of basis pursuit, a method that combines sparsifying transforms to improve the sparsity of images is proposed. Simulation results demonstrate that the proposed method can well recover different types of image features and can be easily associated with total variation. Introduction: Recently emerged compressed sensing (CS) [1,2] provides a firm foundation to reconstruct signal from incomplete measurements assuming signal can be sparsely represented by certain transform. CS is first applied in MRI in [3] and the result is very impressive. Sparsity of images limits the quality of reconstructed image for compressed sensing MRI (CS-MRI) [1,2,3]. Traditional 2D wavelet was employed to sparsify magnetic resonance (MR) images [3]. Wavelet is good at sparsely representing point-like features but fails in sparsely representing the curve-like features. New tailored geometric transforms like curvelet [4] and contourlet [5,6] can be used to sparsely represent curves. But neither of them is good at representing point-like image features. So far, how to exploit multiple geometric transforms to improve the sparsity of MR images for CS reconstruction has not been discussed. This paper will present a new method to combine these transforms for CS-MRI and improve quality of reconstructed image.

Compressed sensing with coherent and redundant dictionaries by Emmanuel Candès, Yonina Eldar, Deanna Needell and Paige Randall. The abstract reads:
This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an ℓ1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of ℓ1-analysis for such problems.

I just noticed the IPAM workshop on Mathematical Problems, Models and Methods in Biomedical Imaging that took place on February 8 - 12, 2010. The program is here. One of the authors, Lei Xing, has several interesting papers, including:

Search for IMRT inverse plans with piecewise constant fluence maps using compressed sensing techniques by Lei Zhu and Lei Xing. The abstract reads:
An intensity-modulated radiation therapy (IMRT) field is composed of a series of segmented beams. It is practically important to reduce the number of segments while maintaining the conformality of the final dose distribution. In this article, the authors quantify the complexity of an IMRT fluence map by introducing the concept of sparsity of fluence maps and formulate the inverse planning problem into a framework of compressed sensing. In this approach, the treatment planning is modeled as a multiobjective optimization problem, with one objective on the dose performance and the other on the sparsity of the resultant fluence maps. A Pareto frontier is calculated, and the achieved dose distributions associated with the Pareto efficient points are evaluated using clinical acceptance criteria. The clinically acceptable dose distribution with the smallest number of segments is chosen as the final solution. The method is demonstrated in the application of fixed-gantry IMRT on a prostate patient. The result shows that the total number of segments is greatly reduced while a satisfactory dose distribution is still achieved. With the focus on the sparsity of the optimal solution, the proposed method is distinct from the existing beamlet- or segment-based optimization algorithms.
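Here is a cartoon of that multiobjective formulation: sweep the weight on an ℓ1 (sparsity) term against a least-squares dose-fidelity term and read off the trade-off between dose error and the number of active beamlets. Everything below, the dose matrix D, the prescription d, and all the numbers, is made up for illustration; it is not the authors' planning system.

```python
import numpy as np

def nonneg_lasso(D, d, lam, n_iter=10000):
    """Proximal gradient for min 0.5*||D x - d||^2 + lam*||x||_1  with x >= 0."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = np.maximum(x - step * (D.T @ (D @ x - d) + lam), 0.0)
    return x

# Toy "dose calculation": n_vox voxels, n_beam beamlets, and an achievable prescription d.
rng = np.random.default_rng(0)
n_vox, n_beam = 120, 60
D = np.abs(rng.standard_normal((n_vox, n_beam)))       # hypothetical beamlet dose kernels
x_plan = np.zeros(n_beam)
x_plan[rng.choice(n_beam, 8, replace=False)] = rng.uniform(1, 3, 8)
d = D @ x_plan

# Each value of lam gives one point of the dose-error vs. sparsity trade-off curve.
for lam in [0.01, 0.1, 1.0, 10.0]:
    x = nonneg_lasso(D, d, lam)
    err = np.linalg.norm(D @ x - d) / np.linalg.norm(d)
    print(f"lam={lam:5.2f}  dose error={err:.3f}  active beamlets={np.count_nonzero(x > 1e-3)}")
```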

Iterative image reconstruction for CBCT using edge-preserving prior by J. Wang, T. Li, and Lei Xing. The abstract reads:
On-board cone-beam computed tomography (CBCT) is a new imaging technique for radiation therapy guidance, which provides volumetric information of a patient at treatment position. CBCT improves the setup accuracy and may be used for dose reconstruction. However, there is great concern that the repeated use of CBCT during a treatment course delivers too much of an extra dose to the patient. To reduce the CBCT dose, one needs to lower the total mAs of the x-ray tube current, which usually leads to reduced image quality. Our goal of this work is to develop an effective method that enables one to achieve a clinically acceptable CBCT image with as low as possible mAs without compromising quality. An iterative image reconstruction algorithm based on a penalized weighted least-squares (PWLS) principle was developed for this purpose. To preserve edges in the reconstructed images, we designed an anisotropic penalty term of a quadratic form. The algorithm was evaluated with a CT quality assurance phantom and an anthropomorphic head phantom. Compared with conventional isotropic penalty, the PWLS image reconstruction algorithm with anisotropic penalty shows better resolution preservation.
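In the same spirit, here is a hedged sketch of a PWLS objective with an edge-preserving anisotropic quadratic penalty. To keep it self-contained, the forward model is the identity (pure denoising) rather than the actual CBCT system matrix, and the penalty uses a generic Gaussian edge weighting re-estimated at each step (lagged-diffusivity style), not the authors' exact design.

```python
import numpy as np

def pwls_denoise(y, weights, beta=20.0, delta=0.3, n_iter=500):
    """Penalized weighted least squares with an edge-preserving anisotropic
    quadratic penalty; the identity forward model stands in for the CBCT system."""
    x = y.copy()
    step = 1.0 / (2 * weights.max() + 16 * beta)      # crude Lipschitz bound
    for _ in range(n_iter):
        dx = np.diff(x, axis=1)
        dy = np.diff(x, axis=0)
        wx = np.exp(-(dx / delta) ** 2)               # tiny weight across strong edges
        wy = np.exp(-(dy / delta) ** 2)
        grad_pen = np.zeros_like(x)
        grad_pen[:, :-1] -= 2 * wx * dx
        grad_pen[:, 1:] += 2 * wx * dx
        grad_pen[:-1, :] -= 2 * wy * dy
        grad_pen[1:, :] += 2 * wy * dy
        x -= step * (2 * weights * (x - y) + beta * grad_pen)
    return x

# Noisy piecewise-constant phantom; noisier (low-mAs) pixels get smaller statistical weights.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[16:48, 16:48] = 1.0
sigma = 0.1 + 0.1 * rng.random(truth.shape)           # hypothetical per-pixel noise level
y = truth + sigma * rng.standard_normal(truth.shape)
weights = 1.0 / sigma ** 2                            # PWLS-style statistical weighting
x_hat = pwls_denoise(y, weights)
print("noisy RMSE:", np.sqrt(np.mean((y - truth) ** 2)))
print("PWLS RMSE :", np.sqrt(np.mean((x_hat - truth) ** 2)))
```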


The problem of incomplete data - i.e., data with missing or unknown values - in multi-way arrays is ubiquitous in biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, communication networks, etc. We consider the problem of how to factorize data sets with missing values with the goal of capturing the underlying latent structure of the data and possibly reconstructing missing values (i.e., tensor completion). We focus on one of the most well-known tensor factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In the presence of missing data, CP can be formulated as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factorize tensors with noise and up to 99% missing data. A unique aspect of our approach is that it scales to sparse large-scale data, e.g., 1000 x 1000 x 1000 with five million known entries (0.5% dense). We further demonstrate the usefulness of CP-WOPT on two real-world applications: a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes and the problem of modeling computer network traffic where data may be absent due to the expense of the data collection process.
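The weighted least squares formulation at the heart of CP-WOPT is compact enough to sketch. The code below fits a rank-3 CP model to a third-order tensor while scoring only the observed entries; plain gradient descent replaces the nonlinear conjugate gradient machinery described above, and all sizes are toy-scale.

```python
import numpy as np

def cp_wopt_sketch(X, mask, rank, n_iter=10000, lr=1e-3, seed=0):
    """Gradient descent on f = 0.5*||mask * (X - [[A, B, C]])||^2, i.e. a CP fit
    that only penalizes the known entries of the tensor X."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = 0.5 * rng.standard_normal((I, rank))
    B = 0.5 * rng.standard_normal((J, rank))
    C = 0.5 * rng.standard_normal((K, rank))
    for _ in range(n_iter):
        E = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - X)   # error on known entries only
        gA = np.einsum('ijk,jr,kr->ir', E, B, C)
        gB = np.einsum('ijk,ir,kr->jr', E, A, C)
        gC = np.einsum('ijk,ir,jr->kr', E, A, B)
        A, B, C = A - lr * gA, B - lr * gB, C - lr * gC
    return A, B, C

# Toy rank-3 tensor with half of its entries missing.
rng = np.random.default_rng(1)
I, J, K, r = 15, 15, 15, 3
A0, B0, C0 = (rng.standard_normal((dim, r)) for dim in (I, J, K))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
mask = rng.random(X.shape) < 0.5
A, B, C = cp_wopt_sketch(X, mask, r)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error on the missing entries:",
      np.linalg.norm((~mask) * (Xhat - X)) / np.linalg.norm((~mask) * X))
```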
Ultrahigh dimensional variable selection for Cox's proportional hazards model by Jianqing Fan, Yang Feng, Yichao Wu. The abstract reads:
Variable selection in high dimensional space has challenged many contemporary statistical problems from many frontiers of scientific disciplines. Recent technology advance has made it possible to collect a huge amount of covariate information such as microarray, proteomic and SNP data via bioimaging technology while observing survival information on patients in clinical studies. Thus, the same challenge applies to the survival analysis in order to understand the association between genomics information and clinical information about the survival time. In this work, we extend the sure screening procedure Fan and Lv (2008) to Cox's proportional hazards model with an iterative version available. Numerical simulation studies have shown encouraging performance of the proposed method in comparison with other techniques such as LASSO. This demonstrates the utility and versatility of the iterative sure independent screening scheme.
and Sure Independence Screening for Ultra-High Dimensional Feature Space by Jianqing Fan, Jinchi Lv. The abstract reads:
Variable selection plays an important role in high dimensional statistical modeling which nowadays appears in many areas of scientific discoveries. With large scale or dimensionality p, computational cost and estimation accuracy are two top concerns for any statistical procedure. In a recent paper, Candes and Tao (2007) propose a minimum ℓ1 estimator, the Dantzig selector, and show that it mimics the ideal risk within a factor of log p. Their innovative procedure and remarkable result are challenged when the dimensionality is ultra high. The factor log p can be large and their uniform uncertainty principle can fail. Motivated by those concerns, we introduce the concept of sure screening and propose a sure screening method Sure Independence Screening (SIS) to reduce dimensionality from high to a relatively large scale d that is below sample size. In a fairly general asymptotic framework, SIS is shown to have the sure screening property for even exponentially growing dimension. As a methodological extension, an iterate SIS (ISIS) is also proposed to enhance the finite sample performance. With high dimension reduced accurately to below sample size, variable selection can be accomplished by some refined lower dimensional method such as the SCAD, Dantzig selector, Lasso, or adaptive Lasso.
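The screening step itself is essentially a one-liner: rank covariates by the absolute marginal correlation with the response, keep the top d (a common choice is d on the order of n/log n), then hand the survivors to SCAD, the Dantzig selector or the Lasso. A minimal sketch on synthetic data:

```python
import numpy as np

def sis(X, y, d):
    """Sure Independence Screening: keep the d covariates with the largest
    absolute marginal correlation with the response."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    score = np.abs(Xs.T @ ys) / len(y)          # componentwise sample correlations
    return np.argsort(score)[-d:]

# Toy ultra-high-dimensional regression: p >> n, only three active covariates.
rng = np.random.default_rng(0)
n, p = 200, 5000
X = rng.standard_normal((n, p))
active = np.array([10, 200, 3000])
y = X[:, active] @ np.array([3.0, -2.0, 1.5]) + 0.5 * rng.standard_normal(n)
kept = sis(X, y, d=int(n / np.log(n)))
print("active set retained after screening:", set(active).issubset(set(kept)))
```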
In the same vein, here is a presentation on the subject: ISIS: A two-scale framework to sparse inference by Jianqing Fan.

What's a coreset, you ask? Well, Wikipedia has an answer for that, which makes this paper all the more interesting: Coresets and Sketches for High Dimensional Subspace Approximation Problems by Dan Feldman, Morteza Monemizadeh, Christian Sohler, David P. Woodruff. The abstract reads:

We consider the problem of approximating a set P of n points in R^d by a j-dimensional subspace under the ℓp measure, in which we wish to minimize the sum of ℓp distances from each point of P to this subspace. More generally, the Fq(ℓp)-subspace approximation problem asks for a j-subspace that minimizes the sum of q-th powers of ℓp-distances to this subspace, up to a multiplicative factor of (1 + ε). We develop techniques for subspace approximation, regression, and matrix approximation that can be used to deal with massive data sets in high dimensional spaces. In particular, we develop coresets and sketches, i.e. small space representations that approximate the input point set P with respect to the subspace approximation problem. Our results are:

A dimensionality reduction method that can be applied to Fq(ℓp)-clustering and shape fitting problems, such as those in [8, 15].

The first strong coreset for F1(ℓ2)-subspace approximation in high-dimensional spaces, i.e. of size polynomial in the dimension of the space. This coreset approximates the distances to any j-subspace (not just the optimal one).

A (1 + ε)-approximation algorithm for the j-dimensional F1(ℓ2)-subspace approximation problem with running time nd(j/ε)^O(1) + (n + d)·2^poly(j/ε).

A streaming algorithm that maintains a coreset for the F1(ℓ2)-subspace approximation problem and uses space equivalent to d·(2^√(log n)/ε²)^poly(j) (weighted) input points.

Streaming algorithms for the above problems with bounded precision in the turnstile model, i.e., when coordinates appear in an arbitrary order and undergo multiple updates. We show that bounded precision can lead to further improvements. We extend results of [7] for approximate linear regression, distances to subspace approximation, and optimal rank-j approximation, to error measures other than the Frobenius norm.


I think I also mentioned the following papers before. They hide behind a paywall: Compressed-sensing Photoacoustic Imaging based on random optical illumination by Dong Liang, Hao F. Zhang and Leslie Ying. The abstract reads:
Limited-view acquisition suffers from artefacts and loss of resolution for reconstruction-based Photoacoustic Imaging (PAI). In this paper, a completely new acquisition and reconstruction scheme is reported to achieve artefact-free imaging from limited-view acquisition. The method employs spatially and temporally varying random optical illumination instead of the uniform illumination used in conventional PAI methods. Compressed Sensing (CS) is also used to reduce the number of random illuminations for fast data acquisition speed. Numerical simulations demonstrated that images can be recovered from acquisitions with as few as two transducer view angles. This novel scheme can potentially eliminate both the mechanical scanning of single-element ultrasonic transducer and the expensive ultrasonic array transducer

Compressed sensing in photoacoustic tomography in vivo by Guo Z, Li C, Song L, and Wang LV. The abstract reads:
The data acquisition speed in photoacoustic computed tomography (PACT) is limited by the laser repetition rate and the number of parallel ultrasound detecting channels. Reconstructing an image with fewer measurements can effectively accelerate the data acquisition and reduce the system cost. We adapt compressed sensing (CS) for the reconstruction in PACT. CS-based PACT is implemented as a nonlinear conjugate gradient descent algorithm and tested with both phantom and in vivo experiments.
