Tuesday, March 09, 2010

CS: Tomosynthesis, CS rendering, Sparsity Gap, Hardware and more, A job

Emil Sidky sent me the following:
Hi Igor,
Congratulations on getting linked from Wired!
I think your web-site is a great resource, and it's good that it's getting noticed.

I just wanted to add to your last entry, in which you asked Gordon to comment on an article. Gordon referred to the problem of reconstruction from a limited angle and a limited number of views. This is being actively investigated by us and others, especially for application to mammo screening; it is called tomosynthesis. I believe you even have one of our papers from 2008 (co-authored with Chartrand) on your blog. [Practical iterative image reconstruction in digital breast tomosynthesis by non-convex TpV optimization by Emil Sidky, Ingrid Reiser, Robert Nishikawa, Xiaochuan Pan, Rick Chartrand, Daniel Kopans and Richard H. Moore as featured here.]

There are industry researchers investigating the deconvolution approach similar to what Gordon is talking about. One resource you might find interesting is the website www.fully3D.org. From that site there are links to the various workshops (held in odd-numbered years), each of which has a downloadable proceedings book with four-page articles on current research, some of which address the few-view problem. I believe the deconvolution work was presented at the 2007 meeting in Lindau.

Also, many of the references cited by our IP review [Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? by Xiaochuan Pan, Emil Sidky and Michael Vannier] present more recent work on the few-view or limited-angle problem. In particular, [please find below a link to] a 2002 paper from Tsukuba, Japan, published in PMB, that you might find interesting (looks familiar, doesn't it?).


Cheers,
Emil


Thanks Emil! The paper is available here: Accurate Iterative Reconstruction Algorithm for Sparse Objects: Application to 3-D Blood-Vessel Reconstruction from a Limited Number of Projections by Meihua Li, Haiquan Yang, and Hiroyuki Kudo. It is indeed very familiar, from the use of a regularization term with p = 1.1 (chosen so that the penalty remains convex) down to the small number of measurements.
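To get a feel for what a p = 1.1 penalty buys, here is a small synthetic sketch of mine (not the code of Li et al., nor the TpV code of Sidky et al.): an ℓp-regularized least-squares reconstruction solved by iteratively reweighted least squares. All dimensions and parameter values are made up for illustration.

```python
import numpy as np

# Toy few-view setting: recover a sparse x from y = A x using an l_p
# penalty with p = 1.1, which keeps the objective convex.
rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                      # 40 measurements, 100 unknowns, 5 nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

p, lam, eps = 1.1, 1e-4, 1e-8
x = A.T @ y                               # crude initial guess (backprojection)
for _ in range(200):
    # IRLS: each |x_i|^p term is majorized by a quadratic with weight w_i
    w = p * (np.abs(x) + eps) ** (p - 2)
    x = np.linalg.solve(A.T @ A + (lam / 2) * np.diag(w), A.T @ y)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The same loop with p < 1 yields the nonconvex TpV-type penalties of the Sidky et al. paper; p = 1.1 simply sits just above the convexity boundary at p = 1.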

For those interested, Emil provides an interesting explanation of the incoherence of the X-ray transform.


My webcrawler found the following papers of interest this week:

First up is The Sparsity Gap: Uncertainty Principles Proportional to Dimension by Joel A. Tropp. The abstract reads:
In an incoherent dictionary, most signals that admit a sparse representation admit a unique sparse representation. In other words, there is no way to express the signal without using strictly more atoms. This work demonstrates that sparse signals typically enjoy a higher privilege: each nonoptimal representation of the signal requires far more atoms than the sparsest representation, unless it contains many of the same atoms as the sparsest representation. One impact of this finding is to confer a certain degree of legitimacy on the particular atoms that appear in a sparse representation. This result can also be viewed as an uncertainty principle for random sparse signals over an incoherent dictionary.
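As a refresher, the incoherence in question is usually quantified by the mutual coherence of the dictionary, i.e., the largest normalized inner product between two distinct atoms. A quick illustrative check in Python (the dictionary here is a made-up random one):

```python
import numpy as np

def mutual_coherence(D):
    """Largest |<d_i, d_j>| over distinct (normalized) columns of D."""
    Dn = D / np.linalg.norm(D, axis=0)   # normalize the atoms
    G = np.abs(Dn.T @ Dn)                # Gram matrix of inner products
    np.fill_diagonal(G, 0.0)             # ignore the trivial diagonal
    return G.max()

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))       # overcomplete random dictionary
print(mutual_coherence(D))               # small: random dictionaries are incoherent
```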
I had already mentioned the next two papers, but they were behind a paywall; they are now available for our perusal: Imaging through turbulence using compressive coherence sensing by Ashwin Wagadarikar, Daniel L. Marks, Kerkil Choi, Ryoichi Horisaki, and David Brady. The abstract reads:
Previous studies have shown that the isoplanatic distortion due to turbulence and the image of a remote object may be jointly estimated from the 4D mutual intensity across an aperture. This paper shows that decompressive inference on a 2D slice of the 4D mutual intensity, as measured by a rotational shear interferometer, is sufficient for estimation of sparse objects imaged through turbulence. The 2D slice is processed using an iterative algorithm that alternates between estimating the sparse objects and estimating the turbulence-induced phase screen. This approach may enable new systems that infer object properties through turbulence without exhaustive sampling of coherence functions.
It is also available here. Next is Compressive video sensors using multichannel imagers by Mohan Shankar, Nikos P. Pitsianis, and David Brady. The abstract reads:
We explore the possibilities of obtaining compression in video through modified sampling strategies using multichannel imaging systems. The redundancies in video streams are exploited through compressive sampling schemes to achieve low power and low complexity video sensors. The sampling strategies as well as the associated reconstruction algorithms are discussed. These compressive sampling schemes could be implemented in the focal plane readout hardware resulting in drastic reduction in data bandwidth and computational complexity.
Next is Generalized Power Method for Sparse Principal Component Analysis by Michel Journée, Yurii Nesterov, Peter Richtárik, and Rodolphe Sepulchre. The abstract reads:
In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, and are therefore computationally intractable, we rewrite them into the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is decreased enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. It appears that our algorithm has best convergence properties in the case when either the objective function or the feasible set are strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in quality of the obtained solution and in computational speed.
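For the curious, here is my own toy rendition of the single-unit ℓ1 formulation solved with a power-type gradient iteration, in the spirit of the paper; the function name, parameter values, and fixed iteration count are mine, not the authors':

```python
import numpy as np

def gpower_l1(A, gamma, iters=200):
    """Single-unit sparse PCA sketch: maximize sum_i max(|a_i' x| - gamma, 0)^2
    over unit-norm x, where the a_i are the columns of the data matrix A."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        c = A.T @ x                                        # correlations with variables
        g = np.sign(c) * np.maximum(np.abs(c) - gamma, 0)  # soft-thresholded correlations
        x = A @ g                                          # gradient (power) step
        x /= np.linalg.norm(x)                             # back onto the unit sphere
    c = A.T @ x
    z = np.sign(c) * np.maximum(np.abs(c) - gamma, 0)      # sparse loading vector
    return z / np.linalg.norm(z)

rng = np.random.default_rng(1)
data = rng.standard_normal((50, 20))                       # 50 samples, 20 variables
loading = gpower_l1(data, gamma=0.5)
print("nonzeros in the loading:", np.count_nonzero(loading))
```

Larger gamma drives more of the loading entries to exactly zero; it is the sparsity knob of the formulation.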

Also of interest is On the LASSO and Dantzig selector equivalence by M. Salman Asif and Justin Romberg. The abstract reads:
Recovery of sparse signals from noisy observations is a problem that arises in many information processing contexts. LASSO and the Dantzig selector (DS) are two well-known schemes used to recover high-dimensional sparse signals from linear observations. This paper presents some results on the equivalence between LASSO and DS. We discuss a set of conditions under which the solutions of LASSO and DS are the same. With these conditions in place, we formulate a shrinkage procedure for which LASSO and DS follow the same solution path. Furthermore, we show that under these shrinkage conditions the solution to LASSO and DS can be attained in at most S homotopy steps, where S is the number of nonzero elements in the final solution. Thus the computational cost for finding the complete homotopy path for an M × N system is merely O(SMN).
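To see the LASSO/DS connection in action, here is a small synthetic comparison of mine (neither piece is the authors' homotopy scheme): LASSO solved by plain iterative soft thresholding, and the Dantzig selector cast as a linear program via the split x = u - v:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k, lam = 30, 60, 4, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# LASSO, min 0.5 ||Ax - y||^2 + lam ||x||_1, via iterative soft thresholding
x = np.zeros(n)
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
for _ in range(3000):
    z = x - A.T @ (A @ x - y) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
x_lasso = x

# Dantzig selector, min ||x||_1  s.t.  ||A'(Ax - y)||_inf <= lam, as an LP
M, b = A.T @ A, A.T @ y
A_ub = np.block([[M, -M], [-M, M]])           # encodes both sides of the inf-norm bound
b_ub = np.concatenate([lam + b, lam - b])
res = linprog(c=np.ones(2 * n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n))
x_ds = res.x[:n] - res.x[n:]

# Under the conditions discussed in the paper the two solutions coincide;
# here they should at least be very close.
print("||x_lasso - x_ds|| =", np.linalg.norm(x_lasso - x_ds))
```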
Also from these authors are the following two presentations: Channel protection and Basis pursuit with dynamic updates.

I mentioned the following two papers in the previous entry, Surely You Must Be Joking Mr. Screenwriter. In the 2005 Dual Photography paper, a hierarchical scheme of varying illumination patterns was devised to obtain the 6D transfer function (or transport matrix, as it is called in the paper). Pradeep Sen and Soheil Darabi have now used compressed sensing to obtain this transport matrix with a simple and efficient scheme (as opposed to the complex hierarchical scheme devised in the earlier paper):


Compressive Dual Photography by Pradeep Sen and Soheil Darabi. The abstract reads:
The accurate measurement of the light transport characteristics of a complex scene is an important goal in computer graphics and has applications in relighting and dual photography. However, since the light transport data sets are typically very large, much of the previous research has focused on adaptive algorithms that capture them efficiently. In this work, we propose a novel, non-adaptive algorithm that takes advantage of the compressibility of the light transport signal in a transform domain to capture it with less acquisitions than with standard approaches. To do this, we leverage recent work in the area of compressed sensing, where a signal is reconstructed from a few samples assuming that it is sparse in a transform domain. We demonstrate our approach by performing dual photography and relighting by using a much smaller number of acquisitions than would normally be needed. Because our algorithm is not adaptive, it is also simpler to implement than many of the current approaches.

Compressive Rendering: A Rendering Application of Compressed Sensing by Pradeep Sen and Soheil Darabi. The abstract reads:
Recently, there has been growing interest in compressed sensing (CS), the new theory that shows how a small set of linear measurements can be used to reconstruct a signal if it is sparse in a transform domain. Although CS has been applied to many problems in other fields, in computer graphics it has only been used so far to accelerate the acquisition of light transport. In this paper, we propose a novel application of compressed sensing by using it to accelerate ray-traced rendering in a manner that exploits the sparsity of the final image in the wavelet basis. To do this, we raytrace only a subset of the pixel samples in the spatial domain and use a simple, greedy CS-based algorithm to estimate the wavelet transform of the image during rendering. Since the energy of the image is concentrated more compactly in the wavelet domain, less samples are required for a result of given quality than with conventional spatial-domain rendering. By taking the inverse wavelet transform of the result, we compute an accurate reconstruction of the desired final image. Our results show that our framework can achieve high-quality images with approximately 75% of the pixel samples using a non-adaptive sampling scheme. In addition, we also perform better than other algorithms that might be used to fill in the missing pixel data, such as interpolation or inpainting. Furthermore, since the algorithm works in image space, it is completely independent of scene complexity.
I think that in many respects this rendering work is extremely important, as it has good potential for guiding more physics-based experimentation: these techniques provide a level of control beyond what is achievable in any lab. For instance, one can only be in awe when looking at which wavelets turn out to be the most suitable as a compression basis. No physics-based experiment could even touch that. All in all, I hope that Pradeep Sen and Soheil Darabi, or whoever uses their techniques, are looking to help in the conception of future compressive sensing hardware.
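To give a flavor of the idea, here is a toy sketch of mine in the spirit of their approach (not their algorithm): "render" only a random 75% subset of the pixels, then fill in the rest by iterative soft thresholding in a wavelet basis, using the PyWavelets package:

```python
import numpy as np
import pywt

# Hypothetical stand-in for a rendered image: smooth blob plus an edge,
# hence compressible in a wavelet basis.
rng = np.random.default_rng(0)
N = 64
u, v = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
img = np.exp(-4 * (u**2 + v**2)) + 0.5 * (u > 0)

mask = rng.random((N, N)) < 0.75          # only 75% of the pixel samples are "ray-traced"
y = img * mask

wav, lam = 'db2', 0.02
x = y.copy()
for _ in range(100):
    x = x + mask * (y - x)                # enforce consistency on the known pixels
    coeffs = pywt.wavedec2(x, wav, level=3)
    arr, sl = pywt.coeffs_to_array(coeffs)
    arr = np.sign(arr) * np.maximum(np.abs(arr) - lam, 0.0)   # soft threshold
    x = pywt.waverec2(pywt.array_to_coeffs(arr, sl, output_format='wavedec2'), wav)
    x = x[:N, :N]                         # guard against off-by-one padding

print("relative error:", np.linalg.norm(x - img) / np.linalg.norm(img))
```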

In a different area, we have Randomized Group Testing for Acoustic Source Localization by William Mantzel, Justin Romberg, and Karim Sabra. The abstract reads:
Undersea localization requires a computationally expensive partial differential equation simulation to test each candidate hypothesis location via matched filter. We propose a method of batch testing that effectively yields a test sequence output of random combinations of location-specific matched filter correlations, such that the computational run time varies with the number of tests instead of the number of locations. We show that by finding the most likely location that could have accounted for these batch test outputs, we are able to perform almost as well as if we had computed each location's matched filter. In particular, we show that we can reliably resolve the target's location up to the resolution of incoherence using only logarithmically many measurements when the number of candidate locations is less than the dimension of the matched filter. In this way, our random mask pattern not only performs substantially the same as cleverly designed deterministic masks in classical batch testing scenarios, but also naturally extends to other scenarios when the design of such deterministic masks may be less obvious.
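Here is a toy version of mine of the batch-testing idea, with random vectors standing in for the expensive PDE-generated matched-filter templates:

```python
import numpy as np

rng = np.random.default_rng(0)
n_loc, dim, n_tests = 200, 512, 40        # 200 candidate locations, 40 batch tests

# Stand-in matched-filter templates, one per candidate location
# (near-orthogonal in high dimension, mimicking incoherence).
T = rng.standard_normal((n_loc, dim))
T /= np.linalg.norm(T, axis=1, keepdims=True)

true_loc = 37
signal = T[true_loc] + 0.05 * rng.standard_normal(dim)

# Batch tests: correlate the signal against random +-1 combinations of
# templates, giving n_tests correlations instead of n_loc.
S = rng.choice([-1.0, 1.0], size=(n_tests, n_loc))
tests = (S @ T) @ signal

# Most likely location: the one whose +-1 pattern best explains the tests.
scores = S.T @ tests
print("estimated:", int(np.argmax(np.abs(scores))), "true:", true_loc)
```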

Random Channel Coding and Blind Deconvolution by M. Salman Asif, William Mantzel and Justin Romberg. The abstract reads:
Blind deconvolution arises naturally when dealing with finite multipath interference on a signal. In this paper we present a new method to protect the signals from the effects of sparse multipath channels: we modulate/encode the signal using random waveforms before transmission and estimate the channel and signal from the observations, without any prior knowledge of the channel other than that it is sparse. The problem can be articulated as follows. The original message x is encoded with an overdetermined m × n (m > n) matrix A whose entries are randomly chosen; the encoded message is given by Ax. The received signal is the convolution of the encoded message with h, the s-sparse impulse response of the channel. We explore three different schemes to recover the message x and the channel h simultaneously. The first scheme recasts the problem as a block ℓ1 optimization program. The second scheme imposes a rank-1 structure on the estimated signal. The third scheme uses the nuclear norm as a proxy for rank, to recover x and h. The simulation results are presented to demonstrate the efficiency of the random coding and proposed recovery schemes.
We also have another fundamental paper: The null space property for sparse recovery from multiple measurement vectors by Ming-Jun Lai and Louis Y. Liu. The abstract reads:
We prove a null space property for the uniqueness of the sparse solution vectors recovered from a minimization in ℓq quasi-norm subject to multiple systems of linear equations, where q ∈ (0, 1]. Furthermore, we show that the null space property for the setting of the sparse solution vectors for multiple linear systems is equivalent to the null space property for the standard minimization in ℓq quasi-norm subject to one linear system. This answers the questions raised in [Foucart and Gribonval '09, [15]].

And finally, Non-Iterative Valid Blind Deconvolution of Sparsifiable Images using an Inverse Filter by Andrew E. Yagle. The abstract reads:
We propose a new non-iterative algorithm for the blind deconvolution problem of reconstructing both a sparsifiable image and a point-spread function (PSF) from their valid 2D convolution. No support constraint is needed for either the image or the PSF, nor is non-negativity of the image. The only requirements are that the PSF be modelled as having a finite-support inverse or equalizing filter (IF), and that the product of the (known) numbers of nonzero pixels in this inverse filter and the sparsified image be less than the total number of image pixels. The algorithm requires only the solution of a large linear system of equations and a small rank-one matrix decomposition.
A presentation entitled:

Compressed Sensing - Near Optimal Recovery of Signals from Highly Incomplete Measurements
by Wolfgang Dahmen. The introduction to the presentation reads:
The usual paradigm for signal processing is to model a signal as a bandlimited function and to recover it by means of its time samples. The Shannon-Nyquist theory says that the sampling rate needs to be at least twice the bandwidth. For broadbanded signals, such high sampling rates may be impossible to implement in circuitry. Nevertheless such broadbanded signals may be sparse or nearly sparse in the sense that they can be well approximated by linear combinations of relatively few elements of some suitable basis. Compressed Sensing is a new paradigm for signal processing whose aim is to circumvent the dilemma of the Nyquist rate by taking advantage of such an “information sparsity.” Instead of standard time samples one employs more general linear measurement functionals whose number is now to be kept close to the actual information content of the signal. We give a brief introduction to Compressed Sensing that centers on the effectiveness and implementation of random sampling and corresponding reconstruction techniques. The discussion of the concepts is to bring out the potential relevance of compressed sensing for imaging in electron microscopy. The central goal would be to facilitate highly accurate reconstructions of the material structures from a possibly small number of observations, viz. low energy exposure, exploiting the information sparsity of the observed object in a best possible way.
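For newcomers, the gist of that introduction fits in a few lines of Python (my own generic demo, not from the slides): a signal that is sparse in some basis is recovered exactly from far fewer random measurements than Nyquist sampling would suggest, here with a simple orthogonal matching pursuit:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of A to explain y."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated new atom
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef                      # residual is orthogonal to picks
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 40, 5                                  # 256 unknowns, only 40 measurements
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)          # random measurement functionals
y = A @ x_true

x_hat = omp(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))   # essentially zero
```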

His slides on TEM and STEM reminded me of this entry on Islamic Tilings.


In a different direction, the University of Edinburgh has an opening for a Research Associate in Compressed Sensing SAR:

School of Engineering: Research Associate in Compressed Sensing SAR

Vacancy details

* Vacancy Reference: 3012541
* Department: School of Engineering
* Job Title: Research Associate in Compressed Sensing SAR
* Job Function: Academic
* Job Type: Full Time
* Live Date: 04-Mar-2010
* Expiry Date: 04-Apr-2010
* Salary Scale: £25,001 - £28,983
* Internal job: No. Anybody can apply for this position.


This post is to investigate compressed sensing coding techniques for use in Synthetic Aperture Radar (SAR) remote sensing applications.

The purpose of "Compressed sensing SAR" is to provide SAR imaging with a superior compression system that has the capability of practically transmitting large quantities of SAR data to the receiving station over restricted bandwidth with minimal on board computation. The communication or data storage bottleneck is a key factor currently limiting coverage and/or resolution in SAR instruments. Our unique approach to addressing this problem is to utilize the new concept of "compressed sensing" to remove this bottleneck. Compressed sensing has the potential to provide a compression scheme that is bandwidth efficient while requiring minimal on-board processing. The project will explore the feasibility of such a technique and design and evaluate a proof of concept compression system. In addition, you will have the opportunity to work closely with other members of the sparse representations research group within the Institute for Digital Communications.

You will have a PhD (or have recently submitted) or equivalent in engineering, computer science or related discipline, and have a strong background in signal processing. The appointment will be for 18 months starting on 1st April 2010 or as soon as possible thereafter.

Fixed Term: 18 months


The position has been added to the Compressive Sensing Jobs page.
