If you haven't done so yet, please go and fill out yesterday's poll (note that there is an option for saying you do not own an iPod/iPhone).
Today, we have two preprints from arXiv, but before that (given some exchanges with some of you) I would like to remind all of you of the following short discussion I had with Gerry Skinner, a specialist in Coded Aperture Imaging, back in August '08. Once you are done reading this insightful interview with a veteran of coded aperture who, for all intents and purposes, tells people to stay away from coded aperture after having worked on it for the past thirty years, you may want to reflect on the three other recent entries:
- About the non-use of compressive sensing in a recent application, in Is Compressive Sensing a zero-th or a first order capability?
- About the current uncertainty with regard to the use of nonlinear solvers in current commercial CT scanners (ART in CTs?)
- About whether we have reached a ceiling (or the floor) with respect to how fast these solvers can go
Today, the first paper is about speeding up reconstruction in the CT case; please note the number of x-ray projections needed for a "good" image:
GPU-based Cone Beam CT Reconstruction via Total Variation Regularization by Xun Jia, Yifei Lou, John Lewis, Ruijiang Li, Xuejun Gu, Chunhua Men, Steve B. Jiang. The abstract reads:
Cone-beam CT (CBCT) reconstruction is of central importance in image guided radiation therapy due to its broad applications in many clinical contexts. However, the high image dose in CBCT scans is a clinical concern, especially when it is used repeatedly for patient setup purposes before each radiotherapy treatment fraction. A desire for lower imaging dose has motivated a vast amount of interest in the CBCT reconstruction based on a small number of X-ray projections. Recently, advances in image processing and compressed sensing have led to tremendous success in recovering signals based on extremely low sampling rates, laying the mathematical foundation for reconstructing CBCT from few projections. In this paper, we present our recent development on a GPU-based iterative algorithm for the highly under-sampled CBCT reconstruction problem. We considered an energy functional consisting of a data fidelity term and a regularization term of a total variation norm. In order to solve our model, we developed a modified fixed-point continuation algorithm. Our numerical computations demonstrated satisfactory reconstruction accuracy and promising efficiency. We evaluated the reconstruction results under different numbers of projections. It is found that about 15-40 X-ray projections are enough to provide satisfactory image quality for clinical purposes in cancer radiotherapy. Potential applications of our algorithm in 4D-CBCT and possible approaches to improve our algorithm will also be discussed.
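For readers who want to see the shape of the optimization problem in the abstract, here is a minimal sketch (NumPy, not the authors' GPU code): plain gradient descent on an energy functional made of a least-squares data fidelity term and a smoothed total variation term. The random matrix standing in for the few-view cone-beam projector, the TV smoothing, and all parameters are my own assumptions for illustration; the paper itself uses a modified fixed-point continuation algorithm.

```python
# Minimal sketch of few-view TV-regularized reconstruction on a toy 2D problem.
# NOT the authors' GPU fixed-point continuation code; A, the smoothed TV term
# and all parameters below are illustrative assumptions.
import numpy as np

def tv_smoothed(x, eps=1e-6):
    """Smoothed isotropic total variation of a 2D image and its gradient."""
    dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences, zero at boundary
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    tv = mag.sum()
    # gradient of the smoothed TV ~ minus the divergence of the normalized gradient field
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return tv, -div

def reconstruct(A, b, shape, lam=0.05, step=0.1, n_iter=400):
    """Gradient descent on ||A x - b||^2 + lam * TV(x)."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        resid = A @ x.ravel() - b
        grad_fid = 2.0 * (A.T @ resid).reshape(shape)
        _, grad_tv = tv_smoothed(x)
        x -= step * (grad_fid + lam * grad_tv)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 32
    phantom = np.zeros((n, n))
    phantom[8:24, 8:24] = 1.0                          # piecewise-constant "object"
    # stand-in for a few-view projection operator: a random undersampling matrix
    A = rng.standard_normal((15 * n, n * n)) / np.sqrt(n * n)
    b = A @ phantom.ravel()
    rec = reconstruct(A, b, (n, n))
    print("relative error:", np.linalg.norm(rec - phantom) / np.linalg.norm(phantom))
```

On a real scanner the operator would be the cone-beam forward projector rather than a random matrix, and the fixed-point continuation scheme of the paper handles the nonsmooth TV term more carefully than the smoothed gradient step above.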
The second paper is about providing some theoretical justification for the use of sparse measurement matrices built via tensor products in high-dimensional cases:
A simple construction of almost-Euclidean subspaces of $\ell_1^N$ via tensor products by Piotr Indyk, Stanislaw Szarek. The abstract reads:
It has been known since the 1970s that the N-dimensional $\ell_1$-space contains nearly Euclidean subspaces whose dimension is $\Omega(N)$. However, proofs of existence of such subspaces were probabilistic, hence non-constructive, which made the results not-quite-suitable for subsequently discovered applications to high-dimensional nearest neighbor search, error-correcting codes over the reals, compressive sensing and other computational problems. In this paper we present a "low-tech" scheme which, for any $a > 0$, allows one to exhibit nearly Euclidean $\Omega(N)$-dimensional subspaces of $\ell_1^N$ while using only $N^a$ random bits. Our results extend and complement (particularly) recent work by Guruswami-Lee-Wigderson. Characteristic features of our approach include (1) simplicity (we use only tensor products) and (2) yielding arbitrarily small distortions, or "almost Euclidean" subspaces.
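To make the objects in the abstract a little more concrete, here is a small numerical illustration (not the paper's construction or its distortion bounds): take a random subspace of $\ell_1^n$, form the Kronecker (tensor) product of its basis with itself to get a subspace of $\ell_1^{n^2}$, and empirically look at the ratio $\|x\|_1 / (\sqrt{N}\,\|x\|_2)$ over the subspace, which stays close to a constant when the subspace is nearly Euclidean. The dimensions, sample counts, and the Gaussian factor below are assumptions for illustration only.

```python
# Hedged illustration of the tensor-product idea: the tensored subspace lives in
# ambient dimension n^2 but is described by only the n*d random numbers of the factor.
import numpy as np

def l1_l2_ratios(basis, n_samples=2000, seed=0):
    """Empirical range of ||x||_1 / (sqrt(N) * ||x||_2) over random x in span(basis)."""
    rng = np.random.default_rng(seed)
    N, d = basis.shape
    X = basis @ rng.standard_normal((d, n_samples))    # random points in the subspace
    ratios = np.abs(X).sum(axis=0) / (np.sqrt(N) * np.linalg.norm(X, axis=0))
    return ratios.min(), ratios.max()

rng = np.random.default_rng(1)
n, d = 64, 8
B = rng.standard_normal((n, d))     # basis of a random d-dimensional subspace of l1^n
B2 = np.kron(B, B)                  # tensor product: a d^2-dimensional subspace of l1^(n^2)

print("factor   subspace, l1/l2 range:", l1_l2_ratios(B))
print("tensored subspace, l1/l2 range:", l1_l2_ratios(B2))
```

The point of the paper is, of course, the theory behind how the distortion behaves under such tensoring; the snippet only shows which quantities are being controlled.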
Credit Photo: NASA/ESA, SOHO spacecraft imaging the demise of a comet crashing on the Sun on January 3, 2010
Hi
This paper might be of interest:
http://arxiv.org/abs/0912.5338
In general, the statistics arxiv:
http://arxiv.org/list/stat/new
often has papers related to this blog.
Great blog, by the way.
Larry Wasserman
Larry,
Thanks for the heads-up. I try to avoid spending time doing these searches and tend to rely on keywords. Do you think matrix completion would do for the type of work you have seen there?
Cheers,
Igor.