
Wednesday, January 06, 2010

CS: The reports of Compressive Sensing's demise have been greatly exaggerated, GPU-TV-CT, Tensor product of sparse matrices.


If you haven't done so yet, please go and fill out yesterday's poll (note that it includes an option for saying you do not own an iPod/iPhone).

Today, we have two preprints from arXiv, but before that (given some exchanges with some of you), I would like to remind all of you of the following short discussion I had with Gerry Skinner, a specialist in Coded Aperture Imaging, back in August '08. Once you are done reading this insightful interview with a veteran of coded aperture who, for all intents and purposes, tells people to stay away from it after having worked on it for the past thirty years, you may want to reflect on the three other recent entries:
All these interrogations could sow some doubt in the minds of the weak :-). However, one should really look at Compressive Sensing not just as an incremental technology, but rather as a way to do things that would be infinitely more complex and resource-intensive with current technology. Some applications won't pan out because current technology will catch up faster thanks to Moore's law (Gerry's one-pixel example); some will, because even with Moore's law helping the conventional route, it will still be more expensive than a low-tech CS approach.

In the CT business, CS brings the ability to reduce the dose to the patient. In MRI, it brings the potential of studying the internal dynamics of the body. In biology, group testing procedures allow the number of tests to be reduced by an order of magnitude (for the moment). In coded aperture, CS brings the ability to get superresolution, 3D, or some multispectral capability. As one can see, CS is not reaching an endgame; rather, it opens new futures, and it is really up to us to make them happen. For those of us concerned about noise, maybe we should be in the business of redefining noise and using compressive sensing as a way to do so; surely we cannot always rely on electrical engineers to do that.

If you have a one-sentence feel-good story about how CS changed your area of expertise, I could group it with others. Coming back to the app phone poll, how much do you think that insight is worth :-) ? I bet more than $0.99, and surely more than free.

Today, the first paper is about speeding up the reconstruction in the CT case; please note the number of X-ray projections needed for a "good" image:

GPU-based Cone Beam CT Reconstruction via Total Variation Regularization by Xun Jia, Yifei Lou, John Lewis, Ruijiang Li, Xuejun Gu, Chunhua Men, Steve B. Jiang. The abstract reads:
Cone-beam CT (CBCT) reconstruction is of central importance in image guided radiation therapy due to its broad applications in many clinical contexts. However, the high image dose in CBCT scans is a clinical concern, especially when it is used repeatedly for patient setup purposes before each radiotherapy treatment fraction. A desire for lower imaging dose has motivated a vast amount of interest in the CBCT reconstruction based on a small number of X-ray projections. Recently, advances in image processing and compressed sensing have led to tremendous success in recovering signals based on extremely low sampling rates, laying the mathematical foundation for reconstructing CBCT from few projections. In this paper, we present our recent development on a GPU-based iterative algorithm for the highly under-sampled CBCT reconstruction problem. We considered an energy functional consisting of a data fidelity term and a regularization term of a total variation norm. In order to solve our model, we developed a modified fixed-point continuation algorithm. Our numerical computations demonstrated satisfactory reconstruction accuracy and promising efficiency. We evaluated the reconstruction results under different numbers of projections. It is found that about 15-40 X-ray projections are enough to provide satisfactory image quality for clinical purposes in cancer radiotherapy. Potential applications of our algorithm in 4D-CBCT and possible approaches to improve our algorithm will also be discussed.
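To make the variational setup above concrete, here is a minimal CPU sketch in Python/NumPy of the same kind of objective: a least-squares data-fidelity term plus a total-variation penalty, minimized by plain gradient descent on a tiny synthetic phantom. The random matrix A, the phantom, the smoothing parameter eps (which only serves to make the TV term differentiable), the weight lam and the step size are all my own stand-ins for illustration; this is not the authors' GPU implementation nor their modified fixed-point continuation algorithm.

# Toy 2D illustration of TV-regularized reconstruction.  NOT the authors'
# GPU code or their fixed-point continuation scheme: plain gradient descent
# on 0.5*||A x - b||^2 + lam * TV_eps(x), where TV_eps is a smoothed
# (differentiable) total-variation penalty and A is a random matrix standing
# in for the under-sampled cone-beam projector.
import numpy as np

def smoothed_tv(img, eps=0.05):
    """Smoothed isotropic TV of a 2D image and its gradient."""
    dx = np.zeros_like(img); dx[:, :-1] = img[:, 1:] - img[:, :-1]
    dy = np.zeros_like(img); dy[:-1, :] = img[1:, :] - img[:-1, :]
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / mag, dy / mag
    grad = np.zeros_like(img)
    grad[:, :-1] -= px[:, :-1]; grad[:, 1:] += px[:, :-1]   # adjoint of dx
    grad[:-1, :] -= py[:-1, :]; grad[1:, :] += py[:-1, :]   # adjoint of dy
    return mag.sum(), grad

rng = np.random.default_rng(0)
n = 32                                        # tiny n x n phantom
x_true = np.zeros((n, n)); x_true[8:24, 8:24] = 1.0
m = n * n // 4                                # 4x under-sampling
A = rng.standard_normal((m, n * n)) / np.sqrt(m)   # hypothetical system matrix
b = A @ x_true.ravel()

lam, step = 0.02, 0.05                        # regularization weight, step size
x = np.zeros(n * n)
for _ in range(1000):
    _, tv_grad = smoothed_tv(x.reshape(n, n))
    g = A.T @ (A @ x - b) + lam * tv_grad.ravel()
    x -= step * g

err = np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")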

The second paper provides some theoretical justification for the use of tensor products of sparse measurement matrices in high-dimensional cases. The abstract reads:

It has been known since 1970's that the N-dimensional $\ell_1$-space contains nearly Euclidean subspaces whose dimension is $\Omega(N)$. However, proofs of existence of such subspaces were probabilistic, hence non-constructive, which made the results not-quite-suitable for subsequently discovered applications to high-dimensional nearest neighbor search, error-correcting codes over the reals, compressive sensing and other computational problems. In this paper we present a "low-tech" scheme which, for any $a > 0$, allows to exhibit nearly Euclidean $\Omega(N)$-dimensional subspaces of $\ell_1^N$ while using only $N^a$ random bits. Our results extend and complement (particularly) recent work by Guruswami-Lee-Wigderson. Characteristic features of our approach include (1) simplicity (we use only tensor products) and (2) yielding arbitrarily small distortions, or "almost Euclidean" subspaces.
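For readers who want a hands-on feel for the tensor-product idea, here is a small NumPy probe. Everything in it (the dimensions n and d, the Gaussian basis, the sampling-based check) is my own toy setup, not the paper's construction: it simply tensors a random subspace with itself via Kronecker products and checks, on randomly sampled directions, how comparable the $\ell_1$ norm and the rescaled $\ell_2$ norm remain.

# Numerical probe of the tensor-product idea, with toy parameters of my own
# choosing rather than the paper's construction.  It draws a random
# d-dimensional subspace E of ell_1^n, forms the tensored subspace E (x) E
# inside ell_1^{n^2} via Kronecker products of the basis vectors, and compares
# ||x||_1 / (sqrt(N) ||x||_2) on sampled directions.  Random sampling only
# gives a rough picture, not a worst-case distortion bound.
import numpy as np

rng = np.random.default_rng(1)

def l1_l2_ratio(basis, trials=2000):
    """min and mean of ||x||_1 / (sqrt(N) ||x||_2) over random x in span(basis);
    ratios bounded away from 0 (ideally near 1) mean the ell_1 and rescaled
    ell_2 norms are comparable, i.e. the subspace looks 'nearly Euclidean'."""
    N = basis.shape[0]
    X = basis @ rng.standard_normal((basis.shape[1], trials))
    ratios = np.abs(X).sum(axis=0) / (np.sqrt(N) * np.linalg.norm(X, axis=0))
    return ratios.min(), ratios.mean()

n, d = 64, 8
U = np.linalg.qr(rng.standard_normal((n, d)))[0]  # orthonormal basis of E
UU = np.kron(U, U)                                # basis of E (x) E in R^{n^2}

print("E        (dim %d in R^%d):" % (d, n), l1_l2_ratio(U))
print("E (x) E  (dim %d in R^%d):" % (d * d, n * n), l1_l2_ratio(UU))

The sampled ratio is only meant to make the "nearly Euclidean" notion tangible; the paper's actual contribution is obtaining such subspaces while using only $N^a$ random bits.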


Credit Photo: NASA/ESA, SOHO spacecraft imaging the demise of a comet crashing into the Sun on January 3, 2010.

2 comments:

  1. Hi
    This paper might be of interest:

    http://arxiv.org/abs/0912.5338

    In general, the statistics arxiv:

    http://arxiv.org/list/stat/new

    often has papers related to this blog.

    Great blog, by the way.

    Larry Wasserman

  2. Larry,

Thanks for the heads-up. I try to avoid spending time doing these searches and tend to rely on keywords. Do you think Matrix completion would do for the type of work you have seen there?

    Cheers,

    Igor.
