Monday, January 12, 2009

CS: Combining Parallel Imaging and Compressed Sensing, A short discussion with Michael Lustig


Michael Lustig just pointed my attention to the work he and his co-authors will be presenting at the upcoming workshop on data sampling and image reconstruction. There is a one-page abstract online and a related presentation on the subject entitled:





I went ahead and first checked the pdf version and asked him the following question:
In the incoherent sampling section (slide 107 in the pdf presentation), one thing I'd like to understand: why are you saying that it has "too many holes"? Do you mean that you cannot control the accuracy of the signal in certain regions of frequency space because you don't gather information there?
Miki responded with:
In the figure, only the white pixels are collected.

There are several aspects here. We are trying to use two separate sources of prior information on the signal in order to recover it.
The first source is that we use a multiple receiver array (slides 7,8), where each array element receives the image information weighted by the receive sensitivity of the element, so there is redundancy there. The second is the compressibility of the signal (slides 9,10).

If the undersampling factor is much less than the number of elements, the first source is enough to reconstruct, as the measurement matrix is over-determined. But as the undersampling factor gets close to the number of elements and beyond, the system of equations becomes ill-conditioned (see slide 88, which shows the noise amplification due to the ill-conditioning) and then under-determined.

The rule of thumb is that when points are relatively close in frequency, the reconstruction is well conditioned, but when they are far away, the reconstruction becomes ill-conditioned.
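This rule of thumb is easy to check numerically. Below is a minimal 1-D toy sketch (the Gaussian coil sensitivities and the 64-pixel grid are my own assumptions for illustration, not the actual scanner setup): the coil-weighted Fourier encoding is stacked into one matrix, and its condition number is compared for evenly spread samples versus samples bunched into one region, which leaves a large "hole".

```python
import numpy as np

N = 64   # image pixels (1-D toy example)
C = 4    # receiver coils
M = 16   # k-space samples kept per coil (4x undersampling)

# Hypothetical smooth coil sensitivities: Gaussian bumps across the FOV.
x = np.arange(N)
centers = np.linspace(0, N, C, endpoint=False)
S = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / (N / 3)) ** 2)

def encoding_matrix(kept):
    """Stacked (C*M) x N parallel-imaging encoding matrix: Fourier
    encoding of the kept k-space lines, weighted by each coil."""
    F = np.exp(-2j * np.pi * np.outer(kept, x) / N) / np.sqrt(N)
    return np.vstack([F * S[c] for c in range(C)])

even = np.arange(0, N, N // M)   # evenly spread samples, small gaps
bunched = np.arange(M)           # samples crowded together: one big "hole"

print("cond, even spacing:", np.linalg.cond(encoding_matrix(even)))
print("cond, big hole    :", np.linalg.cond(encoding_matrix(bunched)))
```

On this toy problem the bunched pattern is worse-conditioned by many orders of magnitude, matching the intuition that large gaps between samples break the receiver-array source of redundancy.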

Now, in order to exploit sparsity, we need an incoherent sampling pattern. But it turns out that just choosing samples at random is not a good idea, because the distances between samples are not controlled. Globally, such a sampling scheme has uniform density, but locally you will get high-density areas and "holes". These "holes" really mess up the receiver-array source of redundancy. On the other hand, a Poisson disc sampling distribution provides locally uniform sampling everywhere, and also incoherence. So both sources of redundancy are maximally exploited.
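The contrast between the two patterns can be seen in a few lines of code. The sketch below uses naive dart throwing (rejection sampling), which is only an assumption for illustration, not the generator actually used; it compares the smallest pairwise gap of a uniform random pattern against a Poisson disc pattern of the same size:

```python
import numpy as np

rng = np.random.default_rng(1)

def dart_throwing(n_pts, radius, tries=20000):
    """Naive dart throwing: keep a candidate point only if it stays at
    least `radius` away from every previously accepted point."""
    pts = []
    for _ in range(tries):
        p = rng.random(2)
        if not pts or np.linalg.norm(np.array(pts) - p, axis=1).min() >= radius:
            pts.append(p)
        if len(pts) == n_pts:
            break
    return np.array(pts)

def min_gap(pts):
    """Smallest distance between any two points in the pattern."""
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min()

n = 200
poisson = dart_throwing(n, radius=0.03)
uniform = rng.random((n, 2))

print("smallest gap, uniform random:", min_gap(uniform))
print("smallest gap, Poisson disc  :", min_gap(poisson))
```

With uniform random sampling some points land almost on top of each other (and, symmetrically, large empty holes appear elsewhere), while the Poisson disc pattern keeps a guaranteed minimum spacing, i.e. locally uniform density.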

Slides 86-102 explain how parallel imaging reconstruction is done in practice. But I guess if you are not familiar with the concept it is harder to understand without the animation. Try the quicktime version, it is cool.


The conditioning of the parallel imaging reconstruction is actually more than a rule of thumb; it can be calculated analytically. But I wanted to simplify the explanation.

I watched the Quicktime version and then asked:

One more thing: since MRI is ahead of other technologies in implementing CS, can you pinpoint for me a formula/bound in your work that lays out the analytical calculation you just mentioned?

Miki kindly responded with:

Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P., SENSE: Sensitivity Encoding for Fast MRI , Magn Reson Med. 1999 Nov;42(5):952-62.

Pruessmann KP, Weiger M, Börnert P, Boesiger P., Advances in sensitivity encoding with arbitrary k-space trajectories. Magn Reson Med. 2001 Oct;46(4):638-51.

Thanks Miki!

The first reference pinpoints the issue at hand in gathering a signal from multiple sensors at the same time:

There are actually two kinds of noise that affect SENSE images, i.e., noise in sample values and noise in sensitivity data. The latter, however, can usually be reduced to a negligible level by smoothing. Then Eq. [8] for the calculation of image noise holds. This equation illustrates two important aspects of noise propagation in SENSE reconstruction.

First, with multiple channels the diagonal entries in Ψ vary from channel to channel and there is noise correlation between samples taken simultaneously, i.e., there are non-zero cross-terms. Second, unlike a matrix representation of FFT, a SENSE reconstruction matrix generally is not unitary. As a consequence, unlike standard Fourier images the noise level in a SENSE image varies from pixel to pixel and there is noise correlation between pixels.
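The noise amplification these references quantify is what the SENSE papers call the geometry ("g") factor, computable from the coil sensitivities and the receiver noise covariance Ψ. Here is a toy sketch for one group of R mutually aliasing pixels, with hypothetical random complex sensitivities (the real ones would come from calibration data):

```python
import numpy as np

rng = np.random.default_rng(2)

C, R = 4, 2   # coils, undersampling factor (R pixels alias onto each other)

# Hypothetical complex coil sensitivities at the R aliasing pixels.
S = rng.normal(size=(C, R)) + 1j * rng.normal(size=(C, R))
# Receiver noise covariance Psi: equal variance, slightly correlated channels.
Psi = 0.9 * np.eye(C) + 0.1 * np.ones((C, C))

A = S.conj().T @ np.linalg.inv(Psi) @ S   # R x R normal matrix S^H Psi^-1 S

# Geometry factor per pixel: noise amplification beyond the plain sqrt(R)
# penalty of undersampling; always >= 1, and it blows up as A becomes
# ill-conditioned.
g = np.sqrt(np.real(np.diag(np.linalg.inv(A)) * np.diag(A)))
print("g-factors:", g)
```

When the coil sensitivities at the aliasing pixels are nearly linearly dependent, A is close to singular and the g-factors become large; this is the per-pixel noise variation the quote describes.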

4 comments:

Anonymous said...

I wonder whether random sampling is better than Poisson disc sampling in the case of a single coil, from the CS point of view, since we have local high- and low-density areas with random sampling?
Thanks

Anonymous said...

I am not yet sure if random sampling is better. Random sampling has uniform coherence (or lack thereof) between pixels. Poisson disc has higher coherence between pixels at a distance larger than the disc radius, and lower coherence between closer pixels.

I guess it depends on the statistics of the signal.

Anonymous said...

Which method are you using to generate the Poisson disc sampling pattern, as it is quite computationally expensive?

Davide

Anonymous said...

I used dart throwing, using the code available at: http://www.cs.virginia.edu/~gfx/pubs/antimony/

The computational complexity is negligible. We generated a very large Poisson disc sampling pattern (supporting a Nyquist rate equivalent to a 1024x1024 matrix). Whenever we need to scan, we crop the sampling pattern and stretch it according to the desired field of view and resolution. This is implemented on an MRI scanner and has no delay whatsoever.

Originally, I think it took about 20 min to generate that sampling pattern, but we did it only once.
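The crop-and-reuse trick described above takes only a couple of lines. In this sketch the big precomputed pattern is replaced by a random stand-in mask just to keep the snippet self-contained (the real one would be the 1024x1024 Poisson disc pattern generated once offline):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the 1024x1024 Poisson disc mask precomputed once offline
# (True where a k-space sample is acquired).
big = rng.random((1024, 1024)) < 0.25

def crop_center(mask, ny, nx):
    """Cut the central ny x nx block out of the large precomputed pattern,
    so no per-scan generation cost is incurred."""
    y0 = (mask.shape[0] - ny) // 2
    x0 = (mask.shape[1] - nx) // 2
    return mask[y0:y0 + ny, x0:x0 + nx]

mask = crop_center(big, 256, 192)   # matrix size for the desired scan
print(mask.shape)
```

Stretching to the desired field of view and resolution would then just rescale the cropped pattern's coordinates relative to the Nyquist grid.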
