
Monday, July 27, 2009

CS: CS for radio interferometry, Probe Design for Compressive Sensing DNA Microarrays, Search for IMRT inverse plans using CS, Benchmark trial


Yves Wiaux let me know that I had previously mentioned the conference paper, whereas what follows is the full paper submission. Here it is: Compressed sensing for radio interferometry: spread spectrum imaging techniques by Yves Wiaux, Gilles Puy, Yannick Boursier and Pierre Vandergheynst. The abstract reads:

We consider the probe of astrophysical signals through radio interferometers with small field of view and baselines with non-negligible and constant component in the pointing direction. In this context, the visibilities measured essentially identify with a noisy and incomplete Fourier coverage of the product of the planar signals with a linear chirp modulation. In light of the recent theory of compressed sensing and in the perspective of defining the best possible imaging techniques for sparse signals, we analyze the related spread spectrum phenomenon and suggest its universality relative to the sparsity dictionary. Our results rely both on theoretical considerations related to the mutual coherence between the sparsity and sensing dictionaries, as well as on numerical simulations.
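To make the spread spectrum idea concrete, here is a minimal numerical sketch of my own (the problem size and the chirp rate are illustrative choices, not values from the paper): sensing the Fourier coefficients of a chirp-modulated signal lowers the mutual coherence with a Fourier sparsity dictionary, the worst case for plain Fourier sensing.

```python
import numpy as np

n = 64
t = np.arange(n)

# Unitary DFT matrix: the visibilities are essentially noisy, incomplete
# Fourier coefficients of the (modulated) planar signal.
F = np.exp(-2j * np.pi * np.outer(t, t) / n) / np.sqrt(n)

# Linear chirp modulation c(t) = exp(i*pi*w*t^2); the rate w = 1/n is an
# illustrative choice of mine, not a value from the paper.
chirp = np.exp(1j * np.pi * t**2 / n)

Phi_plain = F                       # sensing without modulation
Phi_chirp = F * chirp[None, :]      # sensing of the chirp-modulated signal

# Sparsity dictionary: signals sparse in the Fourier basis, the worst case
# for plain Fourier sensing (sensing and sparsity atoms coincide).
Psi = F.conj().T

def mutual_coherence(Phi, Psi):
    # mu = sqrt(n) * max_{k,j} |<phi_k, psi_j>|, between 1 (incoherent,
    # favorable for CS) and sqrt(n) (maximally coherent, unfavorable).
    return np.sqrt(n) * np.abs(Phi.conj() @ Psi).max()

mu_plain = mutual_coherence(Phi_plain, Psi)   # = sqrt(n): maximally coherent
mu_chirp = mutual_coherence(Phi_chirp, Psi)   # smaller: the chirp spreads each atom
print(mu_plain, mu_chirp)
```

The chirp spreads each Fourier atom across the spectrum, which is why the coherence drops; the paper's universality claim is that this happens regardless of the sparsity dictionary.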


For some reason I never mentioned that paper before, but here it is: Probe Design for Compressive Sensing DNA Microarrays by Wei Dai, Mona A. Sheikh, Olgica Milenkovic, and Richard G. Baraniuk. The abstract reads:
Compressive Sensing Microarrays (CSM) are DNA based sensors that operate using group testing and compressive sensing (CS) principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM each sensor responds to a group of targets. We study the problem of designing CS probes that simultaneously account for both the constraints from group testing theory and the biochemistry of probe-target DNA hybridization. Our results show that, in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, experiments show that out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications for which only short hybridization times are allowed.
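As a rough illustration of the group-testing flavor of a CSM (the binary sensing matrix, the sparsity level, and the OMP recovery below are toy assumptions of mine, not the authors' probe design procedure): each spot responds to a random group of targets, and the sparse vector of target concentrations is recovered from far fewer spots than targets.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 12, 30, 2           # 12 composite probes, 30 candidate targets, 2 present

# Each row models one CS microarray spot: a probe that hybridizes with a
# random *group* of targets instead of a single one (binary sensing matrix).
A = (rng.random((m, n)) < 0.3).astype(float)

# Ground-truth target concentrations: only k targets present in the sample.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

y = A @ x                     # measured hybridization intensities

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick the probe-group column most
    # correlated with the residual, then re-fit on the selected support.
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
```

The paper's actual contribution is choosing the probe sequences so that this kind of sensing matrix is realizable biochemically, which the random 0/1 matrix above ignores entirely.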


I could not find the original paper, but it showed up on my radar screen:

Search for IMRT inverse plans with piecewise constant fluence maps using compressed sensing techniques by Lei Zhu and Lei Xing. The abstract reads:
An intensity-modulated radiation therapy (IMRT) field is composed of a series of segmented beams. It is practically important to reduce the number of segments while maintaining the conformality of the final dose distribution. In this article, the authors quantify the complexity of an IMRT fluence map by introducing the concept of sparsity of fluence maps and formulate the inverse planning problem into a framework of compressed sensing. In this approach, the treatment planning is modeled as a multiobjective optimization problem, with one objective on the dose performance and the other on the sparsity of the resultant fluence maps. A Pareto frontier is calculated, and the achieved dose distributions associated with the Pareto efficient points are evaluated using clinical acceptance criteria. The clinically acceptable dose distribution with the smallest number of segments is chosen as the final solution. The method is demonstrated in the application of fixed-gantry IMRT on a prostate patient. The result shows that the total number of segments is greatly reduced while a satisfactory dose distribution is still achieved. With the focus on the sparsity of the optimal solution, the proposed method is distinct from the existing beamlet- or segment-based optimization algorithms.
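A minimal sketch of the multiobjective idea, under toy assumptions of mine (a random dose-deposition matrix and an l1 penalty standing in for the paper's piecewise-constant fluence-map objective): sweep the trade-off weight and record (dose error, number of active beamlets) to trace an approximate Pareto frontier.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 40, 25                  # 40 dose calculation points, 25 beamlets (toy sizes)

# Toy dose-deposition matrix: delivered dose = A @ fluence.  Both A and the
# prescription below are synthetic stand-ins, not clinical data.
A = rng.random((m, n))
true_fluence = rng.random(n) * (rng.random(n) < 0.3)
d = A @ true_fluence           # prescribed dose, achievable by a sparse map

def ista(A, d, lam, iters=1000):
    # Iterative soft thresholding for min_x 0.5*||Ax - d||^2 + lam*||x||_1
    # with x >= 0 (fluences are nonnegative).
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - A.T @ (A @ x - d) / L    # gradient step on the dose objective
        x = np.maximum(x - lam / L, 0.0) # nonnegative soft threshold
    return x

# Sweep the trade-off weight: each point is one candidate plan on an
# approximate Pareto frontier of (dose error, fluence-map sparsity).
frontier = []
for lam in (0.01, 0.1, 1.0, 10.0):
    x = ista(A, d, lam)
    frontier.append((lam, np.linalg.norm(A @ x - d), int(np.count_nonzero(x))))
```

The paper then screens such Pareto points against clinical acceptance criteria and keeps the acceptable plan with the fewest segments; the sweep above only produces the candidates.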

This is outside of CS, but it is nonetheless an interesting application of benchmarks. One could also argue that, since CS is a dimensionality reduction technique, it is a good fit:

Does Dimensionality Reduction Improve the Quality of Motion Interpolation? by Sebastian Bitzer, Stefan Klanke and Sethu Vijayakumar. The abstract reads:

In recent years nonlinear dimensionality reduction has frequently been suggested for the modelling of high-dimensional motion data. While it is intuitively plausible to use dimensionality reduction to recover low dimensional manifolds which compactly represent a given set of movements, there is a lack of critical investigation into the quality of resulting representations, in particular with respect to generalisability. Furthermore it is unclear how consistently particular methods can achieve good results. Here we use a set of robotic motion data for which we know the ground truth to evaluate a range of nonlinear dimensionality reduction methods with respect to the quality of motion interpolation. We show that results are extremely sensitive to parameter settings and data set used, but that dimensionality reduction can potentially improve the quality of linear motion interpolation, in particular in the presence of noise.
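The evaluation protocol can be sketched as follows, with PCA as the simplest (linear) stand-in for the nonlinear methods the authors compare, and synthetic data of my own making in place of the robotic motions: interpolate a held-out movement both directly in joint space and in the latent space, then score both against the known ground truth.

```python
import numpy as np

def motion(s, d=20):
    # Synthetic ground-truth generative model: each of d "joints" is a smooth
    # function of a 1-D latent parameter s (a stand-in for the robot data).
    phases = np.linspace(0.0, np.pi, d)
    return np.sin(2.0 * np.pi * s + phases) * (1.0 + s)

# Training movements sampled along the latent parameter.
S_train = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
X = np.stack([motion(s) for s in S_train])            # 6 x 20 data matrix

# Linear dimensionality reduction (PCA) to a 2-D latent space.
mean = X.mean(axis=0)
U, sv, Vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ Vt[:2].T                             # latent coordinates

# Interpolate the held-out motion s = 0.5 two ways and score against truth:
x_joint = 0.5 * (motion(0.4) + motion(0.6))           # directly in joint space
z_mid = 0.5 * (Z[2] + Z[3])                           # midpoint in latent space
x_latent = mean + z_mid @ Vt[:2]                      # decode back to joints

truth = motion(0.5)
err_joint = float(np.linalg.norm(x_joint - truth))
err_latent = float(np.linalg.norm(x_latent - truth))
```

The paper's point is precisely that which of the two errors comes out smaller depends heavily on the method, its parameters, and the noise level, so no expected ordering is asserted here.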

Credit: NASA, Lunar Eclipse of July 22 over China.
