Another way to look at structured sparsity is to evaluate reconstruction through several bases. This is what Yves Wiaux et al. do in the first paper mentioned today, with an analysis slant. Yves sent me the following:
Hi Igor,

You've certainly seen my two tweets:

New article "Sparsity Averaging for Compressive Imaging" accessible at http://infoscience.epfl.ch/record/180382?ln=en ... and tomorrow on ArXiv.

New article submitted to Neuroimage on "Reweighted sparse deconvolution for high angular resolution diffusion MRI" available at: http://arxiv.org/abs/1208.2247

May I ask you to list those papers and abstract on nuit blanche?

thanks
cheers
Yves
Thanks Yves. The abstract of "Sparsity Averaging for Compressive Imaging" reads:
We propose a novel regularization method for sparse image reconstruction from compressive measurements. The approach relies on the conjecture that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted ℓ1 scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We test our prior and the associated algorithm through extensive numerical simulations for spread spectrum and Gaussian acquisition schemes suggested by the recent theory of compressed sensing with coherent and redundant dictionaries. Our results show that average sparsity outperforms state-of-the-art priors that promote sparsity in a single orthonormal basis or redundant frame, or that promote gradient sparsity. We also illustrate the performance of SARA in the context of Fourier imaging, for particular applications in astronomy and medicine.
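For readers who want a feel for how an analysis prior with reweighting plays out in practice, here is a minimal sketch in the spirit of SARA, not the authors' implementation: two orthonormal bases (Dirac and DCT) stand in for the concatenation of coherent frames, a plain Chambolle-Pock solver handles the inner weighted analysis-ℓ1 problem, and all sizes and parameter values are illustrative assumptions.

```python
# Minimal sketch of reweighted analysis-l1 reconstruction (SARA-like), not the authors' code.
# Assumptions: Dirac + DCT bases stand in for the concatenation of coherent frames,
# Gaussian sensing, and illustrative parameter values throughout.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
N, M = 128, 64                                    # signal length, number of measurements

# "Average-sparse" test signal: a few spikes plus a few DCT atoms
x_true = np.zeros(N)
x_true[rng.choice(N, 4, replace=False)] = rng.standard_normal(4)
c = np.zeros(N); c[rng.choice(N, 4, replace=False)] = rng.standard_normal(4)
x_true += idct(c, norm='ortho')

Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # Gaussian sensing matrix
y = Phi @ x_true + 0.01 * rng.standard_normal(M)

# Analysis operator D = Psi^T for Psi = [I, DCT]/sqrt(2); then D^T D = I and ||D|| = 1
D  = lambda x: np.concatenate([x, dct(x, norm='ortho')]) / np.sqrt(2)
Dt = lambda u: (u[:N] + idct(u[N:], norm='ortho')) / np.sqrt(2)

def analysis_l1(w, lam=0.01, iters=300):
    """Solve min_x 0.5||Phi x - y||^2 + lam ||w * D x||_1 with Chambolle-Pock."""
    tau = sigma = 0.99                            # valid step sizes since ||D|| = 1
    A = tau * Phi.T @ Phi + np.eye(N)             # system matrix for the data-term prox
    x = np.zeros(N); x_bar = x.copy(); u = np.zeros(2 * N)
    for _ in range(iters):
        # dual step: projection onto the box |u_i| <= lam * w_i (prox of the conjugate)
        u = np.clip(u + sigma * D(x_bar), -lam * w, lam * w)
        # primal step: prox of the quadratic data term
        x_new = np.linalg.solve(A, tau * Phi.T @ y + x - tau * Dt(u))
        x_bar = 2 * x_new - x
        x = x_new
    return x

# Outer reweighting loop: small analysis coefficients get large weights, as in reweighted l1
w = np.ones(2 * N)
for _ in range(5):
    x_hat = analysis_l1(w)
    w = 1.0 / (np.abs(D(x_hat)) + 1e-2)

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The concatenation of the two orthonormal bases is a tight frame, which is what keeps the step-size choice in the inner solver simple; the paper's setting with several coherent frames and constrained formulations is richer than this toy.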
Yves tells me that SARA should be available soon. Stay tuned.
Reweighted sparse deconvolution for high angular resolution diffusion MRI by Alessandro Daducci, Dimitri Van De Ville, Jean-Philippe Thiran, Yves Wiaux. The abstract reads:
Diffusion MRI is a well established imaging modality providing a powerful and innovative way to non-invasively probe the structure of the white matter. Despite the potential of the technique, the intrinsic long scan times of these sequences have hampered their use in clinical practice. For this reason, a wide variety of methods have been proposed recently with the aim to shorten acquisition times. Among them, spherical deconvolution approaches have gained a lot of interest for their ability to reliably recover the intra-voxel fiber configuration with a relatively small number of data samples. To overcome the intrinsic instabilities in solving the deconvolution problem, these methods make use of regularization schemes generally based on the assumption that the fiber orientation distribution (FOD) to be recovered is sparse, either explicitly or implicitly. In particular, convex optimization methods have recently been advocated in a compressed sensing perspective for FOD reconstruction from accelerated acquisitions. In this paper, we propose to exploit further the versatility of this powerful framework with the aim to exploit sparsity more optimally. We define a new convex minimization problem for FOD reconstruction through a constrained formulation between sparsity prior and data, also making use of a reweighting scheme. The method has been tested on both synthetic and real data. Experimental results indicate that this approach provides more robust and accurate estimates than the state of the art in terms of both the number and orientation of fiber compartments in each voxel.
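For the curious, here is a minimal sketch of the reweighted non-negative sparse recovery idea behind this kind of FOD reconstruction, not the authors' method: a 1-D Gaussian-blur deconvolution stands in for the spherical deconvolution with the fiber response function, and the constrained formulation is replaced by a simpler penalized one solved with projected ISTA; all parameter values are illustrative assumptions.

```python
# Minimal sketch of reweighted non-negative sparse deconvolution, not the authors' method.
# Assumptions: a 1-D Gaussian blur replaces the spherical convolution with the fiber
# response, and a penalized problem replaces the constrained one; illustrative parameters.
import numpy as np

rng = np.random.default_rng(1)
N = 200
t = np.arange(N)

# Toy "response function": each column of A is a shifted Gaussian bump
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 4.0) ** 2)
A /= np.linalg.norm(A, axis=0)

x_true = np.zeros(N)                              # sparse, non-negative stand-in for the FOD
x_true[rng.choice(N, 5, replace=False)] = rng.uniform(0.5, 1.5, 5)
y = A @ x_true + 0.01 * rng.standard_normal(N)

L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the data-term gradient

def nn_weighted_l1(w, lam=0.02, iters=500):
    """min_{x >= 0} 0.5||Ax - y||^2 + lam ||w * x||_1 via projected ISTA."""
    x = np.zeros(N)
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        # non-negative soft threshold: prox of the weighted l1 term plus positivity
        x = np.maximum(x - grad / L - lam * w / L, 0.0)
    return x

# Outer reweighting loop: coefficients that stay small get large weights and are pushed to zero
w = np.ones(N)
for _ in range(4):
    x_hat = nn_weighted_l1(w)
    w = 1.0 / (np.abs(x_hat) + 1e-2)

print("support found:", np.nonzero(x_hat > 0.05)[0])
print("true support: ", np.sort(np.nonzero(x_true)[0]))
```

The reweighting is what sharpens the estimate of the number and location of the peaks; in the paper this plays out on the sphere with a constrained fidelity term rather than this penalized 1-D toy.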
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.