- Andrew: SODA Accepts
- Bob: Comparing Audio Signals in a Sparse Domain; A Successful HAID 2010!
- Gonzalo: Spectral Sensing at CIP 2010
- Terry: A second draft of a non-technical article on universality; 245A, prologue: The problem of measure
- Frank: Earth mover distance
- Sarah: Johnson-Lindenstrauss and RIP
- Mademoiselle CyberGi: Compressed sensing e representação esparsa (Compressed sensing and sparse representation)
Dror Baron, now at NC State, has a new talk on sudocodes, a fast compressed sensing reconstruction approach that uses a hybrid dense/sparse measurement matrix and two corresponding algorithms (joint work with I. Poltorak and Deanna Needell).
Simon Foucart, now at Drexel, released the Hard Thresholding Pursuit (HTP) code that was described back in August in Hard Thresholding Pursuit: an algorithm for Compressive Sensing. The code is here. I have added it to the reconstruction solver section of the Big Picture.
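For the curious, here is a minimal NumPy sketch of the HTP iteration (a toy rendition written for this post, not Simon's released code; the stopping rule and the sanity check at the end are my own choices):

```python
import numpy as np

def htp(A, y, s, max_iter=100):
    """Toy Hard Thresholding Pursuit: recover an s-sparse x from y ~ A x."""
    n = A.shape[1]
    x = np.zeros(n)
    support = np.zeros(0, dtype=int)
    for _ in range(max_iter):
        # Gradient step, then keep the s largest entries (hard thresholding).
        g = x + A.T @ (y - A @ x)
        new_support = np.sort(np.argsort(np.abs(g))[-s:])
        if np.array_equal(new_support, support):
            break  # support has stabilized
        support = new_support
        # Debias: least-squares solution restricted to the selected support.
        x = np.zeros(n)
        x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x

# Quick sanity check with a random Gaussian matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
print(np.linalg.norm(htp(A, A @ x_true, s=10) - x_true))
```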
On the LinkedIn group, Marc asked the following question:
Dear all,
I've started my Master's thesis, which is about CS for electron microscopy data. We have the common "missing cone" problem: the specimen whose 3D model we want to capture casts a shadow as we rotate it. This shadow in the Fourier domain is a cone, and the data inside it is not usable. In the pixel domain we obtain an elongated image, so neither of the two representations is sparse.
So the question is: do you think it is possible to find a basis in which the data is sparse? Or can we modify the data to obtain a sparse signal, apply CS, and then revert the modifications? Or can we force the EM acquisition itself to produce a sparse signal?
Any suggestion will be welcome.
Thank you, everybody.

Now let us see the publications and preprints that just showed up on my radar screen:
Rapid Volumetric OCT Image Acquisition Using Compressive Sampling (abstract is here) by Evgeniy Lebed, Paul J. Mackenzie, Marinko V. Sarunic, and Faisal M. Beg. The abstract reads:
Acquiring three dimensional image volumes with techniques such as Optical Coherence Tomography (OCT) relies on reconstructing the tissue layers based on reflection of light from tissue interfaces. One B-mode scan in an image is acquired by scanning and concatenating several A-mode scans, and several contiguous slices are acquired to assemble a full 3D image volume. In this work, we demonstrate how Compressive Sampling (CS) can be used to accurately reconstruct 3D OCT images with minimal quality degradation from a subset of the original image. The full 3D image is reconstructed from sparsely sampled data by exploiting the sparsity of the image in a carefully chosen transform domain. We use several sub-sampling schemes, recover the full 3D image using CS, and show that there is negligible effect on clinically relevant morphometric measurements of the optic nerve head in the recovered image. The potential outcome of this work is a significant reduction in OCT image acquisition time, with possible extensions to speeding up acquisition in other imaging modalities such as ultrasound and MRI.
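The recovery principle is easy to play with in one dimension. Here is a toy sketch, entirely separate from the authors' pipeline and transform choice: drop a random subset of samples of a signal that is (approximately) sparse under the DCT and fill them back in by iterative soft thresholding of the DCT coefficients.

```python
import numpy as np
from scipy.fft import dct, idct

def inpaint_by_sparsity(y, mask, n_iter=300, lam=0.02):
    """Fill in missing samples of a signal assumed sparse under the DCT.
    y: observed signal (zeros at missing positions); mask: 1 where sampled, 0 where missing."""
    x = y.copy()
    for _ in range(n_iter):
        x = mask * y + (1 - mask) * x                        # keep the measured samples
        c = dct(x, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)    # soft threshold the coefficients
        x = idct(c, norm="ortho")
    return mask * y + (1 - mask) * x

rng = np.random.default_rng(0)
t = np.arange(512)
signal = np.cos(2 * np.pi * 13 * t / 512) + 0.5 * np.cos(2 * np.pi * 41 * t / 512)
mask = (rng.random(512) < 0.4).astype(float)                 # keep only 40% of the samples
recovered = inpaint_by_sparsity(mask * signal, mask)
print(np.linalg.norm(recovered - signal) / np.linalg.norm(signal))
```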
Analyzing Weighted $\ell_1$ Minimization for Sparse Recovery with Nonuniform Sparse Models by M. Amin Khajehnejad, Weiyu Xu, A. Salman Avestimehr, Babak Hassibi. The abstract reads:
In this paper we introduce a nonuniform sparsity model and analyze the performance of an optimized weighted $\ell_1$ minimization over that sparsity model. In particular, we focus on a model where the entries of the unknown vector fall into two sets, with entries of each set having a specific probability of being nonzero. We propose a weighted $\ell_1$ minimization recovery algorithm and analyze its performance using a Grassmann angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the size of the two sets, the probabilities of being nonzero) so that, when i.i.d. random Gaussian measurement matrices are used, the weighted $\ell_1$ minimization recovers a randomly selected signal drawn from the considered sparsity model with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We demonstrate through rigorous analysis and simulations that for the case when the support of the signal can be divided into two different subclasses with unequal sparsity fractions, the optimal weighted $\ell_1$ minimization outperforms the regular $\ell_1$ minimization substantially. We also generalize the results to an arbitrary number of classes.
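To see the idea in code, here is a small cvxpy sketch of weighted $\ell_1$ recovery with two blocks of different sparsity levels; the weights below are hand-picked for illustration and are not the optimized weights the paper derives from its Grassmann angle analysis.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m = 200, 90
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Block 1 (first half) is denser than block 2 (second half).
p = np.r_[np.full(n // 2, 0.20), np.full(n // 2, 0.02)]
x_true = rng.standard_normal(n) * (rng.random(n) < p)
y = A @ x_true

def recover(weights):
    """Solve min ||w .* x||_1 subject to A x = y and return the recovery error."""
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(cp.multiply(weights, x))), [A @ x == y]).solve()
    return np.linalg.norm(x.value - x_true)

print("plain l1 :", recover(np.ones(n)))
# Penalize the sparser block more heavily (illustrative weights).
print("weighted :", recover(np.r_[np.full(n // 2, 1.0), np.full(n // 2, 3.0)]))
```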
A Lower Bound on the Estimator Variance for the Sparse Linear Model by Sebastian Schmutzhard, Alexander Jung, Franz Hlawatsch, Zvika Ben-Haim, Yonina C. Eldar. The abstract reads:
We study the performance of estimators of a sparse nonrandom vector based on an observation which is linearly transformed and corrupted by additive white Gaussian noise. Using the reproducing kernel Hilbert space framework, we derive a new lower bound on the estimator variance for a given differentiable bias function (including the unbiased case) and an almost arbitrary transformation matrix (including the underdetermined case considered in compressed sensing theory). For the special case of a sparse vector corrupted by white Gaussian noise (i.e., without a linear transformation) and unbiased estimation, our lower bound improves on previously proposed bounds.
We propose a shrinkage procedure for simultaneous variable selection and estimation in generalized linear models (GLMs) with an explicit predictive motivation. The procedure estimates the coefficients by minimizing the Kullback-Leibler divergence of a set of predictive distributions to the corresponding predictive distributions for the full model, subject to an $l_1$ constraint on the coefficient vector. This results in selection of a parsimonious model with similar predictive performance to the full model. Thanks to its similar form to the original lasso problem for GLMs, our procedure can benefit from available $l_1$-regularization path algorithms. Simulation studies and real-data examples confirm the efficiency of our method in terms of predictive performance on future observations.
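For readers who want to see the kind of $\ell_1$-regularization path machinery the authors piggyback on, here is a plain $\ell_1$-penalized logistic regression fit with scikit-learn on made-up data; it is not the authors' KL-divergence-based criterion, it only shows the selected support shrinking as the $\ell_1$ budget tightens.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up data: 500 observations, 20 predictors, only 4 of which matter.
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 20))
beta = np.zeros(20)
beta[:4] = [1.5, -1.0, 0.8, -0.6]
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ beta))).astype(int)

# Sweep the regularization strength and watch the selected support shrink.
for C in [1.0, 0.1, 0.05, 0.01]:
    fit = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    print(f"C={C:<5} selected: {np.flatnonzero(fit.coef_.ravel())}")
```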
We propose the Bayesian adaptive Lasso (BaLasso) for variable selection and coefficient estimation in linear regression. The BaLasso is adaptive to the signal level by adopting different shrinkage for different coefficients. Furthermore, we provide a model selection machinery for the BaLasso by assessing the posterior conditional mode estimates, motivated by the hierarchical Bayesian interpretation of the Lasso. Our formulation also permits prediction using a model averaging strategy. We discuss other variants of this new approach and provide a unified framework for variable selection using flexible penalties. Empirical evidence of the attractiveness of the method is demonstrated via extensive simulation studies and data analysis.
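As a point of comparison, here is how the frequentist adaptive lasso (coefficient-specific weights obtained from a pilot fit) can be run with off-the-shelf tools; this is not the authors' Bayesian hierarchy or their posterior-mode machinery, just the non-Bayesian cousin of the same coefficient-specific shrinkage idea.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

# Made-up sparse regression problem.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 50))
beta = np.zeros(50)
beta[:5] = [3.0, -2.0, 1.5, 1.0, -0.5]
y = X @ beta + 0.5 * rng.standard_normal(200)

# Stage 1: a pilot estimate sets per-coefficient weights w_j = 1 / |beta_pilot_j|.
pilot = LinearRegression().fit(X, y).coef_
w = 1.0 / (np.abs(pilot) + 1e-6)

# Stage 2: weighted lasso, solved by rescaling each column by 1/w_j and rescaling back.
fit = Lasso(alpha=0.1).fit(X / w, y)
beta_hat = fit.coef_ / w
print("selected:", np.flatnonzero(np.abs(beta_hat) > 1e-8))
```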
If you think this blog provides a service, please support it by ordering through the Amazon - Nuit Blanche Reference Store