
Thursday, March 17, 2011

CS: A note on IHT experiments, Compressive Sensing in Bats, Constructing test instances for Basis Pursuit Denoising, Xampling: Compressed Sensing of Analog Signals, Universal low-rank matrix recovery from Pauli measurements

Kun Qiu sent me and Bob the following:

Hi, Bob and Igor,
I’ve recently read Bob’s interesting experiments with IHT methods on Nuit Blanche. Since I also have quite a lot of empirical experience with IHT-type methods, I would like to share my take on IHT and discuss the efforts we have made to improve hard thresholding methods.

(1) In paper [1], we conducted experiments similar to Bob’s. We also found that in “small scale” toy examples (Gaussian random sensing matrices, with problem dimensions in the hundreds), IHT methods (actually the more stable variant, NIHT) perform poorly compared to other existing methods (such as CoSaMP, l1 convex relaxation, and also our previously developed ExCoV method); see Fig. 2 of [1].

But the story is very different for large-scale examples (such as image reconstruction using partial Fourier ensembles, with signal dimensions exceeding 10^4); see Fig. 4 of [1]. In these large-scale examples, IHT is very competitive. We have posted the code for these experiments at http://home.eng.iastate.edu/~ald/ExCoV.htm

(2) Tuning IHT is an important issue, particularly the sparsity tuning parameter. In our experience, for IHT it is easier to directly tune the sparsity level (i.e., the number of non-zeros allowed in the solution) than the Lagrange-multiplier-type tuning parameter in the optimization problem || y – H s ||_2^2 + \lambda || s ||_0. In other words, it is simpler to use the following form of IHT:
                               s(n+1) = T_r [ s(n) + \mu H^T ( y - H s(n) ) ]
where the hard thresholding operator T_r keeps the r largest-magnitude elements of the operand vector intact and sets the rest to zero. We are currently developing an automatic version of the hard thresholding method that estimates the sparsity level r directly from the measurements. Preliminary work was published in conference paper [2]; see also the arXiv report [3] for a full discussion.
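For readers who want to try this, here is a minimal NumPy sketch of the sparsity-tuned IHT iteration above. The function names, the conservative step-size choice, and the toy problem are illustrative only; this is not the code posted at the ExCoV/DORE links.

```python
# Sketch of the sparsity-tuned IHT iteration:
#   s(n+1) = T_r[ s(n) + mu * H^T (y - H s(n)) ]
# Illustrative implementation, not the authors' released code.
import numpy as np

def hard_threshold(v, r):
    """T_r: keep the r largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-r:]  # indices of the r largest magnitudes
    out[keep] = v[keep]
    return out

def iht(H, y, r, n_iter=300):
    """IHT tuned by the sparsity level r rather than a Lagrange multiplier."""
    mu = 1.0 / np.linalg.norm(H, 2) ** 2  # conservative step size
    s = np.zeros(H.shape[1])
    for _ in range(n_iter):
        s = hard_threshold(s + mu * H.T @ (y - H @ s), r)
    return s

# Toy example: a 3-sparse signal, 50 Gaussian measurements, dimension 100.
rng = np.random.default_rng(0)
H = rng.standard_normal((50, 100)) / np.sqrt(50)
s_true = np.zeros(100)
s_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
y = H @ s_true
s_hat = iht(H, y, r=3)
```

With enough measurements relative to the sparsity level, s_hat typically lands on the correct support; the small step size keeps the residual from blowing up at the cost of the slow (linear) convergence Kun mentions in point (3).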

(3) The convergence of the IHT method is typically linear, and the choice of step size \mu influences the convergence speed. In [2], we proposed using the following iteration
                            s(n+1) = T_r [ s(n) + H^T (H H^T)^{-1} ( y - H s(n) ) ]
instead of the original IHT iteration. The additional (H H^T)^{-1} term stabilizes the original IHT iteration and is tuning-free (we provide a convergence analysis of this new iteration in [3]). Of course, introducing the (H H^T)^{-1} term adds computational burden, and it should be approximated somehow for large-scale problems. Fortunately, for many important large-scale CS problems (e.g., CT/MRI reconstruction, where the sampling ensemble is a partial Fourier matrix), (H H^T)^{-1} is the identity. We also provide an acceleration of the modified IHT iteration in [2] and [3] using double over-relaxation (DORE). DORE significantly speeds up convergence, as reported in [2] and [3].
The code for the experiments of [2] is posted at http://home.eng.iastate.edu/~ald/DORE.htm
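The modified iteration in point (3) can be sketched the same way; again, the names are illustrative and this is not the authors' released code. Note that before thresholding, the update lands exactly on the set { s : H s = y }, and that when H has orthonormal rows (as with partial Fourier sampling) H H^T = I, so the correction term costs nothing extra.

```python
# Sketch of the modified, tuning-free iteration:
#   s(n+1) = T_r[ s(n) + H^T (H H^T)^{-1} (y - H s(n)) ]
# Illustrative implementation, not the authors' released code.
import numpy as np

def hard_threshold(v, r):
    """T_r: keep the r largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-r:]
    out[keep] = v[keep]
    return out

def modified_iht(H, y, r, n_iter=100):
    """The (H H^T)^{-1} correction replaces the step-size parameter mu."""
    HHt = H @ H.T  # m x m Gram matrix, formed once
    s = np.zeros(H.shape[1])
    for _ in range(n_iter):
        # Pre-threshold, s + H^T (H H^T)^{-1} (y - H s) satisfies H s = y.
        s = hard_threshold(s + H.T @ np.linalg.solve(HHt, y - H @ s), r)
    return s

# Same style of toy problem as before.
rng = np.random.default_rng(1)
H = rng.standard_normal((50, 100)) / np.sqrt(50)
s_true = np.zeros(100)
s_true[[3, 30, 60]] = [1.5, -2.0, 1.0]
y = H @ s_true
s_hat = modified_iht(H, y, r=3)
```

For large problems one would factor H H^T once (e.g., a Cholesky decomposition) instead of calling `solve` in the loop, or exploit H H^T = I when the rows are orthonormal.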

Thanks,

Kun

References

[1] K. Qiu and A. Dogandžić, "Variance-component based sparse signal reconstruction and model selection," IEEE Trans. Signal Processing, vol. 58, pp. 2935-2952, Jun. 2010.
[2] K. Qiu and A. Dogandžić, "Double overrelaxation thresholding methods for sparse signal reconstruction," Proc. 44th Annu. Conf. Inform. Sci. Syst., Princeton, NJ, Mar. 2010.
[3] K. Qiu and A. Dogandžić, "ECME thresholding methods for sparse signal reconstruction," Apr. 2010. [Online]. Available: arXiv.

Thanks Kun !


Vicente Malave sent me the following:
Hello Igor, I really enjoy your blog. I thought you might find this article interesting, showing that compressive sampling is employed in nature.

Thanks Vicente. I look forward to the paper becoming available. In the meantime, here is the paywalled version: Compressive sensing: A strategy for fluttering target discrimination employed by bats emitting broadband calls by Bertrand Fontaine, Herbert Peremans. The abstract reads:
When foraging, so-called FM-bats emit sequences of frequency modulated (FM) calls in order to detect, identify, and localize edible prey. Once a potential target has been detected, various call and call sequence parameters, such as frequency sweep, pulse duration, and inter pulse interval (IPI) vary. In this paper, the possible functions of the variation of the IPI are studied. In particular, it is conjectured that the IPI patterns are an adaptive behavior that optimizes the signal design parameters in order to improve information retrieval. Such an irregular sampling strategy would be useful whenever bats need to characterize signal modulation (e.g., the wing beat of an insect) using a call emission rate lower than the signal modulation of interest. This problem can be recast as extracting features, in this case the joint acoustic and modulation frequency representation, from signals sampled at frequencies well below the Nyquist cut-off frequency. To study the possibility of such target classification using a sub-Nyquist sampling scheme, results derived in the context of compressive sensing are used. Processing echoes collected from both rotating computer fans and fluttering locusts, it is shown that such a strategy would allow FM-bats to discriminate between targets based on their different fluttering rates.

If you recall, Dirk Lorenz had a presentation at Dagstuhl entitled Basis pursuit denoising: Exact test instances and exact recovery for ill-posed problems. It looks like we now have the attendant paper: Constructing test instances for Basis Pursuit Denoising by Dirk Lorenz. The abstract reads:
The number of available algorithms for the so-called Basis Pursuit Denoising problem (or the related LASSO-problem) is large and keeps growing. Similarly, the number of experiments to evaluate and compare these algorithms on different instances is growing.
In this note, we present a method to produce instances with exact solutions which is based on a simple observation related to the so-called source condition from sparse regularization.
Xampling: Compressed Sensing of Analog Signals by Moshe Mishali, Yonina Eldar. The abstract reads:
Xampling generalizes compressed sensing (CS) to reduced-rate sampling of analog signals. A unified framework is introduced for low rate sampling and processing of signals lying in a union of subspaces. Xampling consists of two main blocks: Analog compression that narrows down the input bandwidth prior to sampling with commercial devices followed by a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A variety of analog CS applications are reviewed within the unified Xampling framework including a general filter-bank scheme for sparse shift-invariant spaces, periodic nonuniform sampling and modulated wideband conversion for multiband communications with unknown carrier frequencies, acquisition techniques for finite rate of innovation signals with applications to medical and radar imaging, and random demodulation of sparse harmonic tones. A hardware-oriented viewpoint is advocated throughout, addressing practical constraints and exemplifying hardware realizations where relevant. It will appear as a chapter in a book on "Compressed Sensing: Theory and Applications" edited by Yonina Eldar and Gitta Kutyniok.
Finally, a new quantum mechanics paper related to compressive sensing/matrix completion issues: Universal low-rank matrix recovery from Pauli measurements by Yi-Kai Liu. The abstract reads:
We show that there exists a universal set of O(rd log^6 d) Pauli measurements, from which one can reconstruct any unknown matrix M of rank r and dimension d. In particular, M can be recovered by solving a convex program (minimizing the trace or nuclear norm). This has applications to compressed sensing methods for quantum state tomography. It improves on previous results in two respects: it uses a fixed set of Pauli measurements, independent of the unknown quantum state; and it implies nearly-optimal bounds on the error of the reconstructed density matrix. In addition, this method can be used to learn arbitrary (not necessarily low-rank) states, up to an error of O(1/r) in 2-norm. Similar results hold for measurements in any incoherent operator basis. Our main technical contribution is to show that the random Pauli sampling operator obeys the "restricted isometry property" (RIP) over the set of all low-rank matrices. Our proof uses Dudley's entropy bound for Gaussian processes, together with known bounds on covering numbers of the trace-norm ball.



Image Credit: NASA/JPL/Space Science Institute. W00066897.jpg was taken on March 14, 2011 and received on Earth March 15, 2011. The camera was pointing toward SATURN at approximately 2,434,617 kilometers away, and the image was taken using the CB2 and CL2 filters.
