In this article we propose an algorithm, PARNES, for the basis pursuit denoising (BPDN) problem, which approximately finds a minimum one-norm solution to an underdetermined least-squares problem. PARNES (1) combines what we think are the best features of the currently available methods SPGL1 and NESTA, and (2) incorporates a new improvement that exhibits linear convergence under the assumption of the restricted isometry property (RIP). As with SPGL1, our approach 'probes the Pareto frontier' and determines a solution to the BPDN problem by exploiting its relation to the LASSO problem as given by their Pareto curve. As with NESTA, we rely on the accelerated proximal gradient method proposed by Yu. Nesterov, which takes a remarkable O((L/ε)^(1/2)) iterations to come within ε > 0 of the optimal value, where L is the Lipschitz constant of the gradient of the objective function. Furthermore, we introduce an 'outer loop' that regularly updates the prox center; Nesterov's accelerated proximal gradient method then becomes the 'inner loop'. The restricted isometry property, together with the Lipschitz differentiability of our objective function, permits us to derive a condition for switching between the inner and outer loops in a provably optimal manner. A by-product of our approach is a new algorithm for the LASSO problem that also exhibits linear convergence under RIP.
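To give a concrete sense of the kind of inner iteration involved, here is a minimal FISTA-style sketch of an accelerated proximal gradient method applied to the LASSO problem min_x 0.5||Ax - b||^2 + λ||x||_1. This illustrates the general technique only and is not the authors' PARNES code; all function names and parameter values are made up for the example.

```python
import numpy as np

def soft_threshold(x, t):
    """Prox operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def accelerated_prox_gradient(A, b, lam, n_iter=500):
    """FISTA-style accelerated proximal gradient for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    The objective gap decays like O(L/k^2), so O(sqrt(L/eps))
    iterations suffice to come within eps of the optimal value."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    y = x.copy()                    # extrapolated ("momentum") point
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)    # gradient of 0.5*||Ay - b||^2
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Tiny usage example on a random underdetermined system
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200); x_true[:5] = 1.0
x_hat = accelerated_prox_gradient(A, A @ x_true, lam=0.1)
```

PARNES wraps an accelerated iteration of this kind in an outer loop that periodically re-centers the prox term; the sketch above shows only the inner accelerated iteration.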
The paper shows a comparison with some of the fastest codes available (all of them, except for FISTA, can be found in the reconstruction section of the Big Picture in Compressive Sensing). Lek-Heng tells me that an implementation of the code will be released as soon as the code is cleaned up and some improvements are made.
Also found on the internets: New Bounds for Restricted Isometry Constants by T. Tony Cai, Lie Wang, Guangwu Xu. The abstract reads:
In this paper we show that if the restricted isometry constant \delta_k of the compressed sensing matrix satisfies \delta_k < 0.307, then k-sparse signals are guaranteed to be recovered exactly via l1 minimization when no noise is present, and k-sparse signals can be estimated stably in the noisy case. It is also shown that the bound cannot be substantially improved. An explicit example is constructed in which \delta_k = (k-1)/(2k-1) < 0.5, but it is impossible to recover certain k-sparse signals.

The same authors also have Stable Recovery of Sparse Signals and an Oracle Inequality by T. Tony Cai, Lie Wang, Guangwu Xu. The abstract reads:
This article considers sparse signal recovery in the presence of noise. A mutual incoherence condition which was previously used for exact recovery in the noiseless case is shown to be sufficient for stable recovery in the noisy case. Both bounded noise and Gaussian noise settings are considered. Furthermore, the condition is shown to be sharp. In addition, an oracle inequality is given under the mutual incoherence condition.
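For readers who have not seen it, the mutual incoherence of a matrix A with columns a_1, ..., a_n is usually defined as

\[
\mu(A) \;=\; \max_{i \neq j} \frac{\lvert \langle a_i, a_j \rangle \rvert}{\lVert a_i \rVert_2 \, \lVert a_j \rVert_2},
\]

and a mutual incoherence condition is an upper bound on \mu(A). This is the standard textbook definition, not a quotation from the paper.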
And finally, the same authors have found a weaker condition than the traditional RIP in Shifting Inequality and Recovery of Sparse Signals by T. Tony Cai, Lie Wang, Guangwu Xu. The abstract reads:
In this paper we present a concise and coherent analysis of the constrained l1 minimization method for stable recovery of high-dimensional sparse signals in both the noiseless and noisy cases. The analysis is surprisingly simple and elementary, yet leads to strong results. In particular, it is shown that the sparse recovery problem can be solved via l1 minimization under weaker conditions than what is known in the literature. A key technical tool is an elementary inequality, called the Shifting Inequality, which, for a given nonnegative decreasing sequence, bounds the l2 norm of a subsequence in terms of the l1 norm of another subsequence by shifting the elements to the upper end.
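To give a flavor of this kind of block comparison (a simple special case worked out here, not the paper's Shifting Inequality itself): if a_1 ≥ a_2 ≥ ... ≥ 0 is a nonincreasing nonnegative sequence, then every element of the block a_{k+1}, ..., a_{2k} is at most the average of the first block, so

\[
\Bigl( \sum_{i=k+1}^{2k} a_i^2 \Bigr)^{1/2}
\;\le\; \sqrt{k}\, a_{k+1}
\;\le\; \frac{1}{\sqrt{k}} \sum_{i=1}^{k} a_i ,
\]

i.e., the l2 norm of a block is controlled by the l1 norm of the block preceding it. The paper's Shifting Inequality sharpens estimates of this type by choosing the two subsequences more carefully, which is what leads to the weaker recovery conditions.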
In a different direction, we have: Super-Resolution and Reconstruction of Sparse Sub-Wavelength Images by Snir Gazit, Alexander Szameit, Yonina Eldar, Mordechai Segev. The abstract reads:
We use compressed sensing to demonstrate theoretically the reconstruction of sub-wavelength features from the measured far field, and provide an experimental proof of concept. The methods can be applied to non-optical microscopes, provided the information is sparse.
As mentioned earlier, the OPTSPACE code can be found here. The abstract reads:

We consider the problem of reconstructing a low-rank matrix from a small subset of its entries. In this paper, we describe the implementation of an efficient algorithm called OptSpace, based on singular value decomposition followed by local manifold optimization, for solving the low-rank matrix completion problem. It has been shown that if the number of revealed entries is large enough, the output of singular value decomposition gives a good estimate for the original matrix, so that local optimization reconstructs the correct matrix with high probability. We present numerical results which show that this algorithm can reconstruct the low-rank matrix exactly from a very small subset of its entries. We further study the robustness of the algorithm with respect to noise, and its performance on actual collaborative filtering datasets.
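Since the abstract describes a two-stage structure (an SVD-based spectral initialization followed by local optimization), here is a rough sketch of that general idea. It is not the authors' OptSpace implementation (which optimizes over the Grassmann manifold); this simplified version just refines the low-rank factors by plain gradient descent, and all names and constants are made up.

```python
import numpy as np

def matrix_completion_sketch(M_obs, mask, rank, n_iter=200, lr=1e-3):
    """Rough two-stage sketch in the spirit of OptSpace:
    (1) truncated SVD of the zero-filled observed matrix as a spectral
        initialization, (2) local refinement of the low-rank factors by
        gradient descent on the squared error over observed entries.
    Illustrative only, not the authors' algorithm."""
    p = mask.mean()                            # fraction of revealed entries
    # Stage 1: spectral initialization (rescaled to compensate subsampling)
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    X = U[:, :rank] * np.sqrt(s[:rank])        # left factor,  n x rank
    Y = Vt[:rank, :].T * np.sqrt(s[:rank])     # right factor, m x rank
    # Stage 2: local optimization of 0.5*||mask * (X Y^T - M_obs)||_F^2
    for _ in range(n_iter):
        R = mask * (X @ Y.T - M_obs)           # residual on observed entries
        X_grad = R @ Y
        Y_grad = R.T @ X
        X -= lr * X_grad
        Y -= lr * Y_grad
    return X @ Y.T

# Tiny usage example: complete a random rank-2 matrix from 30% of entries
rng = np.random.default_rng(1)
M = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
mask = (rng.random(M.shape) < 0.3).astype(float)
M_hat = matrix_completion_sketch(M * mask, mask, rank=2)
```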