You may recall my repeated insistence on the importance of the tools developed to take a stab at the null space property:
- On verifiable sufficient conditions for sparse signal recovery via L1 minimization by Anatoli Juditsky and Arkadi Nemirovski. The code implementing that solution is available at: http://sites.google.com/site/testsensingmatrices/
- Testing the Nullspace Property using Semidefinite Programming by Alexandre d'Aspremont and Laurent El Ghaoui, where the NSProp implementation is now available at: http://www.princeton.edu/~aspremon/NSPcode.htm
Both programs are prominently listed in the measurement matrix section of the Big Picture. Well, they did it again, this time in Verifiable conditions of $\ell_1$-recovery of sparse signals with sign restrictions by Anatoli Juditsky, Fatma Kilinc Karzan and Arkadi Nemirovski. The abstract reads:
We propose necessary and sufficient conditions for a sensing matrix to be "$s$-semigood" -- to allow for exact $\ell_1$-recovery of sparse signals with at most $s$ nonzero entries under sign restrictions on part of the entries. We express the error bounds for imperfect $\ell_1$-recovery in terms of the characteristics underlying these conditions. Furthermore, we demonstrate that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-semigood. We concentrate on the properties of proposed verifiable sufficient conditions of $s$-semigoodness and describe their limits of performance.
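To make the setting concrete, here is a minimal sketch (mine, not the authors' code) of what sign-restricted $\ell_1$-recovery looks like as a convex program, using cvxpy; the problem sizes and the choice of restricting the first half of the entries to be nonnegative are illustrative assumptions:

```python
import numpy as np
import cvxpy as cp

# Sketch of sign-restricted l1-recovery (illustrative, not the authors' code):
# recover x from y = A x, where the first half of the entries are known
# a priori to be nonnegative.
rng = np.random.default_rng(0)
m, n, s = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = np.zeros(n)
support = rng.choice(n // 2, s, replace=False)    # support inside the restricted block
x_true[support] = np.abs(rng.standard_normal(s))  # consistent with the sign restriction
y = A @ x_true

x = cp.Variable(n)
constraints = [A @ x == y, x[: n // 2] >= 0]      # sign restriction on part of the entries
cp.Problem(cp.Minimize(cp.norm1(x)), constraints).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```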
It should be mentioned that the characterizations (4), (5) and geometric characterizations of s-(semi)goodness of A from [6, 7] share an important drawback: they seemingly cannot be verified in a computationally efficient way. To the best of our knowledge, the only computationally tractable techniques available in the "classical" theory of Compressive Sensing which allow one to certify s-(semi)goodness of a given sensing matrix are those based on the mutual incoherence... Clearly, the mutual incoherence can be easily computed even for large matrices. Unfortunately, it turns out that the estimates of the "level of (semi)goodness" of a sensing matrix based on mutual incoherence usually are too conservative; in particular, they are provably dominated by the verifiable Linear Programming-based sufficient conditions for s-goodness proposed in the companion paper [10] and based on Zhang's characterization of s-goodness (5). Another verifiable sufficient condition for s-goodness, which uses the Semidefinite Relaxation, has been recently proposed in [4].
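Mutual incoherence is indeed cheap to compute, even for large matrices. Here is a quick sketch of the standard computation (my own, not from the paper), together with the classical sparsity level it certifies, s < (1 + 1/mu)/2:

```python
import numpy as np

def mutual_incoherence(A):
    """mu(A): largest absolute correlation between distinct columns of A."""
    cols = A / np.linalg.norm(A, axis=0)  # normalize each column
    G = np.abs(cols.T @ cols)             # Gram matrix of correlations
    np.fill_diagonal(G, 0.0)              # ignore self-correlations
    return G.max()

# Classical sufficient condition: l1-recovery succeeds for every s-sparse x
# whenever s < (1 + 1/mu(A)) / 2 -- easy to check, but often conservative.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
mu = mutual_incoherence(A)
print(f"mu(A) = {mu:.3f}, certified sparsity s < {(1 + 1 / mu) / 2:.2f}")
```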
I think this is new:
The Matching Pursuit algorithm for signal recovery has been first introduced in [11] and is motivated by the desire to provide a reduced complexity alternative to the $\ell_1$-recovery problem. Several implementations of Matching Pursuit have been proposed in the Compressive Sensing literature (see, e.g., the review [1]). All of them are based on successive Euclidean projections of the signal and the corresponding performance results rely upon the bounds on the mutual incoherence $\mu(A)$ of the sensing matrix. We are about to show that the verifiable sufficient conditions from the previous section can be used to construct a specific version of the Matching Pursuit algorithm which we refer to as the Non-Euclidean Matching Pursuit (NEMP) algorithm.
The paper also pointed to a set of technical reports by Yin Zhang I did not know about (a bare-bones matching pursuit iteration is sketched right after this list):
- A Simple Proof for Recoverability of L1-Minimization (II): the Nonnegativity Case by Yin Zhang
- A Simple Proof for Recoverability of L1-Minimization: Go Over or Under? by Yin Zhang
- Solution-recovery in L1-norm for non-square linear systems: deterministic conditions and open questions by Yin Zhang
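For context on the quoted passage, here is what a plain (Euclidean) matching pursuit iteration looks like; this is my own sketch of the standard greedy scheme, not the NEMP variant the paper constructs:

```python
import numpy as np

def matching_pursuit(A, y, n_iter=50):
    """Plain (Euclidean) matching pursuit: greedily peel off the column of A
    most correlated with the residual. Not the paper's NEMP variant."""
    x = np.zeros(A.shape[1])
    r = y.copy()
    col_norms_sq = (A ** 2).sum(axis=0)
    for _ in range(n_iter):
        c = A.T @ r                    # correlations with the residual
        j = np.argmax(np.abs(c))       # best-matching column
        step = c[j] / col_norms_sq[j]  # Euclidean projection coefficient
        x[j] += step
        r -= step * A[:, j]            # update the residual
    return x
```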
I also found the following two papers on the Rice repository page and elsewhere:
Average Case Analysis of Multichannel Sparse Recovery Using Convex Relaxation by Yonina Eldar and Holger Rauhut. The abstract reads:
In this paper, we consider recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relaxation based on a mixed matrix norm. Typically, worst-case analysis is carried out in order to analyze conditions under which the algorithms are able to recover any jointly sparse set of vectors. However, such an approach is not able to provide insights into why joint sparse recovery is superior to applying standard sparse reconstruction methods to each channel individually. Previous work considered an average case analysis of thresholding and SOMP by imposing a probability model on the measured signals. In this paper, our main focus is on analysis of convex relaxation techniques. In particular, we focus on the mixed $\ell_2/\ell_1$ approach to multichannel recovery. We show that under a very mild condition on the sparsity and on the dictionary characteristics, measured for example by the coherence, the probability of recovery failure decays exponentially in the number of channels. This demonstrates that most of the time, multichannel sparse recovery is indeed superior to single channel methods. Our probability bounds are valid and meaningful even for a small number of signals. Using the tools we develop to analyze the convex relaxation method, we also tighten the previous bounds for thresholding and SOMP.
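As a concrete illustration of the mixed-norm program analyzed in the paper, here is a small cvxpy sketch (mine; dimensions and data are made up) of $\ell_2/\ell_1$ multichannel recovery, where the channels share a common sparse support:

```python
import numpy as np
import cvxpy as cp

# Sketch of mixed-norm multichannel recovery (illustrative, not the authors' code):
# X has one column per channel; its rows share a common sparse support.
rng = np.random.default_rng(1)
m, n, L, s = 32, 128, 8, 4  # measurements, atoms, channels, joint sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
X_true = np.zeros((n, L))
rows = rng.choice(n, s, replace=False)
X_true[rows] = rng.standard_normal((s, L))
Y = A @ X_true

X = cp.Variable((n, L))
# l2/l1 mixed norm: sum over rows of the l2 norm taken across channels.
objective = cp.Minimize(cp.sum(cp.norm(X, 2, axis=1)))
cp.Problem(objective, [A @ X == Y]).solve()
print("relative error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```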
An Accelerated Proximal Gradient Algorithm for Nuclear Norm Regularized Least Squares Problems by Kim-Chuan Toh and Sangwoon Yun. The abstract reads:
The affine rank minimization problem, which consists of finding a matrix of minimum rank subject to linear equality constraints, has been proposed in many areas of engineering and science. A specific rank minimization problem is the matrix completion problem, in which we wish to recover a (low-rank) data matrix from incomplete samples of its entries. A recent convex relaxation of the rank minimization problem minimizes the nuclear norm instead of the rank of the matrix. Another possible model for the rank minimization problem is the nuclear norm regularized linear least squares problem. This regularized problem is a special case of an unconstrained nonsmooth convex optimization problem, in which the objective function is the sum of a convex smooth function with Lipschitz continuous gradient and a convex function on a set of matrices. In this paper, we propose an accelerated proximal gradient algorithm, which terminates in $O(1/\sqrt{\epsilon})$ iterations with an $\epsilon$-optimal solution, to solve this unconstrained nonsmooth convex optimization problem, and in particular, the nuclear norm regularized linear least squares problem. We report numerical results for solving large-scale randomly generated matrix completion problems. The numerical results suggest that our algorithm is efficient and robust in solving large-scale random matrix completion problems. In particular, we are able to solve random matrix completion problems with matrix dimensions up to $10^5$ each in less than 3.5 hours on a modest PC.
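For matrix completion, the accelerated proximal gradient step boils down to a singular value thresholding followed by a Nesterov extrapolation. Here is a compact sketch (my own reading of the general FISTA-type scheme, not the paper's code; the regularization weight is chosen ad hoc):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def apg_matrix_completion(Y, mask, mu, n_iter=300):
    """Accelerated proximal gradient sketch for
    min_X 0.5 * ||P_mask(X) - Y||_F^2 + mu * ||X||_*  (matrix completion)."""
    X = np.zeros_like(Y)
    Z, t = X.copy(), 1.0
    for _ in range(n_iter):
        grad = mask * (Z - Y)       # gradient of the smooth term (Lipschitz constant 1)
        X_next = svt(Z - grad, mu)  # proximal step on the nuclear norm
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Z = X_next + ((t - 1) / t_next) * (X_next - X)  # Nesterov extrapolation
        X, t = X_next, t_next
    return X

# Example: approximately recover a rank-3 matrix from 40% of its entries.
rng = np.random.default_rng(2)
M = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 100))
mask = rng.random(M.shape) < 0.4
X_hat = apg_matrix_completion(mask * M, mask, mu=0.1)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```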
Credit: NASA/JPL/University of Arizona, Volcanic Layers Exposed in Pit (ESP_012310_1715)