
Monday, October 19, 2009

CS: KSVDS-Box and OMPS-Box, Simultaneous Sparse Approximation, Inferring Ranking using Constrained Sensing, Justice Pursuit

Another blogger besides David Brady who went to the OSA meeting blogged about the meeting and his encounter with compressive sensing and ghost imaging :-). In addition to the previous answers, Angshul Majumdar seems to think that a "very neat answer to his question" is the Sparse Signal Restoration course by Ivan Selesnick at: http://cnx.org/content/m32168/latest/

Following up on Lianlin Li's blog entry (English translation here) from this weekend, which features the second reference of this entry, namely Phase control algorithms for focusing light through turbid media by Ivo Vellekoop and Allard Mosk, and discusses a Sparse Bayesian algorithm, I also want to show two figures from that paper:

There is a discussion of three algorithms for focusing light through a random medium using different sets of configurations for the Spatial Light Modulator (SLM). The first set is equivalent to the usual raster mode, while the third one has a random permutation component to it, which is not unlike what we have in Compressive Sensing. The convergence of these algorithms is shown in the following figures:

One final note on this work: it is more or less equivalent to how one used to solve coded aperture problems, which is why I think CS reconstruction algorithms might provide a faster way of focusing light.
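To make the comparison concrete, here is a toy sketch of the stepwise sequential idea: each SLM segment's phase is cycled through a few test values, and the value maximizing the detected target intensity is kept. Everything here (the simulated transmission vector t, the segment count, the eight test phases) is an illustrative assumption of mine, not the paper's experimental setup:

import numpy as np

rng = np.random.default_rng(0)

N = 64                                             # number of SLM segments (toy size)
t = rng.normal(size=N) + 1j * rng.normal(size=N)   # simulated "transmission" of the medium

def intensity(phases):
    # Intensity at the target for a given SLM phase pattern.
    field = np.sum(t * np.exp(1j * phases))
    return np.abs(field) ** 2

# Stepwise sequential idea: optimize one segment at a time, keeping the
# test phase that maximizes the measured target intensity.
phases = np.zeros(N)
test_phases = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
for n in range(N):
    scores = []
    for p in test_phases:
        phases[n] = p
        scores.append(intensity(phases))
    phases[n] = test_phases[int(np.argmax(scores))]

print("enhancement:", intensity(phases) / intensity(np.zeros(N)))

Roughly speaking, the paper's other variants differ mainly in which segments are modulated together at each step (the partitioning one picks random subsets, which is the random-permutation flavor alluded to above); the greedy structure stays the same.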

Ron Rubinstein has released the KSVDS-Box and OMPS-Box packages: "These packages implement the OMP and K-SVD algorithms for sparse dictionaries, as introduced in their paper Double Sparsity: Learning Sparse Dictionaries for Sparse Signal Approximation (see below)." He has also published updated versions of KSVD-Box (v13) and OMP-Box (v10): "These new versions reduce memory consumption, accelerate computation, and resolve a few minor bugs." A toy sketch of plain OMP follows the package list below.

  • OMP-Box v10 Implementation of the Batch-OMP and OMP-Cholesky algorithms for quick sparse-coding of large sets of signals.
  • OMPS-Box v1 Implementation of the Batch-OMP and OMP-Cholesky algorithms for sparse dictionaries.
  • KSVD-Box v13 Implementation of the K-SVD and Approximate K-SVD dictionary training algorithms, and the K-SVD Denoising algorithm. Requires OMP-Box v10.
  • KSVDS-Box v11 Implementation of the sparse K-SVD dictionary training algorithm and the sparse K-SVD Denoising algorithm. Requires OMPS-Box v1. The package is also available without demo volumes (less recommended) at KSVDS-Box v11-min.
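Since several of these packages revolve around OMP, here is a toy sketch of plain Orthogonal Matching Pursuit in Python (NumPy). To be clear, this is not the Batch-OMP or OMP-Cholesky implementation of the toolboxes, just the vanilla greedy loop; the dictionary, signal, and sparsity level below are made-up test data:

import numpy as np

def omp(D, x, k):
    # Greedy Orthogonal Matching Pursuit: select k atoms of D to approximate x.
    residual = x.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit the coefficients on the selected atoms by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    gamma = np.zeros(D.shape[1])
    gamma[support] = coeffs
    return gamma

# Toy test: random normalized dictionary, exactly 3-sparse signal.
rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
gamma_true = np.zeros(256)
gamma_true[[5, 77, 200]] = [1.0, -0.5, 2.0]
x = D @ gamma_true
print(np.sort(np.nonzero(omp(D, x, 3))[0]))   # expect [5 77 200]

The Batch-OMP variant in the toolboxes gets its speed from precomputing the Gram matrix D.T @ D and updating a Cholesky factorization instead of re-solving the least-squares problem from scratch at every step.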
The paper is: Double Sparsity: Learning Sparse Dictionaries for Sparse Signal Approximation by Ron Rubinstein, Michael Zibulevsky and Michael Elad. The abstract reads:
An efficient and flexible dictionary structure is proposed for sparse and redundant signal representation. The proposed sparse dictionary is based on a sparsity model of the dictionary atoms over a base dictionary, and takes the form D = ΦA, where Φ is a fixed base dictionary and A is sparse. The sparse dictionary provides efficient forward and adjoint operators, has a compact representation, and can be effectively trained from given example data. In this, the sparse structure bridges the gap between implicit dictionaries, which have efficient implementations yet lack adaptability, and explicit dictionaries, which are fully adaptable but non-efficient and costly to deploy. In this paper we discuss the advantages of sparse dictionaries, and present an efficient algorithm for training them. We demonstrate the advantages of the proposed structure for 3-D image denoising.
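The D = ΦA structure is what makes the forward and adjoint operators cheap: one never forms D explicitly. Here is a minimal sketch of that idea, assuming a random orthonormal Φ as a stand-in for the fixed base dictionary (the paper uses structured bases such as wavelets or the DCT; this choice is purely illustrative):

import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(2)
n, m = 64, 128                       # signal dimension, number of atoms

# Fixed base dictionary Phi: a random orthonormal basis, purely for illustration.
Phi, _ = np.linalg.qr(rng.normal(size=(n, n)))

# Sparse atom matrix A: each trained atom is a sparse combination of base atoms,
# so D = Phi @ A never has to be formed explicitly.
A = sp.random(n, m, density=0.05, random_state=2, format="csc")

def D_apply(gamma):                  # forward operator: D @ gamma = Phi @ (A @ gamma)
    return Phi @ (A @ gamma)

def D_adjoint(x):                    # adjoint operator: D.T @ x = A.T @ (Phi.T @ x)
    return A.T @ (Phi.T @ x)

gamma = rng.normal(size=m)
x = D_apply(gamma)
print(np.allclose((Phi @ A.toarray()) @ gamma, x))   # True: matches the explicit D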
Here is a set of papers I found on the interwebs: Simultaneous Sparse Approximation: insights and algorithms by Alain Rakotomamonjy. The abstract reads:
This paper addresses the problem of simultaneous sparse approximation of signals, given an overcomplete dictionary of elementary functions, with a joint sparsity profile induced by an ℓp − ℓq mixed norm. Our contributions are essentially two-fold: i) making connections between such an approach and other methods available in the literature and ii) providing algorithms for solving the problem with different values of p and q. At first, we introduce a simple algorithm for solving the multiple-signals extension of the Basis Pursuit Denoising problem (p = 1 and q = 2). Then, we show that for a general sparsity-inducing ℓp − ℓq mixed-norm penalty, this optimization problem is actually equivalent to an automatic relevance determination problem. From this insight, we derive a simple EM-like algorithm for problems with an ℓ1 − ℓq penalty. For addressing approximation problems with a non-convex penalty (p < 1), we propose an iterative reweighted Multiple-Basis Pursuit; we trade the non-convexity of the problem against several resolutions of the convex Multiple-Basis Pursuit problem. Relations between such a reweighted algorithm and Multiple Sparse Bayesian Learning are also highlighted. Experimental results show how our algorithms behave and how they compare to related approaches (such as CoSaMP) for solving the simultaneous sparse approximation problem.
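For the p = 1, q = 2 case, the row-wise mixed norm has a closed-form proximal operator (group soft-thresholding), so a simple proximal-gradient loop solves the multiple-measurement-vector extension of Basis Pursuit Denoising. The sketch below is a generic ISTA-style solver, not the paper's algorithm; the dictionary, signals, and regularization weight are illustrative assumptions:

import numpy as np

def row_shrink(C, tau):
    # Proximal operator of tau * sum_i ||C[i, :]||_2 (group soft-thresholding).
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    return np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0) * C

def mmv_bpdn(D, X, lam, n_iter=500):
    # Proximal gradient (ISTA) for min_C 0.5*||X - D C||_F^2 + lam * sum_i ||C[i, :]||_2.
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the smooth part
    C = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        C = row_shrink(C - D.T @ (D @ C - X) / L, lam / L)
    return C

# Three signals sharing the same 3-row sparsity profile.
rng = np.random.default_rng(5)
D = rng.normal(size=(40, 100))
D /= np.linalg.norm(D, axis=0)
C_true = np.zeros((100, 3))
C_true[[3, 30, 71], :] = rng.normal(size=(3, 3))
C_hat = mmv_bpdn(D, D @ C_true, lam=0.05)
print(np.nonzero(np.linalg.norm(C_hat, axis=1) > 0.1)[0])   # rows ~ [3 30 71]

The row-wise penalty is exactly what induces the joint sparsity profile: either a row (an atom) is used by all signals or it is switched off for all of them.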
The next one is Inferring Ranking using Constrained Sensing by Srikanth Jagabathula and Devavrat Shah. The abstract reads:
We consider the problem of recovering a function over the space of permutations (or, the symmetric group) over n elements from a given partial set of its Fourier series coefficients. This problem naturally arises in several applications such as ranked elections, multi-object tracking, ranking systems and recommendation systems. This problem has been widely studied in the context of discrete-time functions in the recently popular compressive sensing literature; however, the existing approaches don’t naturally extend to our problem setup. Inspired by the work of Donoho and Stark [?] in the context of discrete-time functions, we focus on non-negative functions with a sparse support (support size ≪ domain size). Our recovery method is based on finding the sparsest solution (through ℓ0 optimization) that is consistent with the available information. As the main result, we derive sufficient conditions for the functions that can be recovered exactly from a partial set of Fourier coefficients through ℓ0 optimization. Under a natural random model for the generation of functions, we quantify the recoverability conditions by deriving bounds on the sparsity (support size) for which the function satisfies the sufficient conditions with a high probability as n → ∞. Since ℓ0 optimization is computationally hard, its convex relaxation, ℓ1 optimization, is typically considered; however, we show that ℓ1 optimization fails to recover a function generated using the random model with a high probability as n → ∞. In order to overcome this problem, we propose a novel iterative algorithm for the recovery of functions that satisfy the sufficient conditions. Finally, using an Information Theoretic framework, we study the necessity conditions for exact recovery to be possible.
The related shorter NIPS08 paper is here. From the paper:
...Assuming that the function is sparse, our approach to performing exact recovery is to find the function with the sparsest support that is consistent with the given partial information, henceforth referred to as ℓ0 optimization. This approach is often justified by the philosophy of Occam’s razor. However, as illustrated in 2.3.1, the sparsest solution is not always the correct solution. Thus, we derive sufficient conditions in terms of sparsity (support size) for functions that can be recovered through ℓ0 optimization. Furthermore, finding a function with the sparsest support through ℓ0 minimization is in general computationally hard. This problem is typically overcome by considering the convex relaxation of the ℓ0 optimization problem. However, as we show in Theorem 3.1, such a convex relaxation does not yield exact recovery in our case. Thus, we propose a simple iterative algorithm and prove that the algorithm performs exact recovery of functions that satisfy the sufficient conditions.
Our work can be considered as an extension of the compressive sensing literature to non-commutative groups. To the best of our knowledge, our work is the first to consider exact recovery of functions over non-commutative groups from a partial set of Fourier coefficients.
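For readers who want to experiment, the ℓ1 relaxation of the non-negative sparsest-solution problem is just a linear program. The sketch below uses a random Gaussian A of my choosing as a stand-in; keep in mind the paper's whole point is that its A is given and structured (partial Fourier data over the symmetric group), a regime where they show this relaxation fails with high probability:

import numpy as np
from scipy.optimize import linprog

# Non-negative l1 relaxation: min sum(x)  s.t.  A x = y,  x >= 0.
rng = np.random.default_rng(3)
n, m = 120, 40
A = rng.normal(size=(m, n))            # stand-in for the given, structured A
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 3.0, size=5)
y = A @ x_true

res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=[(0, None)] * n)
print(np.allclose(res.x, x_true, atol=1e-6))   # True for this benign random A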

There are lots of interesting things in there; further down, however, one can read the following:
The sparsest recovery approach of this paper is similar (in flavor) to the above stated work. However, the methods or approaches of the prior work do not apply. In a nutshell, in our setup too, we are interested in recovering a certain sparse vector x from data y = Ax. However, the corresponding matrix A is given rather than a design choice. Moreover, the matrix A is dependent on the structure of the space of permutations. An important development of the above stated work is the characterization of a class of sufficient conditions (on the structure of A) for recovery, collectively known as the Restricted Isometry Property (RIP) (see [CRT06b, BGI+08]) of matrix A. However, such sufficient conditions trivially fail in our setup (see [JS08]). Therefore, a new approach is required.

Does the null space property also fail in their setup? I guess that takes us back to the discussion we had last week.
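For what it's worth, one can at least probe the null space property numerically. Recall that A satisfies the NSP of order k if every nonzero v in null(A) puts less than half of its ℓ1 mass on its k largest entries. The heuristic sketch below samples random null-space vectors and can only disprove the property, never certify it (verifying it exactly is computationally hard); the matrix and parameters are arbitrary assumptions:

import numpy as np

def nsp_violated(A, k, trials=2000, seed=0):
    # Sample vectors from null(A) and look for a violation of the order-k NSP.
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(A)
    null_basis = Vt[np.linalg.matrix_rank(A):]   # rows span null(A)
    for _ in range(trials):
        v = null_basis.T @ rng.normal(size=null_basis.shape[0])
        mags = np.sort(np.abs(v))[::-1]
        if mags[:k].sum() >= 0.5 * mags.sum():
            return True                          # found a violating vector
    return False

A = np.random.default_rng(6).normal(size=(40, 120))
print(nsp_violated(A, k=5))   # False only means "no violation found in the samples"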

And finally from the Rice Compressive Sensing repository, we have:

Exact signal recovery from sparsely corrupted measurements through the pursuit of justice by Jason Laska, Mark Davenport, and Richard Baraniuk. The abstract reads:
Compressive sensing provides a framework for recovering sparse signals of length N from M ≪ N measurements. If the measurements contain noise bounded by ε, then standard algorithms recover sparse signals with error at most Cε. However, these algorithms perform suboptimally when the measurement noise is also sparse. This can occur in practice due to shot noise, malfunctioning hardware, transmission errors, or narrowband interference. We demonstrate that a simple algorithm, which we dub Justice Pursuit (JP), can achieve exact recovery from measurements corrupted with sparse noise. The algorithm handles unbounded errors, has no input parameters, and is easily implemented via standard recovery techniques.
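As the abstract says, JP is easily implemented via standard recovery techniques: the idea is to run the usual ℓ1 minimization over the augmented matrix [Φ I], so the sparse noise gets estimated alongside the signal. Here is a minimal sketch of that augmented-ℓ1 idea using a linear-programming reformulation; the dimensions, Gaussian Φ, and corruption pattern are toy assumptions of mine, not the paper's experiments:

import numpy as np
from scipy.optimize import linprog

def justice_pursuit(Phi, y):
    # Solve min ||u||_1 s.t. [Phi I] u = y, then keep the first N entries of u.
    # The identity block lets the sparse measurement noise be estimated jointly.
    M, N = Phi.shape
    B = np.hstack([Phi, np.eye(M)])
    d = B.shape[1]
    # LP reformulation of l1 minimization: u = up - um with up, um >= 0.
    res = linprog(c=np.ones(2 * d), A_eq=np.hstack([B, -B]), b_eq=y,
                  bounds=[(0, None)] * (2 * d))
    u = res.x[:d] - res.x[d:]
    return u[:N]

# Toy demo: 5-sparse signal, 3 grossly corrupted measurements.
rng = np.random.default_rng(4)
M, N = 80, 200
Phi = rng.normal(size=(M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, 5, replace=False)] = rng.normal(size=5)
e = np.zeros(M)
e[rng.choice(M, 3, replace=False)] = 10.0 * rng.normal(size=3)
print(np.linalg.norm(justice_pursuit(Phi, Phi @ x + e) - x))   # ~ 0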
