Sunday, July 12, 2009

CS: Different codes, Recovery of clustered sparse signals, Theory of CS, Stagewise Polytope Faces Pursuit, 2-D tomography

Dirk Lorenz just sent me an e-mail:
I just wanted to let you know that I did what I should have done months or years ago: I put m-files for l^p and especially l^1-minimization on my page:

  • iter_thresh.m, which implements the usual iterated soft thresholding for different values of p, together with a more sophisticated algorithm which we called iterated hard thresholding (not to be confused with the algorithm proposed by Blumensath and Davies), as presented in Iterated hard shrinkage for minimization problems with sparsity constraints by Kristian Bredies and Dirk A. Lorenz. A minimal sketch of the basic p = 1 iteration is given right after this list.

  • ssn.m, which implements the semismooth Newton method for l^1-minimization and which is incredibly fast in the cases where it converges (unfortunately this is not always the case: convergence is only local), as presented in A Semismooth Newton Method for Tikhonov Functionals with Sparsity Constraints by Roland Griesse and Dirk A. Lorenz.

  • ppp_l1.m, which implements the projection proximal point method for l^1-minimization presented in Numerical Functional Analysis and Optimization.

Thank you, Dirk!

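For readers who just want the flavor of the first routine, here is a minimal sketch of plain iterated soft thresholding for the p = 1 case; the function name, fixed iteration count and step size below are my own choices and not Dirk's interface:

% Minimal iterated soft thresholding (ISTA) sketch for
%   min_x 0.5*||A*x - y||_2^2 + lambda*||x||_1
% The step size 1/L uses the spectral norm of A; niter is fixed in advance.
function x = ista_l1_sketch(A, y, lambda, niter)
  x = zeros(size(A, 2), 1);
  L = norm(A)^2;                                  % Lipschitz constant of the gradient
  for k = 1:niter
    z = x - (A' * (A*x - y)) / L;                 % gradient step on the data term
    x = sign(z) .* max(abs(z) - lambda/L, 0);     % soft thresholding
  end
end
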
Angshul Majumdar also sent me the following:
This is the code for solving problems of the form

    min ||x||_m,p s.t. y = Ax

I have replaced the ||.||_2,p norm with the ||.||_m,p norm for the group sparsity problem.

It uses the same Re-weighted Least Squares method but with a different weighting:
the jth element of the ith group is weighted by (||x(i)||_m)^(p-m) · |x(i,j)|^(m-2).

I derived it using the FOCUSS methodology.
The code is here: lmp_re_ls.m
Thank you, Angshul!

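As a rough illustration only (this is not Angshul's lmp_re_ls.m; the group bookkeeping, the eps0 regularization and the single-pass structure are my own simplifications), one reweighted least squares pass with the weighting above could look like this:

% One pass of reweighted least squares for  min ||x||_m,p  s.t.  y = A*x,
% using the weight quoted above: w(i,j) = (||x(i)||_m)^(p-m) * |x(i,j)|^(m-2).
% grp(k) is the group label of coefficient k; eps0 guards divisions by zero.
function x = lmp_irls_pass(A, y, x, grp, m, p, eps0)
  x = x(:);  grp = grp(:);
  w = zeros(size(x));
  for g = unique(grp).'
    idx = (grp == g);
    gn  = norm(x(idx), m) + eps0;                 % ||x(i)||_m for this group
    w(idx) = gn^(p - m) * (abs(x(idx)) + eps0).^(m - 2);
  end
  D = diag(1 ./ w);                               % FOCUSS-style rescaling
  x = D * (A' * ((A * D * A') \ y));              % weighted minimum-norm solution
end

Iterating this pass from a least-squares initial guess until x stops changing gives the FOCUSS-style scheme the e-mail refers to.
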
Both of these codes are listed in the reconstruction section of the Big Picture in Compressive Sensing page.

Here are a few papers. The first one extends previous papers from Rice, aiming to reduce the number of measurements by using prior information such as the clustering of the nonzero elements.



We introduce a new signal model, called (K,C)-sparse, to capture K-sparse signals in N dimensions whose nonzero coefficients are contained within at most C clusters, with C < K << N. In contrast to the existing work in the sparse approximation and compressive sensing literature on block sparsity, no prior knowledge of the locations and sizes of the clusters is assumed. We prove that O(K + C log(N/C)) random projections are sufficient for (K,C)-model sparse signal recovery based on subspace enumeration. We also provide a robust polynomial time recovery algorithm for (K,C)-model sparse signals with provable estimation guarantees.
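
To make the model concrete: a (K,C)-sparse signal has K nonzero entries that fall into at most C contiguous clusters. The toy generator below is my own illustration (not the authors' code), with arbitrary cluster placement and assuming N much larger than K and K >= C:

% Toy (K,C)-model sparse signal: K nonzero entries arranged in C contiguous
% clusters at well-separated positions in a length-N vector (assumes N >> K >= C).
function x = kc_sparse_example(N, K, C)
  x      = zeros(N, 1);
  len    = floor(K / C);                          % first C-1 clusters share this length
  starts = round(linspace(1, N - K, C));          % cluster starting positions
  for c = 1:C
    l = len;
    if c == C, l = K - (C - 1) * len; end         % last cluster absorbs the remainder
    x(starts(c) : starts(c) + l - 1) = randn(l, 1);
  end
end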

Other recent papers by Piotr Indyk include:


and a tutorial on Streaming, Sketching and Sub-linear Space Algorithms.

After the video of the presentation of the poster, here is the paper by Mark Plumbley and Marco Bevilacqua entitled Sparse Reconstruction for Compressed Sensing using Stagewise Polytope Faces Pursuit. The abstract reads:
Compressed sensing, also known as compressive sampling, is an approach to the measurement of signals which have a sparse representation, that can reduce the number of measurements that are needed to reconstruct the signal. The signal reconstruction part requires efficient methods to perform sparse reconstruction, such as those based on linear programming. In this paper we present a method for sparse reconstruction which is an extension of our earlier Polytope Faces Pursuit algorithm, based on the polytope geometry of the dual linear program. The new algorithm adds several basis vectors at each stage, in a similar way to the recent Stagewise Orthogonal Matching Pursuit (StOMP) algorithm. We demonstrate the application of the algorithm to some standard compressed sensing problems.
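
The stagewise ingredient is easy to sketch in isolation: at every stage, keep all atoms whose correlation with the current residual passes a threshold, then re-fit by least squares on the enlarged support. The fragment below shows only that generic StOMP-style step (the threshold rule and stopping test are my own choices, and none of the polytope-faces machinery of the paper is included):

% Generic stagewise greedy selection (StOMP-flavoured), assuming unit-norm columns:
% add every column of A whose correlation with the residual exceeds a threshold,
% then solve a least-squares problem on the selected support.
function [x, S] = stagewise_sketch(A, y, nstages, t)
  n = size(A, 2);
  S = false(n, 1);                                % current support
  x = zeros(n, 1);
  r = y;
  for s = 1:nstages
    c = A' * r;                                   % correlations with the residual
    S = S | (abs(c) > t * norm(r) / sqrt(numel(y)));   % threshold; t around 2-3 is typical
    x = zeros(n, 1);
    x(S) = A(:, S) \ y;                           % least-squares re-fit on the support
    r = y - A * x;
    if norm(r) < 1e-6 * norm(y), break; end
  end
end
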
Yin Zhang has updated a paper previously titled "On Theory of Compressive Sensing via L_1-Minimization: Simple Derivations and Extensions"; it is now Theory of Compressive Sensing via L_1-Minimization: A Non-RIP Analysis and Extensions. This non-RIP analysis has been mentioned here before. The abstract reads:

Compressive sensing (CS) is an emerging methodology in computational signal processing that has recently attracted intensive research activities. At present, the basic CS theory includes recoverability and stability: the former quantifies the central fact that a sparse signal of length n can be exactly recovered from far fewer than n measurements via l_1-minimization or other recovery techniques, while the latter specifies the stability of a recovery technique in the presence of measurement errors and inexact sparsity. So far, most analyses in CS rely heavily on the Restricted Isometry Property (RIP) for matrices. In this paper, we present an alternative, non-RIP analysis for CS via l_1-minimization. Our purpose is three-fold: (a) to introduce an elementary and RIP-free treatment of the basic CS theory; (b) to extend the current recoverability and stability results so that prior knowledge can be utilized to enhance recovery via l_1-minimization; and (c) to substantiate a property called uniform recoverability of l_1-minimization; that is, for almost all random measurement matrices recoverability is asymptotically identical. With the aid of two classic results, the non-RIP approach enables us to quickly derive from scratch all basic results for the extended theory.
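
The recoverability statements are about basis pursuit, min ||x||_1 subject to Ax = y, which can be posed as a linear program by splitting x into nonnegative parts, x = u - v. A bare-bones version using linprog from the Optimization Toolbox (solver options left at their defaults, which is my own simplification) is:

% Basis pursuit  min ||x||_1  s.t.  A*x = y  as a linear program:
% write x = u - v with u, v >= 0 and minimize sum(u) + sum(v).
function x = bp_linprog(A, y)
  n   = size(A, 2);
  f   = ones(2*n, 1);                             % objective: sum(u) + sum(v)
  Aeq = [A, -A];                                  % equality constraint A*(u - v) = y
  lb  = zeros(2*n, 1);
  uv  = linprog(f, [], [], Aeq, y, lb, []);
  x   = uv(1:n) - uv(n+1:end);
end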

Finally, here is a paper that has nothing to do with CS but that I found interesting to read:

Computerized Tomography (CT) is a standard method for obtaining internal structure of objects from their projection images. While CT reconstruction requires the knowledge of the imaging directions, there are some situations in which the imaging directions are unknown, for example, when imaging a moving object. It is therefore desirable to design a reconstruction method from projection images taken at unknown directions. Recently, it was shown that the imaging directions can be obtained by the diffusion map framework. Another difficulty arises from the fact that projections are often contaminated by noise, practically limiting all current methods, including the diffusion map approach. In this paper, we introduce two denoising steps that allow reconstructions at much lower signal-to-noise ratios (SNR) when combined with the diffusion map framework. The first denoising step consists of using the singular value decomposition (SVD) in order to find an adaptive basis for the projection data set, leading to improved similarities between different projections. In the second step, we denoise the graph of similarities using the Jaccard index, which is a widely used measure in network analysis. Using this combination of SVD, Jaccard index and diffusion map, we are able to reconstruct the 2-D Shepp-Logan phantom from simulative noisy projections at SNRs well below their currently reported threshold values. Although the focus of this paper is the 2-D CT reconstruction problem, we believe that the combination of SVD, Jaccard index graph denoising and diffusion maps is potentially useful in other signal processing and image analysis applications.
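
The Jaccard step mentioned in the abstract is simple to state: two projections are considered reliably similar only if their neighbor sets overlap, i.e. the edge weight between nodes i and j is replaced by |N(i) ∩ N(j)| / |N(i) ∪ N(j)|. A small sketch of that graph denoising step (my own, with an arbitrary k-nearest-neighbor construction, not the authors' code) is:

% Jaccard-index denoising of a similarity graph W: each pair of nodes is
% re-scored by the overlap of their k-nearest-neighbor sets.
function J = jaccard_denoise(W, k)
  n = size(W, 1);
  N = false(n, n);                                % N(i,:) marks the k nearest neighbors of node i
  for i = 1:n
    [~, order] = sort(W(i, :), 'descend');
    N(i, order(1:k)) = true;
  end
  J = zeros(n, n);
  for i = 1:n
    for j = i+1:n
      J(i, j) = nnz(N(i,:) & N(j,:)) / nnz(N(i,:) | N(j,:));   % Jaccard index
      J(j, i) = J(i, j);
    end
  end
end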

Credit: NASA / JPL / SSI, Ghostly Fingers of Enceladus, via The Planetary Society blog.
