Several papers appeared on my radar screen; most of them will be featured at the upcoming SPARS'09 conference:
Distributed algorithms for basis pursuit (also here) by João F. C. Mota, João M. F. Xavier, Pedro M. Q. Aguiar, Markus Püschel. The abstract reads:
The Basis Pursuit (BP) problem consists in finding a least l1 norm solution of the underdetermined linear system Ax = b. It arises in many areas of electrical engineering and applied mathematics. Applications include signal compression and modeling, estimation, fitting, and compressed sensing. In this paper, we explore methods for solving the BP in a distributed environment, i.e., when the computational resources and the matrix A are distributed over several interconnected nodes. Special instances of this distributed framework include sensor networks and distributed memory and/or processor platforms. We consider two distribution paradigms: either the columns or the rows of A are distributed across the nodes. The several algorithms that we present impose distinct requirements on the degree of connectivity of the network and the per-node computational complexity.
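For readers who want to try BP on a small problem, here is a minimal centralized sketch (not the distributed algorithms of the paper): the l1 problem is recast as a linear program by splitting x = u - v with u, v >= 0, and solved with scipy. All names and sizes below are illustrative.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3                       # underdetermined system: m < n
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

c = np.ones(2 * n)                        # objective sum(u) + sum(v) equals ||x||_1
A_eq = np.hstack([A, -A])                 # A u - A v = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_hat - x_true))     # near zero when x_true is sparse enough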
Compressed sensing for radio interferometry: prior-enhanced Basis Pursuit imaging techniques
by Yves Wiaux, Laurent Jacques, Gilles Puy, Anna Scaife, Pierre Vandergheynst. The abstract reads:
We propose and assess the performance of new imaging techniques for radio interferometry that rely on the versatility of the compressed sensing framework to account for prior information on the signals. The present manuscript represents a summary of recent work.
Solving Sparse Linear Inverse Problems: Analysis of Reweighted l1 and l2 Methods by David Wipf and Srikantan Nagarajan. The abstract reads:
A variety of practical methods have recently been introduced for finding maximally sparse representations from overcomplete dictionaries, a central computational task in compressed sensing and source localization applications as well as numerous others. Many of the underlying algorithms rely on iterative reweighting schemes that produce more focal estimates as optimization progresses. Two such variants are iterative reweighted l1 and l2 minimization; however, some properties related to convergence and sparse estimation, as well as possible generalizations, are still not clearly understood or fully exploited. In this paper, we make the distinction between separable and non-separable iterative reweighting algorithms. The vast majority of existing methods are separable, meaning the weighting of a given coefficient at each iteration is only a function of that individual coefficient from the previous iteration (as opposed to dependency on all coefficients). We examine two such separable reweighting schemes: an l2 method from Chartrand and Yin (2008) and an l1 approach from Candès et al. (2008), elaborating on convergence results and explicit connections between them. We then explore an interesting non-separable variant that can be implemented via either l2 or l1 reweighting and show several desirable properties relevant to sparse recovery. For the former, we show a direct connection with Chartrand and Yin's approach. For the latter, we demonstrate two desirable properties: (i) each iteration can only improve the sparsity and (ii) for any dictionary and sparsity profile, there will always exist cases where non-separable l1 reweighting improves over standard l1 minimization.
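As a concrete illustration of a separable scheme, here is a small sketch of iteratively reweighted l2 minimization in the spirit of Chartrand and Yin's approach; the smoothing parameter eps and its decay schedule are my own illustrative choices, not the paper's exact rule.

import numpy as np

def irls(A, b, p=1.0, n_iter=30, eps=1.0):
    # start from the minimum-l2-norm solution of Ax = b
    x = A.T @ np.linalg.solve(A @ A.T, b)
    for _ in range(n_iter):
        wi = (x**2 + eps) ** (1 - p / 2)   # inverse weights from the previous iterate (separable)
        AW = A * wi                        # A @ diag(wi)
        x = wi * (A.T @ np.linalg.solve(AW @ A.T, b))
        eps = max(eps / 10.0, 1e-12)       # shrink the smoothing as the estimate sharpens
    return x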
Stability of l1 minimisation in compressed sensing by Przemyslaw Wojtaszczyk. The abstract reads:
We discuss known results (cf. [16, 6]) about the stability of l1 minimisation (denoted \Delta_1) with respect to the measurement error and how those results depend on the measurement matrix \Phi. Then we produce a large class of measurement matrices \Phi for which we can apply results from [16], so we have the estimate

|| \Delta_1(\Phi(x) + r) - x ||_2 \le C ( ||r||_2 + k^{-1/2} \sigma^1_k(x) ).   (1)
We conclude with a modification of l1 minimisation which gives (1) for most random measurement matrices considered in compressed sensing literature. We also discuss stability of instance optimality in probability.
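For reference, \sigma^1_k(x) in (1) denotes the error of the best k-term approximation of x in the l1 norm, i.e. the sum of the magnitudes of all but the k largest entries. A two-line numpy version (my own illustration):

import numpy as np

def sigma_k(x, k):
    # l1 norm of the tail left after keeping the k largest-magnitude entries
    tail = np.sort(np.abs(x))[:-k] if k > 0 else np.abs(x)
    return tail.sum()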
Finding a point in the relative interior of a polyhedron, with applications to compressed sensing by Coralia Cartis and Nicholas I. M. Gould. The abstract reads:
Consider a system of finitely many equalities and inequalities that depend linearly on N variables. We propose an algorithm that provably finds a point in the relative interior of the polyhedron described by these constraints, thus allowing the identification of the affine dimension of this set (often smaller than N). It can therefore be employed to find a starting point for the class of interior-point methods for linear programming, and also as a preprocessor for the latter problem class that removes superfluous or implied constraints, with strong guarantees of convergence. In particular, it may be used to solve the feasibility problems that occur in sparse approximation when prior information is included (Donoho & Tanner, 2008).
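This is not the authors' algorithm, but a standard LP trick gives the flavor of the inequality-only case: maximize a uniform slack t subject to Ax + t 1 <= b; any solution with t > 0 lies strictly inside the polyhedron {x : Ax <= b}. The cap on t is only there to keep the LP bounded; handling equalities and detecting implied constraints, as the paper does, requires more care.

import numpy as np
from scipy.optimize import linprog

def strictly_feasible_point(A, b, t_max=1.0):
    m, n = A.shape
    c = np.zeros(n + 1); c[-1] = -1.0              # maximize t (linprog minimizes)
    A_ub = np.hstack([A, np.ones((m, 1))])         # A x + t <= b
    bounds = [(None, None)] * n + [(None, t_max)]  # cap t so the LP is bounded
    res = linprog(c, A_ub=A_ub, b_ub=b, bounds=bounds)
    return res.x[:n], res.x[-1]                    # t > 0 means strict feasibility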
Recovery of Non-Negative Signals from Compressively Sampled Observations Via Non-Negative Quadratic Programming by Paul O’Grady and Scott Rickard. You may remember we had a previous discussion with Paul O’Grady on this subject. The abstract reads:
The new emerging theory of Compressive Sampling has demonstrated that by exploiting the structure of a signal, it is possible to sample a signal below the Nyquist rate and achieve perfect reconstruction. In this paper, we consider a special case of Compressive Sampling where the uncompressed signal is non-negative, and propose an extension of Non-negative Quadratic Programming - which utilises Iteratively Reweighted Least Squares - for the recovery of non-negative minimum lp-norm solutions, 0 \le p \le 1. Furthermore, we investigate signal recovery performance where the sampling matrix has entries drawn from a Gaussian distribution with a decreasing number of negative values, and demonstrate that - unlike standard Compressive Sampling - the standard Gaussian distribution is unsuitable for this special case.
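As a baseline for this non-negative setting (not the paper's NQP/IRLS scheme), non-negative least squares already goes a long way when the sampling matrix itself is non-negative; the sizes and the uniform sampling matrix below are illustrative.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
m, n, k = 25, 60, 4
Phi = rng.random((m, n))                  # non-negative sampling matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.random(k)
y = Phi @ x_true

x_hat, _ = nnls(Phi, y)                   # min ||Phi x - y||_2 subject to x >= 0
print(np.linalg.norm(x_hat - x_true))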
Also of related interest we have:
Automatic Relevance Determination in Nonnegative Matrix Factorization by Vincent Yan Fu Tan, Cédric Févotte. The abstract reads:
Nonnegative matrix factorization (NMF) has become a popular technique for data analysis and dimensionality reduction. However, it is often assumed that the number of latent dimensions (or components) is given. In practice, one must choose a suitable value depending on the data and/or setting. In this paper, we address this important issue by using a Bayesian approach to estimate the latent dimensionality, or equivalently, select the model order. This is achieved via automatic relevance determination (ARD), a technique that has been employed in Bayesian PCA and sparse Bayesian learning. We show via experiments on synthetic data that our technique is able to recover the correct number of components, while it is also able to recover an effective number of components from real datasets such as the MIT CBCL dataset.
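For readers new to NMF, here is the plain multiplicative-update baseline (Lee-Seung style) that this kind of work builds on; the ARD mechanism that prunes superfluous components is the paper's contribution and is not reproduced here.

import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    # factor a non-negative matrix V (m x n) as W @ H with W (m x r), H (r x n)
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative updates keep entries >= 0
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H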
A Greedy Algorithm for a Sparse Scalet Decomposition by Albert Bijaoui. The abstract reads:
Sparse decompositions were mainly developed to optimize signal or image compression. Sparsity was first obtained by coefficient thresholding. Matching pursuit (MP) algorithms were then implemented to extract the optimal patterns from a given dictionary, bringing new insight into sparse representations. In this communication, this approach is followed, with the goal of obtaining a sparse multiscale decomposition under different constraints: i/ to get a sparse representation with patterns resembling Gaussian functions, ii/ to be able to decompose into patterns with only positive amplitudes, iii/ to get a representation from a translated and dilated pattern, iv/ to constrain the representation by a threshold, v/ to separate the sparse signal from a smooth baseline. Different greedy algorithms were built from redundant wavelet transforms (pyramidal and à trous), for 1D signals and 2D images. Experiments on astronomical images yield a gain of about two in sparsity compared to a classical DWT thresholding. A fine denoising is obtained, and the results do not display any wavy artifacts. This decomposition is an efficient tool for astronomical image analysis.
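To make the greedy aspect concrete, here is a generic matching pursuit over a column-normalized dictionary D; the scalet-specific constraints above (positivity, baseline separation, translated/dilated patterns) are not reproduced in this sketch.

import numpy as np

def matching_pursuit(D, y, n_atoms=10):
    D = D / np.linalg.norm(D, axis=0)     # unit-norm atoms
    r = y.astype(float).copy()            # residual
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        c = D.T @ r                       # correlate atoms with the residual
        j = np.argmax(np.abs(c))          # pick the best-matching atom
        coeffs[j] += c[j]
        r = r - c[j] * D[:, j]            # peel it off the residual
    return coeffs, r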
Image Credit: NASA/JPL/Space Science Institute, Saturn taken by Cassini on March 10, 2009.