Following up on yesterday's paper on Modified-CS applied to slowly varying sparse signals, here is a paper on a competing approach: Exploiting Correlation in Sparse Signal Recovery Problems: Multiple Measurement Vectors, Block Sparsity, and Time-Varying Sparsity by Zhilin Zhang, Bhaskar D. Rao. The abstract reads:
A trend in compressed sensing (CS) is to exploit structure for improved reconstruction performance. In the basic CS model, exploiting the clustering structure among nonzero elements in the solution vector has drawn much attention, and many algorithms have been proposed. However, few algorithms explicitly consider correlation within a cluster. Meanwhile, in the multiple measurement vector (MMV) model, correlation among multiple solution vectors is largely ignored. Although several recently developed algorithms consider the exploitation of the correlation, these algorithms need to know a priori the correlation structure, thus limiting their effectiveness in practical problems. Recently, we developed a sparse Bayesian learning (SBL) algorithm, namely T-SBL, and its variants, which adaptively learn the correlation structure and exploit such correlation information to significantly improve reconstruction performance. Here we establish their connections to other popular algorithms, such as the group Lasso, iterative reweighted $\ell_1$ and $\ell_2$ algorithms, and algorithms for time-varying sparsity. We also provide strategies to improve these existing algorithms.
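For readers who want to play with the MMV setting mentioned above, here is a minimal sketch of an $\ell_{2,1}$ (group-Lasso-style) row-sparse recovery, i.e. one of the baseline formulations the abstract relates T-SBL to; this is not T-SBL itself, and the problem sizes and regularization weight below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: row-sparse MMV recovery Y = A X via an l2,1 penalty
# (group-Lasso-style baseline), using cvxpy. Not the T-SBL algorithm.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, L = 30, 80, 4                    # measurements, dimension, number of measurement vectors
A = rng.standard_normal((m, n))

# Row-sparse ground truth: 5 active rows shared by all L measurement vectors.
X_true = np.zeros((n, L))
support = rng.choice(n, 5, replace=False)
X_true[support] = rng.standard_normal((5, L))
Y = A @ X_true + 0.01 * rng.standard_normal((m, L))

lam = 0.1                              # regularization weight (assumed, not tuned)
X = cp.Variable((n, L))
row_norms = cp.norm(X, 2, axis=1)      # l2 norm of each row of X
objective = cp.Minimize(0.5 * cp.sum_squares(Y - A @ X) + lam * cp.sum(row_norms))
cp.Problem(objective).solve()

print("recovered support:", np.sort(np.flatnonzero(np.linalg.norm(X.value, axis=1) > 1e-3)))
print("true support:     ", np.sort(support))
```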
and here are three remaining papers of interest:
Proximal methods for minimizing the sum of a convex function and a composite function by Quoc Tran Dinh, Moritz Diehl. The abstract reads:
This paper extends the algorithm schemes proposed in \cite{Nesterov2007a} and \cite{Nesterov2007b} to the minimization of the sum of a composite objective function and a convex function. Two proximal point-type schemes are provided and their global convergence is investigated. The worst-case complexity bound is also estimated under certain Lipschitz and nondegeneracy conditions. The algorithm is then accelerated to obtain a faster convergence rate in the strongly convex case.
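As a point of reference, here is the textbook proximal gradient (forward-backward) iteration for a composite objective of the form $\frac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$; this is the classical scheme, not the paper's accelerated variants, and the problem sizes are made up for illustration.

```python
# Textbook proximal gradient sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, with L the Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)               # gradient of the smooth (least-squares) part
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 6, replace=False)] = 1.0
b = A @ x_true

x_hat = proximal_gradient(A, b, lam=0.05)
print("nonzeros found:", np.flatnonzero(np.abs(x_hat) > 1e-3))
print("true nonzeros: ", np.flatnonzero(x_true))
```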
Finding Dense Clusters via "Low Rank + Sparse" Decomposition by Samet Oymak, Baback Hassibi. The abstract reads:
Finding "densely connected clusters" in a graph is in general an important and well studied problem in the literature \cite{Schaeffer}. It has various applications in pattern recognition, social networking and data mining \cite{Duda,Mishra}. Recently, Ames and Vavasis have suggested a novel method for finding cliques in a graph by using convex optimization over the adjacency matrix of the graph \cite{Ames, Ames2}. Also, there has been recent advances in decomposing a given matrix into its "low rank" and "sparse" components \cite{Candes, Chandra}. In this paper, inspired by these results, we view "densely connected clusters" as imperfect cliques, where imperfections correspond missing edges, which are relatively sparse. We analyze the problem in a probabilistic setting and aim to detect disjointly planted clusters. Our main result basically suggests that, one can find \emph{dense} clusters in a graph, as long as the clusters are sufficiently large. We conclude by discussing possible extensions and future research directions.
On State Estimation with Bad Data Detection by Weiyu Xu, Meng Wang, Ao Tang. The abstract reads:
In this paper, we consider the problem of state estimation from observations possibly corrupted with both bad data and additive observation noise. A mixed $\ell_1$ and $\ell_2$ convex program is used to separate both sparse bad data and additive noise from the observations. Using the almost Euclidean property of a linear subspace, we derive a new performance bound for the state estimation error under sparse bad data and additive observation noise. Our main contribution is to provide sharp bounds on the almost Euclidean property of a linear subspace, using the "escape-through-a-mesh" theorem from geometric functional analysis. We also propose and numerically evaluate an iterative convex programming approach to performing bad data detection in nonlinear electrical power network problems.
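One plausible instance of such a mixed $\ell_1$/$\ell_2$ program is sketched below: the state $x$ and the sparse bad-data vector $e$ are estimated jointly from $y = Hx + e + v$, with an $\ell_2$ term for the dense noise and an $\ell_1$ term for the bad data. The exact formulation, weights, and problem sizes used in the paper may differ from this illustration.

```python
# Hedged sketch of a mixed l1/l2 program for state estimation with sparse
# bad data: y = H x + e + v, e sparse (gross errors), v dense noise.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
m, n = 60, 20
H = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
e_true = np.zeros(m)
e_true[rng.choice(m, 4, replace=False)] = 10.0     # a few large bad-data entries
y = H @ x_true + e_true + 0.01 * rng.standard_normal(m)

x = cp.Variable(n)
e = cp.Variable(m)
lam = 0.1                                           # noise-fit vs. sparsity trade-off (assumed)
objective = cp.Minimize(cp.norm(y - H @ x - e, 2) + lam * cp.norm(e, 1))
cp.Problem(objective).solve()

print("state estimation error:", np.linalg.norm(x.value - x_true))
print("detected bad data indices:", np.flatnonzero(np.abs(e.value) > 1.0))
print("true bad data indices:    ", np.flatnonzero(e_true))
```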
Credit: NASA, photograph taken by the Expedition 27 crew from the International Space Station.