
Friday, April 15, 2011

CS: Compressive Sensing and Related Matter on ArXiv this week.

Here are the papers I have not covered this week. Enjoy! By the way, this is the 996th blog entry on compressive sensing so far.

An algorithm based on compressive sensing (CS) is proposed for synthetic aperture radar (SAR) imaging of moving targets. The received SAR echo is decomposed into the sum of basis sub-signals, which are generated by discretizing the target spatial and velocity domains and synthesizing the SAR received data for every discretized spatial position and velocity candidate. In this way, the SAR imaging problem is converted into a sub-signal selection problem. In the case that moving targets are sparsely distributed in the observed scene, their reflectivities, positions and velocities can be obtained by using the CS technique. It is shown that, compared with traditional algorithms, the target image obtained by the proposed algorithm has higher resolution and lower side-lobes, while the required number of measurements can be an order of magnitude smaller than that required by sampling at the Nyquist rate. Moreover, multiple targets with different speeds can be imaged simultaneously, so the proposed algorithm has higher efficiency.
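For readers who want to get a feel for the dictionary-of-sub-signals idea, here is a minimal numerical sketch. It is not the authors' SAR model: toy_subsignal below is a made-up stand-in for the synthesized echo of a candidate (position, velocity) pair, and the selection step is a plain orthogonal matching pursuit.

import numpy as np

rng = np.random.default_rng(0)
n_samples = 128
positions, velocities = np.arange(16), np.arange(8)
t = np.linspace(0.0, 1.0, n_samples)

def toy_subsignal(p, v):
    # Hypothetical stand-in for the synthesized echo of a target at
    # discretized position p moving with velocity v (not a real SAR model).
    return np.cos(2 * np.pi * (3 * p + 2 * v * t) * t)

# Dictionary: one basis sub-signal per (position, velocity) candidate.
D = np.column_stack([toy_subsignal(p, v) for p in positions for v in velocities])
D /= np.linalg.norm(D, axis=0)

# A scene with two moving targets is a 2-sparse combination of sub-signals.
x_true = np.zeros(D.shape[1])
x_true[[5, 70]] = [1.0, 0.7]
y = D @ x_true

# Orthogonal matching pursuit picks the few sub-signals that explain the echo.
support, residual = [], y.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(D.T @ residual))))
    coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    residual = y - D[:, support] @ coef
print(sorted(support))  # selected (position, velocity) candidates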


We improve existing results in the field of compressed sensing and matrix completion when sampled data may be grossly corrupted. We introduce three new theorems. 1) In compressed sensing, we show that if the m \times n sensing matrix has independent Gaussian entries, then one can recover a sparse signal x exactly by tractable \ell_1 minimization even if a positive fraction of the measurements are arbitrarily corrupted, provided the number of nonzero entries in x is O(m/(log(n/m) + 1)). 2) In the very general sensing model introduced in "A probabilistic and RIPless theory of compressed sensing" by Candes and Plan, and assuming a positive fraction of corrupted measurements, exact recovery still holds if the signal now has O(m/(log^2 n)) nonzero entries. 3) Finally, we prove that one can recover an n \times n low-rank matrix from m corrupted sampled entries by tractable optimization provided the rank is on the order of O(m/(n log^2 n)); again, this holds when there is a positive fraction of corrupted samples.
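The first result can be tried numerically by noting that recovering a sparse x together with a sparse corruption e from y = Ax + e is itself an \ell_1 problem over the stacked unknown [x; e]. Here is a minimal sketch of that formulation (mine, not the authors'), cast as a linear program:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k, n_corrupt = 80, 200, 5, 8

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
e = np.zeros(m)
e[rng.choice(m, n_corrupt, replace=False)] = 10 * rng.standard_normal(n_corrupt)
y = A @ x + e                                   # grossly corrupted measurements

# min ||x||_1 + ||e||_1  s.t.  A x + e = y, written as an LP with split variables.
B = np.hstack([A, np.eye(m)])                   # acts on the stacked unknown [x; e]
c = np.ones(2 * (n + m))
A_eq = np.hstack([B, -B])                       # z = z_plus - z_minus, both >= 0
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
z = res.x[: n + m] - res.x[n + m:]
x_hat, e_hat = z[:n], z[n:]
print(np.max(np.abs(x_hat - x)))                # small if recovery succeeded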

In this paper we explore how concepts of high-dimensional data compression via random projections onto lower-dimensional spaces can be applied for tractable simulation of certain dynamical systems modeling complex interactions. In such systems, one has to deal with a large number of agents (typically millions), each described by a high-dimensional parameter vector (thousands of components or more). Even with today's powerful computers, numerical simulations of such systems are prohibitively expensive. We propose an approach for the simulation of dynamical systems governed by functions of adjacency matrices in high dimension, based on random projections via Johnson-Lindenstrauss embeddings and recovery by compressed sensing techniques. We show how these concepts can be generalized to work for associated kinetic equations, by addressing the phenomenon of the delayed curse of dimension, known in information-based complexity for optimal numerical integration problems in high dimensions.
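The Johnson-Lindenstrauss step is easy to illustrate: a random Gaussian projection roughly preserves pairwise distances between agents, so distance-based interactions can be evaluated in the lower-dimensional space. A minimal sketch (the dimensions are illustrative only):

import numpy as np

rng = np.random.default_rng(2)
n_agents, d, k = 500, 2000, 200                 # many agents, high-dimensional parameters

X = rng.standard_normal((n_agents, d))          # agent parameter vectors
P = rng.standard_normal((k, d)) / np.sqrt(k)    # Johnson-Lindenstrauss projection

Y = X @ P.T                                     # work in k dimensions instead of d

# Pairwise distances are approximately preserved, so dynamics that depend on
# distances between agents can be simulated in the projected space.
i, j = 3, 7
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(Y[i] - Y[j]))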

In this work, we obtain sufficient conditions for the "stability" of our recently proposed algorithms, Least Squares Compressive Sensing residual (LS-CS) and modified-CS, for recursively reconstructing sparse signal sequences from noisy measurements. By "stability" we mean that the number of misses from the current support estimate and the number of extras in it remain bounded by a time-invariant value at all times. We show that, for a signal model with fixed signal power and support set size; support set changes allowed at every time; and gradual coefficient magnitude increase/decrease, "stability" holds under mild assumptions -- bounded noise, high enough minimum nonzero coefficient magnitude increase rate, and large enough number of measurements at every time. A direct corollary is that the reconstruction error is also bounded by a time-invariant value at all times. If the support set of the sparse signal sequence changes slowly over time, our results hold under weaker assumptions than what simple compressive sensing (CS) needs for the same error bound. Also, our support error bounds are small compared to the support size. Our discussion is backed up by Monte Carlo simulation based comparisons.
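For intuition, modified-CS exploits a support estimate T from the previous time instant and penalizes only the coefficients outside T. Below is a hedged sketch using the penalized (lasso-style) form solved by weighted-\ell_1 proximal gradient; it approximates the constrained formulation analyzed in the paper and is not the authors' code.

import numpy as np

def weighted_l1_ista(A, y, weights, lam=0.01, iters=500):
    # Proximal gradient for  0.5*||y - A x||^2 + lam * sum_i weights_i * |x_i|.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x + step * A.T @ (y - A @ x)
        thr = step * lam * weights
        x = np.sign(z) * np.maximum(np.abs(z) - thr, 0)
    return x

rng = np.random.default_rng(3)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
true_support = np.array([2, 17, 40, 41, 80])
x_true = np.zeros(n)
x_true[true_support] = 1.0
y = A @ x_true

T = np.array([2, 17, 40, 80])            # support estimate from the previous time instant
weights = np.ones(n)
weights[T] = 0.0                         # do not penalize coefficients believed active
x_hat = weighted_l1_ista(A, y, weights)
print(np.sort(np.where(np.abs(x_hat) > 0.1)[0]))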

We consider a problem of estimating a sparse group of sparse normal mean vectors. The proposed approach is based on penalized likelihood estimation with complexity penalties on the number of nonzero mean vectors and the numbers of their "significant" components, and can be performed by a computationally fast algorithm. The resulting estimators are developed within a Bayesian framework and can be viewed as MAP estimators. We establish an oracle inequality for them and adaptive minimaxity over a wide range of sparse and dense settings. The presented short simulation study demonstrates the efficiency of the proposed approach, which successfully competes with the recently developed sparse group lasso estimator.
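The "sparse group of sparse vectors" structure is easy to picture with a toy two-level thresholding rule. To be clear, this is only an illustration of the structure with ad hoc thresholds, not the authors' penalized-likelihood/MAP estimator.

import numpy as np

rng = np.random.default_rng(4)
n_vectors, dim = 50, 20
signal = np.zeros((n_vectors, dim))
active = rng.choice(n_vectors, 5, replace=False)          # few nonzero mean vectors
for g in active:
    idx = rng.choice(dim, 3, replace=False)               # few significant components
    signal[g, idx] = 4.0
data = signal + rng.standard_normal((n_vectors, dim))     # noisy observations

# Toy two-level rule: hard-threshold components, then drop vectors that keep
# too little energy. Thresholds here are ad hoc, not the paper's penalties.
comp_thr, group_thr = 2.5, 1.0
est = np.where(np.abs(data) > comp_thr, data, 0.0)
est[np.linalg.norm(est, axis=1) < group_thr] = 0.0
print(np.sort(np.where(np.linalg.norm(est, axis=1) > 0)[0]), np.sort(active))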


A wide class of regularization problems in machine learning and statistics employ a regularization term which is obtained by composing a simple convex function \omega with a linear transformation. This setting includes Group Lasso methods, the Fused Lasso and other total variation methods, multi-task learning methods and many more. In this paper, we present a general approach for computing the proximity operator of this class of regularizers, under the assumption that the proximity operator of the function \omega is known in advance. Our approach builds on a recent line of research on optimal first order optimization methods and uses fixed point iterations for numerically computing the proximity operator. It is more general than current approaches and, as we show with numerical simulations, computationally more efficient than available first order methods which do not achieve the optimal rate. In particular, our method outperforms state of the art O(1/T) methods for overlapping Group Lasso and matches optimal O(1/T^2) methods for the Fused Lasso and tree structured Group Lasso.
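A concrete instance of this setting is the fused lasso/1D total variation penalty, where \omega is a scaled \ell_1 norm composed with the finite-difference matrix D. Its proximity operator can be computed by a simple fixed-point (projected gradient) iteration on a dual variable. The sketch below illustrates that general idea; it is not the authors' algorithm or their optimal-rate scheme.

import numpy as np

def prox_tv1d(x, lam, iters=2000):
    # Computes argmin_u 0.5*||u - x||^2 + lam * sum_i |u_{i+1} - u_i|
    # via projected gradient on the dual:  u = x - D^T v,  ||v||_inf <= lam.
    n = x.size
    D = np.diff(np.eye(n), axis=0)          # finite-difference matrix, (n-1) x n
    v = np.zeros(n - 1)
    step = 0.25                              # safe step since ||D D^T||_2 <= 4
    for _ in range(iters):
        grad = D @ (D.T @ v - x)
        v = np.clip(v - step * grad, -lam, lam)
    return x - D.T @ v

x = np.concatenate([np.ones(20), 3 * np.ones(20)])
x += 0.3 * np.random.default_rng(5).standard_normal(40)
print(prox_tv1d(x, lam=1.0)[:5], prox_tv1d(x, lam=1.0)[-5:])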

In this paper we consider the trace regression model. Assume that we observe a small set of entries or linear combinations of entries of an unknown matrix $A_0$ corrupted by noise. We propose a new rank penalized estimator of $A_0$. For this estimator we establish a general oracle inequality for the prediction error, both in probability and in expectation. We also prove upper bounds for the rank of our estimator. Then we apply our general results to the problem of matrix completion, where our estimator has a particularly simple form: it is obtained by hard thresholding of the singular values of a matrix constructed from the observations.
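For the matrix completion case mentioned at the end, one simple variant of such an estimator can be sketched as follows: rescale the observed entries by the inverse sampling proportion, take an SVD, and hard-threshold the singular values. The threshold below is ad hoc; the paper derives the proper penalty level.

import numpy as np

rng = np.random.default_rng(6)
n, r, frac_obs = 60, 3, 0.7
A0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))    # rank-r matrix
mask = rng.random((n, n)) < frac_obs
Y = np.where(mask, A0 + 0.1 * rng.standard_normal((n, n)), 0.0)   # noisy partial observations

X = Y / frac_obs                          # matrix constructed from the observations
U, s, Vt = np.linalg.svd(X, full_matrices=False)
thr = 3.0 * np.sqrt(n)                    # ad hoc threshold on singular values
s_hard = np.where(s > thr, s, 0.0)        # hard thresholding keeps only the large ones
A_hat = (U * s_hard) @ Vt
print(np.linalg.matrix_rank(A_hat), np.linalg.norm(A_hat - A0) / np.linalg.norm(A0))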


We have systematically studied the optimal real-space sampling of atomic pair distribution function (PDF) data by comparing refinement results from oversampled and resampled data. Based on nickel and a complex perovskite system, we demonstrate that the optimal sampling is bounded by the Nyquist interval described by the Nyquist-Shannon sampling theorem. Near this sampling interval, the data points in the PDF are minimally correlated, which results in more reliable uncertainty prediction. Furthermore, refinements using sparsely sampled data may run many times faster than those using oversampled data. This investigation establishes a theoretically sound limit on the amount of information contained in the PDF, which has ramifications for how PDF data are modeled.


This paper considers the recovery of a low-rank matrix from an observed version that simultaneously contains both (a) erasures: most entries are not observed, and (b) errors: values at a constant fraction of (unknown) locations are arbitrarily corrupted. We provide a new unified performance guarantee on when a (natural) recently proposed method, based on convex optimization, succeeds in exact recovery. Our result allows for the simultaneous presence of random and deterministic components in both the error and erasure patterns. On the one hand, corollaries obtained by specializing this single result in different ways recover (up to poly-log factors) all the existing results in matrix completion, and sparse and low-rank matrix recovery. On the other hand, our results also provide the {\em first guarantees} for (a) deterministic matrix completion, and (b) recovery when we observe a vanishing fraction of entries of a corrupted matrix.
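The natural convex program alluded to above combines a nuclear norm on the low-rank part with an \ell_1 norm on the sparse errors, constrained on the observed entries. A minimal sketch using cvxpy follows; the weight lambda and problem sizes are illustrative only, not the paper's choices.

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(7)
n, r = 30, 2
M_low = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # low-rank part
S_true = np.zeros((n, n))
idx = rng.random((n, n)) < 0.05
S_true[idx] = 10 * rng.standard_normal(idx.sum())                    # sparse gross errors
mask = (rng.random((n, n)) < 0.6).astype(float)                      # observed entries
M_obs = mask * (M_low + S_true)

L, S = cp.Variable((n, n)), cp.Variable((n, n))
lam = 1.0 / np.sqrt(n)                                               # illustrative weight
objective = cp.Minimize(cp.normNuc(L) + lam * cp.norm1(S))
constraints = [cp.multiply(mask, L + S) == M_obs]                    # agree on observed entries
cp.Problem(objective, constraints).solve()
print(np.linalg.norm(L.value - M_low) / np.linalg.norm(M_low))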

Nonparametric methods are widely applicable to statistical inference problems, since they rely on only a few modeling assumptions. In this context, the fresh look advocated here combines benefits from variable selection and compressive sampling to robustify nonparametric regression against outliers, that is, data markedly deviating from the postulated models. A variational counterpart to least-trimmed squares regression is shown to be closely related to an L0-(pseudo)norm-regularized estimator that encourages sparsity in a vector explicitly modeling the outliers. This connection suggests efficient solvers based on convex relaxation, which lead naturally to a variational M-type estimator equivalent to the least-absolute shrinkage and selection operator (Lasso). Outliers are identified by judiciously tuning regularization parameters, which amounts to controlling the sparsity of the outlier vector along the whole robustification path of Lasso solutions. Reduced bias and enhanced generalization capability are attractive features of an improved estimator obtained after replacing the L0-(pseudo)norm with a nonconvex surrogate. The novel robust spline-based smoother is adopted to cleanse load curve data, a key task aiding operational decisions in the envisioned smart grid system. Computer simulations and tests on real load curve data corroborate the effectiveness of the novel sparsity-controlling robust estimators.
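The key device, an explicit sparse outlier vector with an \ell_1 penalty, is easy to demonstrate on plain linear regression (the paper works with splines). A minimal sketch via block coordinate descent, with an illustrative choice of the regularization parameter:

import numpy as np

rng = np.random.default_rng(8)
m, p, lam = 100, 3, 2.0
X = rng.standard_normal((m, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(m)
y[[4, 50, 77]] += 15.0                     # gross outliers

beta, o = np.zeros(p), np.zeros(m)
for _ in range(100):
    # Minimize 0.5*||y - X beta - o||^2 + lam*||o||_1 by alternating exact steps.
    beta, *_ = np.linalg.lstsq(X, y - o, rcond=None)
    r = y - X @ beta
    o = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)   # soft threshold flags outliers
print(beta, np.where(o != 0)[0])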


We propose and analyze an extremely fast, efficient, and simple method for solving the problem $\min\{\|u\|_1 : Au = f,\ u \in \mathbb{R}^n\}$. This method was first described in [J. Darbon and S. Osher, preprint, 2007], with more details in [W. Yin, S. Osher, D. Goldfarb and J. Darbon, SIAM J. Imaging Sciences, 1(1), 143-168, 2008] and rigorous theory given in [J. Cai, S. Osher and Z. Shen, Math. Comp., to appear, 2008, see also UCLA CAM Report 08-06] and [J. Cai, S. Osher and Z. Shen, UCLA CAM Report 08-52, 2008]. The motivation was compressive sensing, which now has a vast and exciting history, and which seems to have started with Candes et al. [E. Candes, J. Romberg and T. Tao, IEEE Trans. Inform. Theory, 52(2), 489-509, 2006] and Donoho [D. L. Donoho, IEEE Trans. Inform. Theory, 52, 1289-1306, 2006]. See [W. Yin, S. Osher, D. Goldfarb and J. Darbon, SIAM J. Imaging Sciences, 1(1), 143-168, 2008], [J. Cai, S. Osher and Z. Shen, Math. Comp., to appear, 2008, see also UCLA CAM Report 08-06] and [J. Cai, S. Osher and Z. Shen, UCLA CAM Report 08-52, 2008] for a large set of references. Our method introduces an improvement called "kicking" of the very efficient method of [J. Darbon and S. Osher, preprint, 2007] and [W. Yin, S. Osher, D. Goldfarb and J. Darbon, SIAM J. Imaging Sciences, 1(1), 143-168, 2008] and also applies it to the problem of denoising of undersampled signals. The use of Bregman iteration for denoising of images began in [S. Osher, M. Burger, D. Goldfarb, J. Xu and W. Yin, Multiscale Model. Simul., 4(2), 460-489, 2005] and led to improved results for total variation based methods. Here we apply it to denoise signals, especially essentially sparse signals, which might even be undersampled.
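For reference, the basic linearized Bregman iteration, without the "kicking" speed-up this paper introduces to shorten its stagnation phases, is only a few lines. The parameters below are illustrative choices, not the paper's.

import numpy as np

rng = np.random.default_rng(9)
m, n, k = 60, 150, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
u_true = np.zeros(n)
u_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
f = A @ u_true

mu = 50.0                                   # illustrative shrinkage parameter
delta = 1.0 / np.linalg.norm(A, 2) ** 2     # step size kept safe w.r.t. ||A A^T||
v, u = np.zeros(n), np.zeros(n)
for _ in range(20000):
    v += A.T @ (f - A @ u)                                   # Bregman (dual) update
    u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0)   # shrinkage step
print(np.linalg.norm(u - u_true) / np.linalg.norm(u_true))   # small when mu and iterations suffice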

Recently, there has been increasing interest in recovering sparse representation of signals from a union of subspaces. We consider dictionaries that consist of multiple blocks where the atoms in each block are drawn from a linear subspace. Given a signal, which lives in the direct sum of a few subspaces, we study the problem of finding a block-sparse representation of the signal, i.e., a representation that uses the minimum number of blocks of the dictionary. Unlike existing results, we do not restrict the number of atoms in each block of the dictionary to be equal to the dimension of the corresponding subspace. Instead, motivated by signal/image processing and computer vision problems such as face recognition and motion segmentation, we allow for an arbitrary number of atoms in each block, which can far exceed the dimension of the underlying subspace. To find a block-sparse representation of a signal, we consider two classes of non-convex programs which are based on minimizing a mixed $\ell_q/\ell_0$ quasi-norm ($q \geq 1$) and consider their convex $\ell_q/\ell_1$ relaxations. The first class of optimization programs directly penalizes the norm of the coefficient blocks, while the second class of optimization programs penalizes the norm of the reconstructed vectors from the blocks of the dictionary. For each class of convex programs, we provide conditions based on the introduced mutual/cumulative subspace coherence of a given dictionary under which it is equivalent to the original non-convex formulation. We evaluate the performance of the two families of convex programs through simulations as well as real experiments on face recognition. We show that treating the face recognition problem as a block-sparse recovery problem and using the appropriate class of convex programs can improve the state-of-the-art face recognition results by 10% with only 25% of the training data.
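The $\ell_2/\ell_1$ (q = 2) relaxation of the first class of programs, in penalized rather than constrained form, can be minimized by proximal gradient with block soft-thresholding. A minimal sketch under that substitution, not the authors' code:

import numpy as np

rng = np.random.default_rng(10)
n_blocks, block_size, m = 20, 5, 40
B = rng.standard_normal((m, n_blocks * block_size))
B /= np.linalg.norm(B, axis=0)
blocks = [list(range(i * block_size, (i + 1) * block_size)) for i in range(n_blocks)]

c_true = np.zeros(n_blocks * block_size)
c_true[blocks[3]] = rng.standard_normal(block_size)    # signal uses only 2 blocks
c_true[blocks[11]] = rng.standard_normal(block_size)
y = B @ c_true

lam, step = 0.05, 1.0 / np.linalg.norm(B, 2) ** 2
c = np.zeros_like(c_true)
for _ in range(2000):
    z = c + step * B.T @ (y - B @ c)                   # gradient step on 0.5*||y - B c||^2
    for idx in blocks:                                 # block soft-threshold (l2/l1 prox)
        nrm = np.linalg.norm(z[idx])
        c[idx] = 0.0 if nrm == 0 else max(0.0, 1 - step * lam / nrm) * z[idx]
print([i for i, idx in enumerate(blocks) if np.linalg.norm(c[idx]) > 0.1])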


Recently the theory of widths of Kolmogorov-Gelfand has received a great deal of interest due to its close relationship with the newly born area of Compressed Sensing. It has been realized that widths properly reflect the sparsity of the data in Signal Processing. However, fundamental problems of the theory of widths in the multidimensional Theory of Functions remain untouched, as do analogous problems in the theory of multidimensional Signal Analysis. In the present paper we provide a multidimensional generalization of the original result of Kolmogorov by introducing a new hierarchy of infinite-dimensional spaces based on solutions of higher order elliptic equations.


Image Credit: NASA/JPL/Space Science Institute
W00066958.jpg was taken on March 14, 2011 and received on Earth March 15, 2011. The camera was pointing toward SATURN at approximately 2,500,071 kilometers away, and the image was taken using the CB2 and CL2 filters.
