Sunday, August 22, 2010

CS: The real long post of the week.

Two blogs mentioned compressed sensing this past week:
I was reading the following piece on how Kodak engineers produced the first digital camera. I liked the ending of that article as it parallels some thoughts I sometimes have on compressed sensing (and I am sure I am not the only one):
Many developments have happened between this early work and today. Personal computers, the Internet, wide bandwidth connections and personal desktop photographic printing are just a few of these. It is funny now to look back on this project and realize that we were not really thinking of this as the world’s first digital camera. We were looking at it as a distant possibility. Maybe a line from the technical report written at the time sums it up best:

“The camera described in this report represents a first attempt demonstrating a photographic system which may, with improvements in technology, substantially impact the way pictures will be taken in the future.”

But in reality, we had no idea.
Of interest, here is a way to Build Your Own 3D Display by Matthew Hirsch and Douglas Lanman. There was a PR piece in EE Times entitled: Technion: Compressed sensing ups data acquisition resolution, cuts cost, and a paper in the SPIE entitled Compressive light-field imaging by Amit Ashok and Mark A. Neifeld with the following short abstract:
Compressive light-field imagers employ fewer photon-efficient measurements, enabling higher-resolution reconstruction than is possible using their traditional counterparts.
So without much wait, here is the rest of a long list of papers of interest this week:

I just found a thesis in (mostly) Portuguese on Compressive Sensing entitled: Compressive Sensing, Novos Paradigmas para Aquisição e Compressão de Imagens (Compressive Sensing: New Paradigms for Image Acquisition and Compression) by Adriana Schulz.

I could not grab the text version of this paper, but here it is: The Benefit of Group Sparsity.

Accuracy guarantees for ℓ1-recovery by Anatoli Juditsky, Fatma Kilinc Karzan, and Arkadi Nemirovski. The abstract reads:
We propose new methods of recovery of sparse signals from noisy observations based on ℓ1-minimization. They are closely related to the well-known techniques such as Lasso and Dantzig Selector. However, these estimators come with efficiently verifiable guarantees of performance. By optimizing these bounds with respect to the method parameters we are able to construct estimators which possess better statistical properties than the commonly used ones. We also provide an oracle inequality to justify the proposed algorithms and show how the estimates can be computed using the Basis Pursuit algorithm.
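For readers who want to get a feel for the kind of ℓ1 recovery these estimators build on, here is a minimal sketch of my own (not the authors' code) of a plain ISTA iteration for the Lasso on a synthetic sparse recovery problem; the dimensions and regularization parameter below are arbitrary illustrative choices:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, y, lam, n_iter=500):
    # ISTA for min_x 0.5*||y - A x||_2^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)            # gradient of the least-squares term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, N, k = 80, 256, 8                        # measurements, dimension, sparsity
A = rng.normal(size=(n, N)) / np.sqrt(n)    # random Gaussian sensing matrix
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=n)      # noisy observations
x_hat = ista_lasso(A, y, lam=0.02)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```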

Generalized Assorted Pixel Camera: Postcapture Control of Resolution, Dynamic Range, and Spectrum by Fumihito Yasuma, Tomoo Mitsunaga, Daisuke Iso, and Shree K. Nayar. The abstract reads:
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) We present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.

Multichannel-Compressive Estimation of Doubly Selective Channels in MIMO-OFDM Systems: Exploiting and Enhancing Joint Sparsity by Daniel Eiwen, Georg Taubock, Franz Hlawatsch, Holger Rauhut, and Nicolai Czink. The abstract reads:
We propose a compressive estimator of doubly selective channels within pulse-shaping multicarrier MIMO systems (including MIMO-OFDM as a special case). The use of multichannel compressed sensing exploits the joint sparsity of the MIMO channel for improved performance. We also propose a multichannel basis optimization for enhancing joint sparsity. Simulation results demonstrate significant advantages over channel-by-channel compressive estimation.
Compressed Meter Reading for Delay-sensitive and Secure Load Report in Smart Grid by Husheng Li, Rukun Mao, Lifeng Lai and Robert C. Qiu. The abstract reads:
It is a key task in smart grid to send the readings of smart meters to an access point (AP) in a wireless manner. The requirements of scalability, realtimeness and security make the wireless meter reading highly challenging. On assuming that the number of smart meters is large and the data burst is sparse, i.e., only a small fraction of the smart meters are reporting their power loads at the same time, the technique of compressed sensing is applied for the wireless meter reading. The distinguishing feature of the compressed meter reading is that the active smart meters are allowed to transmit simultaneously and the AP is able to distinguish the reports from different smart meters. The data sparsity solves the problem of scalability. The simultaneous access results in uniform delays, in contrast to the possible large delay in the carrier sense multiple access (CSMA) technique. The random sequence used in the compressed sensing enhances the privacy and integrity of the meter reading. The validity of the proposed scheme is then demonstrated by numerical simulations.
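To see the kind of measurement model this suggests, here is a toy sketch of my own (not the authors' code): each meter gets a random signature sequence and only a few report at once, so the superposed signal at the AP is a sparse combination that any sparse solver (e.g. the ISTA sketch above) can untangle. All dimensions and values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N_meters, seq_len, k_active = 500, 120, 10
# each meter is assigned a random +/-1 signature sequence (a column of A)
A = rng.choice([-1.0, 1.0], size=(seq_len, N_meters)) / np.sqrt(seq_len)
loads = np.zeros(N_meters)
active = rng.choice(N_meters, k_active, replace=False)   # only a few meters report
loads[active] = rng.uniform(1.0, 5.0, size=k_active)
# simultaneous transmissions superpose at the access point, plus noise
received = A @ loads + 0.01 * rng.normal(size=seq_len)
# `received` and A can now be handed to any sparse-recovery routine to
# identify which meters reported and what their load values are
```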
An abstract of interest: Sparse High Dimensional Models in Economics by Jianqing Fan, Jinchi Lv, Lei Qi. The abstract reads:
This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed.

This paper studies the problem of exact localization of multiple objects with noisy data. The crux of the proposed approach consists of random illumination. Two recovery methods are analyzed: the Lasso and the One-Step Thresholding (OST). For independent random probes, it is shown that both recovery methods can localize exactly $s = O(m)$ objects, up to a logarithmic factor, where $m$ is the number of data. Moreover, when the number of random probes is large the Lasso with random illumination has a performance guarantee for superresolution, beating the Rayleigh resolution limit. Numerical evidence confirms the predictions and indicates that the performance of the Lasso is superior to that of the OST for the proposed set-up with random illumination.
Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications, where perturbations appear both in the data vector as well as in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data, but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also allow for perturbations in the regression matrix of the least-absolute shrinkage and selection operator (Lasso), and endow TLS approaches with the ability to cope with sparse, under-determined "errors-in-variables" models. Interesting generalizations can further exploit prior knowledge on the perturbations to obtain novel weighted and structured S-TLS solvers. Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing, and robust direction-of-arrival estimation using antenna arrays.

Two-way relay network (TWRN) was introduced to realize high-data rate transmission over the wireless frequency-selective channel. However, TWRN requires the knowledge of channel state information (CSI) not only for coherent data detection but also for the self-data removal. This is partially accomplished by training sequence-based linear channel estimation. However, conventional linear estimation techniques neglect the anticipated sparsity of the multipath channel. Unlike the previous methods, we propose a compressive channel estimation method which exploits the sparse structure and provides significant improvements in MSE performance when compared with traditional LS-based linear channel probing strategies. Simulation results confirm the effectiveness of the proposed methods.
We consider the problem of learning a coefficient vector $x_0 \in \mathbb{R}^N$ from noisy linear observations $y = Ax_0 + w \in \mathbb{R}^n$. In many contexts (ranging from model selection to image processing) it is desirable to construct a sparse estimator $\hat{x}$. In this case, a popular approach consists in solving an $\ell_1$-penalized least squares problem known as the LASSO or Basis Pursuit DeNoising (BPDN).
For sequences of matrices $A$ of increasing dimensions, with independent Gaussian entries, we prove that the normalized risk of the LASSO converges to a limit, and we obtain an explicit expression for this limit. Our result is the first rigorous derivation of an explicit formula for the asymptotic mean square error of the LASSO for random instances. The proof technique is based on the analysis of AMP, a recently developed efficient algorithm inspired by ideas from graphical models.
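Since the analysis revolves around AMP, here is a rough sketch (my own illustration, not the authors' implementation) of the AMP iteration for the Lasso with a soft-thresholding denoiser and the Onsager correction term; the threshold schedule and alpha are simple empirical assumptions, not the paper's tuned choices, and A is assumed to have i.i.d. N(0, 1/n) entries:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp_lasso(A, y, alpha=1.5, n_iter=30):
    # AMP: soft-thresholding denoiser plus Onsager correction in the residual
    n, N = A.shape
    delta = n / N
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        tau = np.linalg.norm(z) / np.sqrt(n)          # empirical noise-level estimate
        x_new = soft(x + A.T @ z, alpha * tau)        # denoise the pseudo-data
        onsager = (z / delta) * np.mean(x_new != 0)   # mean derivative of the denoiser
        z = y - A @ x_new + onsager                   # corrected residual
        x = x_new
    return x
```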

We consider the problem of simultaneous variable selection and constant coefficient identification in high-dimensional varying coefficient models based on B-spline basis expansion. Both objectives can be considered as some type of model selection problem and we show that they can be achieved by a double shrinkage strategy. We apply the adaptive group Lasso penalty in models involving a diverging number of covariates, which can be much larger than the sample size, but we assume the number of relevant variables is smaller than the sample size via model sparsity. Such so-called ultra-high dimensional settings are especially challenging in semiparametric models as we consider here and have not been dealt with before. Under suitable conditions, we show that consistency in terms of both variable selection and constant coefficient identification can be achieved, as well as the oracle property of the constant coefficients. Even in the case that the zero and constant coefficients are known a priori, our results appear to be new in that the problem reduces to semivarying coefficient models (a.k.a. partially linear varying coefficient models) with a diverging number of covariates. We also theoretically demonstrate the consistency of a semiparametric BIC-type criterion in this high-dimensional context, extending several previous results. The finite sample behavior of the estimator is evaluated by some Monte Carlo studies.
Group Sparsity Methods for Compressive Channel Estimation in Doubly Dispersive Multicarrier Systems by Daniel Eiwen, Georg Taubock, Franz Hlawatsch and Hans Georg Feichtinger. The abstract reads:
We propose advanced compressive estimators of doubly dispersive channels within multicarrier communication systems (including classical OFDM systems). The performance of compressive channel estimation has been shown to be limited by leakage components impairing the channel’s effective delay-Doppler sparsity. We demonstrate a group sparse structure of these leakage components and apply recently proposed recovery techniques for group sparse signals. We also present a basis optimization method for enhancing group sparsity. Statistical knowledge about the channel can be incorporated in the basis optimization if available. The proposed estimators outperform existing compressive estimators with respect to estimation accuracy and, in one instance, also computational complexity.

For consistency (even oracle properties) of estimation and model prediction, almost all existing methods of variable/feature selection critically depend on sparsity of models. However, for "large $p$ and small $n$" models the sparsity assumption is hard to check and, particularly when this assumption is violated, the consistency of all existing estimations is usually impossible because working models selected by existing methods such as the LASSO and the Dantzig selector are usually biased. To attack this problem, we in this paper propose adaptive post-Dantzig estimation and model prediction. Here the adaptability means that the consistency based on the newly proposed method is adaptive to non-sparsity of the model, the choice of shrinkage tuning parameter and the dimension of the predictor vector. The idea is that after a sub-model as a working model is determined by the Dantzig selector, we construct a globally unbiased sub-model by choosing suitable instrumental variables and nonparametric adjustment. The new estimator of the parameters in the sub-model can attain asymptotic normality. The consistent estimator, together with the selected sub-model and adjusted model, improves model predictions. Simulation studies show that the new approach yields significant improvements in estimation and prediction accuracy over the Gaussian Dantzig selector and other classical methods.

In this paper, motivated by network inference and tomography applications, we study the problem of compressive sensing for sparse signal vectors over graphs. In particular, we are interested in recovering sparse vectors representing the properties of the edges of a graph. Unlike existing compressive sensing results, the collective additive measurements we are allowed to take must follow connected paths over the underlying graph. For a sufficiently connected graph with $n$ nodes, it is shown that, using $O(k \log(n))$ path measurements, we are able to recover any $k$-sparse link vector (with no more than $k$ nonzero elements), even though the measurements have to follow the graph path constraints. We further show that the computationally efficient $\ell_1$ minimization can provide theoretical guarantees for inferring such $k$-sparse vectors with $O(k \log(n))$ path measurements from the graph.
Optimized Projection Matrix for Compressive Sensing by Jianping Xu, Yiming Pi, and Zongjie Cao. The abstract reads:
Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between projection matrix and sparsifying matrix. Until now, papers on CS always assume the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design because an ETF has minimum coherence. It is impossible to solve the problem exactly because of the complexity. Therefore, an alternating minimization type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
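As a quick illustration of the quantity being minimized, here is a small sketch of my own that computes the mutual coherence of the effective dictionary Phi @ Psi (projection matrix times sparsifying matrix); the matrices below are random placeholders, not the optimized ETF-based design of the paper:

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    # coherence of the effective dictionary D = Phi @ Psi: largest absolute
    # inner product between distinct normalized columns
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(1)
Phi = rng.normal(size=(30, 128))   # random projection matrix (placeholder)
Psi = np.eye(128)                  # sparsifying basis, identity for simplicity
print("mutual coherence:", mutual_coherence(Phi, Psi))
```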
ℓp-ℓq penalty for sparse linear and multiple kernel multi-task learning by Alain Rakotomamonjy, Remi Flamary, Gilles Gasso, Stephane Canu. The abstract reads:
Recently, there has been a lot of interest around the multi-task learning (MTL) problem with the constraint that tasks should share a common sparsity profile. Such a problem can be addressed through a regularization framework where the regularizer induces a joint-sparsity pattern between task decision functions. We follow this principled framework and focus on ℓp-ℓq (with 0 ≤ p ≤ 1 and 1 ≤ q ≤ 2) mixed norms as sparsity inducing penalties. Our motivation for addressing such a larger class of penalties is to adapt the penalty to the problem at hand, leading to better performance and better sparsity patterns. For solving the problem in the general multiple kernel case, we first derive a variational formulation of the ℓ1-ℓq penalty which helps us in proposing an alternate optimization algorithm. Although very simple, the latter algorithm provably converges to the global minimum of the ℓ1-ℓq penalized problem. For the linear case, we extend existing works considering accelerated proximal gradient to this penalty. Our contribution in this context is to provide an efficient scheme for computing the ℓ1-ℓq proximal operator. Then, for the more general case when 0 < p < 1, we solve the resulting non-convex problem through a majorization-minimization approach. The resulting algorithm is an iterative scheme which, at each iteration, solves a weighted ℓ1-ℓq sparse MTL problem. Empirical evidence from a toy dataset and real-world datasets dealing with BCI single-trial EEG classification and protein subcellular localization shows the benefit of the proposed approaches and algorithms.
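For the q = 2 case, the ℓ1-ℓq proximal operator mentioned above reduces to group soft-thresholding (each group of coefficients is shrunk by its Euclidean norm). Here is a tiny sketch of my own, with an illustrative group structure and threshold:

```python
import numpy as np

def prox_l1_l2(x, groups, lam):
    # proximal operator of lam * sum_g ||x_g||_2 : shrink each group by its norm
    out = x.copy()
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        scale = max(0.0, 1.0 - lam / norm_g) if norm_g > 0 else 0.0
        out[g] = scale * x[g]
    return out

x = np.array([0.1, -0.2, 2.0, 1.5, -0.05, 0.02])
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
print(prox_l1_l2(x, groups, lam=0.5))   # groups with small norm collapse to zero
```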
This is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory. The reader will learn several tools for the analysis of the extreme singular values of random matrices with independent rows or columns. Many of these methods sprang from the development of geometric functional analysis in the 1970-2000's. They have applications in several fields, most notably in theoretical computer science, statistics and signal processing. A few basic applications are covered in this text, particularly for the problem of estimating covariance matrices in statistics and for validating probabilistic constructions of measurement matrices in compressed sensing. These notes are written particularly for graduate students and beginning researchers in different areas, including functional analysts, probabilists, theoretical statisticians, electrical engineers, and theoretical computer scientists.
Roman Vershynin would be most grateful for your comments and corrections! Please do send them to him by December 31, 2010.
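As a small taste of the non-asymptotic bounds covered in the tutorial, here is a quick empirical check of my own that the extreme singular values of a tall N x n Gaussian matrix with i.i.d. standard entries concentrate around sqrt(N) ± sqrt(n); the dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 2000, 200                      # tall N x n Gaussian matrix
A = rng.normal(size=(N, n))
s = np.linalg.svd(A, compute_uv=False)
print("s_min =", s[-1], " vs  sqrt(N) - sqrt(n) =", np.sqrt(N) - np.sqrt(n))
print("s_max =", s[0],  " vs  sqrt(N) + sqrt(n) =", np.sqrt(N) + np.sqrt(n))
```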

Accelerated Block-Coordinate Relaxation for Regularized Optimization by Stephen J. Wright. The abstract reads:
We discuss minimization of a smooth function to which is added a regularization function that induces structure in the solution. A block-coordinate relaxation approach with proximal linearized subproblems yields convergence to stationary points, while identification of the optimal manifold (under a nondegeneracy condition) allows acceleration techniques to be applied on a reduced space. The work is motivated by experience with an algorithm for regularized logistic regression, and computational results for the algorithm on problems of this type are presented.
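For the curious, here is a toy sketch of my own (not Wright's implementation) of block-coordinate relaxation with proximal linearized subproblems applied to ℓ1-regularized logistic regression; the block partition, step sizes and iteration count are illustrative assumptions:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def bcd_logistic_l1(X, y, lam, n_blocks=4, n_epochs=100):
    # cycle over blocks of coordinates; each block takes one proximal
    # linearized step (block gradient step followed by soft-thresholding)
    n, p = X.shape
    w = np.zeros(p)
    blocks = np.array_split(np.arange(p), n_blocks)
    for _ in range(n_epochs):
        for b in blocks:
            grad_b = X[:, b].T @ (sigmoid(X @ w) - y) / n       # block gradient of the logistic loss
            L_b = 0.25 * np.linalg.norm(X[:, b], 2) ** 2 / n    # block Lipschitz bound
            step = w[b] - grad_b / L_b
            w[b] = np.sign(step) * np.maximum(np.abs(step) - lam / L_b, 0.0)  # prox of the l1 term
    return w

# usage on synthetic data with labels y in {0, 1}
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 40))
w_true = np.zeros(40)
w_true[:5] = 1.0
y = (rng.uniform(size=200) < sigmoid(X @ w_true)).astype(float)
print("selected coordinates:", np.flatnonzero(bcd_logistic_l1(X, y, lam=0.05)))
```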

We also found several abstracts without the full presentation or paper:

Restricted eigenvalue properties for correlated Gaussian designs by Garvesh Raskutti, Martin J. Wainwright, Bin Yu. The abstract reads:
Methods based on ℓ1-relaxation, such as basis pursuit and the Lasso, are very popular for sparse regression in high dimensions. The conditions for success of these methods are now well-understood: (1) exact recovery in the noiseless setting is possible if and only if the design matrix X satisfies the restricted nullspace property, and (2) the squared ℓ2-error of a Lasso estimate decays at the minimax optimal rate k log p / n, where k is the sparsity of the p-dimensional regression problem with additive Gaussian noise, whenever the design satisfies a restricted eigenvalue condition. The key issue is thus to determine when the design matrix X satisfies these desirable properties. Thus far, there have been numerous results showing that the restricted isometry property, which implies both the restricted nullspace and eigenvalue conditions, is satisfied when all entries of X are independent and identically distributed (i.i.d.), or the rows are unitary. This paper proves directly that the restricted nullspace and eigenvalue conditions hold with high probability for quite general classes of Gaussian matrices for which the predictors may be highly dependent, and hence restricted isometry conditions can be violated with high probability. In this way, our results extend the attractive theoretical guarantees on ℓ1-relaxations to a much broader class of problems than the case of completely independent or unitary designs.
Speaker: Vivek Goyal (MIT). The abstract reads:
The conventional emphases in data acquisition are the density and accuracy of measurements. Compressed sensing has reminded us of the importance of modeling and has brought sparsity- and compressibility-based models to the fore. Furthermore, the variety of computations that can be performed with the same data can give a broad trade-off between computational cost and performance.
This talk will review results on sparse signal support recovery and sparse signal estimation in a non-Bayesian setting. Full support recovery in the presence of noise is difficult, and the results for estimation are quantitatively pessimistic. The replica method, a non-rigorous but successful technique from statistical physics, is used for asymptotic analysis in a Bayesian setting. This can be applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero norm-regularized estimation. It reduces the analysis of compressed sensing to the study of scalar estimation problems and gives results that are more encouraging for compressed sensing.

IMRT and VMAT Inverse Planning with Compressed Sensing Techniques by L. Xing, L. Zhu, B. Meng, S. Boyd. The abstract reads:

The classical Shannon-Nyquist sampling theorem specifies that to avoid losing information when capturing a signal one must sample at least two times faster than the signal bandwidth. In many medical applications the Nyquist rate may be either impractical or too expensive to realize in practice. Compressed sensing provides a practically valuable approach for finding optimal solutions with under-sampled data. In this talk I will summarize our recent work on using compressed sensing for IMRT and VMAT inverse planning. We show that effective utilization of prior knowledge of the systems through compressed sensing can greatly reduce the required number of measurement samples determined by the Shannon-Nyquist theorem and leads to significantly improved IMRT/VMAT plans. Compressed sensing has significant interactions and bearings on the fields of radiation oncology and medical imaging.



Image Credit: NASA/JPL/Space Science Institute
N00161116.jpg was taken on August 14, 2010 and received on Earth August 15, 2010. The camera was pointing toward SATURN-RINGS at approximately 190,404 kilometers away, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated. A validated/calibrated image will be archived with the NASA Planetary Data System in 2011.
