On the left is a photo of the last Rendezvous Pitch Maneuver (the back flip) ever to be performed by Endeavour... Sigh. Anyway, today we have a long entry. First, here is a summary of workshop day 2 and day 3.

Now from the arxiv, the Rice repository and other sources, enjoy!

Xampling at the Rate of Innovation by Tomer Michaeli, Yonina Eldar. The abstract reads:

We address the problem of recovering signals from samples taken at their rate of innovation. Our only assumption is that the sampling system is such that the parameters defining the signal can be stably determined from the samples, a condition that lies at the heart of every sampling theorem. Consequently, our analysis subsumes previously studied nonlinear acquisition devices and nonlinear signal classes. In particular, we do not restrict attention to memoryless nonlinear distortions or to union-of-subspace signal models. This allows treatment of various finite-rate-of-innovation (FRI) signals that were not studied to date, including, for example, frequency shift keying (FSK) transmissions. Our strategy relies on minimizing the error between the measured samples and those corresponding to our signal estimate. This least-squares (LS) objective is generally non-convex and might possess many local minima. Nevertheless, we prove that under the stability hypothesis, any optimization method designed to trap a stationary point of the LS criterion necessarily converges to the true solution. We demonstrate our approach in the context of recovering finite-duration and periodic pulse streams in settings that were not previously treated. Furthermore, in situations for which other algorithms are applicable, we show that our method is preferable in terms of noise robustness.

From the mostly quantum blog: Efficient Measurement of Quantum Dynamics via Compressive Sensing by A. Shabani, R. L. Kosut, M. Mohseni, H. Rabitz, M. A. Broome, M. P. Almeida, A. Fedrizzi, and A. G. White. The abstract reads:

The resources required to characterize the dynamics of engineered quantum systems—such as quantum computers and quantum sensors—grow exponentially with system size. Here we adapt techniques from compressive sensing to exponentially reduce the experimental configurations required for quantum process tomography. Our method is applicable to processes that are nearly sparse in a certain basis and can be implemented using only single-body preparations and measurements. We perform efficient, high-fidelity estimation of process matrices of a photonic two-qubit logic gate. The database is obtained under various decoherence strengths. Our technique is both accurate and noise robust, thus removing a key roadblock to the development and scaling of quantum technologies.

NUMERICAL EVALUATION OF SAMPLING BOUNDS FOR NEAR-OPTIMAL RECONSTRUCTION IN COMPRESSED SENSING by Yoann Le Montagner, Marcio Marim, Elsa Angelini, Jean-Christophe Olivo-Marin. The abstract reads:

In this paper, we propose an empirical review of the conditions under which the compressed sensing framework allows exact image reconstruction. After a short presentation of the theoretical results related to this subject, we investigate the relevance and the limits of these theoretical results through several numerical reconstructions of some benchmark signals. In particular, we discuss the quantitative and qualitative artifacts that affect the reconstructed signal when reducing the number of measurements in the Fourier domain. Finally, we conclude our study by extending our results to some real microscopic images.

Compressed Sensing with coherent tight frames via $l_q$-minimization for $0 < q \leq 1$ by Li Song, Lin Junhong. The abstract reads:

The aim of this article is to reconstruct a signal from undersampled data in the situation that the signal is sparse in terms of a tight frame. We present a condition, which is independent of the coherence of the tight frame, to guarantee accurate recovery of signals which are sparse in the tight frame, from undersampled data with minimal $l_1$-norm of transform coefficients. This improves the result in [1]. Also, the $l_q$-minimization $(0 < q \leq 1)$ approaches are introduced. We show that under a suitable condition, there exists a value $q_0\in(0,1]$ such that for any $q\in(0,q_0)$, each solution of the $l_q$-minimization approximates the true signal well. In particular, when the tight frame is an identity matrix or an orthonormal basis, all results obtained in this paper appeared in [13] and [26].

Regression Performance of Group Lasso for Arbitrary Design Matrices by Marco F. Duarte, Waheed Bajwa, and Robert Calderbank. The abstract reads:

In many linear regression problems, explanatory variables are activated in groups or clusters; group lasso has been proposed for regression in such cases. This paper studies the nonasymptotic regression performance of group lasso using ℓ1/ℓ2 regularization for arbitrary (random or deterministic) design matrices. In particular, the paper establishes under a statistical prior on the set of nonzero coefficients that the ℓ1/ℓ2 group lasso has a near-optimal regression error for all but a vanishingly small set of models. The analysis in the paper relies on three easily computable metrics of the design matrix – coherence, block coherence, and spectral norm. Remarkably, under certain conditions on these metrics, the ℓ1/ℓ2 group lasso can perform near-ideal regression even if the model order scales almost linearly with the number of rows of the design matrix. This is in stark contrast with prior work on the regression performance of the ℓ1/ℓ2 group lasso that only provides linear scaling of the model order for the case of random design matrices.

The attendant slides are here.
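The three design-matrix metrics that the analysis above relies on really are cheap to compute. Here is a minimal numpy sketch; the matrix, group size, and the exact block-coherence normalization (spectral norm of each off-diagonal Gram block divided by the group size, one common convention) are illustrative assumptions, not the paper's definitions verbatim:

```python
import numpy as np

def design_metrics(A, group_size):
    """Coherence, block coherence, and spectral norm of a design matrix A
    whose columns are partitioned into consecutive equal-size groups."""
    A = A / np.linalg.norm(A, axis=0)   # unit-norm columns
    G = A.T @ A                         # Gram matrix
    coherence = np.max(np.abs(G - np.diag(np.diag(G))))
    n_groups = A.shape[1] // group_size
    mu_block = 0.0
    for i in range(n_groups):
        for j in range(n_groups):
            if i == j:
                continue
            B = G[i * group_size:(i + 1) * group_size,
                  j * group_size:(j + 1) * group_size]
            # spectral norm of the off-diagonal block, scaled by group size
            mu_block = max(mu_block, np.linalg.norm(B, 2) / group_size)
    spectral = np.linalg.norm(A, 2)
    return coherence, mu_block, spectral

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32))       # illustrative 64 x 32 design
mu, mu_b, s = design_metrics(A, group_size=4)
```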

New Wavelet Coefficients Raster Scannings for Deterministic Compressive Imaging by Marco F. Duarte, Sina Jafarpour, and Robert Calderbank. The abstract reads:

The Delsarte-Goethals frame has been proposed for deterministic compressive sensing of sparse and compressible signals. Its performance in compressive imaging applications falls short of that obtained for arbitrary sparse vectors. Prior work has proposed specially tailored signal recovery algorithms that partition the recovery of the input vector into clustered and unclustered portions. In this paper we present a formal analysis of the Delsarte-Goethals frame that shows that its hampered performance in compressive imaging is due to the presence of clustered sparse vectors in its null space. Such a clustered structure is characteristic in commonly-used raster scanning of 2-D wavelet representations. We also show that a simple randomized raster scanning of the wavelet coefficient vector can remedy these shortcomings, allowing the use of standard recovery algorithms. Additionally, we propose an alternative deterministic raster scanning that achieves similar recovery performance. Experimental results confirm the improvements in recovery performance for both the noiseless and noisy measurement regimes.

The attendant slides are here.

Lassoing Line Outages in the Smart Power Grid by Hao Zhu, Georgios B. Giannakis. The abstract reads:

Fast and accurate unveiling of power line outages is of paramount importance not only for preventing faults that may lead to blackouts, but also for routine monitoring and control tasks of the smart grid, including state estimation and optimal power flow. Existing approaches are either challenged by the combinatorial complexity issues involved, and are thus limited to identifying single- and double-line outages; or, they invoke less pragmatic assumptions such as conditionally independent phasor angle measurements available across the grid. Using only a subset of voltage phasor angle data, the present paper develops a near real-time algorithm for identifying multiple line outages at the affordable complexity of solving a quadratic program via block coordinate descent iterations. The novel approach relies on reformulating the DC linear power flow model as a sparse overcomplete expansion, and leveraging contemporary advances in compressive sampling and variable selection using the least-absolute shrinkage and selection operator (Lasso). Analysis and simulated tests on the standard IEEE 118-bus system confirm the effectiveness of lassoing line changes in the smart power grid.

Compressive Sensing for Over-the-Air Ultrasound by Petros Boufounos. The abstract reads:

The advent of Compressive Sensing has provided significant mathematical tools to enhance the sensing capabilities of hardware devices. In this paper we apply Compressive Sensing to improve over-the-air ultrasonic sensing capabilities. We demonstrate that using an appropriate scene model it is possible to pose three-dimensional surface reconstruction of a scene as a sparse recovery problem. By transmitting incoherent wideband ultrasonic pulses and receiving their reflections a sensor array can sense the scene and reconstruct it using standard CS reconstruction algorithms. We further demonstrate that it is possible to construct virtual arrays that exploit the sensors' motion. Thus we can obtain three-dimensional scene reconstruction using a linear mobile array.

Robust Support Recovery Using Sparse Compressive Sensing Matrices by Jarvis Haupt and Richard Baraniuk. The abstract reads:

This paper considers the task of recovering the support of a sparse, high-dimensional vector from a small number of measurements. The procedure proposed here, which we call the Sign-Sketch procedure, is shown to be a robust recovery method in settings where the measurements are corrupted by various forms of uncertainty, including additive Gaussian noise and (possibly unbounded) outliers, and even subsequent quantization of the measurements to a single bit. The Sign-Sketch procedure employs sparse random measurement matrices, and utilizes a computationally efficient support recovery procedure that is a variation of a technique from the sketching literature. We show here that O(max{k log(n − k), k log k}) non-adaptive linear measurements suffice to recover the support of any unknown n-dimensional vector having no more than k nonzero entries, and that our proposed procedure requires at most O(n log n) total operations for both acquisition and inference.

Single-channel and Multi-channel Sinusoidal Audio Coding Using Compressed Sensing by Anthony Griffin, Toni Hirvonen, Christos Tzagkarakis, Athanasios Mouchtaris, and Panagiotis Tsakalides. The abstract reads:

Compressed sensing (CS) samples signals at a much lower rate than the Nyquist rate if they are sparse in some basis. In this paper, the CS methodology is applied to sinusoidally modeled audio signals. As this model is sparse by definition in the frequency domain (being equal to the sum of a small number of sinusoids), we investigate whether CS can be used to encode audio signals at low bitrates. In contrast to encoding the sinusoidal parameters (amplitude, frequency, phase) as current state-of-the-art methods do, we propose encoding few randomly selected samples of the time-domain description of the sinusoidal component (per signal segment). The potential of applying compressed sensing both to single-channel and multichannel audio coding is examined. The listening test results are encouraging, indicating that the proposed approach can achieve comparable performance to that of state-of-the-art methods. Given that CS can lead to novel coding systems where the sampling and compression operations are combined into one low complexity step, the proposed methodology can be considered as an important step towards applying the CS framework to audio coding applications.
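As a toy illustration of the idea (not the authors' coder), a signal that is sparse in the frequency domain can be recovered from a few random time-domain samples with plain Orthogonal Matching Pursuit. All sizes, seeds, and the 2k-iteration stopping rule below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 3                     # signal length, kept samples, sinusoids

# A real signal that is 2k-sparse in the orthonormal DFT basis
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # analysis matrix: c = F x
support = rng.choice(n // 2 - 1, size=k, replace=False) + 1
c = np.zeros(n, dtype=complex)
c[support] = 1.0
c[n - support] = 1.0                     # conjugate symmetry -> real signal
x = np.real(F.conj().T @ c)

# Measurements: m randomly selected time-domain samples of x
idx = rng.choice(n, size=m, replace=False)
y = x[idx].astype(complex)
A = F.conj().T[idx, :]                   # sensing matrix acting on DFT coefficients

# Plain Orthogonal Matching Pursuit, 2k iterations (one per conjugate pair)
residual, sel = y.copy(), []
for _ in range(2 * k):
    sel.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    sol, *_ = np.linalg.lstsq(A[:, sel], y, rcond=None)
    residual = y - A[:, sel] @ sol

c_hat = np.zeros(n, dtype=complex)
c_hat[sel] = sol
x_rec = np.real(F.conj().T @ c_hat)
err = np.linalg.norm(x_rec - x) / np.linalg.norm(x)
```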

L_1 minimisation with noisy data by P. Wojtaszczyk. The abstract reads:

Compressed sensing aims at recovering a sparse signal x ∈ R^N from few nonadaptive, linear measurements Φ(x) given by a measurement matrix Φ. One of the fundamental recovery algorithms is l_1 minimisation. In this paper we investigate the situation when our measurement Φ(x) is contaminated by arbitrary noise under the assumption that the matrix Φ satisfies the restricted isometry property. This complements results from [4] and [8].

Compressive Spectral Estimation Can Lead to Improved Resolution/Complexity Tradeoffs by Michael A. Lexa, Mike E. Davies, John S. Thompson. The summary reads:

Compressed sensing (CS) has primarily two modes of acquiring measurements of sparse signals. One is by taking inner product measurements described by an underdetermined linear system of equations y = Ax, where y ∈ R^m represents the measurements gathered about a sparse signal x ∈ R^n of interest. In this setting, the matrix A ∈ R^{m×n} is chosen to possess a particular property, namely the restricted isometry property, and the measurements are acquired by computing inner products between x and the rows of A. Alternatively, one can acquire CS measurements by sampling x at random locations (random point evaluations). In this case, an underdetermined linear system also relates the measurements to a higher dimensional representation, but the measurements are acquired differently—random samples are not acquired as inner products.
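The distinction between the two acquisition modes is easy to make concrete; the sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 128, 32
x = rng.standard_normal(n)

# Mode 1: inner-product measurements y = A x with an RIP-style random matrix;
# acquisition computes m inner products against the rows of A
A = rng.standard_normal((m, n)) / np.sqrt(m)
y_ip = A @ x

# Mode 2: random point evaluations -- read x at m random locations;
# no multiplications are performed at acquisition time
idx = rng.choice(n, size=m, replace=False)
y_pt = x[idx]

# Mathematically, mode 2 is still a linear system y_pt = S x,
# where S is a row-subsampled identity matrix
S = np.eye(n)[idx, :]
```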

A Compressed Sensing Wire-Tap Channel by Galen Reeves, Naveen Goela, Nebojsa Milosavljevic, Michael Gastpar. The abstract reads:

A multiplicative Gaussian wire-tap channel inspired by compressed sensing is studied. Lower and upper bounds on the secrecy capacity are derived, and shown to be relatively tight in the large system limit for a large class of compressed sensing matrices. Surprisingly, it is shown that the secrecy capacity of this channel is nearly equal to the capacity without any secrecy constraint provided that the channel of the eavesdropper is strictly worse than the channel of the intended receiver. In other words, the eavesdropper can see almost everything and yet learn almost nothing. This behavior, which contrasts sharply with that of many commonly studied wiretap channels, is made possible by the fact that a small number of linear projections can make a crucial difference in the ability to estimate sparse vectors.

The LASSO for generic design matrices as a function of the relaxation parameter by Stephane Chretien, Sebastien Darses. The abstract reads:

The LASSO is a variable subset selection procedure in statistical linear regression based on $\ell_1$ penalization of the least-squares operator. Its behavior crucially depends, both in practice and in theory, on the ratio between the fidelity term and the penalty term. We provide a detailed analysis of the fidelity vs. penalty ratio as a function of the relaxation parameter. Our study is based on a general position condition on the design matrix which holds with probability one for most experimental models. Along the way, the proofs of some well known basic properties of the LASSO are provided from this new generic point of view.

Online Sparse System Identification and Signal Reconstruction using Projections onto Weighted ℓ1 Balls by Yannis Kopsinis, Konstantinos Slavakis, and Sergios Theodoridis. The abstract reads:

This paper presents a novel projection-based adaptive algorithm for sparse signal and system identification. The sequentially observed data are used to generate an equivalent sequence of closed convex sets, namely hyperslabs. Each hyperslab is the geometric equivalent of a cost criterion that quantifies “data mismatch”. Sparsity is imposed by the introduction of appropriately designed weighted ℓ1 balls and the related projection operator is also derived. The algorithm develops around projections onto the sequence of the generated hyperslabs as well as the weighted ℓ1 balls. The resulting scheme exhibits linear dependence, with respect to the unknown system’s order, on the number of multiplications/additions and an O(L log2 L) dependence on sorting operations, where L is the length of the system/signal to be estimated. Numerical results are also given to validate the performance of the proposed method against the LASSO algorithm and two very recently developed adaptive sparse schemes that fuse arguments from the LMS/RLS adaptation mechanisms with those imposed by the Lasso rationale.
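The ℓ1-ball projection is the workhorse operator here, and the sorting step is where the O(L log L) cost comes from. The sketch below implements only the simplest unweighted special case via the standard sorting-based algorithm (the paper's weighted balls would require incorporating the weights before the sort):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto {w : ||w||_1 <= radius} using the
    classical sorting-based algorithm; the sort dominates at O(L log L)."""
    if np.abs(v).sum() <= radius:
        return v.copy()                          # already inside the ball
    u = np.sort(np.abs(v))[::-1]                 # magnitudes, descending
    css = np.cumsum(u)
    # largest index rho with u_rho > (css_rho - radius) / (rho + 1)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1)      # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

v = np.array([3.0, -1.0, 0.5])
w = project_l1_ball(v, radius=2.0)
```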

Message Passing Aided Least Square Recovery for Compressive Sensing by Jaewook Kang, Heung-No Lee and Kiseon Kim. The introduction reads:

Compressive sensing (CS) has gained attention as a promising signal processing technique to reduce the information rate of sparse signals [1]. One line of CS-related research is to devise low-complexity recovery algorithms, since the conventional L1-norm based recovery algorithms still have high computational complexity for practical applications. Recently, a few researchers have attempted to apply probabilistic message passing (PMP) ideas to CS recovery [2], [3], since PMP has provided a successful solution for low-complexity decoding while showing suboptimal performance in channel coding problems, such as low-density parity check codes [4]. Motivated by such previous works, in this paper we propose a new least square estimation (LSE) based CS recovery algorithm applying PMP, called PMP-LSE. It is well known that CS recovery is basically an underdetermined system and that it can be reformed as an overdetermined system with the support set information (SI). Therefore, in the proposed algorithm, PMP undertakes to find the SI of the signal to reform the recovery to an overdetermined case, and then LSE completes the recovery using the SI. PMP-LSE has two strong benefits. First, PMP-LSE shows outstanding performance with noisy measurements by removing the noise effect from elements belonging to the non-support set. Second, PMP-LSE prevents the recovery from diverging. Under certain conditions, PMP-based algorithms fail in the recovery due to divergence caused by a large number of iterations. In the proposed algorithm, however, the possibility of divergence decreases greatly since PMP is only used to search the SI with a few iterations.
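The reform-to-overdetermined step is easy to see in isolation. In the sketch below the support is simply given (standing in for the PMP stage, whose details the paper carries), after which recovery reduces to ordinary least squares on the selected columns; all dimensions and the noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n))          # underdetermined: 40 equations, 100 unknowns
supp = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[supp] = rng.standard_normal(k)
y = A @ x + 0.01 * rng.standard_normal(m)    # noisy measurements

# With the support set in hand, the system restricted to those columns is
# overdetermined (40 equations, 5 unknowns), so plain LSE finishes the job
sol, *_ = np.linalg.lstsq(A[:, supp], y, rcond=None)
x_hat = np.zeros(n)
x_hat[supp] = sol
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```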

l_0 Minimization for Wavelet Frame Based Image Restoration by Yong Zhang, Bin Dong, Zhaosong Lu. The abstract reads:

The theory of (tight) wavelet frames has been extensively studied in the past twenty years and they are currently widely used for image restoration and other image processing and analysis problems. The success of wavelet frame based models, including the balanced approach [20, 8] and the analysis based approach [13, 32, 49], is due to their capability of sparsely approximating piecewise smooth functions like images. Motivated by the balanced approach and the analysis based approach, we shall propose a wavelet frame based l_0 minimization model, where the l_0 "norm" of the frame coefficients is penalized. We adapt the penalty decomposition (PD) method of [40] to solve the proposed optimization problem. Numerical results show that the proposed model solved by the PD method can generate images with better quality than those obtained by either the analysis based approach or the balanced approach, in terms of restoring sharp features as well as maintaining smoothness of the recovered images. Some convergence analysis of the PD method will also be provided.

GROUP SPARSE OPTIMIZATION BY ALTERNATING DIRECTION METHOD by Wei Deng, Wotao Yin, and Yin Zhang. The abstract reads:

This paper proposes efficient algorithms for group sparse optimization with mixed ℓ2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The ℓ2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional ℓ1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the ℓ2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.
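The subproblem that makes variable splitting attractive for ℓ2,1-regularization has a closed form: group-wise soft-thresholding, where each group shrinks toward zero and vanishes entirely once its norm drops below the threshold. A sketch with illustrative (non-overlapping) groups and threshold:

```python
import numpy as np

def group_shrink(z, groups, tau):
    """Proximal operator of tau * sum_g ||z_g||_2 (the l_{2,1} norm):
    shrink each group's norm by tau, zeroing groups with norm below tau."""
    out = np.zeros_like(z)
    for g in groups:
        ng = np.linalg.norm(z[g])
        if ng > tau:
            out[g] = (1.0 - tau / ng) * z[g]   # scale the group toward zero
    return out

z = np.array([3.0, 4.0, 0.1, 0.2])
groups = [np.array([0, 1]), np.array([2, 3])]
# first group has norm 5 -> shrunk by factor 0.8; second has norm ~0.22 -> zeroed
w = group_shrink(z, groups, tau=1.0)
```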

WEIGHTED FIDELITY IN NON-UNIFORMLY QUANTIZED COMPRESSED SENSING by Laurent Jacques, David Kenric Hammond, Jalal Fadili. The abstract reads:

Following the Compressed Sensing (CS) paradigm, this paper studies the problem of recovering sparse or compressible signals from (scalar) non-uniformly quantized measurements. We show that a simple adaptation of the Basis Pursuit DeQuantizer introduced earlier, that is, a sign-sensitive weighting of their ℓp-norm fidelity constraint, yields good SNR improvements in the signal reconstruction. As a good indication of the origin of this improvement, we prove theoretically that a similar decoder, using a particular side-position-to-level oracle, displays a reduction of the reconstruction error when both the number of measurements and the moment p of the constraint increase. This follows the oversampling principle underlined in our previous study for uniformly quantized CS, with an additional gain provided by the non-uniform quantization. We conclude this paper by showing the efficiency of the approach on 1-D and 2-D signal examples.

The following are abstracts or articles behind a firewall or secured pdfs!!! Some people really don't want to share their insights.

- A Sparse Dual Frame Approach to Compressed Sensing with General Frames by Shidong Li
- Sigma-Delta Quantization for Compressed Sensing Measurements by Alexander Powell
- Sparsity-promoting recovery from simultaneous data: a compressive sensing approach by Haneet Wason, Felix J. Herrmann, and Tim T.Y. Lin.
- http://anhnt.info/uploads/PCS_2010_proceeding/pdf/p90_P1-18.pdf (this pdf is secured!!!)
- Extensions of Compressed Imaging: Flying Sensor, Coded Mask, and Fast Decoding
- MEASUREMENT MATRICES FOR COMPRESSIVE SENSING VIA COLUMN REPLACEMENT by CHARLES J. COLBOURN

A new version of the following paper showed up on my radar screen:

- Reconciling Compressive Sampling Systems for Spectrally-sparse Continuous-time Signals by Michael A. Lexa, Mike E. Davies, John S. Thompson

A poster:

- Look Ahead Orthogonal Matching Pursuit by Saikat Chatterjee, Dennis Sundman, Mikael Skoglund

Finally, here is a presentation in Germany to be taken in the context of the recent entry on rank minimization:

At HCM Uni Bonn, 2.15 - 3.45pm, May 26, 2011, LWK 0.006

Extending the idea of compressive sensing to tensors by Zeljka Stojanac (Bonn), Hausdorff Center for Mathematics, University of Bonn

Compressive Sensing has become a major tool in science, especially in the past decade. Because of its many applications, the extension of the idea to the matrix case has already been done. It has been shown that finding the sparsest solution corresponds to finding the matrix of minimal rank and l1-minimization corresponds to the nuclear norm minimization. However, the extension to the tensor case has not been done yet. At the beginning of the seminar we will give a short introduction to tensors. Then, we will introduce several tensor decompositions (and corresponding notions of rank) that naturally arise and show some of their properties. We will also try to give an intuition as to why they could be used in Compressive Sensing.
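For the matrix case mentioned in the abstract, the analogy can be checked directly: rank counts the nonzero singular values (the ℓ0 analogue), and the nuclear norm is the ℓ1 norm of the singular-value vector. A quick numpy illustration with an arbitrary rank-2 matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
# A rank-2 matrix: the "sparsity" lives in its singular-value vector
M = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))
s = np.linalg.svd(M, compute_uv=False)

rank = int(np.sum(s > 1e-10))    # l0 of the singular values
nuclear = s.sum()                # l1 of the singular values (nuclear norm)
```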

Credit: NASA, Endeavour's last Rendezvous Pitch Maneuver (RPM)
