Today is the jumbo-sized edition as I am catching up on the past two weeks. There is a gem in here somewhere for you. Don't pay attention to the order, it's a mess. Enjoy!
Occlusion Detection and Motion Estimation via Convex Optimization by Michalis Raptis, Alper Ayvaci and Stefano Soatto. The abstract reads:
LOCALIZING POINT SOURCES IN DIFFUSION FIELDS FROM SPATIOTEMPORAL SAMPLES by Yue M. Lu, Pier Luigi Dragotti and Martin Vetterli. The abstract reads:
Interest Zone Matrix Approximation by Gil Shabat and Amir Averbuch. The abstract reads:
Patch-to-Tensor Embedding by Guy Wolf, Moshe Salhov, Amir Averbuch. The abstract reads:
Subspace Expanders and Matrix Rank Minimization by Amin Khajehnejad, Samet Oymak, Babak Hassibi. The abstract reads:
Improved RIP Analysis of Orthogonal Matching Pursuit by Ray Maleh. The abstract reads:
Structure-Aware Sampling: Flexible and Accurate Summarization by Edith Cohen, Graham Cormode, Nick Duffield. The abstract reads:
Sparse Bayesian Methods for Low-Rank Matrix Estimation by S. Derin Babacan, Martin Luessi, Rafael Molina, Aggelos K. Katsaggelos. The abstract reads:
An Adaptive Inverse Scale Space Method for Compressed Sensing by Martin Burger, Michael Moller, Martin Benning, Stanley Osher. The abstract reads:
RECONSTRUCTING AND SEGMENTING HYPERSPECTRAL IMAGES FROM COMPRESSED MEASUREMENTS by Qiang Zhang, Robert Plemmons, David Kittle, David Brady, Sudhakar Prasad. The abstract reads:
Belief propagation for joint sparse recovery by Jongmin Kim, Woohyuk Chang, Bangchul Jung, Dror Baron, Jong Chul Ye. The abstract reads:
Density Evolution Analysis of Node-Based Verification-Based Algorithms in Compressed Sensing by Yaser Eftekhari, Anoosheh Heidarzadeh, Amir H. Banihashemi, Ioannis Lambadaris. The abstract reads:
Phase Transition in Limiting Distributions of Coherence of High-Dimensional Random Matrices by Tony Cai, Tiefeng Jiang. The abstract reads:
Limiting Laws of Coherence of Random Matrices with Applications to Testing Covariance Structure and Construction of Compressed Sensing Matrices by Tony Cai, Tiefeng Jiang. The abstract reads:
Modified Orthogonal Matching Pursuit Algorithm for Cognitive Radio Wideband Spectrum Sensing by Peng Zhang, Robert Qiu. The abstract reads:
Bounds on the Reconstruction of Sparse Signal Ensembles from Distributed Measurements by Marco F. Duarte, Michael B. Wakin, Dror Baron, Shriram Sarvotham, Richard G. Baraniuk. The abstract reads:
A Novel Sparsity-Promoted Approach to State Estimation from Dynamic Observation by Lianlin Li, Behnam Jafarpour. The abstract reads:
And finally, just the abstract: SUBTYPING OF LEUKEMIA WITH GENE EXPRESSION ANALYSIS USING COMPRESSIVE SENSING METHOD by Tang W, Cao H, Wang Y. The abstract reads:
First, as discovered by Jordan earlier, it looks like we have an instance of compressive clustering. Here is the paper: Active Clustering: Robust and Efficient Hierarchical Clustering using Adaptively Selected Similarities by Brian Eriksson, Gautam Dasarathy, Aarti Singh, Robert Nowak. The abstract reads:
Hierarchical clustering based on pairwise similarities is a common tool used in a broad range of scientific applications. However, in many problems it may be expensive to obtain or compute similarities between the items to be clustered. This paper investigates the hierarchical clustering of N items based on a small subset of pairwise similarities, significantly less than the complete set of N(N-1)/2 similarities. First, we show that if the intracluster similarities exceed intercluster similarities, then it is possible to correctly determine the hierarchical clustering from as few as 3N log N similarities. We demonstrate that this order-of-magnitude savings in the number of pairwise similarities necessitates sequentially selecting which similarities to obtain in an adaptive fashion, rather than picking them at random. We then propose an active clustering method that is robust to a limited fraction of anomalous similarities, and show how even in the presence of these noisy similarity values we can resolve the hierarchical clustering using only O(N log^2 N) pairwise similarities.
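For a sense of scale (my numbers, not the paper's; natural logs used and the quoted constants taken at face value), here is the count comparison for N = 10,000 items:

```python
import math

def similarity_counts(n):
    """Number of pairwise similarities: the full set vs. the adaptive
    sampling rates quoted in the abstract (up to constants)."""
    full = n * (n - 1) // 2          # all N(N-1)/2 pairs
    noiseless = 3 * n * math.log(n)  # ~3N log N, noiseless regime
    robust = n * math.log(n) ** 2    # ~N log^2 N, robust regime
    return full, noiseless, robust

full, noiseless, robust = similarity_counts(10_000)
print(f"full: {full:,}  ~3N log N: {noiseless:,.0f}  ~N log^2 N: {robust:,.0f}")
```

Roughly 50 million similarities collapse to a few hundred thousand — the whole point of selecting them adaptively.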
Target Detection Performance Bounds in Compressive Spectral Imaging by Kalyani Krishnamurthy, Rebecca Willett, and Maxim Raginsky. The abstract reads:
This paper describes computationally efficient approaches and associated theoretical performance guarantees for the detection of known spectral targets and spectral anomalies from few projection measurements of spectral images. The proposed approaches accommodate spectra of different signal strengths contaminated by a colored Gaussian background, and perform detection without reconstructing the spectral image from the observations. The theoretical performance bounds of the target detector highlight fundamental tradeoffs among the number of measurements collected, amount of background signal present, signal-to-noise ratio, and similarity among potential targets in a known spectral dictionary. The anomaly detector is designed to control the number of false discoveries below a desired level and can be adapted to uncertainties in the user’s knowledge of the spectral dictionary. Unlike approaches based on the principles of compressed sensing, the proposed approach does not depend on a known sparse representation of targets; rather, the theoretical performance bounds exploit the structure of a known dictionary of targets and the distance preservation property of the measurement matrix. Simulation experiments illustrate the practicality and effectiveness of the proposed approaches.
SHORT AND SMOOTH SAMPLING TRAJECTORIES FOR COMPRESSED SENSING by Rebecca Willett. The abstract reads:
This paper explores a novel setting for compressed sensing (CS) in which the sampling trajectory length is a critical bottleneck and must be minimized subject to constraints on the desired reconstruction accuracy. In contrast to the existing CS literature, where the focus is on reducing the number of measurements, this contribution describes a short and smooth sampling trajectory guaranteed to satisfy the Restricted Isometry Property if the underlying signal is sparse in an appropriate basis. A naïve path based on randomly choosing a collection of sample locations and using a Traveling Salesman Problem solver to choose a “short” trajectory is shown to be dramatically longer than the nearly straight and very smooth path proposed in this paper. Theoretical justification for the proposed path is presented, and applications to MRI, electromagnetics, and ecosystem monitoring are discussed.
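To see why the naïve approach yields long trajectories, here is a toy sketch of mine (not from the paper): a greedy nearest-neighbour stand-in for a TSP solver shows the tour length through n random sample locations growing roughly like sqrt(n):

```python
import numpy as np

def nn_tour_length(points):
    """Greedy nearest-neighbour tour length through 2-D points
    (a cheap stand-in for a Traveling Salesman Problem solver)."""
    remaining = list(range(1, len(points)))
    cur, total = 0, 0.0
    while remaining:
        d = np.linalg.norm(points[remaining] - points[cur], axis=1)
        k = int(np.argmin(d))
        total += d[k]
        cur = remaining.pop(k)
    return total

rng = np.random.default_rng(0)
l100 = nn_tour_length(rng.random((100, 2)))
l400 = nn_tour_length(rng.random((400, 2)))
print(l100, l400, l400 / l100)  # quadrupling n roughly doubles the tour
```

So taking more random samples makes the path longer without bound, which is exactly the bottleneck the smooth trajectory is designed to avoid.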
Cooperative Wideband Spectrum Sensing for the Centralized Cognitive Radio Network by Peng Zhang, Robert Qiu. The abstract reads:
Various primary user (PU) radios have been allocated into fixed frequency bands in the whole spectrum. A cognitive radio network (CRN) should be able to perform the wideband spectrum sensing (WSS) to detect temporarily unoccupied frequency bands. We summarize four occupancy features for the frequency bands. 1. The occupancy is sparse; 2. The frequency band allocation information is fixed and common; 3. There are three categories for the frequency band usages; 4. The occupied frequency bands are common in the CRN. For the first time, we consider all features as the prior knowledge in the compressed sensing based cooperative WSS (CWSS) algorithm design for a centralized CRN. We propose a modified orthogonal matching pursuit (Mod-OMP) algorithm and a modified simultaneous orthogonal matching pursuit (Mod-SOMP) algorithm for the CWSS. We compare the CWSS performance of Mod-OMP/Mod-SOMP with the original OMP/SOMP and show the performance improvements
An Adaptive Algorithm for Compressive Approximation of Trajectory (AACAT) for Delay Tolerant Networks by Rajib Rana, Wen Hu, Tim Wark, and Chun Tung Chou. The abstract reads:
Highly efficient compression provides a promising approach to address the transmission and computation challenges imposed by moving object tracking applications on resource constrained Wireless Sensor Networks (WSNs). In this paper, we propose and design a Compressive Sensing (CS) based trajectory approximation algorithm, Adaptive Algorithm for Compressive Approximation of Trajectory (AACAT), which performs trajectory compression, so as to maximize the information about the trajectory subject to limited bandwidth. Our extensive evaluation using “real” trajectories of three different object groups (animals, pedestrians and vehicles) shows that CS-based trajectory compression reduces up to 30% transmission overheads, for given information loss bounds, compared to the state-of-the-art trajectory compression algorithms. We implement AACAT on the resource-impoverished sensor nodes, which shows that AACAT achieves high compression performance with very limited resource (computation power and energy) overheads.
THE VALUE OF REDUNDANT MEASUREMENT IN COMPRESSED SENSING by Victoria Kostina, Marco Duarte, Sina Jafarpour, Robert Calderbank. The abstract reads:
The aim of compressed sensing is to recover attributes of sparse signals using very few measurements. Given an overall bit budget for quantization, this paper demonstrates that there is value to redundant measurement. The measurement matrices considered here are required to have the property that signal recovery is still possible even after dropping certain subsets of D measurements. It introduces the concept of a measurement matrix that is weakly democratic in the sense that the amount of information about the signal carried by each of the designated D-subsets is the same. Examples of deterministic measurement matrices that are weakly democratic are constructed by exponentiating codewords from the binary second order Reed Muller code. The value in rejecting D measurements that are on average larger is to be able to provide a finer grid for vector quantization of the remaining measurements, even after discounting the original budget by the bits used to identify the reject set. Simulation results demonstrate that redundancy improves recovery SNR, sometimes by a wide margin. Optimum performance occurs when a significant fraction of measurements are rejected.
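The budget arithmetic behind the scheme can be sketched as follows (toy numbers of mine; the paper's quantizer design is not reproduced): rejecting d of m measurements costs about ceil(log2 C(m, d)) bits to identify the reject set, yet can still leave more bits per kept measurement:

```python
import math

def bits_per_kept_measurement(total_bits, m, d):
    """Bits left per kept measurement after rejecting d of m measurements
    and paying ceil(log2 C(m, d)) bits to identify the reject set."""
    overhead = math.ceil(math.log2(math.comb(m, d))) if d else 0
    return (total_bits - overhead) / (m - d)

B, m = 512, 64
for d in (0, 8, 16):
    print(d, bits_per_kept_measurement(B, m, d))
```

Even after paying for the reject-set index, the survivors each get a finer quantizer, which is the "value of redundant measurement" in the title.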
NEW WAVELET COEFFICIENT RASTER SCANNINGS FOR COMPRESSIVE IMAGING by Marco Duarte, Sina Jafarpour, Robert Calderbank. The abstract reads:
The Delsarte-Goethals frame has been proposed for deterministic compressive sensing of sparse and compressible signals. Its performance in compressive imaging applications falls short of that obtained for arbitrary sparse vectors. Prior work has proposed specially tailored signal recovery algorithms that partition the recovery of the input vector into clustered and unclustered portions. In this paper we present a formal analysis of the Delsarte-Goethals frame that shows that its hampered performance in compressive imaging is due to the presence of clustered sparse vectors in its null space. Such a clustered structure is characteristic in commonly-used raster scanning of 2- D wavelet representations. We also show that a simple randomized raster scanning of the wavelet coefficient vector can remedy these shortcomings, allowing the use of standard recovery algorithms. Additionally, we propose an alternative deterministic raster scanning that achieves similar recovery performance. Experimental results confirm the improvements in recovery performance for both the noiseless and noisy measurement regimes.
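The randomized raster scanning mentioned here is conceptually as simple as a fixed, invertible permutation of the scanned coefficients. A sketch (mine; the paper's specific deterministic scanning is not reproduced):

```python
import numpy as np

def randomized_raster(coeffs, seed=0):
    """Raster-scan a 2-D coefficient array, then apply a fixed (seeded,
    hence repeatable) random permutation to break up clustered supports."""
    flat = coeffs.ravel()                       # standard raster scan
    perm = np.random.default_rng(seed).permutation(flat.size)
    return flat[perm], perm

def invert_raster(scanned, perm, shape):
    """Undo the permutation and restore the 2-D layout."""
    flat = np.empty_like(scanned)
    flat[perm] = scanned
    return flat.reshape(shape)

w = np.arange(16.0).reshape(4, 4)               # stand-in coefficient array
scanned, perm = randomized_raster(w)
assert np.array_equal(invert_raster(scanned, perm, w.shape), w)
```

Because the permutation is fixed and seeded, encoder and decoder agree on it, and clustered wavelet supports get spread out before measurement.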
Curvelet-Wavelet Regularized Split Bregman Iteration for Compressed Sensing by Gerlind Plonka, Jianwei Ma. The abstract reads:
Compressed sensing is a new concept in signal processing. Assuming that a signal can be represented or approximated by only a few suitably chosen terms in a frame expansion, compressed sensing allows to recover this signal from much fewer samples than the Shannon-Nyquist theory requires. Many images can be sparsely approximated in expansions of suitable frames as wavelets, curvelets, wave atoms and others. Generally, wavelets represent point-like features while curvelets represent line-like features well. For a suitable recovery of images, we propose models that contain weighted sparsity constraints in two different frames. Given the incomplete measurements $f = \Phi u + \epsilon$ with the measurement matrix $\Phi \in \mathbb{R}^{K \times N}$, $K \ll N$, we consider a jointly sparsity-constrained optimization problem of the form $\operatorname{argmin}_u \{ \|\Lambda_c \Psi_c u\|_1 + \|\Lambda_w \Psi_w u\|_1 + \tfrac{1}{2} \|f - \Phi u\|_2^2 \}$. Here $\Psi_c$ and $\Psi_w$ are the transform matrices corresponding to the two frames, and the diagonal matrices $\Lambda_c$, $\Lambda_w$ contain the weights for the frame coefficients. We present efficient iteration methods to solve the optimization problem, based on Alternating Split Bregman algorithms. The convergence of the proposed iteration schemes will be proved by showing that they can be understood as special cases of the Douglas-Rachford Split algorithm. Numerical experiments for compressed sensing based Fourier-domain random imaging show good performances of the proposed curvelet-wavelet regularized split Bregman (CWSpB) methods, where we particularly use a combination of wavelet and curvelet coefficients as sparsity constraints.
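The building block of split Bregman and related iterations is the soft-shrinkage operator. As a much-simplified stand-in for the weighted two-frame model (one frame, no weights, plain ISTA — my toy, not the authors' algorithm):

```python
import numpy as np

def shrink(x, t):
    """Soft-shrinkage, the elementwise proximal map of t*||.||_1 that
    split Bregman (and ISTA) iterations are built from."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
K, N = 40, 120
Phi = rng.standard_normal((K, N)) / np.sqrt(K)
u_true = np.zeros(N); u_true[[5, 50, 100]] = [3.0, -2.0, 4.0]
f = Phi @ u_true

# Plain ISTA on 0.5||f - Phi u||^2 + lam ||u||_1 -- a single-frame
# simplification of the weighted two-frame model described above.
lam, step = 0.01, 1.0 / np.linalg.norm(Phi, 2) ** 2
u = np.zeros(N)
for _ in range(2000):
    u = shrink(u + step * Phi.T @ (f - Phi @ u), step * lam)
print(np.linalg.norm(u - u_true))  # residual error (small)
```

The full CWSpB method alternates analogous shrinkage steps over the curvelet and wavelet coefficients, with the weights folded into the thresholds.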
Nonlinear regression modeling via Compressed Sensing by Hiroshi Inoue, Shohei Tateishi and Sadanori Konishi. The abstract reads:
We consider the problem of constructing nonlinear regression models in the case that the dimension of the data is less than the number of basis functions. We propose a procedure in which the smooth curve is effectively estimated with the help of a regularization technique. We give a simple idea for applying the Restricted Isometry Property to curve fitting. Simulation results and real data analysis demonstrate that our methodology performs well in various situations.
A signal recovery algorithm for sparse matrix based compressed sensing by Yoshiyuki Kabashima, Tadashi Wadayama. The abstract reads:
We have developed an approximate signal recovery algorithm with low computational cost for compressed sensing on the basis of randomly constructed sparse measurement matrices. The law of large numbers and the central limit theorem suggest that the developed algorithm saturates the Donoho-Tanner weak threshold for the perfect recovery when the matrix becomes as dense as the signal size $N$ and the number of measurements $M$ tend to infinity keeping $\alpha=M/N \sim O(1)$, which is supported by extensive numerical experiments. Even when the numbers of non-zero entries per column/row in the measurement matrices are limited to $O(1)$, numerical experiments indicate that the algorithm can still typically recover the original signal perfectly with an $O(N)$ computational cost per update as well if the density $\rho$ of non-zero entries of the signal is lower than a certain critical value $\rho_{\rm th}(\alpha)$ as $N,M \to \infty$.
A simple proof of Kashin’s decomposition theorem by Simon Foucart. The abstract reads:
Compressive Sensing techniques are used in a short proof of Kashin’s decomposition theorem generalized to $\ell_p$-spaces for $p \le 1$. The proof is based on the observation that the null-space of a properly-sized matrix with restricted isometry property is almost Euclidean when endowed with the $\ell_p$-quasinorm.
PERTURBATIONS OF MEASUREMENT MATRICES AND DICTIONARIES IN COMPRESSED SENSING by Akram Aldroubi, Xuemei Chen, and Alexander Powell. The abstract reads:
The compressed sensing problem for redundant dictionaries aims to use a small number of linear measurements to represent signals that are sparse with respect to a general dictionary. Under an appropriate restricted isometry property for a dictionary, reconstruction methods based on l_q minimization are known to provide an effective signal recovery tool in this setting. This note explores conditions under which l_q minimization is robust to measurement noise, and stable with respect to perturbations of the sensing matrix A and the dictionary D. We propose a new condition, the D null space property, which guarantees that l_q minimization produces solutions that are robust and stable against perturbations of A and D. We also show that l_q minimization is jointly stable with respect to imprecise knowledge of the measurement matrix A and the dictionary D when A satisfies the restricted isometry property.
When a matrix A with n columns is known to be well approximated by a linear combination of basis matrices B1, . . . , Bp, we can apply A to a random vector and solve a linear system to recover this linear combination. The same technique can be used to recover an approximation to A^{-1}. A basic question is whether this linear system is invertible and well-conditioned. In this paper, we show that if the Gram matrix of the Bj ’s is sufficiently well-conditioned and each Bj has a high numerical rank, then n ∝ p log^2 p will ensure that the linear system is well-conditioned with high probability. Our main application is probing linear operators with smooth pseudodifferential symbols such as the wave equation Hessian in seismic imaging [6]. We demonstrate numerically that matrix probing can also produce good preconditioners for inverting elliptic operators in variable media.
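In the noiseless case the probing idea reduces to a small least-squares system; a minimal sketch (random basis matrices of my choosing, not the pseudodifferential symbols of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 4
B = [rng.standard_normal((n, n)) for _ in range(p)]   # basis matrices B_1..B_p
c_true = np.array([2.0, -1.0, 0.5, 3.0])
A = sum(c * Bj for c, Bj in zip(c_true, B))           # A = sum_j c_j B_j

# Probe: apply A to a single random vector, then solve an n x p
# least-squares system for the p expansion coefficients.
x = rng.standard_normal(n)
M = np.column_stack([Bj @ x for Bj in B])
c_hat, *_ = np.linalg.lstsq(M, A @ x, rcond=None)
print(np.allclose(c_hat, c_true))                     # exact recovery here
```

One matrix-vector product with A suffices because n is much larger than p; the paper's contribution is quantifying when this system is well-conditioned.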
Robust Lower Bounds for Communication and Stream Computation by Amit Chakrabarti, Graham Cormode, Andrew McGregor. (the slides are here). The abstract reads:
We study the communication complexity of evaluating functions when the input data is randomly allocated (according to some known distribution) amongst two or more players, possibly with information overlap. This naturally extends previously studied variable partition models such as the best-case and worst-case partition models [36, 33]. We aim to understand whether the hardness of a communication problem holds for almost every allocation of the input, as opposed to holding for perhaps just a few atypical partitions. A key application is to the heavily studied data stream model. There is a strong connection between our communication lower bounds and lower bounds in the data stream model that are “robust” to the ordering of the data. That is, we prove lower bounds for when the order of the items in the stream is chosen not adversarially but rather uniformly (or near-uniformly) from the set of all permutations. This random-order data stream model has attracted recent interest, since lower bounds here give stronger evidence for the inherent hardness of streaming problems. Our results include the first random-partition communication lower bounds for problems including multi-party set disjointness and gap-Hamming-distance. Both are tight. We also extend and improve previous results [23, 8] for a form of pointer jumping that is relevant to the problem of selection (in particular, median finding). Collectively, these results yield lower bounds for a variety of problems in the random-order data stream model, including estimating the number of distinct elements, approximating frequency moments, and quantile estimation.
Filtering via a Reference Set by Ali Haddad, Dan Kushnir, Ronald Coifman. The abstract reads:
Patch-based de-noising algorithms and patch manifold smoothing have emerged as efficient de-noising methods. This paper provides a new insight on these methods, such as the Non Local Means [1] or the image graph de-noising [8], by showing their use for filtering a selected pattern.
Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions by Alekh Agarwal, Sahand Negahban, Martin J. Wainwright. The abstract reads:
We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems. The observations are the noisy realizations of the sum of an (approximately) low rank matrix $\Theta^\star$ with a second matrix $\Gamma^\star$ endowed with a complementary form of low-dimensional structure. We derive a general theorem that gives upper bounds on the Frobenius norm error for an estimate of the pair $(\Theta^\star, \Gamma^\star)$ obtained by solving a convex optimization problem that combines the nuclear norm with a general decomposable regularizer. We then specialize this result to two cases that have been studied in the context of robust PCA: low rank plus an entrywise sparse matrix, and low rank plus a columnwise sparse matrix. For both models, our theory yields non-asymptotic Frobenius error bounds for both deterministic and stochastic noise matrices, and applies to matrices $\Theta^\star$ that can be exactly or approximately low rank, and matrices $\Gamma^\star$ that can be exactly or approximately sparse. Moreover, for the case of stochastic noise matrices, we establish matching lower bounds on the minimax error, showing that our results cannot be improved beyond constant factors. The sharpness of our theoretical predictions is confirmed by numerical simulations.
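The structure of the convex program (not the authors' estimator analysis) can be sketched with alternating exact minimization over the two blocks: singular value thresholding for the low-rank part, entrywise soft thresholding for the sparse part. Each step exactly minimizes the convex objective over its block, so the objective is non-increasing:

```python
import numpy as np

def svt(M, t):
    """Singular value thresholding: prox of t * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def soft(M, t):
    """Entrywise soft thresholding: prox of t * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

rng = np.random.default_rng(0)
n, r = 50, 2
L = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low rank
S = np.zeros((n, n)); idx = rng.random((n, n)) < 0.05
S[idx] = 5 * rng.standard_normal(idx.sum())                    # entrywise sparse
Y = L + S

lam_star, lam_1 = 1.0, 0.3  # toy regularization weights of my choosing
obj = lambda T, G: (0.5 * np.linalg.norm(Y - T - G) ** 2
                    + lam_star * np.linalg.svd(T, compute_uv=False).sum()
                    + lam_1 * np.abs(G).sum())
Theta, Gamma, vals = np.zeros_like(Y), np.zeros_like(Y), []
for _ in range(50):
    Theta = svt(Y - Gamma, lam_star)   # exact minimization over Theta
    Gamma = soft(Y - Theta, lam_1)     # exact minimization over Gamma
    vals.append(obj(Theta, Gamma))
print(all(a >= b - 1e-9 for a, b in zip(vals, vals[1:])))  # monotone decrease
```

This block-coordinate scheme is only one of many ways to solve the program; the paper's point is the statistical error of the minimizer, not the solver.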
Learning pattern transformation manifolds with parametric atom selection by Vural, Elif ; Frossard, Pascal. The abstract reads:
We address the problem of building a manifold in order to represent a set of geometrically transformed images by selecting a good common sparse approximation of them with parametric atoms. We propose a greedy method to construct a representative pattern such that the total distance between the transformation manifold of the representative pattern and the input images is minimized. In the progressive construction of the pattern we select atoms from a continuous dictionary by optimizing the atom parameters. Experimental results suggest that the representative pattern built with the proposed method provides an accurate representation of data, where the invariance to geometric transformations is achieved due to the transformation manifold model.
A Regularization Framework for Mobile Social Network Analysis by Dong, Xiaowen ; Frossard, Pascal ; Vandergheynst, Pierre ; Nefedov, Nikolai. The abstract reads:
Mobile phone data provides rich dynamic information on human activities in social network analysis. In this paper, we represent data from two different modalities as a graph and functions defined on the vertex set of the graph. We propose a regularization framework for the joint utilization of these two modalities of data, which enables us to model evolution of social network information and efficiently classify relationships among mobile phone users. Simulations based on real world data demonstrate the potential application of our model in dynamic scenarios, and present competitive results to baseline methods for combining multimodal data in the learning and clustering communities.
Exemplar-based sparse representations for noise robust automatic speech recognition by Jort F. Gemmeke, Tuomas Virtanen, Antti Hurmalainen. The abstract reads:
This paper proposes to use exemplar-based sparse representations for noise robust automatic speech recognition. First, we describe how speech can be modelled as a linear combination of a small number of exemplars from a large speech exemplar dictionary. The exemplars are time-frequency patches of real speech, each spanning multiple time frames. We then propose to model speech corrupted by additive noise as a linear combination of noise and speech exemplars, and we derive an algorithm for recovering this sparse linear combination of exemplars from the observed noisy speech. We describe how the framework can be used for doing hybrid exemplar-based/HMM recognition by using the exemplar-activations together with the phonetic information associated with the exemplars. As an alternative to hybrid recognition, the framework also allows us to take a source separation approach which enables exemplar-based feature enhancement as well as missing data mask estimation. We evaluate the performance of these exemplar-based methods in connected digit recognition on the AURORA-2 database. Our results show that the hybrid system performed substantially better than source separation or missing data mask estimation at lower SNRs, achieving up to 57.1% accuracy at SNR= -5 dB. Although not as effective as two baseline recognisers at higher SNRs, the novel approach offers a promising direction of future research on exemplar-based ASR.
Data Separation by Sparse Representations by Gitta Kutyniok. The abstract reads:
Recently, sparsity has become a key concept in various areas of applied mathematics, computer science, and electrical engineering. One application of this novel methodology is the separation of data, which is composed of two (or more) morphologically distinct constituents. The key idea is to carefully select representation systems each providing sparse approximations of one of the components. Then the sparsest coefficient vector representing the data within the composed - and therefore highly redundant - representation system is computed by $\ell_1$ minimization or thresholding. This automatically enforces separation. This paper shall serve as an introduction to and a survey about this exciting area of research as well as a reference for the state-of-the-art of this research field. It will appear as a chapter in a book on "Compressed Sensing: Theory and Applications" edited by Yonina Eldar and Gitta Kutyniok.
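A minimal instance of the separation idea (my toy, not from the chapter): spikes plus a smooth component, the redundant system [identity, DCT] (the DCT matrix is built by hand to stay dependency-free), and a plain ISTA iteration for the $\ell_1$ problem:

```python
import numpy as np

N = 64
# Orthonormal DCT-II matrix; rows are the DCT basis vectors.
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(k, k + 0.5) / N)
C[0] /= np.sqrt(2.0)

spikes = np.zeros(N); spikes[[7, 30]] = [4.0, -3.0]   # morphologically "spiky"
smooth = 5.0 * C[3]                                   # a single DCT atom
x = spikes + smooth

D = np.hstack([np.eye(N), C.T])        # redundant system [identity, DCT]
a = np.zeros(2 * N)
lam, step = 0.02, 0.5                  # step = 1/||D||_2^2 since D D^T = 2 I
for _ in range(5000):                  # plain ISTA for the l1 problem
    a = a + step * D.T @ (x - D @ a)
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)

part_spikes, part_smooth = a[:N], C.T @ a[N:]
print(np.linalg.norm(part_spikes - spikes), np.linalg.norm(part_smooth - smooth))
```

The $\ell_1$ penalty routes the spikes into the identity part and the oscillation into the DCT part, which is exactly the "separation for free" phenomenon the survey describes.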
Optimal Quantization for Compressive Sensing under Message Passing Reconstruction by Ulugbek Kamilov, Vivek K Goyal, Sundeep Rangan. The abstract reads:
We consider the optimal quantization of compressive sensing measurements following the work on generalization of relaxed belief propagation (BP) for arbitrary measurement channels. Relaxed BP is an iterative reconstruction scheme inspired by message passing algorithms on bipartite graphs. Its asymptotic error performance can be accurately predicted and tracked through the state evolution formalism. We utilize these results to design mean-square optimal scalar quantizers for relaxed BP signal reconstruction and empirically demonstrate the superior error performance of the resulting quantizers.
Rank Aggregation via Nuclear Norm Minimization by David F. Gleich, Lek-Heng Lim. The abstract reads:
The process of rank aggregation is intimately intertwined with the structure of skew-symmetric matrices. We apply recent advances in the theory and algorithms of matrix completion to skew-symmetric matrices. This combination of ideas produces a new method for ranking a set of items. The essence of our idea is that a rank aggregation describes a partially filled skew-symmetric matrix. We extend an algorithm for matrix completion to handle skew-symmetric data and use that to extract ranks for each item. Our algorithm applies to both pairwise comparison and rating data. Because it is based on matrix completion, it is robust to both noise and incomplete data. We show a formal recovery result for the noiseless case and present a detailed study of the algorithm on synthetic data and Netflix ratings.
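In the complete-data special case the skew-symmetric structure alone already yields a ranking, with no completion step needed (a sketch of the setup, not the authors' matrix-completion algorithm):

```python
import numpy as np

scores = np.array([3.0, 1.0, 4.0, 1.5, 5.0])     # hidden "quality" of 5 items

# Fully observed skew-symmetric matrix of pairwise score differences.
Y = scores[:, None] - scores[None, :]
assert np.allclose(Y, -Y.T)                      # skew-symmetry

# With complete data, the row mean of Y recovers each score up to a
# common shift, hence the full ranking.
est = Y.mean(axis=1)
print(np.argsort(-est))                          # [4 2 0 3 1]: best to worst
```

The interesting (and realistic) case is when most entries of Y are missing, which is where the skew-symmetric matrix completion machinery of the paper comes in.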
Measurement Matrix Design for Compressive Sensing Based MIMO Radar by Yao Yu, Athina P. Petropulu, H. Vincent Poor. The abstract reads:
In colocated multiple-input multiple-output (MIMO) radar using compressive sensing (CS), a receive node compresses its received signal via a linear transformation, referred to as measurement matrix. The samples are subsequently forwarded to a fusion center, where an L1-optimization problem is formulated and solved for target information. CS-based MIMO radar exploits the target sparsity in the angle-Doppler-range space and thus achieves the high localization performance of traditional MIMO radar but with many fewer measurements. The measurement matrix is vital for CS recovery performance. This paper considers the design of measurement matrices that achieve an optimality criterion that depends on the coherence of the sensing matrix (CSM) and/or signal-to-interference ratio (SIR). The first approach minimizes a performance penalty that is a linear combination of CSM and the inverse SIR. The second one imposes a structure on the measurement matrix and determines the parameters involved so that the SIR is enhanced. Depending on the transmit waveforms, the second approach can significantly improve SIR, while maintaining CSM comparable to that of the Gaussian random measurement matrix (GRMM). Simulations indicate that the proposed measurement matrices can improve detection accuracy as compared to a GRMM.
Verifiable and Computable $\ell_\infty$ Performance Evaluation of $\ell_1$ Sparse Signal Recovery by Gongguo Tang, Arye Nehorai. The abstract reads:
In this paper, we develop verifiable and computable performance analysis of the $\ell_\infty$ norms of the recovery errors for $\ell_1$ minimization algorithms. We define a family of goodness measures for arbitrary sensing matrices as a set of optimization problems, and design algorithms with a theoretical global convergence guarantee to compute these goodness measures. The proposed algorithms solve a series of second-order cone programs, or linear programs. As a by-product, we implement an efficient algorithm to verify a sufficient condition for exact $\ell_1$ recovery in the noise-free case. This implementation performs orders-of-magnitude faster than the state-of-the-art techniques. We derive performance bounds on the $\ell_\infty$ norms of the recovery errors in terms of these goodness measures. We establish connections between other performance criteria (e.g., the $\ell_2$ norm, $\ell_1$ norm, and support recovery) and the $\ell_\infty$ norm in a tight manner. We also analytically demonstrate that the developed goodness measures are non-degenerate for a large class of random sensing matrices, as long as the number of measurements is relatively large. Numerical experiments show that, compared with the restricted isometry based performance bounds, our error bounds apply to a wider range of problems and are tighter, when the sparsity levels of the signals are relatively low.
SPARSE SPECTRAL FACTORIZATION: UNICITY AND RECONSTRUCTION ALGORITHMS by Yue M. Lu, and Martin Vetterli. The abstract reads:
Spectral factorization is a classical tool in signal processing and communications. It also plays a critical role in X-ray crystallography, in the context of phase retrieval. In this work, we study the problem of sparse spectral factorization, aiming to recover a one-dimensional sparse signal from its autocorrelation. We present a sufficient condition for the recovery to be unique, and propose an iterative algorithm that can obtain the original signal (up to a sign change, time-shift and time-reversal). Numerical simulations verify the effectiveness of the proposed algorithm.
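The recovery ambiguities quoted in the abstract (sign change, time-shift, time-reversal) are easy to check numerically, since all three leave the autocorrelation unchanged:

```python
import numpy as np

def autocorr(x):
    """Full (deterministic) autocorrelation of a finite signal."""
    return np.correlate(x, x, mode="full")

rng = np.random.default_rng(0)
x = np.zeros(32); x[[2, 11, 25]] = rng.standard_normal(3)  # sparse signal

assert np.allclose(autocorr(x), autocorr(-x))              # sign change
assert np.allclose(autocorr(x), autocorr(x[::-1]))         # time-reversal
assert np.allclose(autocorr(np.roll(x, 3)), autocorr(x))   # shift (no wrap here)
```

So the best any algorithm can do is recover the signal up to this equivalence class, which is exactly the guarantee stated above.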
Multichannel Sampling with Unknown Gains and Offsets: A Fast Reconstruction Algorithm by Yue M. Lu, and Martin Vetterli. The abstract reads:
We study a multichannel sampling scheme, where different channels observe scaled and shifted versions of a common bandlimited signal. The channel gains and offsets are unknown a priori, and each channel samples at sub-Nyquist rates. This setup appears in many practical signal processing applications, including time-interleaved ADC with timing skews, unsynchronized distributed sampling in sensor networks, and superresolution imaging. In this paper, we propose a new algorithm to efficiently estimate the unknown channel gains and offsets. Key to our algorithm is a novel linearization technique, which converts a system of trigonometric polynomial equations of the unknown parameters to an overparameterized linear system. The computation steps of the proposed algorithm boil down to forming a fixed data matrix from the discrete-time Fourier transforms of the observed channel samples and computing the singular value decomposition of that matrix. Numerical simulations verify the effectiveness, efficiency, and robustness of the proposed algorithm in the presence of noise. In the high SNR regime (40 dB and above), the proposed algorithm significantly outperforms a previous method in the literature in terms of estimation accuracy, at the same time being three orders of magnitude faster.
LEARNING SPARSE SYSTEMS AT SUB-NYQUIST RATES: A FREQUENCY-DOMAIN APPROACH by Martin McCormick, Yue M. Lu, and Martin Vetterli. The abstract reads:
We propose a novel algorithm for sparse system identification in the frequency domain. Key to our result is the observation that the Fourier transform of the sparse impulse response is a simple sum of complex exponentials, whose parameters can be efficiently determined from only a narrow frequency band. From this perspective, we present a sub-Nyquist sampling scheme, and show that the original continuous-time system can be learned by considering an equivalent low-rate discrete system. The impulse response of that discrete system can then be adaptively obtained by a novel frequency-domain LMS filter, which exploits the parametric structure of the model. Numerical experiments confirm the effectiveness of the proposed scheme for sparse system identification tasks.
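The key observation here, that the Fourier transform of a sparse impulse response is a sum of complex exponentials whose parameters are determined by a narrow frequency band, can be illustrated with a standard annihilating-filter (Prony-type) recovery; the paper's actual adaptive frequency-domain LMS scheme is more elaborate. All parameters below are arbitrary illustrative choices:

```python
import numpy as np

# A K-sparse impulse response; its DFT samples form a sum of K complex
# exponentials in the frequency variable.
N, K = 64, 3
pos = np.array([5, 19, 42])        # support (unknown to the estimator)
amp = np.array([1.0, -0.7, 2.0])   # amplitudes

# Narrow frequency band: 2K + 1 consecutive DFT samples suffice.
m = np.arange(2 * K + 1)
F = (amp * np.exp(-2j * np.pi * np.outer(m, pos) / N)).sum(axis=1)

# Annihilating filter: a length-(K+1) filter whose convolution with F
# vanishes; its polynomial roots encode the support positions.
A = np.vstack([F[K + r - np.arange(K + 1)] for r in range(K + 1)])
a = np.linalg.svd(A)[2][-1].conj()   # right null vector of A
z = np.roots(a)                      # roots z_k = exp(-2j*pi*pos_k/N)
est = np.sort(np.mod(np.round(-np.angle(z) * N / (2 * np.pi)), N).astype(int))
assert np.array_equal(est, np.sort(pos))
```

With the support in hand, the amplitudes follow from a small linear system, which is the sense in which a narrow band determines the whole sparse system.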
Occlusion Detection and Motion Estimation via Convex Optimization by Michalis Raptis, Alper Ayvaci and Stefano Soatto. The abstract reads:
We tackle the problem of detecting occluded regions in a video stream. Under assumptions of Lambertian reflection and static illumination, the task can be posed as a variational optimization problem, and its solution approximated using convex minimization. We describe efficient numerical schemes that reach the global optimum of the relaxed cost functional, for any number of independently moving objects, and any number of occlusion layers. We test the proposed algorithm on benchmark datasets, expanded to enable evaluation of occlusion detection performance, in addition to optical flow.
LOCALIZING POINT SOURCES IN DIFFUSION FIELDS FROM SPATIOTEMPORAL SAMPLES by Yue M. Lu, Pier Luigi Dragotti, and Martin Vetterli. The abstract reads:
Consider a diffusion field induced by a finite number of localized and instantaneous sources. In this paper, we study the problem of estimating these sources (including their intensities, spatial locations, and activation time) from the spatiotemporal samples taken by a network of spatially distributed sensors. We propose two estimation algorithms, depending on whether the activation time of the sources is known. For the case of known activation time, we present an annihilating filter based method to estimate the Euclidean distances between the sources and sensors, which can be subsequently used to localize the sources. For the case of a single source but with unknown activation time, we show that the diffusion field at any spatial location is a scaled and shifted version of a common prototype function, and that this function is the unique solution to a particular differential equation. This observation leads to an efficient algorithm that can estimate the unknown parameters of the source by solving a system of linear equations. For both algorithms proposed in this work, the minimum number of sensors required is d + 1, where d is the spatial dimension of the field. This requirement is independent of the number of active sources.
Interest Zone Matrix Approximation by Gil Shabat and Amir Averbuch. The abstract reads:
We present an algorithm for low-rank approximation of matrices where only some of the entries in the matrix are taken into consideration. This algorithm appears in recent literature under different names, where it is described as an EM-based algorithm that maximizes the likelihood for the missing entries, without any relation to mean-square-error minimization. When the algorithm is analyzed from a mean-square-error point of view, we prove that the error produced by the algorithm is monotonically decreasing and guaranteed to converge to a local MSE minimum. We also show that an extension of the EM-based algorithm for weighted low-rank approximation, which appeared in recent literature claiming convergence to a local minimum of the MSE, is wrong. Finally, we show the use of the algorithm in different applications in physics, electrical engineering, and data interpolation.
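For readers unfamiliar with the algorithm being analyzed, here is a minimal sketch of the impute-then-truncate (EM-style) iteration for low-rank approximation with missing entries, together with a check of the monotone decrease of the MSE on the observed entries. The matrix sizes, rank, and mask density are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth rank-2 matrix and a random mask of observed entries.
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = rng.random(A.shape) < 0.7   # True where the entry is observed

def lowrank_em(A, mask, rank, iters=200):
    """Alternate between imputing the missing entries with the current
    estimate and truncating to the target rank via the SVD."""
    X = np.where(mask, A, 0.0)     # initialize missing entries with zeros
    errs = []
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # rank-r truncation
        X = np.where(mask, A, L)                        # keep observed data
        errs.append(np.mean((A[mask] - L[mask]) ** 2))  # MSE on observations
    return L, errs

L, errs = lowrank_em(A, mask, rank=2)

# The MSE on the observed entries never increases, matching the claim.
assert all(e2 <= e1 + 1e-12 for e1, e2 in zip(errs, errs[1:]))
```

The monotonicity follows from a standard majorization argument: each imputed matrix upper-bounds the observed-entry error, and the SVD truncation minimizes the surrogate.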
Patch-to-Tensor Embedding by Guy Wolf Moshe Salhov Amir Averbuch. The abstract reads:
A popular approach to deal with the "curse of dimensionality" in relation to the analysis of high-dimensional datasets is to assume that points in these datasets lie on a low-dimensional manifold immersed in a high-dimensional ambient space. Kernel methods operate on this assumption and introduce the notion of local affinities between data-points via the construction of a suitable kernel. Spectral analysis of this kernel provides a global, preferably low-dimensional, coordinate system that preserves the qualities of the manifold. In this paper, we extend the scalar relations used in this framework to matrix relations, which can encompass multidimensional similarities between local neighborhoods of points on the manifold. We utilize the diffusion maps method together with linear-projection operators between tangent spaces of the manifold to construct a super-kernel that represents these relations. The properties of the presented super-kernels are explored and their spectral decompositions are utilized to embed the patches of the manifold into a tensor space in which the relations between them are revealed.
Cooperative Wideband Spectrum Sensing for the Centralized Cognitive Radio Network by Peng Zhang, Robert Qiu. The abstract reads:
Various primary user (PU) radios have been allocated to fixed frequency bands across the spectrum. A cognitive radio network (CRN) should be able to perform wideband spectrum sensing (WSS) to detect temporarily unoccupied frequency bands. We summarize four occupancy features of the frequency bands: (1) the occupancy is sparse; (2) the frequency band allocation information is fixed and common; (3) there are three categories of frequency band usage; (4) the occupied frequency bands are common across the CRN. For the first time, we consider all of these features as prior knowledge in the design of a compressed sensing based cooperative WSS (CWSS) algorithm for a centralized CRN. We propose a modified orthogonal matching pursuit (Mod-OMP) algorithm and a modified simultaneous orthogonal matching pursuit (Mod-SOMP) algorithm for CWSS. We compare the CWSS performance of Mod-OMP/Mod-SOMP with the original OMP/SOMP and show the performance improvements.
Subspace Expanders and Matrix Rank Minimization by Amin Khajehnejad, Samet Oymak, Babak Hassibi. The abstract reads:
Matrix rank minimization (RM) problems recently gained extensive attention due to numerous applications in machine learning, system identification and graphical models. In the RM problem, one aims to find the matrix with the lowest rank that satisfies a set of linear constraints. The existing algorithms include nuclear norm minimization (NNM) and singular value thresholding. Thus far, most of the attention has been on i.i.d. Gaussian measurement operators. In this work, we introduce a new class of measurement operators, and a novel recovery algorithm, which is notably faster than NNM. The proposed operators are based on what we refer to as subspace expanders, which are inspired by the well known expander graphs based measurement matrices in compressed sensing. We show that given an $n\times n$ PSD matrix of rank $r$, it can be uniquely recovered from a minimal sampling of $O(nr)$ measurements using the proposed structures, and the recovery algorithm can be cast as matrix inversion after a few initial processing steps.

Random Tight Frames by Martin Ehler. The abstract reads:
We introduce probabilistic frames to study finite frames whose elements are chosen at random. While finite tight frames generalize orthonormal bases by allowing redundancy, independent, uniformly distributed points on the sphere approximately form a finite unit norm tight frame (FUNTF). In the present paper, we develop probabilistic versions of tight frames and FUNTFs to significantly weaken the requirements on the random choice of points to obtain an approximate finite tight frame. Namely, points can be chosen from any probabilistic tight frame, they do not have to be identically distributed, nor have unit norm. We also observe that classes of random matrices used in compressed sensing are induced by probabilistic tight frames.
Improved RIP Analysis of Orthogonal Matching Pursuit by Ray Maleh. The abstract reads:
Orthogonal Matching Pursuit (OMP) has long been considered a powerful heuristic for attacking compressive sensing problems; however, its theoretical development is, unfortunately, somewhat lacking. This paper presents an improved Restricted Isometry Property (RIP) based performance guarantee for T-sparse signal reconstruction that asymptotically approaches the conjectured lower bound given in Davenport et al. We also further extend the state-of-the-art by deriving reconstruction error bounds for the case of general non-sparse signals subjected to measurement noise. We then generalize our results to the case of K-fold Orthogonal Matching Pursuit (KOMP). We finish by presenting an empirical analysis suggesting that OMP and KOMP outperform other compressive sensing algorithms in average case scenarios. This turns out to be quite surprising since RIP analysis (i.e. worst case scenario) suggests that these matching pursuits should perform roughly T^0.5 times worse than convex optimization, CoSaMP, and iterative thresholding.
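For reference, a textbook sketch of plain OMP (not the paper's KOMP variant), applied to a toy noiseless recovery problem; all dimensions and coefficient values are illustrative choices:

```python
import numpy as np

def omp(Phi, y, T):
    """Plain Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-solve least squares on the
    selected support."""
    residual, support = y.copy(), []
    for _ in range(T):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x = np.zeros(Phi.shape[1])
    x[support] = coeffs
    return x

# Recover a 3-sparse vector from 50 Gaussian measurements in dimension 100.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
x_hat = omp(Phi, Phi @ x_true, T=3)
assert np.allclose(x_hat, x_true, atol=1e-6)
```

In the noiseless case, once the correct support is identified the least-squares step recovers the coefficients exactly, which is why the final residual vanishes.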
Structure-Aware Sampling: Flexible and Accurate Summarization by Edith Cohen, Graham Cormode, Nick Duffield. The abstract reads:
In processing large quantities of data, a fundamental problem is to obtain a summary which supports approximate query answering. Random sampling yields flexible summaries which naturally support subset-sum queries with unbiased estimators and well-understood confidence bounds. Classic sample-based summaries, however, are designed for arbitrary subset queries and are oblivious to the structure in the set of keys. The particular structure, such as hierarchy, order, or product space (multi-dimensional), makes range queries much more relevant for most analysis of the data.
Dedicated summarization algorithms for range-sum queries have also been extensively studied. They can outperform existing sampling schemes in terms of accuracy on range queries per summary size. Their accuracy, however, rapidly degrades when, as is often the case, the query spans multiple ranges. They are also less flexible - being targeted for range sum queries alone - and are often quite costly to build and use.
In this paper we propose and evaluate variance optimal sampling schemes that are structure-aware. These summaries improve over the accuracy of existing structure-oblivious sampling schemes on range queries while retaining the benefits of sample-based summaries: flexible summaries, with high accuracy on both range queries and arbitrary subset queries.
Sparse Bayesian Methods for Low-Rank Matrix Estimation by S. Derin Babacan, Martin Luessi, Rafael Molina, Aggelos K. Katsaggelos. The abstract reads:
Recovery of low-rank matrices has recently seen significant activity in many areas of science and engineering, motivated by recent theoretical results for exact reconstruction guarantees and interesting practical applications. A number of methods have been developed for this recovery problem. However, a principled method for choosing the unknown target rank is generally not provided. In this paper, we present novel recovery algorithms for estimating low-rank matrices in matrix completion and robust principal component analysis based on sparse Bayesian learning (SBL) principles. Starting from a matrix factorization formulation and enforcing the low-rank constraint in the estimates as a sparsity constraint, we develop an approach that is very effective in determining the correct rank while providing high recovery performance. We provide connections with existing methods in other similar problems and empirical results and comparisons with current state-of-the-art methods that illustrate the effectiveness of this approach.
An Adaptive Inverse Scale Space Method for Compressed Sensing by Martin Burger, Michael Moller, Martin Benning, Stanley Osher. The abstract reads:
In this paper we introduce a novel adaptive approach for solving L_1-minimization problems as frequently arising in compressed sensing, which is based on the recently introduced inverse scale space method. The scheme allows one to efficiently compute minimizers by solving a sequence of low-dimensional nonnegative least-squares problems. We provide a detailed convergence analysis in a general setup as well as refined results under special conditions. In addition we discuss experimental observations in several numerical examples.
RECONSTRUCTING AND SEGMENTING HYPERSPECTRAL IMAGES FROM COMPRESSED MEASUREMENTS by Qiang Zhang, Robert Plemmons, David Kittle, David Brady, Sudhakar Prasad. The abstract reads:
A joint reconstruction and segmentation model for hyperspectral data obtained from a compressive measurement system is described. Although hyperspectral imaging (HSI) technology has incredible potential, its utility is currently limited because of the enormous quantity and complexity of the data it gathers. Yet, often the scene to be reconstructed from the HSI data contains far less information, typically consisting of spectrally and spatially homogeneous segments that can be represented sparsely in an appropriate basis. Such vast informational redundancy thus implicitly contained in the HSI data warrants a compressed sensing (CS) strategy that acquires appropriately coded spectral-spatial data from which one can reconstruct the original image more efficiently while still enabling target identification procedures. A coded-aperture snapshot spectral imager (CASSI) that collects compressed measurements is considered here, and a joint reconstruction and segmentation model for hyperspectral data obtained from CASSI compressive measurements is described. Promising test results on simulated and real data are reported.
Belief propagation for joint sparse recovery by Jongmin Kim, Woohyuk Chang, Bangchul Jung, Dror Baron, Jong Chul Ye. The abstract reads:
Compressed sensing (CS) demonstrates that sparse signals can be recovered from underdetermined linear measurements. We focus on the joint sparse recovery problem, where multiple signals share the same common sparse support set and are measured through the same sensing matrix. Leveraging a recent information-theoretic characterization of single-signal CS, we formulate the optimal minimum mean square error (MMSE) estimation problem, and derive a belief propagation algorithm and its relaxed version, an approximate message passing algorithm, for the joint sparse recovery problem. In addition, using density evolution, we provide a sufficient condition for exact recovery.

Compressive MUSIC with optimized partial support for joint sparse recovery by Jong Min Kim, Ok Kyun Lee, Jong Chul Ye. The abstract reads:
The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share a common sparse support. MMV problems have traditionally been addressed either by sensor array signal processing or by compressive sensing. However, recent breakthroughs in this area, such as compressive MUSIC (CS-MUSIC) or subspace-augmented MUSIC (SA-MUSIC), optimally combine compressive sensing (CS) and array signal processing such that $k-r$ supports are first found by CS and the remaining $r$ supports are determined by a generalized MUSIC criterion, where $k$ and $r$ denote the sparsity and the number of independent snapshots, respectively. Even though such a hybrid approach significantly outperforms the conventional algorithms, its performance heavily depends on the correct identification of the $k-r$ partial support by the compressive sensing step, which often deteriorates the overall performance. The main contribution of this paper is, therefore, to show that as long as $k-r+1$ correct supports are included in any $k$-sparse CS solution, the optimal $k-r$ partial support can be found using a subspace fitting criterion, significantly improving the overall performance of CS-MUSIC. Furthermore, unlike the single-measurement CS counterpart, which requires infinite SNR for perfect support recovery, we can derive an information-theoretic sufficient condition for perfect recovery using CS-MUSIC under a {\em finite} SNR scenario.
Density Evolution Analysis of Node-Based Verification-Based Algorithms in Compressed Sensing by Yaser Eftekhari, Anoosheh Heidarzadeh, Amir H. Banihashemi, Ioannis Lambadaris. The abstract reads:
In this paper, we present a new approach for the analysis of iterative node-based verification-based (NB-VB) recovery algorithms in the context of compressive sensing. These algorithms are particularly interesting due to their low complexity (linear in the signal dimension $n$). The asymptotic analysis predicts the fraction of unverified signal elements at each iteration $\ell$ in the asymptotic regime where $n \rightarrow \infty$. The analysis is similar in nature to the well-known density evolution technique commonly used to analyze iterative decoding algorithms. To perform the analysis, a message-passing interpretation of NB-VB algorithms is provided. This interpretation lacks the extrinsic nature of standard message-passing algorithms to which density evolution is usually applied. This requires a number of non-trivial modifications in the analysis. The analysis tracks the average performance of the recovery algorithms over the ensembles of input signals and sensing matrices as a function of $\ell$. Concentration results are devised to demonstrate that the performance of the recovery algorithms applied to any choice of the input signal over any realization of the sensing matrix follows the deterministic results of the analysis closely. Simulation results are also provided which demonstrate that the proposed asymptotic analysis matches the performance of recovery algorithms for large but finite values of $n$. Compared to the existing technique for the analysis of NB-VB algorithms, which is based on numerically solving a large system of coupled differential equations, the proposed method is much simpler and more accurate.
Phase Transition in Limiting Distributions of Coherence of High-Dimensional Random Matrices by Tony Cai, Tiefeng Jiang. The abstract reads:
The coherence of a random matrix, which is defined to be the largest magnitude of the Pearson correlation coefficients between the columns of the random matrix, is an important quantity for a wide range of applications including high-dimensional statistics and signal processing. Inspired by these applications, this paper studies the limiting laws of the coherence of $n\times p$ random matrices for a full range of the dimension $p$ with a special focus on the ultra high-dimensional setting. Assuming the columns of the random matrix are independent random vectors with a common spherical distribution, we give a complete characterization of the behavior of the limiting distributions of the coherence. More specifically, the limiting distributions of the coherence are derived separately for three regimes: $\frac{1}{n}\log p \to 0$, $\frac{1}{n}\log p \to \beta\in (0, \infty)$, and $\frac{1}{n}\log p \to\infty$. The results show that the limiting behavior of the coherence differs significantly in different regimes and exhibits interesting phase transition phenomena as the dimension $p$ grows as a function of $n$. Applications to statistics and compressed sensing in the ultra high-dimensional setting are also discussed.
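The quantity under study is simple to compute for a given matrix: the largest off-diagonal entry, in absolute value, of the column correlation matrix. A sketch, with dimensions chosen so that $(1/n)\log p$ is small, matching the first regime discussed in the abstract:

```python
import numpy as np

def coherence(X):
    """Largest absolute Pearson correlation between distinct columns of X."""
    C = np.corrcoef(X, rowvar=False)   # p-by-p column correlation matrix
    np.fill_diagonal(C, 0.0)           # discard the trivial unit diagonal
    return float(np.max(np.abs(C)))

# For i.i.d. Gaussian entries with n >> log p, the coherence is small;
# the dimensions here are illustrative choices.
rng = np.random.default_rng(2)
mu = coherence(rng.standard_normal((2000, 50)))
assert 0.0 < mu < 0.2
```

The limiting distributions in the paper describe exactly how this maximum behaves as $p$ grows with $n$, including regimes where $p$ is exponentially large in $n$.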
Limiting Laws of Coherence of Random Matrices with Applications to Testing Covariance Structure and Construction of Compressed Sensing Matrices by Tony Cai, Tiefeng Jiang. The abstract reads:
Testing covariance structure is of significant interest in many areas of statistical analysis and construction of compressed sensing matrices is an important problem in signal processing. Motivated by these applications, we study in this paper the limiting laws of the coherence of an $n\times p$ random matrix in the high-dimensional setting where $p$ can be much larger than $n$. Both the law of large numbers and the limiting distribution are derived. We then consider testing the bandedness of the covariance matrix of a high dimensional Gaussian distribution which includes testing for independence as a special case. The limiting laws of the coherence of the data matrix play a critical role in the construction of the test. We also apply the asymptotic results to the construction of compressed sensing matrices.
Modified Orthogonal Matching Pursuit Algorithm for Cognitive Radio Wideband Spectrum Sensing by Peng Zhang, Robert Qiu. The abstract reads:
Sampling rate is the bottleneck for spectrum sensing over multi-GHz bandwidth. Recent progress in compressed sensing (CS) initialized several sub-Nyquist rate approaches to overcome the problem. However, efforts to design CS reconstruction algorithms for wideband spectrum sensing are very limited. It is possible to further reduce the sampling rate requirement and improve reconstruction performance via algorithms considering prior knowledge of cognitive radio spectrum usages. In this paper, we group the usages of cognitive radio spectrum into three categories and propose a modified orthogonal matching pursuit (OMP) algorithm for wideband spectrum sensing. Simulation results show that this modified OMP algorithm outperforms two modified basis pursuit de-noising (BPDN) algorithms in terms of reconstruction performance and computation time.
Bounds on the Reconstruction of Sparse Signal Ensembles from Distributed Measurements by Marco F. Duarte, Michael B. Wakin, Dror Baron, Shriram Sarvotham, Richard G. Baraniuk. The abstract reads:
In compressive sensing, a small collection of linear projections of a sparse signal contains enough information to permit signal recovery. Distributed compressive sensing (DCS) extends this framework, allowing a correlated ensemble of sparse signals to be jointly recovered from a collection of separately acquired compressive measurements. In this paper, we introduce an ensemble sparsity model for capturing the intra- and inter-signal correlations within a collection of sparse signals. For strictly sparse signals obeying an ensemble sparsity model, we characterize the fundamental number of noiseless measurements that each sensor must collect to ensure that the signals are jointly recoverable. Our analysis is based on a novel bipartite graph representation that links the sparse signal coefficients with the measurements obtained for each signal.
A Novel Sparsity-Promoted Approach to State Estimation from Dynamic Observation by Lianlin Li, Behnam. Jafarpour. The abstract reads:
The problem of time-varying signal estimation, or state estimation from dynamic observations, is encountered in many applications. Among well-developed approaches, Kalman filter-type approaches are the most popular and have played an important role in estimating dynamic signals from sequential observations. However, when the number of observations at each time step is highly limited, they usually take too many time steps to catch up with the true value with satisfactory accuracy, and in many cases fail to do so at all. Addressing this problem is appealing for many practical applications, for example, real-time radar tracking, navigation, medical imaging, and so on. A feasible strategy is to incorporate other efficient prior information into the Kalman filter (KF) approach. An elegant result from recently developed compressed sensing (CS) shows that a signal which is sparse or compressible in some transform domain can be exactly reconstructed from highly limited observations through the use of sparsity-constrained regularization. Accordingly, one key to the estimation of dynamic states is how to efficiently deal with the sparsity of the unknown state and the prior information coming from the forecast step simultaneously. This paper proposes a novel approach that is a heuristic generalization of the well-known Kalman filter (for convenience, we call it the sparsity-promoted Kalman filter, or SKF). Basically, the proposed three-step sparsity-promoted Kalman filter consists of sparse processing following the forecast and prediction steps involved in the standard KF procedure. The primary results provided in this paper show that the proposed SKF is highly efficient and can bring important improvements over the results generated by the standard Kalman filter or by sparsity-promoted reconstruction alone.
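The three-step structure described above (forecast, Kalman update, sparse processing) can be sketched as follows; identity state dynamics and a soft-thresholding shrinkage stage are stand-ins for whatever model and sparsity operator the paper actually uses, so treat this as an illustrative sketch only:

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: the usual sparsity-promoting shrinkage operator.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def skf_step(x, P, y, H, Q, R, thresh):
    """One forecast / update / sparse-processing cycle (assumed structure)."""
    x_pred, P_pred = x, P + Q                  # forecast (identity dynamics)
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_upd = x_pred + K @ (y - H @ x_pred)      # standard KF update
    P_upd = (np.eye(len(x)) - K @ H) @ P_pred
    return soft(x_upd, thresh), P_upd          # sparse processing

# One step from a zero prior: the sparse-processing stage can only zero
# out entries relative to a plain Kalman update (thresh = 0).
rng = np.random.default_rng(3)
n, m = 40, 10
x_true = np.zeros(n)
x_true[[4, 17, 30]] = [2.0, -3.0, 1.5]
H = rng.standard_normal((m, n)) / np.sqrt(m)
y = H @ x_true + 1e-2 * rng.standard_normal(m)
x_plain, _ = skf_step(np.zeros(n), np.eye(n), y, H, 1e-4 * np.eye(n),
                      1e-4 * np.eye(m), thresh=0.0)
x_skf, _ = skf_step(np.zeros(n), np.eye(n), y, H, 1e-4 * np.eye(n),
                    1e-4 * np.eye(m), thresh=0.1)
assert np.count_nonzero(x_skf) <= np.count_nonzero(x_plain)
```

The threshold plays the role of the sparsity prior: with few measurements per step, the plain update spreads energy over all coordinates, while the shrinkage stage keeps the estimate concentrated on a small support.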
And finally, just the abstract: SUBTYPING OF LEUKEMIA WITH GENE EXPRESSION ANALYSIS USING COMPRESSIVE SENSING METHOD by Tang W, Cao H, Wang Y. The abstract reads:
Detection of different types of cancers is important in clinical diagnosis and treatment. Leukemia is one of the cancers that has different subtypes: acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML). The detection of these subtypes according to different genetic markups in leukemia patients will lead to individualized therapies. Gene expression analysis has been used for the study of leukemia patients, and a variety of approaches have been developed to classify leukemia subtypes. In this paper, we propose a novel compressive sensing (CS) based approach for the subtyping of leukemia. The CS method is an emerging approach in statistics and mathematical signal analysis, which enables the reconstruction of signals from a small set of incoherent projections. We develop a CS based detector to classify ALL and AML, based on 1 or 2 gene expression data samples out of 7129 samples. The classification accuracy is as high as 97%, evaluated with cross-validation among 38 subjects (27 ALL and 11 AML), and 94%, evaluated with an independent collection (21 ALL and 14 AML). This work demonstrates that the CS method can be effectively used to detect subtypes of leukemia, implying improved accuracy of diagnosing leukemia patients.