Recovering Compressively Sampled Signals Using Partial Support Information by Michael P. Friedlander, Hassan Mansour, Rayan Saab, Ozgur Yilmaz. The abstract reads:
In this paper we study recovery conditions of weighted $\ell_1$ minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that if at least $50\%$ of the (partial) support information is accurate, then weighted $\ell_1$ minimization is stable and robust under weaker conditions than the analogous conditions for standard $\ell_1$ minimization. Moreover, weighted $\ell_1$ minimization provides better bounds on the reconstruction error in terms of the measurement noise and the compressibility of the signal to be recovered. We illustrate our results with extensive numerical experiments on synthetic data and real audio and video signals.
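As a quick illustration of the idea (not the authors' code), here is a minimal Python sketch of weighted $\ell_1$ recovery with a partly correct support estimate, using cvxpy; the problem sizes, the support estimate T_est, and the weight omega are all assumptions chosen for the example:

```python
# Minimal sketch of weighted l1 recovery with a partial support estimate,
# assuming cvxpy and synthetic data; A, y, T_est, omega are illustrative names.
import numpy as np
import cvxpy as cp

np.random.seed(0)
n, m, k = 256, 80, 10
x_true = np.zeros(n)
support = np.random.choice(n, k, replace=False)
x_true[support] = np.random.randn(k)

A = np.random.randn(m, n) / np.sqrt(m)
sigma = 0.01
y = A @ x_true + sigma * np.random.randn(m)

# Partial support estimate: 7 correct entries, 3 wrong ones (>= 50% accurate).
wrong = np.random.choice(np.setdiff1d(np.arange(n), support), 3, replace=False)
T_est = np.concatenate([support[:7], wrong])

omega = 0.3                      # weight applied on the estimated support, 0 <= omega <= 1
w = np.ones(n)
w[T_est] = omega

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))),
                  [cp.norm(A @ x - y, 2) <= sigma * np.sqrt(m)])
prob.solve()
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```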
Statistical Compressive Sensing of Gaussian Mixture Models by Guoshen Yu, Guillermo Sapiro. The abstract reads:
A new framework of compressive sensing (CS), namely statistical compressive sensing (SCS), is introduced; it aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average. For signals following a Gaussian distribution, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS, where N is the signal dimension, and with an optimal decoder implemented with linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown to be tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the best k-term approximation error with probability one, and the bound constant can be efficiently calculated. For signals following Gaussian mixture models, SCS with a piecewise linear decoder is introduced and shown to produce better results for real images than conventional CS based on sparse models.
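As a hedged illustration of the "optimal decoder implemented with linear filtering", here is a small numpy sketch of the linear MMSE (Wiener) estimate under an assumed Gaussian prior; the prior covariance, sensing matrix, and noise level are assumptions chosen for the example, not the authors' setup:

```python
# Sketch of a linear (MMSE/Wiener) decoder for Gaussian signals, illustrating
# decoding by linear filtering. The prior (mu, Sigma), the sensing matrix A,
# and the noise level are illustrative assumptions.
import numpy as np

np.random.seed(0)
N, m = 128, 32

# Assumed Gaussian prior with a rapidly decaying spectrum (compressible signals).
U, _ = np.linalg.qr(np.random.randn(N, N))
eigvals = 1.0 / (np.arange(1, N + 1) ** 2)
Sigma = U @ np.diag(eigvals) @ U.T
mu = np.zeros(N)

x = np.random.multivariate_normal(mu, Sigma)      # signal drawn from the prior
A = np.random.randn(m, N) / np.sqrt(m)            # Gaussian sensing matrix
sigma_noise = 1e-3
y = A @ x + sigma_noise * np.random.randn(m)

# Linear MMSE estimate: x_hat = mu + Sigma A^T (A Sigma A^T + sigma^2 I)^{-1} (y - A mu)
G = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T + sigma_noise ** 2 * np.eye(m))
x_hat = mu + G @ (y - A @ mu)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```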
Sparse coding and dictionary learning based on the MDL principle by: Ignacio Ramírez, Guillermo Sapiro. The abstract reads:
The power of sparse signal coding with learned dictionaries has been demonstrated in a variety of applications and fields, from signal processing to statistical inference and machine learning. However, the statistical properties of these models, such as underfitting or overfitting given sets of data, are still not well characterized in the literature. This work aims at filling this gap by means of the Minimum Description Length (MDL) principle -- a well established information-theoretic approach to statistical inference. The resulting framework derives a family of efficient sparse coding and modeling (dictionary learning) algorithms which, by virtue of the MDL principle, are completely parameter free. Furthermore, the framework allows incorporating additional prior information into the model, such as Markovian dependencies, in a natural way. We demonstrate the performance of the proposed framework with results for image denoising and classification tasks.
Robust PCA via Outlier Pursuit by: Huan Xu, Constantine Caramanis, Sujay Sanghavi. The abstract reads:
Singular Value Decomposition (and Principal Component Analysis) is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented sensitivity to outliers. Recent work has considered the setting where each point has a few arbitrarily corrupted components. Yet, in applications of SVD or PCA such as robust collaborative filtering or bioinformatics, malicious agents, defective genes, or simply corrupted or contaminated experiments may effectively yield entire points that are completely corrupted.
We present an efficient convex optimization-based algorithm, which we call Outlier Pursuit, that under some mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace and identifies the corrupted points. Such identification of corrupted points that do not conform to the low-dimensional approximation is of paramount interest in bioinformatics, financial applications, and beyond. Our techniques involve matrix decomposition using nuclear norm minimization; however, our results, setup, and approach necessarily differ considerably from the existing line of work in matrix completion and matrix decomposition, since we develop an approach to recover the correct column space of the uncorrupted matrix, rather than the exact matrix itself.
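The abstract describes a nuclear-norm-plus-column-sparse decomposition; as a rough sketch (with illustrative problem sizes and regularization weight, not the authors' code), the convex program can be written in cvxpy as:

```python
# Sketch of the convex program behind a nuclear-norm + column-sparse split:
# decompose an observed matrix M into a low-rank part L and a column-sparse
# part C by minimizing ||L||_* + lam * sum_j ||C[:, j]||_2. Sizes and lam are
# illustrative assumptions.
import numpy as np
import cvxpy as cp

np.random.seed(0)
n, r, n_outliers = 40, 3, 4

# Low-rank inliers plus a few completely corrupted columns.
L_true = np.random.randn(n, r) @ np.random.randn(r, n)
C_true = np.zeros((n, n))
out_idx = np.random.choice(n, n_outliers, replace=False)
C_true[:, out_idx] = 5 * np.random.randn(n, n_outliers)
M = L_true + C_true

L = cp.Variable((n, n))
C = cp.Variable((n, n))
lam = 0.4
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.norm(C, 2, axis=0))),
                  [L + C == M])
prob.solve()

# Columns with large norm in C are flagged as outliers.
col_norms = np.linalg.norm(C.value, axis=0)
print("flagged columns:", np.sort(np.argsort(col_norms)[-n_outliers:]))
print("true outlier columns:", np.sort(out_idx))
```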
Feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory by Wang H, Miao Y, Zhou K, Yu Y, Bao S, He Q, Dai Y, Xuan SY, Tarabishy B, Ye Y, Hu J. The abstract reads:
PURPOSE: To investigate the feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory.
METHODS: Two experiments were designed to investigate the feasibility of using reference image based compressed sensing (RICS) technique in DCE-MRI of the breast. The first experiment examined the capability of RICS to faithfully reconstruct uptake curves using undersampled data sets extracted from fully sampled clinical breast DCE-MRI data. An average approach and an approach using motion estimation and motion compensation (ME/MC) were implemented to obtain reference images and to evaluate their efficacy in reducing motion related effects. The second experiment, an in vitro phantom study, tested the feasibility of RICS for improving temporal resolution without degrading the spatial resolution.
RESULTS: For the uptake-curve reconstruction experiment, there was a high correlation between uptake curves reconstructed from fully sampled data by Fourier transform and from undersampled data by RICS, indicating high similarity between them. The mean Pearson correlation coefficients for RICS with the ME/MC approach and RICS with the average approach were 0.977 +/- 0.023 and 0.953 +/- 0.031, respectively. The comparisons of final reconstruction results between RICS with the average approach and RICS with the ME/MC approach suggested that the latter was superior to the former in reducing motion related effects. For the in vitro experiment, compared to the fully sampled method, RICS improved the temporal resolution by an acceleration factor of 10 without degrading the spatial resolution.
CONCLUSIONS: The preliminary study demonstrates the feasibility of RICS for faithfully reconstructing uptake curves and improving temporal resolution of breast DCE-MRI without degrading the spatial resolution.
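For readers who want a feel for the reference-image idea, here is a heavily simplified Python sketch (using cvxpy) of a generic reference-based CS reconstruction that penalizes the difference from a reference frame; a dense random matrix stands in for the undersampled k-space encoding, and none of this is the paper's actual RICS implementation:

```python
# Hedged sketch of the generic idea behind reference-based CS reconstruction:
# recover a frame x from few measurements by exploiting sparsity of (x - x_ref).
# The random matrix A is a stand-in for the undersampled encoding operator;
# signal model and sizes are illustrative.
import numpy as np
import cvxpy as cp

np.random.seed(0)
n, m = 200, 60

x_ref = np.cumsum(np.random.randn(n)) * 0.05            # smooth "reference" frame
delta = np.zeros(n)
delta[np.random.choice(n, 5, replace=False)] = 1.0       # sparse contrast-uptake change
x_true = x_ref + delta

A = np.random.randn(m, n) / np.sqrt(m)                   # stand-in encoding operator
y = A @ x_true + 0.01 * np.random.randn(m)

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x - x_ref)),
                  [cp.norm(A @ x - y, 2) <= 0.02 * np.sqrt(m)])
prob.solve()
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```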
Behind a paywall: Compressive sensing applied to audio signals using higher-order projections and the spatial domain by Elizabeth Hoppe and Michael Roan. The abstract reads:
Compressive sensing (CS) is a new approach to data acquisition that is receiving much attention in the current literature. The main goal in CS is to accurately reconstruct a signal from a small number of randomly projected basis functions. The CS reconstruction accuracy hinges on the assumption that the signal can be projected into a sparse domain. A majority of the CS research to date has been focused on the application of CS to image processing. There has, however, been very limited research on the application of CS techniques to audio signals, and it has focused on two main areas: the use of the frequency and wavelet domains as the sparse signal domain, and the use of the spatial domain to perform compressive beamforming. In this work, two new approaches are examined. The first is the use of the spatial domain to reconstruct signals instead of simply identifying their direction of arrival. The second is the use of higher-order projection techniques (such as principal component analysis) as a sparse domain. This work applies these techniques to speech signals to examine the ability of CS to reconstruct wide-band audio signals. In addition, the effect of additive noise on the reconstruction accuracy is examined.
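To make the "higher-order projections as a sparse domain" idea concrete, here is a hedged Python sketch that learns a PCA basis from synthetic training frames and reconstructs a test frame from random measurements by $\ell_1$ minimization over the PCA coefficients; the frame model, sizes, and measurement count are illustrative assumptions, not the authors' setup:

```python
# Sketch of using a learned PCA basis as the sparse domain for CS of audio-like
# frames. Training data, frame length, and measurement count are illustrative.
import numpy as np
import cvxpy as cp

np.random.seed(0)
n, m, n_train = 128, 40, 500

# Synthetic "audio" frames: random combinations of a few slow sinusoids.
t = np.arange(n)
modes = np.vstack([np.sin(2 * np.pi * f * t / n) for f in range(1, 9)]).T   # n x 8
train = modes @ np.random.randn(8, n_train)

# PCA basis of the training frames (principal directions as columns of P).
U, s, _ = np.linalg.svd(train - train.mean(axis=1, keepdims=True), full_matrices=False)
P = U                                                    # n x n orthonormal basis

x_true = modes @ np.random.randn(8)                      # test frame, sparse in P
Phi = np.random.randn(m, n) / np.sqrt(m)                 # random measurement matrix
y = Phi @ x_true

c = cp.Variable(n)                                       # PCA-domain coefficients
prob = cp.Problem(cp.Minimize(cp.norm1(c)), [Phi @ (P @ c) == y])
prob.solve()
x_hat = P @ c.value
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```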
Finally, at Tsinghua, there was a talk last week:
Wireless Sensor Network Group
Title: Compressive Sensing in Wireless Sensor Networks, Surface Coverage in Wireless Sensor Networks
Speaker: Liwen Xu, Xiaohong Hao
Date: 10:00 -- 11:30 Oct 18, 2010
Venue: FIT 1-203
Abstract:
Compressive Sensing in Wireless Sensor Networks
Compressive Sensing (CS) is an emerging theory based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. CS is now successfully applied in the field of image and video compression, and it is also clearly attractive for Wireless Sensor Networks (WSNs). In this talk, several schemes for applying CS in WSNs will be introduced, and we will discuss the future of CS in WSNs.
Surface Coverage in Wireless Sensor Networks
Coverage is a fundamental problem in Wireless Sensor Networks (WSNs). Existing studies on this topic focus on 2D ideal plane coverage and 3D full space coverage. In many real-world applications the targeted Field of Interest is a complex 3D surface, and existing studies on coverage do not produce practical results for this setting. In this paper, we propose a new coverage model called surface coverage. In surface coverage, the targeted Field of Interest is a complex surface in 3D space and sensors can be deployed only on the surface. We show that existing 2D plane coverage is merely a special case of surface coverage. Simulations show that existing sensor deployment schemes for a 2D plane cannot be directly applied to surface coverage cases. In this paper, we target two problems under the surface coverage model. One, under stochastic deployment, how many sensors are needed to reach a certain expected coverage ratio? Two, if sensor deployment can be planned, what is the optimal deployment strategy that guarantees full coverage with the least number of sensors? We show that the latter problem is NP-complete and propose three approximation algorithms with provable approximation ratios. We also conduct comprehensive simulations to evaluate the performance of the proposed algorithms.
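To make the first question concrete in the 2D-plane special case the authors mention, here is a small Monte Carlo sketch (my own, not from the paper) that estimates the expected coverage ratio of n uniformly deployed sensors with sensing radius r on a unit square:

```python
# Monte Carlo estimate of expected coverage ratio under stochastic (uniform
# random) deployment on a unit square, the 2D special case of surface coverage.
# Sensor counts and radius are illustrative.
import numpy as np

def expected_coverage(n_sensors, radius, n_trials=200, n_points=2000, seed=0):
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(n_trials):
        sensors = rng.random((n_sensors, 2))             # random deployment on [0,1]^2
        points = rng.random((n_points, 2))               # sample points of the field
        d2 = ((points[:, None, :] - sensors[None, :, :]) ** 2).sum(axis=2)
        covered = (d2 <= radius ** 2).any(axis=1)
        ratios.append(covered.mean())
    return float(np.mean(ratios))

# Roughly how many sensors are needed for ~90% expected coverage at r = 0.1?
for n in (50, 75, 100, 125):
    print(n, round(expected_coverage(n, 0.1), 3))
```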
Image Credit: NASA/JPL/Space Science Institute
W00065872.jpg was taken on October 23, 2010 and received on Earth October 25, 2010. The camera was pointing toward SATURN at approximately 2,451,860 kilometers away, and the image was taken using the CL1 and RED filters.
2 comments:
Hi, I think the title and the link are missing from the abstract about PCA Outlier Pursuit... Thanks for today's post, it was useful for me :)
Fixed.
Thanks Giseli.