This past month, a few papers showed up; I'll probably discuss some of them in the coming weeks. But first, here are some announcements, prizes, and presentations:
- Super-Resolution Microscope Will Be Built at the University of Houston
- Keck Foundation Awards $1 Million for Super-Resolution Microscope
- Stanford mathematician Thomas Goldstein receives SIAM Richard C. DiPrima Prize
- CAC wins $1000 paper award at IEEE SAM Signal Processing Workshop
- A list of the Compressive Sensing papers at a recent SPIE meeting on CS.
- Tutorial Feature Learning Methods for Vision by Rob Fergus, Kai Yu, Marc’Aurelio Ranzato, Honglak Lee, Ruslan Salakhutdinov, Graham Taylor
- Learning with Submodular Functions, Structured sparsity through convex optimization, Supervised learning for computer vision: Kernel methods & sparse methods by Francis Bach
- Presentation Compressive Principal Component Pursuit by Yi Ma
- and a paper for which I cannot grab the text: http://scil.dinf.usherbrooke.ca/wp-content/papers/merlet-etal-ismrm12a.pdf
Then we have the papers that showed up on arXiv:
Reconstructing Structured Sparse Signals from Compressive Samples via a Max-Product EM Algorithm
(Submitted on 5 Sep 2012 (v1), last revised 7 Sep 2012 (this version, v2))
We propose a Bayesian expectation-maximization (EM) algorithm for reconstructing structured approximately sparse signals via belief propagation. The measurements follow an underdetermined linear model where the regression-coefficient vector is the sum of an unknown approximately sparse signal and a zero-mean white Gaussian noise with an unknown variance. The signal is composed of large- and small-magnitude components identified by binary state variables whose probabilistic dependence structure is described by a hidden Markov tree. Gaussian priors are assigned to the signal coefficients given their state variables and the Jeffreys' noninformative prior is assigned to the noise variance. Our signal reconstruction scheme is based on an EM iteration that aims at maximizing the posterior distribution of the signal and its state variables given the noise variance. We employ a max-product algorithm to implement the maximization (M) step of our EM iteration. The proposed EM algorithm estimates the vector of state variables and iteratively solves a linear system of equations to obtain the corresponding signal estimate. We select the noise variance so that the corresponding estimated signal and state variables (obtained upon convergence of the EM iteration) have the largest marginal posterior distribution. Our numerical examples show that the proposed algorithm achieves better reconstruction performance than state-of-the-art methods.
Sparse coding for multitask and transfer learning
(Submitted on 4 Sep 2012)
We present an extension of sparse coding to the problems of multitask and transfer learning. The central assumption of the method is that the task parameters are well approximated by sparse linear combinations of the atoms of a dictionary in a high- or infinite-dimensional space. This assumption, together with the large quantity of available data in the multitask and transfer learning settings, allows a principled choice of the dictionary. We provide bounds on the generalization error of this approach, for both settings. Preliminary experiments indicate the advantage of the sparse multitask coding method over single-task learning and a previous method based on orthogonal and dense representations of the tasks.
Fixed-rank matrix factorizations and Riemannian low-rank optimization
(Submitted on 3 Sep 2012)
Motivated by the problem of learning a linear regression model whose parameter is a large fixed-rank non-symmetric matrix, we consider the optimization of a smooth cost function defined on the set of fixed-rank matrices. We adopt the geometric optimization framework of optimization on Riemannian matrix manifolds. We study the underlying geometries of several well-known fixed-rank matrix factorizations and then exploit the Riemannian geometry of the search space in the design of a class of gradient descent and trust-region algorithms. The proposed algorithms generalize our previous results on fixed-rank symmetric positive semidefinite matrices, apply to a broad range of applications, scale to high-dimensional problems and confer a geometric basis to recent contributions on the learning of fixed-rank non-symmetric matrices. We make connections with existing algorithms in the context of low-rank matrix completion and discuss relative usefulness of the proposed framework. Numerical experiments suggest that the proposed algorithms compete with the state-of-the-art and that manifold optimization offers an effective and versatile framework for the design of machine learning algorithms that learn a fixed-rank matrix.
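To make the manifold machinery a little more concrete, here is a minimal numpy sketch of the simplest member of this family of methods, not the authors' algorithms: a Euclidean gradient step on the data term followed by a truncated-SVD retraction back to the rank-$p$ matrices, run on a toy matrix completion problem. All names and parameter choices below are mine.

```python
import numpy as np

def svd_retract(X, p):
    """Project back onto the rank-p matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :p] * s[:p]) @ Vt[:p, :]

rng = np.random.default_rng(0)
m, n, p = 60, 50, 3
M = rng.standard_normal((m, p)) @ rng.standard_normal((p, n))  # rank-p truth
mask = rng.random((m, n)) < 0.5                                # observed entries

X = svd_retract(rng.standard_normal((m, n)), p)
for _ in range(500):
    grad = mask * (X - M)               # Euclidean gradient of the data term
    X = svd_retract(X - 0.9 * grad, p)  # gradient step, then retract
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```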
Two Newton methods on the manifold of fixed-rank matrices endowed with Riemannian quotient geometries
(Submitted on 1 Sep 2012)
We consider two Riemannian geometries for the manifold $\mathcal{M}(p,m\times n)$ of all $m\times n$ matrices of rank $p$. The geometries are induced on $\mathcal{M}(p,m\times n)$ by viewing it as the base manifold of the submersion $\pi:(M,N)\mapsto MN^T$, selecting an adequate Riemannian metric on the total space, and turning $\pi$ into a Riemannian submersion. The theory of Riemannian submersions, an important tool in Riemannian geometry, makes it possible to obtain expressions for fundamental geometric objects on $\mathcal{M}(p,m\times n)$ and to formulate the Riemannian Newton methods on $\mathcal{M}(p,m\times n)$ induced by these two geometries. The Riemannian Newton methods admit a stronger and more streamlined convergence analysis than the Euclidean counterpart, and the computational overhead due to the Riemannian geometric machinery is shown to be mild. Potential applications include low-rank matrix completion and other low-rank matrix approximation problems.
I note the conclusion of the paper:
We have reached the end of a technical hike that led us to give in Theorem 6.4 what is, to the best of our knowledge, the first closed-form description of a purely Riemannian Newton method on the set of all matrices of fixed dimension and rank. By “closed-form”, we mean that, besides calling an oracle for Euclidean first and second derivatives, the method only needs to perform elementary matrix operations, solve linear systems of equations, and compute (small-size) matrix exponentials. By “purely Riemannian”, we mean that it uses the tools provided by Riemannian geometry, namely, the Riemannian connection (instead of any other affine connection) and the Riemannian exponential (instead of any other retraction). The developments strongly rely on the theory of Riemannian submersions and are based on factorizations of low-rank matrices $X$ as $MN^T$, where one of the factors is orthonormal. Relaxing the orthonormality constraint is more appealing for its symmetry (the two factors are treated alike), but it did not allow us to obtain a closed-form expression for the Riemannian exponential.
Bayesian compressed sensing with new sparsity-inducing prior
(Submitted on 31 Aug 2012)
Sparse Bayesian learning (SBL) is a popular approach to sparse signal recovery in compressed sensing (CS). In SBL, the signal sparsity information is exploited by assuming a sparsity-inducing prior for the signal that is then estimated using Bayesian inference. In this paper, a new sparsity-inducing prior is introduced and efficient algorithms are developed for signal recovery. The main algorithm is shown to produce a sparser solution than existing SBL methods while preserving their desirable properties. Numerical simulations with one-dimensional synthetic signals and two-dimensional images verify our analysis and show that for sparse signals the proposed algorithm outperforms its SBL peers in both the signal recovery accuracy and computational speed. Its improved performance is also demonstrated in comparison with other state-of-the-art methods in CS.
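The paper's new prior is in the text itself, so as a point of reference here is a small sketch of the classical SBL/ARD iteration that this line of work builds on: per-coefficient prior variances are re-estimated from the Gaussian posterior until most of them shrink to zero. The toy problem and all parameters are my own.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 100, 5, 40
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x + 0.01 * rng.standard_normal(m)

gamma, sigma2 = np.ones(n), 1e-4        # per-coefficient prior variances
for _ in range(100):
    G = A * gamma                       # A @ diag(gamma), shape (m, n)
    Sy_inv = np.linalg.inv(sigma2 * np.eye(m) + G @ A.T)
    w = gamma * (A.T @ (Sy_inv @ y))    # posterior mean of the coefficients
    post_var = gamma - np.sum((G.T @ Sy_inv) * G.T, axis=1)
    gamma = np.maximum(w**2 + post_var, 1e-12)   # EM update; most gammas -> 0
print("estimated support:", np.nonzero(gamma > 1e-3 * gamma.max())[0])
print("true support:     ", np.sort(np.nonzero(x)[0]))
```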
The Circulant Rational Covariance Extension Problem: The Complete Solution
(Submitted on 30 Aug 2012)
The rational covariance extension problem to determine a rational spectral density given a finite number of covariance lags can be seen as a matrix completion problem to construct an infinite-dimensional positive-definite Toeplitz matrix the north-west corner of which is given. The circulant rational covariance extension problem considered in this paper is a modification of this problem to partial stochastic realization of reciprocal and periodic stationary processes, which are better represented on the discrete unit circle $\mathbb{Z}_{2N}$ rather than on the discrete real line $\mathbb{Z}$. The corresponding matrix completion problem then amounts to completing a finite-dimensional Toeplitz matrix that is circulant. Another important motivation for this problem is that it provides a natural approximation, involving only computations based on the fast Fourier transform, for the ordinary rational covariance extension problem, potentially leading to an efficient numerical procedure for the latter. The circulant rational covariance extension problem is an inverse problem with infinitely many solutions in general, each corresponding to a bilateral ARMA representation of the underlying periodic (reciprocal) process. In this paper we present a complete smooth parameterization of all solutions and convex optimization procedures for determining them. A procedure to determine which solution best matches additional data in the form of logarithmic moments is also presented.
One-bit compressed sensing with non-Gaussian measurements
(Submitted on 30 Aug 2012)
In one-bit compressed sensing, previous results state that sparse signals may be robustly recovered when the measurements are taken using Gaussian random vectors. In contrast to standard compressed sensing, these results are not extendable to natural non-Gaussian distributions without further assumptions, as can be demonstrated by simple counter-examples. We show that approximately sparse signals, which also satisfy a mild infinity-norm constraint, can be accurately reconstructed from single-bit measurements sampled according to a sub-gaussian distribution, and the reconstruction comes as the solution to a convex program.
From the conclusion:
In contrast to standard compressed sensing, one-bit compressed sensing is infeasible when the measurement vectors are Bernoulli and the signal is extremely sparse. Nevertheless, we show that when the signal is sparse, but not overly sparse, it may be recovered from Bernoulli (or, more generally, sub-gaussian) one-bit measurements. To our knowledge, these are the first theoretical results in one-bit compressed sensing that specifically allow non-Gaussian measurements.
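As a toy illustration of the setting, and only a stand-in for the paper's convex program, here is a thresholded-correlation recovery from one-bit Bernoulli measurements. Note the planted signal is sparse but not 1-sparse, in line with the caveat above; everything here is a hedged sketch with my own parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 200, 4, 600
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)                    # one-bit measurements lose the scale
A = rng.choice([-1.0, 1.0], size=(m, n))  # Bernoulli (sub-gaussian) matrix
y = np.sign(A @ x)                        # single-bit measurements

z = A.T @ y / m                           # correlation ("linear") estimate
thresh = np.sort(np.abs(z))[-k]           # keep the k largest entries
xhat = np.where(np.abs(z) >= thresh, z, 0.0)
xhat /= np.linalg.norm(xhat)
print("cosine similarity with truth:", float(xhat @ x))
```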
Average Case Recovery Analysis of Tomographic Compressive Sensing
(Submitted on 29 Aug 2012 (v1), last revised 30 Aug 2012 (this version, v2))
The reconstruction of three-dimensional sparse volume functions from few tomographic projections constitutes a challenging problem in image reconstruction and turns out to be a particular instance of a compressive sensing problem. The tomographic measurement matrix encodes the incidence relation of the imaging process, and therefore is not subject to design up to small perturbations of non-zero entries. We present an average case analysis of the recovery properties and a corresponding tail bound to establish weak thresholds, in excellent agreement with numerical experiments. Our results improve the state-of-the-art of tomographic imaging in experimental fluid dynamics by a factor of three.
Matrix-free Interior Point Method for Compressed Sensing Problems
(Submitted on 27 Aug 2012)
We consider a class of optimization problems for sparse signal reconstruction which arise in the field of Compressed Sensing (CS). A plethora of approaches and solvers exist for such problems, for example GPSR, FPC AS, SPGL1, NESTA, $\ell_1\_\ell_s$, PDCO, to mention a few. Compressed Sensing applications lead to very well conditioned optimization problems, which can therefore be solved easily by simple first-order methods. Interior point methods (IPMs) rely on the Newton method and hence use second-order information. They have numerous advantageous features and one clear drawback: being second-order approaches, they need to solve linear systems of equations, and this operation has (in the general dense case) an $O(n^3)$ computational complexity. Attempts have been made to specialize IPMs to sparse reconstruction problems, and they have led to interesting developments implemented in the $\ell_1\_\ell_s$ and PDCO software packages. We go a few steps further. First, we use the matrix-free interior point method, an approach which redesigns the IPM to avoid the need to explicitly formulate (and store) the Newton equation systems. Secondly, we exploit the special features of the signal processing matrices within the matrix-free IPM. Two such features are of particular interest: an excellent conditioning of these matrices and the ability to perform inexpensive (low complexity) matrix-vector multiplications with them. Computational experience with large scale one-dimensional signals confirms that the new approach is efficient and compares favorably with other state-of-the-art solvers.
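The matrix-free idea is easy to demo with scipy's LinearOperator. Below is a sketch, not the authors' IPM: the sensing operator is a subsampled DCT applied implicitly, and a Newton-like system $(A^TA + dI)z = r$ is solved by conjugate gradients without ever forming a matrix. All names and the diagonal term are my own.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
n, m = 4096, 1024
rows = rng.choice(n, m, replace=False)   # which DCT coefficients are observed

def A(x):                                # implicit A @ x: subsampled DCT
    return dct(x, norm='ortho')[rows]

def AT(y):                               # implicit A.T @ y: zero-fill + inverse
    z = np.zeros(n)
    z[rows] = y
    return idct(z, norm='ortho')

d = 1e-2                                 # IPM-style diagonal regularization
H = LinearOperator((n, n), matvec=lambda z: AT(A(z)) + d * z)
rhs = AT(rng.standard_normal(m))
z, info = cg(H, rhs)                     # Newton-like system, no matrix formed
print("CG converged:", info == 0)
```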
Support Recovery with Sparsely Sampled Free Random Matrices
(Submitted on 27 Aug 2012)
Consider a Bernoulli-Gaussian complex $n$-vector whose components are $V_i = X_i B_i$, with $X_i \sim \mathcal{CN}(0,\mathcal{P}_x)$ and binary $B_i$ mutually independent and iid across $i$. This random $q$-sparse vector is multiplied by a square random matrix $\mathbf{U}$, and a randomly chosen subset, of average size $np$, $p \in [0,1]$, of the resulting vector components is then observed in additive Gaussian noise. We extend the scope of conventional noisy compressive sampling models where $\mathbf{U}$ is typically the identity or a matrix with iid components, to allow $\mathbf{U}$ satisfying a certain freeness condition. This class of matrices encompasses Haar matrices and other unitarily invariant matrices. We use the replica method and the decoupling principle of Guo and Verdú, as well as a number of information theoretic bounds, to study the input-output mutual information and the support recovery error rate in the limit of $n \to \infty$. We also extend the scope of the large deviation approach of Rangan, Fletcher and Goyal and characterize the performance of a class of estimators encompassing thresholded linear MMSE and $\ell_1$ relaxation.
Quantum state tomography by compressed sensing: Fast and robust decoding of a quantum fingerprint
(Submitted on 24 Aug 2012)
Recovering a full description of a complex system or process from limited information is a central problem in science and engineering. To address this, a set of techniques known as "compressed sensing" have been widely used in, for example, image compression, movie recommendation, location estimation, and earthquake analysis. In physics, one often seeks an estimate of an unknown quantum state based on a sparse set of measurements. Here we demonstrate a compressed sensing algorithm that reconstructs quantum states from continuous-time measurements performed on an ensemble of atomic spins, and compare its performance to a conventional least-squares estimator. Both approaches yield high fidelity estimates of nearly pure states from similar amounts of incomplete and noisy data, but we find compressed sensing to be significantly more robust against systematic errors. Our findings illustrate the tradeoffs inherent in quantum tomography and point the way to fast and robust state estimation in large dimensional Hilbert spaces.
Compressive Source Separation: Theory and Methods for Hyperspectral Imaging
(Submitted on 22 Aug 2012)
With the development of numerous high-resolution data acquisition systems and the global requirement to lower energy consumption, the development of efficient sensing techniques becomes critical. Recently, Compressed Sampling (CS) techniques, which exploit the sparsity of signals, have made it possible to reconstruct signals and images with fewer measurements than the traditional Nyquist sensing approach requires. However, multichannel signals like hyperspectral images (HSI) have additional structure, like inter-channel correlations, that is not taken into account in the classical CS scheme. In this paper we exploit the linear mixture of sources model, that is, the assumption that the multichannel signal is composed of a linear combination of sources, each of them having its own spectral signature, and propose new sampling schemes exploiting this model to considerably decrease the number of measurements needed for acquisition and source separation. Moreover, we give theoretical lower bounds on the number of measurements required to perform reconstruction of both the multichannel signal and its sources. We also propose optimization algorithms and report extensive experimentation on our target application, which is HSI, and show that our approach recovers HSI with far fewer measurements and less computational effort than traditional CS approaches.
Improved Total Variation based Image Compressive Sensing Recovery by Nonlocal Regularization
(Submitted on 18 Aug 2012)
Recently, total variation (TV) based minimization algorithms have achieved great success in compressive sensing (CS) recovery for natural images due to their virtue of preserving edges. However, TV is not able to recover fine details and textures, and often suffers from undesirable staircase artifacts. To reduce these effects, this letter presents an improved TV-based image CS recovery algorithm that introduces a new nonlocal regularization constraint into the CS optimization problem. The nonlocal regularization is built on the well-known nonlocal means (NLM) filtering and takes advantage of self-similarity in images, which helps to suppress the staircase effect and restore the fine details. Furthermore, an efficient augmented Lagrangian based algorithm is developed to solve the resulting combined TV and nonlocal regularization constrained problem. Experimental results demonstrate that the proposed algorithm achieves significant performance improvements over the state-of-the-art TV-based algorithm in both PSNR and visual perception.
A Sub-Nyquist Radar Prototype: Hardware and Algorithms
(Submitted on 13 Aug 2012)
Traditional radar sensing typically involves matched filtering between the received signal and the shape of the transmitted pulse. Under the classical sampling theorem, this requires that the received signal first be sampled at twice the baseband bandwidth in order to avoid aliasing. The growing demands for target distinction capability and spatial resolution imply significant growth in the bandwidth of the transmitted pulse. Thus, correlation based radar systems require high sampling rates, and with the large amounts of data sampled also necessitate vast memory capacity. In addition, real-time processing of the data typically results in high power consumption. Recently, new approaches for radar sensing and detection were introduced, based on the Finite Rate of Innovation and Xampling frameworks. These techniques allow significant reduction in sampling rate, implying potential power savings, while maintaining the system's detection capabilities at high enough SNR. Here we present for the first time a design and implementation of a Xampling-based hardware prototype that allows sampling of radar signals at rates much lower than Nyquist. We demonstrate through real-time analog experiments that our system is able to maintain reasonable detection capabilities while sampling radar signals that would require a sampling rate of about 30 MHz at a total rate of 1 MHz.
Sparsity Averaging for Compressive Imaging
(Submitted on 11 Aug 2012)
We propose a novel regularization method for sparse image reconstruction from compressive measurements. The approach relies on the conjecture that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We test our prior and the associated algorithm through extensive numerical simulations for spread spectrum and Gaussian acquisition schemes suggested by the recent theory of compressed sensing with coherent and redundant dictionaries. Our results show that average sparsity outperforms state-of-the-art priors that promote sparsity in a single orthonormal basis or redundant frame, or that promote gradient sparsity. We also illustrate the performance of SARA in the context of Fourier imaging, for particular applications in astronomy and medicine.
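SARA's analysis prior and multi-frame averaging aside, the reweighting engine itself is simple to sketch: an outer loop that shrinks the $\ell_1$ weights where the previous solution is large, with a weighted ISTA inner solver. A minimal synthesis-form version with my own parameter choices, not the authors' code:

```python
import numpy as np

def ista_weighted(A, y, w, lam=0.05, iters=300):
    """min_x 0.5*||Ax - y||^2 + lam * sum_i w_i |x_i| via ISTA."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
    return x

rng = np.random.default_rng(4)
n, k, m = 120, 6, 50
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0

x, delta = np.zeros(n), 0.1
for _ in range(5):                       # outer reweighting loop
    w = delta / (delta + np.abs(x))      # lighter penalty on confident entries
    x = ista_weighted(A, y, w)
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```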
Compressed Sensing based Protocol for Efficient Reconstruction of Sparse Superimposed Data in a Multi-Hop Wireless Sensor Network
(Submitted on 7 Aug 2012)
We consider a multi-hop wireless sensor network that measures sparse events and propose a simple forwarding protocol based on Compressed Sensing (CS) which needs neither sophisticated Media Access Control (MAC) scheduling nor a routing protocol, thereby making significant overhead and energy savings. By means of flooding, multiple packets with different superimposed measurements are received simultaneously at any node. Thanks to our protocol, each node is able to recover each measurement and forward it while avoiding cycles. Numerical results show that our protocol achieves close to zero reconstruction errors at the sink, while greatly reducing overhead. This initial research reveals a new and promising approach to protocol design through CS for wireless mesh and sensor networks.
A weighted belief-propagation algorithm to estimate volume-related properties of random polytopes
(Submitted on 6 Aug 2012)
In this work we introduce a novel weighted message-passing algorithm based on the cavity method to estimate volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. Unlike the usual approach consisting in approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm to faithfully represent the entire marginal distribution. We explain various alternatives to implement the algorithm and benchmark the theoretical findings by showing concrete applications to random polytopes. The results obtained with our approach are found to be in very good agreement with the estimates produced by the Hit-and-Run algorithm, known to produce uniform sampling.
Compressive sensing as a new paradigm for model building
(Submitted on 31 Jul 2012)
The widely-accepted intuition that the important properties of solids are determined by a few key variables underpins many methods in physics. Though this reductionist paradigm is applicable in many physical problems, its utility can be limited because the intuition for identifying the key variables often does not exist or is difficult to develop. Machine learning algorithms (genetic programming, neural networks, Bayesian methods, etc.) attempt to eliminate the a priori need for such intuition but often do so with increased computational burden and human time. A recently-developed technique in the field of signal processing, compressive sensing (CS), provides a simple, general, and efficient way of finding the key descriptive variables. CS is a new paradigm for model building: we show that its models are just as robust as those built by current state-of-the-art approaches, but can be constructed at a fraction of the computational cost and user effort.
Universality in Polytope Phase Transitions and Message Passing Algorithms
(Submitted on 31 Jul 2012)
We consider a class of nonlinear mappings $F$ in $\mathbb{R}^N$ indexed by symmetric random matrices $A \in \mathbb{R}^{N\times N}$ with independent entries. Within spin glass theory, special cases of these mappings correspond to iterating the TAP equations and were studied by Erwin Bolthausen. Within information theory, they are known as 'approximate message passing' algorithms. We study the high-dimensional (large $N$) behavior of the iterates of $F$ for polynomial functions $F$, and prove that it is universal, i.e. it depends only on the first two moments of the entries of $A$, under a subgaussian tail condition. As an application, we prove the universality of a certain phase transition arising in polytope geometry and compressed sensing. This solves, for a broad class of random projections, a conjecture by David Donoho and Jared Tanner.
In this work we propose a unique sampling scheme of Radon projections and a non-linear reconstruction algorithm based on compressive sensing (CS) theory to implement a progressive compressive sampling imaging system. The progressive sampling scheme offers online control of the tradeoff between compression and the quality of reconstruction. It avoids the need for a priori knowledge of the object sparsity that is usually required for CS design. In addition, the progressive data acquisition enables straightforward application of ordered-subsets algorithms which overcome computational constraints associated with the reconstruction of very large images. We present, to the best of our knowledge for the first time, a compressive imaging implementation of megapixel size images with a compression ratio of 20:1.
Yi Liu, Lei Tian, Justin W. Lee, Howard Y. H. Huang, Michael S. Triantafyllou, and George Barbastathis
We propose quantitative localization measurement of a known object with subpixel accuracy using compressive holography. We analyze the theoretical optimal solution in the compressive sampling framework and experimentally demonstrate localization accuracy of 1/45 pixel, in good agreement with the analysis.
Victor Bittorf, Benjamin Recht, Christopher Ré, and Joel A. Tropp
This paper describes a new approach for computing nonnegative matrix factorizations (NMFs) with linear programming. The key idea is a data-driven model for the factorization, in which the most salient features in the data are used to express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C that satisfies $X \approx CX$ and some linear constraints. The matrix C selects features, which are then used to compute a low-rank NMF of X. A theoretical analysis demonstrates that this approach has the same type of guarantees as the recent NMF algorithm of Arora et al. (2012). In contrast with this earlier work, the proposed method (1) has better noise tolerance, (2) extends to more general noise models, and (3) leads to efficient, scalable algorithms. Experiments with synthetic and real datasets provide evidence that the new approach is also superior in practice. An optimized C++ implementation of the new algorithm can factor a multi-Gigabyte matrix in a matter of minutes.
This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via $\ell_1$-penalized least-squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk. This paper will appear as a chapter in a book on 'Compressed Sensing' edited by Yonina Eldar and Gitta Kutyniok.
Xiao Liang, Xiang Ren, Zhengdong Zhang, and Yi Ma
In this paper, we show how to harness both low-rank and sparse structures in regular or near-regular textures for image completion. Our method leverages the new convex optimization for low-rank and sparse signal recovery and can automatically and correctly repair the global structure of a corrupted texture, even without precise information about the regions to be completed. Through extensive simulations, we show our method can complete and repair textures corrupted by errors with both random and contiguous supports better than existing low-rank matrix recovery methods. Through experimental comparisons with existing image completion systems (such as Photoshop), our method demonstrates a significant advantage over local patch based texture synthesis techniques in dealing with large corruption, non-uniform texture, and large perspective deformation.
Zafer Dogan, Ivana Jovanovic, Thierry Blu, and Dimitri Van De Ville
Reconstruction of point sources from boundary measurements is a challenging problem in many applications. Recently, we proposed a new sensing and non-iterative reconstruction scheme for systems governed by the three-dimensional wave equation. The point sources are described by their magnitudes and positions. The core of the method relies on the principles of finite-rate-of-innovation, and allows retrieving the parameters in the continuous domain without discretization. Here we extend the method to the case where the source configuration shows joint sparsity across different temporal frequencies; i.e., the sources have the same positions for different frequencies, though not necessarily the same magnitudes. We demonstrate that joint sparsity improves the robustness of the estimation results. In addition, we propose a modified multi-source version of Dijkstra’s algorithm to recover the Z parameters. We illustrate the feasibility of our method to reconstruct multiple sources in a 3-D spherical geometry.
Ulugbek S. Kamilov, A. Bourquard, A. Amini, M. Unser
We introduce a new method for adaptive one-bit quantization of linear measurements and propose an algorithm for the recovery of signals based on generalized approximate message passing (GAMP). Our method exploits the prior statistical information on the signal for estimating the minimum-mean-squared error solution from one-bit measurements. Our approach allows the one-bit quantizer to use thresholds on the real line. Given the previous measurements, each new threshold is selected so as to partition the consistent region along its centroid computed by GAMP. We demonstrate that the proposed adaptive-quantization scheme with GAMP reconstruction greatly improves the performance of signal and image recovery from one-bit measurements.
Ulugbek S. Kamilov, Vivek K Goyal, and Sundeep Rangan.
Estimation of a vector from quantized linear measurements is a common problem for which simple linear techniques are suboptimal—sometimes greatly so. This paper develops message-passing de-quantization (MPDQ) algorithms for minimum mean-squared error estimation of a random vector from quantized linear measurements, notably allowing the linear expansion to be overcomplete or undercomplete and the scalar quantization to be regular or non-regular. The algorithm is based on generalized approximate message passing (GAMP), a recently-developed Gaussian approximation of loopy belief propagation for estimation with linear transforms and nonlinear componentwise-separable output channels. For MPDQ, scalar quantization of measurements is incorporated into the output channel formalism, leading to the first tractable and effective method for high-dimensional estimation problems involving nonregular scalar quantization. The algorithm is computationally simple and can incorporate arbitrary separable priors on the input vector including sparsity-inducing priors that arise in the context of compressed sensing. Moreover, under the assumption of a Gaussian measurement matrix with i.i.d. entries, the asymptotic error performance of MPDQ can be accurately predicted and tracked through a simple set of scalar state evolution equations. We additionally use state evolution to design MSE-optimal scalar quantizers for MPDQ signal reconstruction and empirically demonstrate the superior error performance of the resulting quantizers. In particular, our results show that non-regular quantization can greatly improve rate–distortion performance in some problems with oversampling or with undersampling combined with a sparsity-inducing prior.
Purpose: To assess the potential of compressed-sensing parallel-imaging four-dimensional (4D) phase-contrast magnetic resonance (MR) imaging and specialized imaging software in the evaluation of valvular insufficiency and intracardiac shunts in patients with congenital heart disease.
Materials and Methods: Institutional review board approval was obtained for this HIPAA-compliant study. Thirty-four consecutive retrospectively identified patients in whom a compressed-sensing parallel-imaging 4D phase-contrast sequence was performed as part of routine clinical cardiac MR imaging between March 2010 and August 2011 and who had undergone echocardiography were included. Multiplanar, volume-rendered, and stereoscopic three-dimensional velocity-fusion visualization algorithms were developed and implemented in Java and OpenGL. Two radiologists independently reviewed 4D phase-contrast studies for each of 34 patients (mean age, 6 years; age range, 10 months to 21 years) and tabulated visible shunts and valvular regurgitation. These results were compared with color Doppler echocardiographic and cardiac MR imaging reports, which were generated without 4D phase-contrast visualization. Cohen κ statistics were computed to assess interobserver agreement and agreement with echocardiographic results.
Results: The 4D phase-contrast acquisitions were performed, on average, in less than 10 minutes. Among 123 valves seen in 34 4D phase-contrast studies, 29 regurgitant valves were identified, with good agreement between observers (κ = 0.85). There was also good agreement with the presence of at least mild regurgitation at echocardiography (observer 1, κ = 0.76; observer 2, κ = 0.77) with high sensitivity (observer 1, 75%; observer 2, 82%) and specificity (observer 1, 97%; observer 2, 95%) relative to the reference standard. Eight intracardiac shunts were identified, four of which were not visible with conventional cardiac MR imaging but were detected with echocardiography. No intracardiac shunts were found with echocardiography alone.
Conclusion: With velocity-fusion visualization, the compressed-sensing parallel-imaging 4D phase-contrast sequence can augment conventional cardiac MR imaging by improving sensitivity for and depiction of hemodynamically significant shunts and valvular regurgitation.
Peng Zhang, Robert Qiu,
Using a signal's feature as prior knowledge can improve spectrum sensing performance. In this paper, we consider the signal feature to be the leading eigenvector (rank-1 information) extracted from the received signal's sample covariance matrix. Via real-world data and hardware experiments, we demonstrate that such a feature can be learned blindly and that it can be used to improve spectrum sensing performance. We derive several generalized likelihood ratio test (GLRT) based algorithms that use the signal feature as prior knowledge under a rank-1 assumption. The performances of the new algorithms are compared with other state-of-the-art covariance matrix based spectrum sensing algorithms via Monte Carlo simulations. Both synthesized rank-1 signals and real-world digital TV (DTV) data are used in the simulations. In general, our GLRT-based algorithms have better detection performance, and the algorithms using the signal feature as prior knowledge perform better than the algorithms without any prior knowledge.
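A hedged sketch of the flavor of such detectors, not the paper's exact GLRTs: learn the leading eigenvector of a training sample covariance as the feature, then compare the energy along that feature with the average energy per dimension. The scenario and thresholds are my own.

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 8, 500                            # dimension and number of samples
h = rng.standard_normal(n)
h /= np.linalg.norm(h)                   # unknown rank-1 signal signature

def samples(snr):                        # signal-plus-noise sample block
    s = np.sqrt(snr) * np.outer(h, rng.standard_normal(N))
    return s + rng.standard_normal((n, N))

feat = np.linalg.eigh(np.cov(samples(1.0)))[1][:, -1]   # learned feature

def statistic(X):                        # feature energy vs average energy
    R = np.cov(X)
    return float(feat @ R @ feat / (np.trace(R) / n))

print("signal present:", statistic(samples(0.5)))
print("noise only:    ", statistic(rng.standard_normal((n, N))))
```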
Aurele Balavoine, Justin Romberg, and Christopher J. Rozell,
We present an analysis of the Locally Competitive Algorithm (LCA), a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few non-zero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the LCA lacks analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.
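The LCA dynamics being analyzed are compact enough to simulate directly. Here is a forward-Euler toy version, with my own step sizes and threshold, that typically settles on the true support:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, k = 50, 20, 3
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)       # unit-norm dictionary atoms
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x0

lam, tau, dt = 0.05, 1.0, 0.05
b, G = Phi.T @ y, Phi.T @ Phi - np.eye(n)   # drive term, lateral inhibition
u = np.zeros(n)
for _ in range(4000):                        # forward-Euler integration
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
    u += (dt / tau) * (b - u - G @ a)
print("active nodes:", np.nonzero(a)[0])
print("true support:", np.sort(np.nonzero(x0)[0]))
```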
Samuel Shapero, Adam Charles, Christopher J. Rozell, and Paul Hasler
Compressed sensing is an important application in signal and image processing which requires solving non-linear optimization problems. A Hopfield-Network-like analog system is proposed as a solution, using the Locally Competitive Algorithm (LCA) to solve an overcomplete $\ell_1$ sparse approximation problem. A scalable system architecture using sub-threshold currents is described, including vector matrix multipliers (VMMs) and a nonlinear thresholder. A 4x6 nonlinear system is implemented on the RASP 2.9v chip, a Field Programmable Analog Array with directly programmable floating gate elements, allowing highly accurate VMMs. The circuit successfully reproduced the outputs of a digital optimization program, converging to within 4.8% RMS, and an objective value only 1.3% higher on average. The active circuit consumed 29 µA of current at 2.4 V, and converges on solutions in 240 µs. A smaller 2x3 system is also implemented. Extrapolating the scaling trends to an N=1000 node system, the analog LCA compares favorably with state-of-the-art digital solutions, using a small fraction of the power to arrive at solutions ten times faster. Finally, we provide simulations of large scale systems to show the behavior of the system scaled to non-trivial problem sizes.
We adapt a well known streaming algorithm for approximating item frequencies to the matrix sketching setting. The algorithm receives the rows of a large matrix $A \in \mathbb{R}^{n\times m}$ one after the other in a streaming fashion. For $\ell = \lceil 1/\varepsilon \rceil$ it maintains a sketch matrix $B \in \mathbb{R}^{\ell\times m}$ such that for any unit vector $x$: $\|Ax\|^2 \ge \|Bx\|^2 \ge \|Ax\|^2 - \varepsilon\|A\|_f^2$. Sketch updates per row in $A$ require amortized $O(m\ell)$ operations. This gives the first algorithm whose error guaranty decreases proportional to $1/\ell$ using $O(m\ell)$ space. Prior art algorithms produce bounds proportional to $1/\sqrt{\ell}$. Our experiments corroborate that the faster convergence rate is observed in practice. The presented algorithm also stands out in that it is: deterministic, simple to implement, and elementary to prove. Regardless of streaming aspects, the algorithm can be used to compute a $1+\varepsilon'$ approximation to the best rank-$k$ approximation of any matrix $A \in \mathbb{R}^{n\times m}$. This requires $O(mn\ell')$ operations and $O(m\ell')$ space, where $\ell' = 2\sum_i \sigma_i^2 / (\varepsilon' \sigma_{k+1}^2)$ and the $\sigma_i$ are the singular values of $A$ in descending magnitude order. In many practical applications, e.g. PCA, $\ell'$ is assumed to be constant.
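The algorithm is short enough to state in full. Here is a direct, if naive (one SVD per row), implementation of the shrinking-SVD update, with a doubled buffer so half the rows stay free; the test at the end checks the guarantee quoted above:

```python
import numpy as np

def frequent_directions(A, ell):
    """Stream the rows of A; return an (ell x m) sketch B."""
    n, m = A.shape
    B = np.zeros((2 * ell, m))               # doubled buffer
    for a in A:
        B[-1] = a                            # last row is always free
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        s2 = np.maximum(s**2 - s[ell - 1]**2, 0.0)   # shrink by ell-th value
        B = np.sqrt(s2)[:, None] * Vt        # at least ell rows become zero
    return B[:ell]

rng = np.random.default_rng(7)
A = rng.standard_normal((2000, 30)) @ rng.standard_normal((30, 30))
ell = 10
B = frequent_directions(A, ell)
x = rng.standard_normal(30)
x /= np.linalg.norm(x)
err = np.linalg.norm(A @ x)**2 - np.linalg.norm(B @ x)**2
print("error:", err, " bound ||A||_f^2/ell:", np.linalg.norm(A, 'fro')**2 / ell)
```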
When 'exact recovery' is exact recovery in compressed sensing simulation,
Bob L. Sturm
In a simulation of compressed sensing (CS), one must test whether the recovered solution \(\hat{\mathbf{x}}\) is the true solution \(\mathbf{x}\), i.e., ``exact recovery.'' Most CS simulations employ one of two criteria: 1) the recovered support is the true support; or 2) the normalized squared error is less than \(\epsilon^2\). We analyze these exact recovery criteria independent of any recovery algorithm, but with respect to signal distributions that are often used in CS simulations. That is, given a pair \((\hat{\mathbf{x}},\mathbf{x})\), when does ``exact recovery'' occur with respect to only one or both of these criteria for a given distribution of \(\mathbf{x}\)? We show that, in a best case scenario, \(\epsilon^2\) sets a maximum allowed missed detection rate in a majority sense.
The slides are here.
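The two criteria are one-liners in code, and it is easy to build a pair on which they disagree; a minimal sketch, with an arbitrary \(\epsilon^2\) of my choosing:

```python
import numpy as np

def support_recovered(xhat, x):
    return np.array_equal(np.nonzero(xhat)[0], np.nonzero(x)[0])

def nmse_recovered(xhat, x, eps2=1e-4):
    return np.linalg.norm(xhat - x)**2 <= eps2 * np.linalg.norm(x)**2

x    = np.array([0.0, 1.0, 0.0, 1e-3])   # one large and one tiny component
xhat = np.array([0.0, 1.0, 0.0, 0.0])    # misses the tiny component
print(support_recovered(xhat, x))         # False: a missed detection
print(nmse_recovered(xhat, x))            # True: the energy error is tiny
```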
Comparison of Orthogonal Matching Pursuit Implementations,
Bob L. Sturm and M. G. Christensen
We study the numerical and computational performance of three implementations of orthogonal matching pursuit: one using the QR matrix decomposition, one using the Cholesky matrix decomposition, and one using the matrix inversion lemma. We find that none of these implementations suffer from numerical error accumulation in the inner products or the solution. Furthermore, we empirically compare the computational times of each algorithm over the phase plane.
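For reference, here is the naive baseline these implementations improve upon: an OMP that re-solves the least-squares problem from scratch at every step. The QR, Cholesky, and matrix-inversion-lemma variants studied in the paper update that solve incrementally instead of recomputing it. A sketch with my own toy problem:

```python
import numpy as np

def omp(A, y, k):
    """Naive OMP: re-solve the least-squares problem at every step."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                      # orthogonal residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(8)
n, k, m = 256, 8, 80
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)
xr = omp(A, A @ x0, k)
print("support recovered:", np.array_equal(np.nonzero(xr)[0], np.nonzero(x0)[0]))
```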
P. Noorzad and Bob L. Sturm
We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \(k\)-nearest neighbors regression (\(k\)-NNR), and more generally, local polynomial kernel regression. Unlike \(k\)-NNR, however, SPARROW can adapt the number of regressors to use based on the sparse approximation process. Our experimental results show the locally constant form of SPARROW performs competitively.
The attendant poster is here.
Kui Jia, Tsung-Han Chan, and Yi Ma
Sparse representation based classification (SRC) methods have recently drawn much attention in face recognition, due to their good performance and robustness against misalignment, illumination variation, and occlusion. They assume the errors caused by image variations can be modeled as pixel-wise sparse. However, in many practical scenarios these errors are not truly pixel-wise sparse but rather sparsely distributed with structures, i.e., they constitute contiguous regions distributed at different face positions. In this paper, we introduce a class of structured sparsity-inducing norms into the SRC framework, to model various corruptions in face images caused by misalignment, shadow (due to illumination change), and occlusion. For practical face recognition, we develop an automatic face alignment method based on minimizing the structured sparsity norm. Experiments on benchmark face datasets show improved performance over SRC and other alternative methods.
Tsung-Han Chan, Kui Jia, Eliot Wycoff, Chong-Yung Chi, and Yi Ma
Multiplexed illumination has proved to be valuable and beneficial, in terms of noise reduction, in wide applications of computer vision and graphics, provided that the limitations of photon noise and saturation are properly tackled. Existing optimal multiplexing codes, in the sense of maximum signal-to-noise ratio (SNR), are primarily designed for time multiplexing, but they only apply to a multiplexing system requiring the number of measurements (M) to equal the number of illumination sources (N). In this paper, we formulate a general code design problem, where M ≥ N, for time and color multiplexing, and develop a sequential semi-definite programming approach to deal with the formulated optimization problem. The proposed formulation and method can be readily specialized to time multiplexing, thereby giving such optimized codes a much broader application. Computer simulations reveal the main merit of the method: a significant boost of SNR as M increases. Experiments are also presented to demonstrate the effectiveness and superiority of the method in object illumination.
Stephen J. Tarsa and H.T. Kung
The process of detecting logical faults in integrated circuits (ICs) due to manufacturing variations is bottlenecked by the I/O cost of scanning in test vectors and offloading test results. Traditionally, the output bottleneck is alleviated by reducing the number of bits in output responses using XOR networks, or computing signatures from the responses of multiple tests. However, these many-to-one computations reduce test time at the cost of higher detection failure rates, and lower test granularity. In this paper, we propose an output compression approach that uses compressive sensing to exploit the redundancy of correlated outputs from closely related tests, and of correlated faulty responses across many circuits. Compressive sensing’s simple encoding method makes our approach attractive because it can be implemented on-chip using only a small number of accumulators. Through simulation, we show that our method can reduce the output I/O bottleneck without increasing failure rates, and can reconstruct higher granularity results off-chip than current compaction approaches.
Youngjune Gwon, H.T. Kung, and Dario Vlah
We describe a method of integrating the Karhunen-Loève Transform (KLT) into compressive sensing, which can as a result improve the compression ratio without affecting the accuracy of decoding. We present two complementary results: 1) by using KLT to find an optimal basis for decoding, we can drastically reduce the number of measurements for compressive sensing used in applications such as radio spectrum analysis; 2) by using compressive sensing, we can estimate and recover the KLT basis from compressive measurements of an input signal. In particular, we propose CS-KLT, an online estimation algorithm to cope with nonstationarity of wireless channels in reality. We validate our results with empirical data collected from a wideband UHF spectrum and field experiments to detect multiple radio transmitters, using software-defined radios.
Parikshit Shah, Badri Narayan Bhaskar, Gongguo Tang and Benjamin Recht
This paper proposes a new algorithm for linear system identification from noisy measurements. The proposed algorithm balances a data fidelity term with a norm induced by the set of single pole filters. We pose a convex optimization problem that approximately solves the atomic norm minimization problem and identifies the unknown system from noisy linear measurements. This problem can be solved efficiently with standard, freely available software. We provide rigorous statistical guarantees that explicitly bound the estimation error (in the H2-norm) in terms of the stability radius, the Hankel singular values of the true system and the number of measurements. These results in turn yield complexity bounds and asymptotic consistency. We provide numerical experiments demonstrating the efficacy of our method for estimating linear systems from a variety of linear measurements.
Albert Fannjiang, Wenjing Liao
Martin Azizyan and Aarti Singh
We consider the problem of detecting whether a high-dimensional vector $x \in \mathbb{R}^n$ lies in an $r$-dimensional subspace $S$, where $r \ll n$, given few compressive measurements of the vector. This problem arises in several applications such as detecting anomalies, targets, interference and brain activations. In these applications, the object of interest is described by a large number of features and the ability to detect it using only linear combinations of the features (without the need to measure, store or compute the entire feature vector) is desirable. We present a test statistic for subspace detection using compressive samples and demonstrate that the probability of error of the proposed detector decreases exponentially in the number of compressive samples, provided that the energy off the subspace scales as $n$. Using information-theoretic lower bounds, we demonstrate that no other detector can achieve the same probability of error for weaker signals. Simulation results also indicate that this scaling is near-optimal.
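A quick sketch of the detection setup as I read it, with a least-squares residual statistic of my own rather than necessarily the authors': observe $y = \Phi x$ and measure the energy of $y$ outside the compressed image of the subspace.

```python
import numpy as np

rng = np.random.default_rng(9)
n, r, m = 1000, 5, 40
U = np.linalg.qr(rng.standard_normal((n, r)))[0]   # basis for the subspace S
Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # compressive measurements

def residual_energy(x):
    y = Phi @ x
    B = Phi @ U                                    # compressed image of S
    c, *_ = np.linalg.lstsq(B, y, rcond=None)
    return float(np.linalg.norm(y - B @ c)**2)     # energy off Phi @ S

x_in = U @ rng.standard_normal(r) * np.sqrt(n / r)   # lies in S
x_out = rng.standard_normal(n)                       # almost entirely off S
print("in subspace: ", residual_energy(x_in))
print("off subspace:", residual_energy(x_out))
```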
Hector R. A. Basevi, Kenneth M. Tichauer, Frederic Leblond, Hamid Dehghani, James A. Guggenheim, Robert W. Holt, and Iain B. Styles
Bioluminescence Tomography attempts to quantify 3-dimensional luminophore distributions from surface measurements of the light distribution. The reconstruction problem is typically severely under-determined due to the number and location of measurements, but in certain cases the molecules or cells of interest form localised clusters, resulting in a distribution of luminophores that is spatially sparse. A Conjugate Gradient-based reconstruction algorithm using Compressive Sensing was designed to take advantage of this sparsity, using a multistage sparsity reduction approach to remove the need to choose sparsity weighting a priori. Numerical simulations were used to examine the effect of noise on reconstruction accuracy. Tomographic bioluminescence measurements of a Caliper XPM-2 Phantom Mouse were acquired and reconstructions from simulation and this experimental data show that Compressive Sensing-based reconstruction is superior to standard reconstruction techniques, particularly in the presence of noise.
Surya Ganguli and Haim Sompolinsky
The curse of dimensionality poses severe challenges to both technical and conceptual progress in neuroscience. In particular, it plagues our ability to acquire, process, and model high-dimensional data sets. Moreover, neural systems must cope with the challenge of processing data in high dimensions to learn and operate successfully within a complex world. We review recent mathematical advances that provide ways to combat dimensionality in specific situations. These advances shed light on two dual questions in neuroscience. First, how can we as neuroscientists rapidly acquire high-dimensional data from the brain and subsequently extract meaningful models from limited amounts of these data? And second, how do brains themselves process information in their intrinsically high-dimensional patterns of neural activity as well as learn meaningful, generalizable models of the external world from limited experience?
Depeng Yang, Husheng Li, Zhenghao Zhang, Gregory D. Peterson
A key challenge to achieve very high positioning accuracy (such as sub-mm accuracy) in Ultra-Wideband (UWB) positioning systems is how to obtain ultra-high resolution UWB echo pulses, which requires ADCs with a prohibitively high sampling rate. The theory of Compressed Sensing (CS) has been applied to UWB systems to acquire UWB pulses below the Nyquist sampling rate. This paper proposes a front-end optimized scheme for the CS-based UWB positioning system. A Space–Time Bayesian Compressed Sensing (STBCS) algorithm is developed for joint signal reconstruction by transferring mutual a priori information, which can dramatically decrease ADC sampling rate and improve noise tolerance. Moreover, the STBCS and time difference of arrival (TDOA) algorithms are integrated in a pipelined mode for fast tracking of the target through an incremental optimization method. Simulation results show the proposed STBCS algorithm can significantly reduce the number of measurements and has better noise tolerance than the traditional BCS, OMP, and multi-task BCS (MBCS) algorithms. The sub-mm accurate CS-based UWB positioning system using the proposed STBCS–TDOA algorithm requires only 15% of the original sampling rate compared with the UWB positioning system using a sequential sampling method.
Compressed sensing (CS) is a revolutionary signal acquisition theory, enabling signal acquisition at a rate below the Nyquist sampling rate. However, CS signal reconstruction algorithms are computationally expensive. One of the key computation steps in CS algorithms is to iteratively compute a Cholesky decomposition. Modern application acceleration devices, such as FPGAs and GPUs, can accelerate Cholesky decomposition and CS signal reconstruction computation. This paper presents high performance parallel Cholesky decomposition algorithms for GPU and FPGA implementation. For GPUs, an optimized Cholesky decomposition algorithm is developed with high parallelism, reduced data copying, and improved memory access. For FPGAs, a dedicated pipelined hardware architecture for Cholesky decomposition is designed. Only one pipelined triangular linear equation solver is needed for solving Cholesky decomposition and Cholesky decomposition-based linear equation systems. Moreover, CS signal reconstruction algorithms are accelerated on GPUs and FPGAs for fast signal recovery based on our iterative Cholesky decomposition. Results show that the proposed Cholesky decomposition on FPGAs and GPUs is much faster than LAPACK and MAGMA for small matrices. For accelerating CS signal reconstruction algorithms, our FPGA implementation can achieve around a 15x speedup and our GPU implementation about a 38x speedup compared with the CPU using LAPACK and the hybrid CPU/GPU system with MAGMA.
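The kernel being accelerated here is the plain Cholesky solve of the normal equations on the current support; for reference, the plain CPU version via scipy, on a toy problem of my own:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(10)
m, s = 80, 12
As = rng.standard_normal((m, s))         # active columns on the current support
y = rng.standard_normal(m)

G = As.T @ As + 1e-8 * np.eye(s)         # Gram matrix (SPD, lightly damped)
c_and_lower = cho_factor(G)              # one Cholesky decomposition
x = cho_solve(c_and_lower, As.T @ y)     # two triangular solves
print("matches lstsq:", np.allclose(x, np.linalg.lstsq(As, y, rcond=None)[0]))
```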
Qiang Qiu, Vishal M. Patel, Pavan Turaga, and Rama Chellappa
Many recent reports have shown the effectiveness of dictionary learning methods in solving several computer vision problems. However, when designing dictionaries, training and testing domains may be different, due to different view points and illumination conditions. In this paper, we present a function learning framework for the task of transforming a dictionary learned from one visual domain to the other, while maintaining a domain-invariant sparse representation of a signal. Domain dictionaries are modeled by a linear or non-linear parametric function. The dictionary function parameters and domain-invariant sparse codes are then jointly learned by solving an optimization problem. Experiments on real datasets demonstrate the effectiveness of our approach for applications such as face recognition, pose alignment and pose estimation.
Nazar Khan, Marshall F. Tappen
Discriminative learning of sparse-code based dictionaries tends to be inherently unstable. We show that using a discriminative version of the deviation function to learn such dictionaries leads to a more stable formulation that can handle the reconstruction/discrimination trade-off in a principled manner. Results on Graz02 and UCF Sports datasets validate the proposed formulation.
Claudia Prieto, Marcelo E. Andia, Christian von Bary, David C. Onthank, Tobias Schaeffter, Rene M. Botnar
Purpose:
To accelerate the acquisition of three-dimensional (3D) high-resolution cardiovascular molecular MRI by using Compressed Sensing (CS) reconstruction.
Materials and Methods:
Molecular MRI is an emerging technique for the early assessment of cardiovascular disease. This technique provides excellent soft tissue differentiation at a molecular and cellular level using target-specific contrast agents (CAs). However, long scan times are required for 3D molecular MRI. Parallel imaging can be used to speed up these acquisitions, but hardware considerations limit the maximum acceleration factor. This limitation is important in small-animal studies, where single coils are commonly used. Here we exploit the sparse nature of molecular MR images, which are characterized by localized and high-contrast biological target-enhancement, to accelerate data acquisition. CS was applied to detect: (a) venous thromboembolism and (b) coronary injury and aortic vessel wall, in single- and multiple-coil acquisitions, respectively.
Results:
Retrospective undersampling showed good overall image quality with accelerations of up to four for thrombus and aortic images, and up to three for coronary artery images. For higher acceleration factors, features with high CA uptake were still well recovered, while low-affinity targets were less well preserved, with increased CS undersampling artifacts. Prospective undersampling was performed on an aortic image with an acceleration of two, showing good contrast and well-defined tissue boundaries in the contrast-enhanced regions.
Conclusion:
We demonstrate the successful application of CS to preclinical molecular MR with target-specific gadolinium-based CAs using retrospective (accelerations up to four) and prospective (acceleration of two) undersampling.
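Retrospective undersampling of this kind is straightforward to emulate: discard most of k-space after the fact and reconstruct by iterative soft thresholding. The toy sketch below is my illustration, not the authors' reconstruction; it thresholds directly in the image domain, leaning on the localized, high-contrast enhancement described above, and the image, mask density and lambda are all made up:

import numpy as np

def cs_recon(img, accel=4, lam=0.03, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random(img.shape) < 1.0 / accel     # keep ~1/accel of k-space
    y = mask * np.fft.fft2(img)                    # retrospectively undersampled data
    x = np.zeros_like(img, dtype=complex)
    for _ in range(iters):
        # ifft2 of the data residual is the gradient scaled by 1/N, and 1/N is
        # also a safe step size for the unnormalized FFT, so this one line is
        # a full gradient step on ||mask * FFT(x) - y||^2 / 2
        x = x - np.fft.ifft2(mask * np.fft.fft2(x) - y)
        # proximal step: complex soft thresholding enforces image-domain sparsity
        mag = np.abs(x)
        x = np.where(mag > lam, (1 - lam / np.maximum(mag, 1e-12)) * x, 0)
    return np.abs(x), mask

# toy "molecular" image: a few bright, localized enhancement spots
img = np.zeros((64, 64))
img[20, 30] = img[40, 12] = img[50, 50] = 1.0
rec, mask = cs_recon(img)
print(np.abs(rec - img).max())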
In this work, we propose an original and efficient approach to exploit the ability of Compressed Sensing (CS) to recover Diffusion MRI (dMRI) signals from a limited number of samples while efficiently recovering important diffusion features such as the Ensemble Average Propagator (EAP) and the Orientation Distribution Function (ODF). Some attempts to sparsely represent the diffusion signal have already been made. However, and contrary to what has been presented in CS dMRI so far, in this work we propose and advocate the use of a well-adapted learned dictionary and show that it leads to a sparser signal estimation as well as to an efficient reconstruction of very important diffusion features. We first propose to learn and design a sparse and parametric dictionary from a set of training diffusion data. Then, we propose a framework to analytically estimate in closed form two important diffusion features: the EAP and the ODF. Various experiments on synthetic, phantom and human brain data have been carried out, and promising results with a reduced number of atoms have been obtained for diffusion signal reconstruction, thus illustrating the added value of our method over state-of-the-art SHORE and SPF based approaches.
Hélène Langet, C. Riddell, Y. Trousset, Arthur Tenenhaus, Elisabeth Lahalle, Gilles Fleury, Nikos Paragios
Digital Subtraction Rotational Angiography (DSRA) is a clinical protocol that allows three-dimensional (3D) visualization of vasculature during minimally invasive procedures. C-arm systems that are used to generate 3D reconstructions in interventional radiology have a limited sampling rate and, thus, limited contrast resolution. To address this particular subsampling problem, we propose a novel iterative reconstruction algorithm based on compressed sensing. To this end, we exploit both the spatial and temporal sparsity of DSRA. For computational efficiency, we use a proximal implementation that accommodates multiple ℓ1-penalties. Experiments on both simulated and clinical data confirm the relevance of our strategy for reducing subsampling streak artifacts.
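To give a flavor of how several ℓ1-penalties stay tractable, here is a crude sketch that interleaves two cheap shrinkage (proximal) steps, one spatial and one on frame-to-frame differences, inside a gradient loop. The projector A, data y and penalty weights are placeholders, and the authors' actual proximal splitting is more careful than this:

import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reconstruct(A, y, lam_s=0.01, lam_t=0.01, iters=50):
    """y holds one row of measurements per rotation frame."""
    n_frames = y.shape[0]
    X = np.zeros((n_frames, A.shape[1]))
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the smooth term
    for _ in range(iters):
        X -= step * (A.T @ (A @ X.T - y.T)).T       # gradient on sum_k ||A x_k - y_k||^2 / 2
        X = soft(X, step * lam_s)                   # shrinkage: spatial sparsity
        D = soft(np.diff(X, axis=0), step * lam_t)  # shrink temporal differences...
        X[1:] = X[0] + np.cumsum(D, axis=0)         # ...and rebuild later frames from them
    return X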
And then finally, the papers behind a paywall
Krzysztof Kazimierczuk, Vladislav Yu. Orekhov
The resolution of multidimensional NMR spectra can be severely limited when regular sampling based on the Nyquist-Shannon theorem is used. The theorem ties the sampling rate to the bandwidth of the sampled signal and thus implicitly creates a dependence between the line width and the experiment time, often making the latter very long. Recently (2006), Candès, Romberg and Tao formulated a non-linear sampling theorem that makes the required number of sampling points depend mostly on the number of peaks in a spectrum and only slightly on the number of spectral points. The result was pivotal for the rapid development and broad use of the signal processing method called compressed sensing. In our previous work, we introduced compressed sensing to multidimensional NMR and showed examples of reconstruction of two-dimensional spectra. In the present paper we discuss in detail the accuracy and robustness of two compressed sensing algorithms: convex (iterative soft thresholding) and non-convex (iteratively re-weighted least squares with local ℓ0-norm), as applied to two- and three-dimensional datasets. We show that the latter method is in many respects more effective, which is in line with recent work on the theory of compressed sensing. We also present a comparison of both approaches with multidimensional decomposition, which is one of the established methods for processing non-linearly sampled data.
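For readers who have not met the non-convex variant: below is a generic iteratively reweighted least squares sketch whose weights approximate an ℓ0 penalty as eps is annealed. This is the textbook IRLS scheme, not the authors' exact local-ℓ0 algorithm; A and y are placeholders, and A is assumed to have full row rank:

import numpy as np

def irls_l0(A, y, iters=30, eps=1.0):
    """Generic IRLS sketch with l0-like reweighting (assumes full row rank A)."""
    x = A.T @ np.linalg.solve(A @ A.T, y)       # minimum-norm starting point
    for _ in range(iters):
        w = x**2 + eps                          # weights -> ~l0 penalty as eps -> 0
        AW = A * w                              # A @ diag(w), via broadcasting
        x = w * (A.T @ np.linalg.solve(AW @ A.T, y))
        eps = max(eps / 10.0, 1e-9)             # anneal toward the l0 limit
    return x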
Many image scene analysis applications require a computational approach to visual attention. The foreground in these applications is typically sparse in spatial support. Compressive sampling enables an approach to reconstruct the sparse map of image regions that stand out from the background using fewer measurements. A convex optimization algorithm, for instance, can be used to recover the sparse map in the wavelet domain. Besides being sparse in the transform domain, the background of natural images has an interesting property that the amplitude of the averaged Fourier spectrum is approximately proportional to the inverse of the frequency. This further enables us to approximate an average background signal for extracting the out-of-ordinary foreground signal corresponding to objects of interest.
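The 1/f observation is the same one that powers spectral-residual saliency, which the following sketch implements as a stand-in; the compressive recovery step of the paper is not reproduced here, and the smoothing window sizes are arbitrary choices:

import numpy as np
from scipy.ndimage import uniform_filter

def background_subtracted_saliency(img):
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-12)
    # the smoothed log-amplitude approximates the 1/f "average background"
    residual = log_amp - uniform_filter(log_amp, size=3)
    # back-transform the residual spectrum with the original phase
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(F)))) ** 2
    return uniform_filter(sal, size=5)          # lightly smooth the saliency map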
Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of sub-Nyquist sampling of bandlimited or sparse signals. Can the new paradigm of Compressive Sampling (CS) challenge Shannon-Nyquist? Our preliminary development goal is to study and evaluate how to reduce the system processing load while still providing sufficient and useful information. We also review the radio-frequency processing chain and the Shannon-Nyquist sampling theorem.
Abstract. Image fusion is a process to combine multiple frames of the same scene into one image. The popular image fusion methods mainly concentrate on static image fusion and lack spatial-temporal adaptability. The conventional multi-resolution image fusion algorithms have not fully exploited the temporal information. To resolve this problem, we present a novel dynamic image fusion algorithm based on Kalman filtered compressed sensing. The fusion procedure characterized by estimation fusion is completed in state space. A parametric fusion model is proposed to learn and combine spatial and temporal information simultaneously. The experiments on the ground-truth data sets show that the proposed fusion algorithm offers a considerable improvement on the dynamic fusion performance and rivals the traditional multi-resolution-based fusion methods.
Abstract. For the first time, a high-resolution hyperspectral single-pixel imaging system based on compressive sensing is presented and demonstrated. The system integrates a digital micro-mirror device array to optically compress the image to be acquired and an optical spectrum analyzer to enable high spectral resolution. The system’s ability to successfully reconstruct images with 10 pm spectral resolution is proven.
Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that the incorporation of this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the required number of samples to achieve good quality in the reconstruction. This approach also allows more control over the ECG signal reconstruction, in particular, the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested on records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.
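For reference, here is plain CoSaMP (Needell and Tropp), not the paper's modified version; the tree-structured variant would replace the two unconstrained largest-entries selections below with a search restricted to connected wavelet subtrees:

import numpy as np

def cosamp(A, y, K, iters=30):
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        proxy = A.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * K:]    # 2K largest correlations
        T = np.union1d(omega, np.flatnonzero(x))      # merge with current support
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]   # LS on merged support
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-K:]             # prune: keep the best K terms
        x[keep] = b[keep]                             # (the tree version keeps a subtree)
        r = y - A @ x
    return x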
Image denoising is a fundamental image processing step for improving the overall quality of images. It is even more important for remote sensing images because they require significantly higher visual quality than others. Conventional denoising methods, however, tend to over-suppress high-frequency details. To overcome this problem, we present a novel compressive sensing (CS)-based noise removal algorithm using adaptive multiple samplings and reconstruction error control. We first decompose an input noisy image into flat and edge regions, and then generate 8x8 block-based measurement matrices with Gaussian probability distributions. The measurement matrix is applied to the first three levels of wavelet transform coefficients of the input image for compressive sampling. Orthogonal matching pursuit (OMP) is applied to reconstruct each block. In the reconstruction process, we use different error threshold values according to both the decomposed region and the level of the wavelet transform, based on the fact that the first-level wavelet coefficients in the edge region have the lowest error threshold, whereas the third-level wavelet coefficients in the flat region have the highest error threshold. By applying adaptive threshold values, we can reconstruct the image without the noise. Experimental results demonstrate that the proposed method removes noise better than existing state-of-the-art methods in the sense of both objective (PSNR/MSSIM) and subjective measures. We also implement the proposed denoising algorithm for remote sensing images while minimizing the computational load.
This paper introduces a novel approach for post-processing of depth maps that enhances depth map resolution in order to achieve visually pleasing 3D models from a new monocular 2D/3D imaging system consisting of a Photonic Mixer Device (PMD) range camera and a standard color camera. The proposed method adopts the inversion framework of Compressive Sensing (CS). The low-resolution depth map is modeled as the result of blurring and down-sampling the high-resolution one. Based on the underlying assumption that the high-resolution depth map is compressible in the frequency domain, and on recent theoretical work on CS, the high-resolution version can be estimated and reconstructed by solving a non-linear optimization problem. The improved depth map reconstruction in turn helps build an improved 3D model of a scene. Experimental results on real data are presented. The proposed scheme also opens new possibilities for applying CS to a multitude of potential applications in multimodal data analysis and processing.
Images from a novel shortwave infrared (SWIR, 900 nm to 1.7 μm) camera system are presented. Custom electronics and software are combined with a digital micromirror device (DMD) and a single-element sensor; the latter are commercial off-the-shelf devices, which together create a lower-cost imaging system than is otherwise available in this wavelength regime. A compressive sensing (CS) encoding schema is applied to the DMD to modulate the light that has entered the camera. This modulated light is directed to a single-element sensor and an ensemble of measurements is collected. With the data ensemble and knowledge of the CS encoding, images are computationally reconstructed. The hardware and software combination makes it possible to create images with the resolution of the DMD while employing a substantially lower-cost sensor subsystem than would otherwise be required by the use of traditional focal plane arrays (FPAs). In addition to the basic camera architecture, we also discuss a technique that uses the adaptive functionality of the DMD to search and identify regions of interest. We demonstrate adaptive CS in solar exclusion experiments where bright pixels, which would otherwise reduce dynamic range in the images, are automatically removed.
Compressed sensing is a recently emerged signal sampling theory; it holds that signals can be sampled far below the Nyquist sampling rate. This paper introduces compressed sensing theory into the application of infrared video, proposes a new residual reconstruction algorithm, and establishes a new infrared video codec model with a random Gaussian matrix as the measurement matrix and the orthogonal matching pursuit algorithm as the reconstruction method. The reconstruction of infrared video frames is performed in Matlab. The simulation results verify that the proposed algorithm provides good visual quality and a clear speedup compared with the conventional algorithm.
Abstract. Compressed sensing can recover a signal that is sparse in some way from a small number of samples. For computed tomography (CT) imaging, this has the potential to obtain good reconstructions from a smaller number of projections or views, thereby reducing the amount of radiation that a patient is exposed to. In this work, we applied compressed sensing to fan-beam CT image reconstruction, which is a special case of an important 3-D CT problem (cone-beam CT). We compared the performance of two compressed sensing algorithms, denoted as the LP and the QP, in simulation. Our results indicate that the LP generally provides smaller reconstruction error and converges faster; therefore, it is preferable.
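The abstract does not spell out the two programs, but in the CS literature "LP" and "QP" usually refer to basis pursuit, which becomes a linear program after the usual positive/negative split, and to ℓ1-penalized least squares; a hedged reading in standard notation:

\begin{align*}
\text{(LP)}\quad & \min_x \|x\|_1 \ \text{s.t.}\ Ax = y
  \quad\Longleftrightarrow\quad \min_{u,v \ge 0}\ \mathbf{1}^\top (u+v) \ \text{s.t.}\ A(u-v) = y,\\
\text{(QP)}\quad & \min_x\ \tfrac12 \|Ax - y\|_2^2 + \lambda \|x\|_1 .
\end{align*}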
Phillip K. Poon, Daniel Townsend, Scott Wehrwein, Esteban M. Vera, Michael D. Stenner, Michael E. Gehm
Traditional approaches to persistent surveillance generate prodigious amounts of data, stressing storage, communication, and analysis systems. As such, they are well suited for compressed sensing (CS) concepts. Existing demonstrations of compressive target tracking have utilized time-sequences of random patterns, an approach that is sub-optimal for real-world dynamic scenes. We have been investigating an alternative architecture that we term SCOUT (the Static Computational Optical Undersampled Tracker), which uses a pair of static masks and a defocused detector to acquire a small number of measurements in parallel. We will report on our working prototypes that have demonstrated successful target tracking at 16x compression.
Based on compressed sensing, a new bit-plane image coding method is presented. Because different bit-planes of an image have different importance, the new method is robust to bit errors, and it has the advantages of a simple structure and easy software and hardware implementation. Because the values in an image bit-plane are 0 or 1, a first-order difference matrix was chosen as the sparse transform matrix, and simulations show that it yields sparser representations. A general 8-bit image has eight bit-planes, the eighth being the most significant, so more measurement vectors can be allocated to it for better reconstruction precision. At the same time, this kind of image codec scheme can meet many application demands. The method partitions an image into its 8 bit-planes, applies the orthonormal transform based on the first-order difference matrix to each bit-plane, and then forms multiple descriptions from random measurements of each bit-plane. At the decoding end, the original image is reconstructed approximately or exactly from the received bit streams using the OMP algorithm. The proposed method can construct more descriptions with lower complexity because the bit-plane measurement process is simple and easy to realize in hardware. Experimental results show that the proposed method can reconstruct the image at different precisions and can easily generate additional descriptions.
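The bit-plane split itself is a one-liner. A tiny sketch of the decomposition the method starts from (the per-plane random measurement and OMP recovery are omitted):

import numpy as np

img = np.random.default_rng(2).integers(0, 256, (16, 16), dtype=np.uint8)
planes = [((img >> b) & 1).astype(np.uint8) for b in range(8)]   # LSB ... MSB
# the MSB plane (planes[7]) would get the most measurement vectors;
# stacking the planes back reproduces the image exactly
restored = sum(p.astype(np.uint16) << b for b, p in enumerate(planes))
assert np.array_equal(restored.astype(np.uint8), img)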
Focal plane arrays with associated electronics and cooling are a substantial portion of the cost, complexity, size, weight, and power requirements of Long-Wave IR (LWIR) imagers. Hyperspectral LWIR imagers add a significant data volume burden as they collect a high-resolution spectrum at each pixel. We report here on a LWIR hyperspectral sensor that applies Compressive Sensing (CS) in order to achieve benefits in these areas. The sensor applies the single-pixel detection technology demonstrated by Rice University. The single-pixel approach uses a Digital Micro-mirror Device (DMD) to reflect and multiplex the light from a random assortment of pixels onto the detector. This is repeated for a number of measurements much less than the total number of scene pixels. We have extended this architecture to hyperspectral LWIR sensing by inserting a Fabry-Perot spectrometer in the optical path. This compressive hyperspectral imager collects all three dimensions on a single detection element, greatly reducing the size, weight and power requirements of the system relative to traditional approaches, while also reducing data volume. The CS architecture also supports innovative adaptive approaches to sensing, as the DMD device allows control over the selection of spatial scene pixels to be multiplexed on the detector. We are applying this advantage to the detection of plume gases, by adaptively locating and concentrating target energy. A key challenge in this system is the diffraction loss produced by the DMD in the LWIR. We report the results of testing DMD operation in the LWIR, as well as system spatial and spectral performance.
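The single-pixel measurement model is easy to simulate: each DMD pattern multiplexes a random half of the scene pixels onto the one detector. A toy sketch (the scene and pattern statistics are hypothetical):

import numpy as np

rng = np.random.default_rng(1)
scene = np.zeros((32, 32))
scene[10:14, 8:12] = 1.0                         # hypothetical compact plume
n = scene.size
M = n // 4                                       # 4x fewer detector readings than pixels
Phi = (rng.random((M, n)) < 0.5).astype(float)   # binary DMD mirror patterns
y = Phi @ scene.ravel()                          # one single-pixel measurement per pattern
# recovery would hand (Phi, y) to any sparse solver, e.g. the CoSaMP sketch above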
We present a novel fractal Iterated Function System (IFS) based data interpolation algorithm enabling compressive sampling of 1/f data sets. We avoid the classical inverse-IFS parameter estimation problem by using a novel analytical-function-driven variant of the random IFS algorithm. We tune the parameters of the analytical driver equation to improve data reconstruction, minimizing errors with various state-of-the-art genetic algorithms. We demonstrate our encouraging results and detail our methods and findings.
Tomographic reconstruction of Cerenkov photons in tissues through approximate message-passing
Jianghong Zhong, Jie Tian, Haixiao Liu, Chenghu Qin, Xin Yang, Xibo Ma
A solution with adjustable sparsity for the tomographic imaging of Cerenkov photons is presented in this work. The sparsity of the radionuclides' distribution in tissues is an objective but unknown fact, and the inverse model of qualitative data is an ill-posed problem. Based on optimization techniques, the uniqueness of the numerical solution to the ill-conditioned compact operator can be guaranteed by using sparse regularization with the approximate message-passing (AMP) method. After incorporating the AMP formulation, we analyzed the behavior of the hard thresholding operator. Iterative numerical solutions were used to approximate the real light source, with the number of non-zero entries in the solution set manually. This modified AMP algorithm was evaluated in numerical simulations and in physical experiments with 2-[18F]fluoro-2-deoxy-D-glucose. Experimental results indicated that the proposed method is a low-complexity iterative thresholding algorithm for reconstructing a 3D sparse distribution from a small set of optical measurements.
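For orientation, a generic AMP iteration with a keep-k hard-thresholding denoiser looks as follows. This is a textbook sketch assuming roughly unit-norm columns in A, not the authors' Cerenkov-specific variant; k plays the role of the manually set sparsity mentioned above:

import numpy as np

def amp_hard(A, y, k, iters=30):
    """Generic AMP with a keep-k hard-thresholding denoiser."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                            # effective (pseudo-data) observation
        thresh = np.sort(np.abs(r))[-k]            # magnitude of the k-th largest entry
        x_new = np.where(np.abs(r) >= thresh, r, 0.0)
        # Onsager reaction term: (1/m) * sum of the denoiser's derivative = k/m here
        z = y - A @ x_new + (np.count_nonzero(x_new) / m) * z
        x = x_new
    return x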
Authors: Cucuringu, Mihai
Advisors: Singer, Amit
Contributors: Applied and Computational Mathematics Department
Abstract: This thesis consists of five chapters, and focuses on two main problems: the graph realization problem with its applications to sensor network localization and structural biology, and the low-rank matrix completion problem. Chapter 1 is a brief introduction to rigidity theory and supplies the background needed for the subsequent chapters. Chapter 2 introduces the graph realization problem in dimension two, and its application to sensor network localization. Chapter 3 considers the three-dimensional graph realization problem and its application to the molecule problem from structural biology. Chapter 4 focuses on the group synchronization problem, and provides a more in-depth analysis of the synchronization methods used in our algorithms for the graph realization problem in R^2 and R^3. Finally, Chapter 5 investigates the problem of uniqueness of low-rank matrix completion, building on tools from rigidity theory.
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.