Friday, June 08, 2012

This Week in Compressive Sensing and Advanced Matrix Factorization

For some insight into what is to come, here is a list of talks worth attending or reading about:



You probably recall the first item I put on the list of technologies that do not exist? Well, it was a Random Anger Coding Scheme for PET/SPECT cameras. It looks like the next paper is on the same track, although I have not read it since it is not yet available (it is a conference paper for the SNM conference starting tomorrow). This is outstanding, as we are now getting into actual implementations of compressive sensing at the acquisition level, making those multiplexing operations seamless. For more information, you may want to read a companion paper on a similar subject that predates the inclusion of the compressive sensing bit: Investigation of a clinical PET detector module design that employs large-area avalanche photodetectors. Here is the paper:

Peter Olcott¹, Ealgoo Kim², Garry Chinn² and Craig Levin²

¹ Bio-engineering, Stanford University, Stanford, CA; ² Radiology, Stanford Medical School, Stanford, CA



Objectives: Potential clinical silicon photomultiplier-based PET systems will consist of tens of thousands of individual sensors. Compressed sensing electronics can be used to multiplex a large number of individual readout sensors, significantly reducing the number of readout channels.
Methods: Using a brute-force optimization method, a two-level sensing matrix based on a 2-weight constant-weight code C1[128:32] followed by a 3-weight constant-weight code C2[32:16] was designed. These codes consist of discrete resistor elements either connected or not connected to intermediate or output signals. A PET block detector PCB and electronics were fabricated that can multiplex 128 3.2 mm x 3.2 mm solid-state photomultiplier pixels arranged into a 16 x 8 array. Signals from the detector were acquired by a custom 16-channel, simultaneously sampling, 12-bit 65 Msps ADC acquisition system. The signals were summed to form a trigger, and the peak value for each event on each channel was captured simultaneously. For calibration, we placed a single 4 x 4 array of 3.2 mm x 3.2 mm x 20 mm LYSO crystals onto one of the populated detectors and collected a uniform flood calibration dataset using a 125 μCi Ge source. We used a KNN density clustering method to calculate the centroids of the calibration flood irradiation that were mapped through the sensing matrix and captured by the 16 ADC channels.
Results: All 16 crystals were clearly segmented from the 16-dimensional output data using the new KNN-density clustering method. After correcting for the gain non-uniformities of the SiPM sensor, we measured a preliminary 23.7 +/- 1.2% FWHM energy resolution at 511 keV.
Conclusions: We have successfully fabricated a compressed sensing PET detector, performed data acquisition, developed a new calibration method, and completed a preliminary calibration.
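
To get a feel for what such a two-level code does, here is a minimal sketch. It assumes random constant-weight codes (the paper uses brute-force optimized ones) and idealized noiseless resistive summing; all names and parameters below are mine, not the authors':

```python
import numpy as np

rng = np.random.default_rng(0)

def constant_weight_matrix(n_out, n_in, weight, rng):
    """Binary sensing matrix in which every input column has exactly
    `weight` ones, i.e. each sensor is wired to `weight` readout lines."""
    A = np.zeros((n_out, n_in))
    for j in range(n_in):
        rows = rng.choice(n_out, size=weight, replace=False)
        A[rows, j] = 1.0
    return A

# Two-level multiplexing: 128 pixels -> 32 intermediate lines -> 16 channels
A1 = constant_weight_matrix(32, 128, 2, rng)  # C1[128:32], weight-2 columns
A2 = constant_weight_matrix(16, 32, 3, rng)   # C2[32:16],  weight-3 columns
A = A2 @ A1                                   # effective 16 x 128 sensing matrix

# A single event deposits light in one pixel; the 16 channel amplitudes are
# then the corresponding column of A scaled by the pulse energy.
pixel, energy = 87, 511.0
readout = energy * A[:, pixel]
print(A.shape, np.flatnonzero(readout))
```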

Two other papers in the same conference are related to CS: 

Koon-Pong Wong¹ and Sung-Cheng Huang¹

¹ Molecular & Medical Pharmacology, UCLA School of Medicine, Los Angeles, CA



Objectives: Whole-body PET/CT imaging of patients or preclinical PET imaging of larger animals requires the image data be acquired at multiple bed positions, making it impossible to provide continuous kinetics of all body regions for standard kinetic analysis. Here, we investigated the use of compressed sensing to estimate kinetic parameters from sparse temporal data through computer simulation.
Methods: Time-activity curves (TACs) of the brain, myocardium, and muscle were simulated with 4 framing protocols (40x90 s, 30x120 s, 20x180 s, and 12x300 s) using an input function (described by a 4-exponential function) and the FDG model (with a set of model parameters derived from a mouse FDG-PET study). Two bed positions (bed 1: blood pool and myocardium; bed 2: brain and muscle) were assumed; thus, every other frame of all the kinetic data was deleted. Realistic noise, with variance proportional to the activity concentration and inversely proportional to the frame duration, was introduced to simulate noisy blood pool and tissue TACs. 100 noise realizations were generated for each framing protocol. The sparsely sampled noisy blood and tissue TACs were fitted by the 4-exponential function and the FDG model simultaneously, and the parameters were estimated. The FDG uptake constant (Ki) in tissues was calculated and compared to the true values used to simulate the noise-free data. The procedure was repeated with the noise level doubled to evaluate the noise sensitivity.
Results: Variability of the FDG model parameter estimates increased as the TACs became more sparsely sampled. Ki estimates in various tissues agreed well with the true values. Coefficient of variation (CV) of Ki estimates averaged over 4 protocols was 10±1% in brain, 8±3% in myocardium, and 9±1% in muscle. When the noise level was doubled, CV of Ki was doubled in the brain and increased by ~55% in myocardium and muscle.
Conclusions: Reliable estimates of Ki can be obtained from sparsely sampled kinetics using compressed sensing, which has great potential for quantitative dynamic whole-body imaging in human and animal studies.
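
For readers who want to play with the setting, here is a toy sketch of the interleaved-frame idea. A generic biexponential time-activity curve stands in for the FDG model output, and only the stated noise model (variance proportional to activity, inversely proportional to frame duration) is reproduced, so this illustrates the sampling scheme, not the authors' simulation:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Toy stand-in for a tissue time-activity curve (TAC); NOT the FDG
# compartment model used in the abstract.
def tac(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

true = (50.0, 0.20, 20.0, 0.02)

dur = np.full(40, 1.5)            # 40 x 90 s protocol, durations in minutes
tmid = np.cumsum(dur) - dur / 2   # frame mid-times

seen = np.arange(40) % 2 == 0     # this bed position sees every other frame
clean = tac(tmid, *true)

# Noise variance proportional to activity and inversely to frame duration
sigma = 0.3 * np.sqrt(np.maximum(clean, 1e-6) / dur)
noisy = clean + sigma * rng.standard_normal(40)

popt, _ = curve_fit(tac, tmid[seen], noisy[seen], p0=(40, 0.1, 10, 0.01),
                    sigma=sigma[seen], maxfev=20000)
print("true:", true)
print("fit :", np.round(popt, 3))
```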


Chia-Jui Hsieh¹, Huihua Kenny Chiang², Yung-Hsiang Chiu³, Bo-Wen Xiao³, Cheng-Wei Sun³, Ming-Hua Yeh³ and Jyh-cheng Chen¹

¹ Department of Biomedical Imaging & Radiological Sciences, National Yang-Ming University, Taipei, Taiwan; ² Institute of Biomedical Engineering, National Yang-Ming University, Taipei, Taiwan; ³ Display Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan



Objectives: The objective of this study is to develop a new iterative algorithm for computed tomography (CT) reconstruction. This algorithm can be used in circumstances of substantially reduced projection data, which makes it possible to decrease X-ray exposure time and consequently reduce radiation dose, while accelerating image reconstruction and maintaining good image quality.
Methods: In this study, we combine compressed sensing (CS) technology with the simultaneous algebraic reconstruction technique (SART) to create a new CT reconstruction algorithm called CS-SART. The algorithm minimizes the total variation (TV) of the image, which has been transformed into a sparse domain, to obtain a gradient direction for the image. This gradient direction is then used to improve the image. The reconstructed image is obtained by repeating the above procedure until the stopping criteria are satisfied.
Results: To validate and evaluate the performance of the CS-SART algorithm, we use the Shepp-Logan phantom as the reconstruction target with the corresponding simulated sparse projection data (angular sampling interval of 5 deg). The results show that the CS-SART algorithm can reconstruct images with fewer artifacts than those obtained by traditional FBP (filtered back projection) and ART (algebraic reconstruction technique) with full-scan data (angular sampling interval of 1 deg). Compared with existing reconstruction methods under the same image-quality conditions, CS-SART is also the fastest.
Conclusions: We have developed the CS-SART algorithm, which accelerates the computational speed while maintaining good image quality under circumstances of substantially reduced projection data.
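
The abstract suggests the familiar alternation between an algebraic data-consistency update and total-variation minimization. Below is a hedged sketch of that loop on a toy system: a relaxed Landweber step stands in for the SART update and a random matrix stands in for the CT system matrix, so this conveys the flavor of CS-SART rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 32                                # tiny test image (Shepp-Logan stand-in)
x_true = np.zeros((n, n))
x_true[8:24, 8:24] = 1.0
x_true[12:20, 12:20] = 2.0

m = n * n // 3                        # underdetermined, like sparse-view CT
A = rng.standard_normal((m, n * n)) / np.sqrt(m)
b = A @ x_true.ravel()

def tv_grad(img, eps=1e-8):
    """Gradient of a smoothed isotropic total variation."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    gx, gy = dx / mag, dy / mag
    g = -(gx + gy)
    g[:, 1:] += gx[:, :-1]
    g[1:, :] += gy[:-1, :]
    return g

x = np.zeros(n * n)
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the data term
for it in range(200):
    x = x + A.T @ (b - A @ x) / L     # data-consistency (SART-flavored) step
    x = np.clip(x, 0, None)           # nonnegativity, as in CT
    img = x.reshape(n, n)
    for _ in range(5):                # a few TV gradient-descent steps
        img = img - 0.02 * tv_grad(img)
    x = img.ravel()

print("relative error:",
      np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true))
```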




Meanwhile, on arXiv we had the following preprints:

A quasi-Newton proximal splitting method

A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise linear nature of the dual problem. The second part of the paper applies the previous result to the acceleration of convex minimization problems, and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications including signal processing, sparse recovery, machine learning and classification.
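
As a reference point for what is being accelerated, here is a plain proximal gradient (ISTA) sketch with the usual Euclidean $\ell_1$ prox. The paper's contribution, computing the prox in a quasi-Newton scaled norm, is precisely the part this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(3)

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1 in the standard Euclidean metric."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Test problem: min 0.5*||Ax - b||^2 + lam*||x||_1
m, n, k = 80, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
lam = 0.01

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    # Forward (gradient) step on the smooth term, backward (prox) step on
    # the l1 term; a quasi-Newton variant would replace this prox by one
    # computed in a scaled norm.
    x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```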


Multi-Sparse Signal Recovery for Compressive Sensing

Signal recovery is one of the key techniques of compressive sensing (CS). It reconstructs the original signal from linear sub-Nyquist measurements. Classical methods exploit the sparsity in one domain to formulate an L0-norm optimization. Recent investigations show that some signals are sparse in multiple domains. To further improve the signal reconstruction performance, we can exploit this multi-sparsity to generate a new convex programming model. The latter is formulated with multiple sparsity constraints in multiple domains and the linear measurement-fitting constraint. It improves signal recovery performance through additional a priori information. Since some EMG signals exhibit sparsity in both the time and frequency domains, we take them as an example in numerical experiments. The results show that the newly proposed method achieves better performance for multi-sparse signals.
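
The convex program described in the abstract is straightforward to write down. Here is a small sketch using CVXPY (my choice of tooling, not the authors'), with the time and DCT domains and a modulated pulse that is compressible in both; how much the second $\ell_1$ term helps depends on the signal:

```python
import numpy as np
from scipy.fft import dct
import cvxpy as cp

rng = np.random.default_rng(4)

n, m = 128, 40
t = np.arange(n)
# A modulated Gaussian pulse: localized (compressible) in time AND frequency
x_true = np.exp(-(t - 64.0) ** 2 / (2 * 4.0 ** 2)) * np.cos(2 * np.pi * 0.25 * t)

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

D = dct(np.eye(n), axis=0, norm='ortho')    # orthonormal DCT matrix

x = cp.Variable(n)
# One l1 term per sparsity domain plus the measurement-fitting constraint
prob = cp.Problem(cp.Minimize(cp.norm(x, 1) + cp.norm(D @ x, 1)),
                  [cp.norm(A @ x - y, 2) <= 1e-4])
prob.solve()
print("relative error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```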

Adaptive Sensing Performance Lower Bounds for Sparse Signal Estimation and Testing

This paper gives a precise characterization of the fundamental limits of adaptive sensing for diverse estimation and testing problems concerning sparse signals. We consider in particular the setting introduced in Haupt, Castro and Nowak (2011) and show necessary conditions on the minimum signal magnitude for both detection and estimation: if $x\in\mathbb{R}^n$ is a sparse vector with $s$ non-zero components then it can be reliably detected in noise provided the magnitude of the non-zero components exceeds $\sqrt{2/s}$. Furthermore, the signal support can be exactly identified provided the minimum magnitude exceeds $\sqrt{2\log s}$. Notably there is no dependence on $n$, the extrinsic signal dimension. These results show that the adaptive sensing methodologies proposed previously in the literature are essentially optimal, and cannot be substantially improved. In addition these results provide further insights on the limits of adaptive compressive sensing.

Phase Recovery, MaxCut and Complex Semidefinite Programming

Phase retrieval seeks to recover a complex signal x from the amplitude |Ax| of linear measurements. We cast the phase retrieval problem as a non-convex quadratic program over a complex phase vector and formulate a tractable relaxation similar to the classical MaxCut semidefinite program. Numerical results show the performance of this approach over three different phase retrieval problems, in comparison with greedy phase retrieval algorithms and matrix completion approaches.
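
Here is a small sketch of the MaxCut-like relaxation, following the standard PhaseCut construction M = diag(b)(I - A A^+)diag(b) over the unknown measurement phases. It needs CVXPY with an SDP-capable solver (e.g., SCS), and the rounding step via the leading eigenvector is just one common choice:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)

n, m = 8, 24
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = np.abs(A @ x_true)                     # phaseless measurements |Ax|

# PhaseCut matrix: quadratic form over the unknown measurement phases
P = np.eye(m) - A @ np.linalg.pinv(A)
M = np.diag(b) @ P @ np.diag(b)
M = (M + M.conj().T) / 2                   # enforce Hermitian numerically

U = cp.Variable((m, m), hermitian=True)
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(M @ U))),
                  [U >> 0, cp.diag(U) == 1])
prob.solve()

# Round: take phases from the leading eigenvector, then solve for x
w, V = np.linalg.eigh(U.value)
u = V[:, -1] / np.abs(V[:, -1])
x_hat = np.linalg.pinv(A) @ (b * u)
phase = np.vdot(x_hat, x_true) / abs(np.vdot(x_hat, x_true))  # global phase
print("relative error:",
      np.linalg.norm(x_hat * phase - x_true) / np.linalg.norm(x_true))
```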


Beyond $\ell_1$-norm minimization for sparse signal recovery

Sparse signal recovery has been dominated by the basis pursuit denoise (BPDN) problem formulation for over a decade. In this paper, we propose an algorithm that outperforms BPDN in finding sparse solutions to underdetermined linear systems of equations at no additional computational cost. Our algorithm, called WSPGL1, is a modification of the spectral projected gradient for $\ell_1$ minimization (SPGL1) algorithm in which the sequence of LASSO subproblems are replaced by a sequence of weighted LASSO subproblems with constant weights applied to a support estimate. The support estimate is derived from the data and is updated at every iteration. The algorithm also modifies the Pareto curve at every iteration to reflect the new weighted $\ell_1$ minimization problem that is being solved. We demonstrate through extensive simulations that the sparse recovery performance of our algorithm is superior to that of $\ell_1$ minimization and approaches the recovery performance of iterative re-weighted $\ell_1$ (IRWL1) minimization of Candès, Wakin, and Boyd, although it does not match it in general. Moreover, our algorithm has the computational cost of a single BPDN problem.
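
WSPGL1 itself swaps SPGL1's LASSO subproblems for weighted ones, which the toy loop below does not reproduce; it only illustrates the underlying idea of constant weights applied to a data-derived support estimate, with ISTA as the inner solver:

```python
import numpy as np

rng = np.random.default_rng(6)

m, n, k = 60, 200, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3 * rng.standard_normal(k)
b = A @ x_true

def weighted_ista(A, b, weights, lam=0.005, iters=400):
    """Solve min 0.5*||Ax-b||^2 + lam * sum_i weights_i*|x_i| by ISTA."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        v = x - step * A.T @ (A @ x - b)
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam * weights, 0.0)
    return x

# Outer loop: refresh the support estimate and put a constant smaller
# weight on it, as in the weighted-l1 idea described above.
weights = np.ones(n)
for _ in range(4):
    x = weighted_ista(A, b, weights)
    support = np.abs(x) > 0.1 * np.abs(x).max()
    weights = np.where(support, 0.3, 1.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```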

Weighted $\ell_1$ minimization with multiple weighting sets

In this paper, we study the support recovery conditions of weighted $\ell_1$ minimization for signal reconstruction from compressed sensing measurements when multiple support estimate sets with different accuracies are available. We identify a class of signals for which the recovered vector from $\ell_1$ minimization provides an accurate support estimate. We then derive stability and robustness guarantees for the weighted $\ell_1$ minimization problem with more than one support estimate. We show that applying a smaller weight to support estimates that enjoy higher accuracy improves the recovery conditions compared with the case of a single support estimate and with the case of standard, i.e., non-weighted, $\ell_1$ minimization. Our theoretical results are supported by numerical simulations on synthetic signals and real audio signals.


Application of compressed sensing to the simulation of atomic systems

Compressed sensing is a method that allows a significant reduction in the number of samples required for accurate measurements in many applications in experimental sciences and engineering. In this work, we show that compressed sensing can also be used to speed up numerical simulations. We apply compressed sensing to extract information from the real-time simulation of atomic and molecular systems, including electronic and nuclear dynamics. We find that for the calculation of vibrational and optical spectra the total propagation time, and hence the computational cost, can be reduced by approximately a factor of five.
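
The mechanism is easy to illustrate: recover a line spectrum from a time trace far shorter than what plain FFT resolution would need. A toy sketch with a cosine dictionary, using CVXPY as a stand-in for whatever solver the authors actually employ:

```python
import numpy as np
import cvxpy as cp

# Stand-in for a dipole signal from a short real-time propagation:
# a few spectral lines observed over a short time window.
n_freq = 256                      # frequency grid
n_time = 48                       # short propagation (~5x fewer samples)
lines = [23, 57, 140]
t = np.arange(n_time)
omega = np.pi * np.arange(n_freq) / n_freq
signal = sum(np.cos(omega[k] * t) for k in lines)

Phi = np.cos(np.outer(t, omega))  # maps spectral amplitudes to time samples

s = cp.Variable(n_freq)
prob = cp.Problem(cp.Minimize(cp.norm(s, 1)),
                  [cp.norm(Phi @ s - signal, 2) <= 1e-6])
prob.solve()
peaks = sorted(np.argsort(np.abs(s.value))[-3:])
print("recovered lines:", peaks, "true:", lines)
```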


Learning Dictionaries with Bounded Self-Coherence

Sparse coding in learned dictionaries is a successful approach for signal denoising, source separation and solving inverse problems in general. A dictionary learning method adapts an initial dictionary to a particular signal class by iteratively computing an approximate factorization of a training data matrix into a dictionary and a sparse coding matrix. The learned dictionary is characterized by two properties: the coherence of the dictionary to observations of the signal class, and the self-coherence of the dictionary atoms. A high coherence to signal observations enables the sparse coding of signal observations with a small approximation error, while a low self-coherence of the atoms guarantees atom recovery and a more rapid residual error decay rate for the sparse coding algorithm. The two goals of high signal coherence and low self-coherence are typically in conflict, therefore one seeks a trade-off between them, depending on the application. We present a dictionary learning method which enables an effective control over the self-coherence of the trained dictionary, enabling a trade-off between maximizing the sparsity of codings and approximating an equi-angular tight frame.
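
For reference, the self-coherence being controlled is just the largest off-diagonal entry of the Gram matrix of the unit-norm dictionary, and the equi-angular tight frame mentioned at the end is what attains the Welch lower bound:

```python
import numpy as np

rng = np.random.default_rng(8)

def self_coherence(D):
    """Largest absolute inner product between distinct unit-norm atoms."""
    D = D / np.linalg.norm(D, axis=0)
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

d, K = 32, 64                      # overcomplete: 64 atoms in R^32
D = rng.standard_normal((d, K))
print("random dictionary self-coherence:", round(self_coherence(D), 3))

# Welch bound: the minimum possible coherence, attained by an
# equi-angular tight frame when one exists.
print("Welch bound:", round(np.sqrt((K - d) / (d * (K - 1))), 3))
```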

Distributed Functional Scalar Quantization Simplified

Distributed functional scalar quantization (DFSQ) theory provides optimality conditions and predicts the performance of data acquisition systems in which a computation on the acquired data is desired. We address two limitations of previous works: prohibitively expensive decoder design and a restriction to sources with bounded distributions. We rigorously show that a much simpler decoder has asymptotic performance equivalent to the conditional expectation estimator previously explored, thus reducing decoder design complexity. The simpler decoder has the feature of decoupled communication and computation blocks. Moreover, we extend the DFSQ framework with the simpler decoder to acquire sources with infinite-support distributions such as Gaussian or exponential distributions. Finally, through simulation results we demonstrate that performance at moderate coding rates is well predicted by the asymptotic analysis, and we give new insight on the rate of convergence.


Orthogonal Matching Pursuit with Noisy and Missing Data: Low and High Dimensional Results

Many models for sparse regression typically assume that the covariates are known completely, and without noise. Particularly in high-dimensional applications, this is often not the case. This paper develops efficient OMP-like algorithms to deal with precisely this setting. Our algorithms are as efficient as OMP, and improve on the best-known results for missing and noisy data in regression, both in the high-dimensional setting where we seek to recover a sparse vector from only a few measurements, and in the classical low-dimensional setting where we recover an unstructured regressor. In the high-dimensional setting, our support-recovery algorithm requires no knowledge of even the statistics of the noise. Along the way, we also obtain improved performance guarantees for OMP for the standard sparse regression problem with Gaussian noise.
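
For readers who want the baseline the paper builds on, here is plain OMP with fully known, noiseless covariates; the paper's algorithms modify exactly this template to cope with noisy and missing data:

```python
import numpy as np

rng = np.random.default_rng(9)

def omp(A, y, k):
    """Plain Orthogonal Matching Pursuit: greedily add the column most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

m, n, k = 50, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
sup = rng.choice(n, k, replace=False)
x_true[sup] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))
y = A @ x_true + 0.01 * rng.standard_normal(m)   # additive Gaussian noise

x_hat = omp(A, y, k)
print("support recovered:", set(np.flatnonzero(x_hat)) == set(sup))
```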


Central limit theorem for partial linear eigenvalue statistics of Wigner matrices

In this paper, we study the complex Wigner matrices $M_n=\frac{1}{\sqrt{n}}W_n$ whose eigenvalues are typically in the interval $[-2,2]$. Let $\lambda_1\leq \lambda_2\leq\cdots\leq\lambda_n$ be the ordered eigenvalues of $M_n$. Under the assumption of four matching moments with the Gaussian Unitary Ensemble (GUE), for a test function $f$ that is four times continuously differentiable on an open interval including $[-2,2]$, we establish central limit theorems for two types of partial linear statistics of the eigenvalues. The first type is defined with a threshold $u$ in the bulk of the Wigner semicircle law as $\mathcal{A}_n[f; u]=\sum_{l=1}^nf(\lambda_l)\mathbf{1}_{\{\lambda_l\leq u\}}$. The second one is $\mathcal{B}_n[f; k]=\sum_{l=1}^{k}f(\lambda_l)$ with a positive integer $k=k_n$ such that $k/n\rightarrow y\in (0,1)$ as $n$ tends to infinity. Moreover, we derive a weak convergence result for a partial sum process constructed from $\mathcal{B}_n[f; \lfloor nt\rfloor]$.

Greedy expansions in convex optimization

This paper is a follow-up to the author's previous paper on convex optimization. In that paper we began the process of adapting greedy-type algorithms from nonlinear approximation to the problem of finding sparse solutions of convex optimization problems. There we modified three of the most popular greedy algorithms in nonlinear approximation in Banach spaces -- the Weak Chebyshev Greedy Algorithm, the Weak Greedy Algorithm with Free Relaxation and the Weak Relaxed Greedy Algorithm -- for solving convex optimization problems. We continue to study sparse approximate solutions to convex optimization problems. It is known that in many engineering applications researchers are interested in an approximate solution of an optimization problem as a linear combination of elements from a given system of elements. There is an increasing interest in building such sparse approximate solutions using different greedy-type algorithms. In this paper we concentrate on greedy algorithms that provide expansions, meaning that the approximant at the $m$th iteration is equal to the sum of the approximant from the previous, $(m-1)$th, iteration and one element from the dictionary with an appropriate coefficient. The problem of greedy expansions of elements of a Banach space is well studied in nonlinear approximation theory. At first glance, the setting of the problem of expanding a given element and the setting of the expansion problem in optimization are very different. However, it turns out that the same technique can be used to solve both problems. We show how the technique developed in nonlinear approximation theory, in particular the greedy expansions technique, can be adjusted to find a sparse solution of an optimization problem given by an expansion with respect to a given dictionary.

Poisson noise reduction with non-local PCA

Photon-limited imaging, which arises in applications such as spectral imaging, night vision, nuclear medicine, and astronomy, occurs when the number of photons collected by a sensor is small relative to the desired image resolution. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse representations for image patches. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its simplicity, PCA-flavored denoising appears to be highly competitive in very low light regimes.
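
As a rough illustration of patch-based PCA denoising in the photon-limited regime, here is a toy sketch that goes through the Anscombe transform and then applies PCA shrinkage to patches. Note that the paper adapts PCA to Poisson statistics directly instead of relying on Anscombe, so this only conveys the general flavor:

```python
import numpy as np

rng = np.random.default_rng(15)

img = np.zeros((64, 64))
img[16:48, 16:48] = 4.0
img += 1.0                                   # low-count scene
noisy = rng.poisson(img).astype(float)

ans = 2.0 * np.sqrt(noisy + 3.0 / 8.0)       # Anscombe stabilization

p = 8                                        # patch size
steps = range(0, 64 - p + 1, 2)
patches = np.array([ans[i:i + p, j:j + p].ravel()
                    for i in steps for j in steps])
mean = patches.mean(axis=0)
U, s, Vt = np.linalg.svd(patches - mean, full_matrices=False)
k = 4                                        # keep a few principal components
den_patches = (U[:, :k] * s[:k]) @ Vt[:k] + mean

# Average overlapping denoised patches back into an image
out = np.zeros_like(ans)
cnt = np.zeros_like(ans)
idx = 0
for i in steps:
    for j in steps:
        out[i:i + p, j:j + p] += den_patches[idx].reshape(p, p)
        cnt[i:i + p, j:j + p] += 1
        idx += 1
out /= cnt
den = (out / 2.0) ** 2 - 3.0 / 8.0           # crude inverse Anscombe
print("noisy MSE   :", np.mean((noisy - img) ** 2).round(3))
print("denoised MSE:", np.mean((den - img) ** 2).round(3))
```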

Sensing with Optimal Matrices

We consider the problem of designing optimal $M \times N$ ($M \leq N$) sensing matrices which minimize the maximum condition number of all the submatrices of $K$ columns. Such matrices minimize the worst-case estimation errors when only $K$ sensors out of $N$ sensors are available for sensing at a given time. For $M=2$ and matrices with unit-normed columns, this problem is equivalent to the problem of maximizing the minimum singular value among all the submatrices of $K$ columns. For $M=2$, we are able to give a closed-form formula for the condition number of the submatrices. When $M=2$ and $K=3$, for an arbitrary $N\geq3$, we derive the optimal matrices which minimize the maximum condition number of all the submatrices of $K$ columns. Surprisingly, a uniformly distributed design is often not the optimal design minimizing the maximum condition number.
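
The design criterion is cheap to evaluate by brute force for small $N$ and $K$; the sketch below compares a uniformly spread $M=2$ design against a crude random search, in the spirit of the abstract's observation (no optimality is claimed for whatever the search finds):

```python
import numpy as np
from itertools import combinations

def max_condition_number(A, K):
    """Worst-case condition number over all K-column submatrices of A."""
    return max(np.linalg.cond(A[:, list(cols)])
               for cols in combinations(range(A.shape[1]), K))

def angles_to_matrix(theta):
    """M=2 matrix with unit-norm columns parameterized by angles."""
    return np.vstack([np.cos(theta), np.sin(theta)])

N, K = 6, 3
uniform = angles_to_matrix(np.pi * np.arange(N) / N)
print("uniform design, worst cond:",
      round(max_condition_number(uniform, K), 3))

rng = np.random.default_rng(10)
best = np.inf
for _ in range(2000):                         # crude random search
    cand = angles_to_matrix(np.pi * np.sort(rng.random(N)))
    best = min(best, max_condition_number(cand, K))
print("best random design found  :", round(best, 3))
```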


Fast MCMC sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors

Felix Lucka (Institute for Computational and Applied Mathematics, Institute for Biomagnetism and Biosignalanalysis, University of Münster, Germany)
Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, encoding similar sparsity constraints in the prior distribution of the Bayesian framework for inverse problems has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity-promoting inversion. A practical obstacle for these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion: accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. The most commonly applied Markov chain Monte Carlo (MCMC) sampling algorithms for this purpose are Metropolis-Hastings (MH) schemes. However, we demonstrate in this article that for sparse priors relying on L1-norms, their efficiency dramatically decreases when the level of sparsity or the dimension of the unknowns is increased. Practically, Bayesian inversion for L1-type priors using these samplers is not feasible at all. We therefore develop a sampling algorithm that relies on single-component Gibbs sampling. We show that the efficiency of our Gibbs sampler even increases when the level of sparsity or the dimension of the unknowns is increased. This property is not only distinct from the MH schemes but also challenges common beliefs about MCMC sampling.
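
The key enabler is that, under a Gaussian likelihood and an L1 prior, each single-component conditional is an explicit two-piece mixture of truncated Gaussians and can therefore be sampled exactly. Below is a minimal sketch of that conditional update; the decomposition is the standard one, re-derived by me, and this is not the paper's code:

```python
import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(11)

# Posterior: p(x|y) ~ exp(-||Ax - y||^2 / (2 sigma^2) - lam * ||x||_1)
def gibbs_l1(A, y, sigma, lam, n_sweeps=150):
    m, n = A.shape
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)
    r = y - A @ x
    for _ in range(n_sweeps):
        for j in range(n):
            r += A[:, j] * x[j]                  # remove component j
            alpha = col_sq[j] / (2 * sigma ** 2)
            beta = A[:, j] @ r / sigma ** 2
            sc = 1.0 / np.sqrt(2 * alpha)        # conditional std dev
            mp = (beta - lam) / (2 * alpha)      # mean, positive branch
            mm = (beta + lam) / (2 * alpha)      # mean, negative branch
            lp = alpha * mp ** 2 + norm.logcdf(mp / sc)   # branch log-masses
            lm = alpha * mm ** 2 + norm.logcdf(-mm / sc)
            p_pos = 1.0 / (1.0 + np.exp(np.clip(lm - lp, -60, 60)))
            if rng.random() < p_pos:
                x[j] = truncnorm.rvs(-mp / sc, np.inf, loc=mp, scale=sc,
                                     random_state=rng)
            else:
                x[j] = truncnorm.rvs(-np.inf, -mm / sc, loc=mm, scale=sc,
                                     random_state=rng)
            r -= A[:, j] * x[j]                  # restore residual
    return x

m, n = 30, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:5] = 3.0
y = A @ x_true + 0.05 * rng.standard_normal(m)
sample = gibbs_l1(A, y, sigma=0.05, lam=20.0)
print("large components in final sample:", int((np.abs(sample) > 0.5).sum()))
```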

Linearized Alternating Direction Method with Adaptive Penalty and Warm Starts for Fast Solving Transform Invariant Low-Rank Textures

Transform Invariant Low-rank Textures (TILT) is a novel and powerful tool that can effectively rectify a rich class of low-rank textures in 3D scenes from 2D images despite significant deformation and corruption. The existing algorithm for solving TILT is based on the alternating direction method (ADM). It suffers from high computational cost and is not theoretically guaranteed to converge to a correct solution. In this paper, we propose a novel algorithm to speed up solving TILT, with guaranteed convergence. Our method is based on the recently proposed linearized alternating direction method with adaptive penalty (LADMAP). To further reduce computation, warm starts are also introduced to initialize the variables better and cut the cost on singular value decomposition. Extensive experimental results on both synthetic and real data demonstrate that this new algorithm works much more efficiently and robustly than the existing algorithm. It could be at least five times faster than the previous method.

Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators

Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. First, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e., the sample complexity of tomography decreases with the rank. Second, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. 
We give a new theoretical analysis of compressed tomography, based on the restricted isometry property (RIP) for low-rank matrices. Using these tools, we obtain near-optimal error bounds, for the realistic situation where the data contains noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper-bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. 
Using numerical simulations, we compare the performance of two compressed sensing estimators with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher-fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. 
Finally, we show how to certify the accuracy of a low rank estimate using direct fidelity estimation and we describe a method for compressed quantum process tomography that works for processes with small Kraus rank.
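
A toy sketch of the compressed tomography setting: expectation values of a random subset of Pauli observables, fitted by least squares over the physical (PSD, unit-trace) set. This is a simple stand-in to show the measurement model, not the estimators analyzed in the paper:

```python
import numpy as np
import cvxpy as cp
from itertools import product

rng = np.random.default_rng(14)

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

n_qubits = 3
d = 2 ** n_qubits

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

paulis = [kron_all(ops) for ops in product([I2, X, Y, Z], repeat=n_qubits)]

# Unknown pure (rank-1) state
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
psi /= np.linalg.norm(psi)
rho_true = np.outer(psi, psi.conj())

# Expectation values of a random third of the Pauli observables
idx = rng.choice(len(paulis), size=len(paulis) // 3, replace=False)
y = np.array([np.real(np.trace(paulis[i] @ rho_true)) for i in idx])

rho = cp.Variable((d, d), hermitian=True)
resid = cp.hstack([cp.real(cp.trace(paulis[i] @ rho)) for i in idx]) - y
prob = cp.Problem(cp.Minimize(cp.sum_squares(resid)),
                  [rho >> 0, cp.real(cp.trace(rho)) == 1])
prob.solve()
print("overlap with true state:",
      np.real(psi.conj() @ rho.value @ psi).round(3))
```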

Robust subspace recovery by geodesically convex optimization

We reintroduce an M-estimator that was implicitly discussed by Tyler in 1987, to robustly recover the underlying linear model from a data set contaminated by outliers. We prove that the objective function of this estimator is geodesically convex on the manifold of all positive definite matrices, and propose a fast algorithm that obtains its unique minimum. In addition, we prove that when the inliers (i.e., points that are not outliers) are sampled from a subspace and the percentage of outliers is bounded by some number, then under some very weak assumptions this algorithm can recover the underlying subspace exactly. We also show empirically that our algorithm compares favorably with other convex algorithms for robust PCA.
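
Tyler's estimator itself is a short fixed-point iteration; here is a sketch, with the subspace read off the top eigenvectors of the recovered scatter matrix. The geodesic convexity analysis and exact-recovery conditions are the paper's contribution, not this iteration:

```python
import numpy as np

rng = np.random.default_rng(12)

def tyler(X, iters=100, tol=1e-8):
    """Tyler's M-estimator of scatter via the fixed-point iteration
    Sigma <- (d/n) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i),
    trace-normalized at every step."""
    n, d = X.shape
    Sigma = np.eye(d)
    for _ in range(iters):
        inv = np.linalg.inv(Sigma)
        w = 1.0 / np.einsum('ij,jk,ik->i', X, inv, X)
        new = (d / n) * (X * w[:, None]).T @ X
        new /= np.trace(new)
        if np.linalg.norm(new - Sigma) < tol:
            return new
        Sigma = new
    return Sigma

# Inliers near a 2D subspace of R^5, plus large outliers
n_in, n_out, d = 180, 40, 5
basis = np.linalg.qr(rng.standard_normal((d, 2)))[0]
inliers = (rng.standard_normal((n_in, 2)) @ basis.T
           + 0.01 * rng.standard_normal((n_in, d)))
outliers = 10.0 * rng.standard_normal((n_out, d))
X = np.vstack([inliers, outliers])

Sigma = tyler(X)
vals, vecs = np.linalg.eigh(Sigma)
est = vecs[:, -2:]                    # estimated subspace basis
cosines = np.linalg.svd(est.T @ basis, compute_uv=False)
print("subspace overlap (1.0 = exact):", cosines.min().round(3))
```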

Factoring nonnegative matrices with linear programs

This paper describes a new approach for computing nonnegative matrix factorizations (NMFs) with linear programming. The key idea is a data-driven model for the factorization, in which the most salient features in the data are used to express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C that satisfies X is approximately equal to CX and some linear constraints. The matrix C selects features, which are then used to compute a low-rank NMF of X. A theoretical analysis demonstrates that this approach has the same type of guarantees as the recent NMF algorithm of Arora et al. (2012). In contrast with this earlier work, the proposed method (1) has better noise tolerance, (2) extends to more general noise models, and (3) leads to efficient, scalable algorithms. Experiments with synthetic and real datasets provide evidence that the new approach is also superior in practice. An optimized C++ implementation of the new algorithm can factor a multi-Gigabyte matrix in a matter of minutes.


Alternating Direction Methods for Latent Variable Gaussian Graphical Model Selection

Chandrasekaran, Parrilo and Willsky (2010) proposed a convex optimization problem to characterize graphical model selection in the presence of unobserved variables. This convex optimization problem aims to estimate an inverse covariance matrix that can be decomposed into a sparse matrix minus a low-rank matrix from sample data. Solving this convex optimization problem is very challenging, especially for large problems. In this paper, we propose a novel alternating direction method of multipliers (ADMM) for solving this problem. The classical ADMM does not apply to this problem because there are three blocks in the problem, and for that case there is currently no convergence guarantee. Our method is a variant of the classical ADMM that consists of only two blocks, with one of the subproblems solved inexactly. Our method exploits and takes advantage of the special structure of the problem and thus can solve large problems very efficiently. A global convergence result is established for the proposed method. Numerical results on both synthetic data and gene expression data show that our method usually solves problems with one million variables in one to two minutes, and is usually five to thirty-five times faster than a state-of-the-art Newton-CG proximal point algorithm.

Sparse Trace Norm Regularization

We study the problem of estimating multiple predictive functions from a dictionary of basis functions in the nonparametric regression setting. Our estimation scheme assumes that each predictive function can be estimated in the form of a linear combination of the basis functions. By assuming that the coefficient matrix admits a sparse low-rank structure, we formulate the function estimation problem as a convex program regularized by the trace norm and the $\ell_1$-norm simultaneously. We propose to solve the convex program using the accelerated gradient (AG) method and the alternating direction method of multipliers (ADMM) respectively; we also develop efficient algorithms to solve the key components in both AG and ADMM. In addition, we conduct theoretical analysis on the proposed function estimation scheme: we derive a key property of the optimal solution to the convex program; based on an assumption on the basis functions, we establish a performance bound of the proposed function estimation scheme (via the composite regularization). Simulation studies demonstrate the effectiveness and efficiency of the proposed algorithms.
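
The two building blocks of the composite regularization are standard proximity operators: entrywise soft-thresholding for the $\ell_1$ norm and singular value thresholding for the trace norm. A quick sketch; note that the prox of the sum is not simply the composition of the two, which is why dedicated AG/ADMM components are developed in the paper:

```python
import numpy as np

def prox_l1(W, t):
    """Proximity operator of t*||W||_1: entrywise soft-thresholding."""
    return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)

def prox_trace(W, t):
    """Proximity operator of t*||W||_*: singular value thresholding."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

rng = np.random.default_rng(13)
W = rng.standard_normal((20, 10))
print("rank after SVT      :", np.linalg.matrix_rank(prox_trace(W, 2.0)))
print("zeros after soft-thr:", int((prox_l1(W, 1.0) == 0).sum()))
```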




In this paper, a novel multi-target sparse localization (SL) algorithm based on compressive sampling (CS) is proposed. Different from the existing literature on target counting and localization, where signal/received-signal-strength (RSS) readings at different access points (APs) are used separately, we propose to reformulate the SL problem so that we can make use of the cross-correlations of the signal readings at different APs. We analytically show that this new framework can provide a considerable amount of extra information compared to classical SL algorithms. We further highlight that in some cases this extra information converts the under-determined SL problem into an over-determined problem, for which we can use ordinary least-squares (LS) to efficiently recover the target vector even if it is not sparse. Our simulation results illustrate that, compared to classical SL, this extra information leads to a considerable improvement in terms of the number of localizable targets as well as localization accuracy.

Compressive sensing is an emerging area which uses a relatively small number of non-traditional samples in the form of randomized projections to reconstruct sparse or compressible signals. This study considers the carrier frequency offset estimation problem for interleaved orthogonal frequency-division multiple-access (OFDMA) uplink systems. A new carrier frequency offset estimation method based on compressive sensing theory is proposed to estimate the carrier frequency offsets in interleaved OFDMA uplink systems. The presented method can effectively estimate the carrier frequency offsets of all active users by finding the sparsest coefficients. Simulation results are presented to verify the efficiency of the proposed approach.


Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
