
Sunday, February 17, 2008

Compressed Sensing: Updates from Rice, a course and a blog

Petar Maymounkov has a blog where he discusses subjects related to Compressed Sensing; here are the latest entry and an earlier one (LP decoding and WP vs. RIP). Through that blog, I found the webpage of the course given by Piotr Indyk last fall at MIT, entitled Sketching, Streaming and Sub-linear Space Algorithms. The course introduction reads:

Over the last few years, a new approach to computing succinct approximate "sketches" of data has been discovered. These sketches are much shorter (often exponentially!) than the original data, but nevertheless they retain important and useful information, such as the number of distinct elements in the data set, the "similarity" between the data elements, etc. The methods have been successfully applied to web data compression, approximate query processing in databases, network measurement and signal processing/acquisition.

There are many different "manifestations" of the sketching approach: data stream algorithms, randomized dimensionality reduction and compressive sensing. In this course we will cover results from those areas in a unified and systematic manner. On the way, we will introduce the necessary tools from communication complexity, statistics, geometric functional analysis, combinatorial group testing, and other areas.
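
As a toy illustration of the sketching idea (my own minimal example, not course material), the snippet below projects points into a much lower dimension with a random Gaussian matrix, in the spirit of the Johnson-Lindenstrauss lemma, and checks that pairwise distances survive:

```python
import numpy as np

# Minimal sketch of randomized dimensionality reduction: project n points from
# dimension N down to d << N with a random Gaussian matrix and check that
# pairwise distances are approximately preserved.
rng = np.random.default_rng(0)
n, N, d = 50, 10_000, 400

X = rng.standard_normal((n, N))               # original high-dimensional points
A = rng.standard_normal((d, N)) / np.sqrt(d)  # random sketching matrix
Y = X @ A.T                                   # the "sketch": much shorter vectors

# Compare a few pairwise distances before and after sketching.
for i, j in [(0, 1), (2, 3), (4, 5)]:
    orig = np.linalg.norm(X[i] - X[j])
    sk = np.linalg.norm(Y[i] - Y[j])
    print(f"pair ({i},{j}): original {orig:.2f}, sketched {sk:.2f}")
```
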
The Rice Compressed Sensing site has new entries (and not-so-new ones that are now more easily available). I list them below with their abstracts:

* Przemysław Wojtaszczyk, Stability and instance optimality for Gaussian measurements in compressed sensing
In compressed sensing we seek to gain information about a vector x belonging to R^N from d much less than N nonadaptive linear measurements. Candes, Donoho, Tao et al. (see e.g. [2, 4, 8]) proposed to seek a good approximation to x via l1 minimisation. In this paper we show that, in the case of Gaussian measurements, it recovers the signal well from inaccurate measurements, thus improving the result from [4]. We also show that with high probability it gives information comparable with the best k-term approximation in the Euclidean norm, k ~ d/ln N. This provides the first numerically friendly algorithm to do so, see [7].
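
For readers who want to experiment, here is a small hypothetical instance of the l1 recovery problem the paper studies, cast as a linear program (a sketch of basis pursuit with Gaussian measurements; sizes and seeds are my own choices, not the authors'):

```python
import numpy as np
from scipy.optimize import linprog

# Recover a k-sparse x in R^N from d << N Gaussian measurements y = A x by
# solving min ||x||_1 subject to A x = y, written as an LP with x = u - v.
rng = np.random.default_rng(1)
N, d, k = 200, 60, 8

x_true = np.zeros(N)
support = rng.choice(N, k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((d, N)) / np.sqrt(d)  # Gaussian measurement matrix
y = A @ x_true

c = np.ones(2 * N)                  # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])           # equality constraint A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```
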

* Irina Gorodnitsky and Bhaskar Rao, Sparse signal reconstruction from limited data using FOCUSS: A re-weighted norm minimization algorithm
We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
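
The core iteration is simple enough to sketch. Below is a minimal rendering of the basic FOCUSS loop as I read the abstract (problem sizes and the pruning threshold eps are my own choices, not the authors' code):

```python
import numpy as np

# Basic FOCUSS sketch: start from the minimum-norm (low-resolution) estimate,
# then repeatedly reweight by the previous iterate so that energy concentrates
# on a few entries.
def focuss(A, y, n_iter=30, eps=1e-8):
    x = np.linalg.pinv(A) @ y            # low-resolution initial estimate
    for _ in range(n_iter):
        W = np.diag(np.abs(x))           # weights from the preceding solution
        q = np.linalg.pinv(A @ W) @ y    # weighted minimum-norm step
        x = W @ q
        x[np.abs(x) < eps] = 0.0         # prune entries that have died out
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 80))
x_true = np.zeros(80); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
y = A @ x_true
print(np.nonzero(focuss(A, y))[0])       # ideally recovers the support {3, 17, 42}
```
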
* Bhaskar Rao and Kenneth Kreutz-Delgado, An affine scaling methodology for best basis selection
A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser and Donoho. These measures include the p-norm-like (p ≤ 1) diversity measures and the Gaussian and Shannon entropies. The algorithm development methodology uses a factored representation for the gradient and involves successive relaxation of the Lagrangian necessary condition. This yields algorithms that are intimately related to the Affine Scaling Transformation (AST) based methods commonly employed by the interior point approach to nonlinear optimization. The algorithms minimizing the (p ≤ 1) diversity measures are equivalent to a recently developed class of algorithms called FOCal Underdetermined System Solver (FOCUSS). The general nature of the methodology provides a systematic approach for deriving this class of algorithms and a natural mechanism for extending them. It also facilitates a better understanding of the convergence behavior and a strengthening of the convergence results. The Gaussian entropy minimization algorithm is shown to be equivalent to a well-behaved p = 0 norm-like optimization algorithm. Computer experiments demonstrate that the p-norm-like and the Gaussian entropy algorithms perform well, converging to sparse solutions. The Shannon entropy algorithm produces solutions that are concentrated but are shown to not converge to a fully sparse solution.

* Shane Cotter, J. Adler, Bhaskar Rao and Kenneth Kreutz-Delgado, Forward sequential algorithms for best basis selection

* Bhaskar Rao, K. Engan, S.F. Cotter, Jason Palmer, Kenneth Kreutz-Delgado, Subset selection in noise based on diversity measure minimization
In this paper, we develop robust methods for subset selection based on the minimization of diversity measures. A Bayesian framework is used to account for noise in the data and a maximum a posteriori (MAP) estimation procedure leads to an iterative procedure which is a regularized version of the FOCal Underdetermined System Solver (FOCUSS) algorithm. The convergence of the regularized FOCUSS algorithm is established and it is shown that the stable fixed points of the algorithm are sparse. We investigate three different criteria for choosing the regularization parameter: quality of fit, sparsity criterion, and L-curve. The L-curve method, as applied to the problem of subset selection, is found not to be robust, and we propose a novel modified L-curve procedure that solves this problem. Each of the regularized FOCUSS algorithms is evaluated through simulation of a detection problem, and the results are compared with those obtained using a sequential forward selection algorithm termed orthogonal matching pursuit (OMP). In each case, the regularized FOCUSS algorithm is shown to be superior to the OMP in noisy environments.
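
A hedged sketch of the regularized step described above (my simplified reading of the abstract; lam stands in for the regularization parameter the authors select by quality of fit, a sparsity criterion, or their modified L-curve):

```python
import numpy as np

# Regularized FOCUSS sketch for noisy data: each step is a Tikhonov-regularized
# weighted minimum-norm solve instead of a bare pseudoinverse.
def regularized_focuss(A, y, lam=1e-2, n_iter=30):
    m = A.shape[0]
    x = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), y)  # regularized init
    for _ in range(n_iter):
        W = np.diag(np.abs(x))
        AW = A @ W
        # Tikhonov-regularized weighted minimum-norm step
        x = W @ (AW.T @ np.linalg.solve(AW @ AW.T + lam * np.eye(m), y))
    return x
```
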

* Shane Cotter, Bhaskar Rao, K. Engan, and Kenneth Kreutz-Delgado, Sparse solutions to linear inverse problems with multiple measurement vectors
We address the problem of finding sparse solutions to an underdetermined system of equations when there are multiple measurement vectors having the same, but unknown, sparsity structure. The single-measurement sparse solution problem has been extensively studied in the past. Although known to be NP-hard, many single-measurement suboptimal algorithms have been formulated that have found utility in many different applications. Here, we consider in depth the extension of two classes of algorithms, Matching Pursuit (MP) and FOCal Underdetermined System Solver (FOCUSS), to the multiple measurement case so that they may be used in applications such as neuromagnetic imaging, where multiple measurement vectors are available, and solutions with a common sparsity structure must be computed. Cost functions appropriate to the multiple measurement problem are developed, and algorithms are derived based on their minimization. A simulation study is conducted on a test-case dictionary to show how the utilization of more than one measurement vector improves the performance of the MP and FOCUSS classes of algorithms, and their performances are compared.
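
To make the multiple-measurement extension concrete, here is a minimal sketch of a row-weighted FOCUSS iteration for this setting (my own illustration, not the paper's code): the weights come from the row norms of the previous solution, so all columns of the solution are pushed toward one shared sparsity pattern.

```python
import numpy as np

# M-FOCUSS-style sketch: the columns of Y share one (unknown) support, so each
# row of X gets a single weight, the l2 norm of that row in the last iterate.
def m_focuss(A, Y, n_iter=30):
    X = np.linalg.pinv(A) @ Y
    for _ in range(n_iter):
        w = np.linalg.norm(X, axis=1)            # one weight per row
        W = np.diag(w)
        X = W @ (np.linalg.pinv(A @ W) @ Y)      # weighted minimum-norm step
    return X
```
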

* David Wipf and Bhaskar Rao, Sparse Bayesian learning for basis selection
Sparse Bayesian learning (SBL) and specifically relevance vector machines have received much attention in the machine learning literature as a means of achieving parsimonious representations in the context of regression and classification. The methodology relies on a parameterized prior that encourages models with few nonzero weights. In this paper, we adapt SBL to the signal processing problem of basis selection from overcomplete dictionaries, proving several results about the SBL cost function that elucidate its general behavior and provide solid theoretical justification for this application. Specifically, we show that SBL retains a desirable property of the 0-norm diversity measure (i.e., the global minimum is achieved at the maximally sparse solution) while often possessing a more limited constellation of local minima. We also demonstrate that the local minima that do exist are achieved at sparse solutions. We then provide a novel interpretation of SBL that gives us valuable insight into why it is successful in producing sparse representations. Finally, we include simulation studies comparing sparse Bayesian learning with Basis Pursuit and the more recent FOCal Underdetermined System Solver (FOCUSS) class of basis selection algorithms. These results indicate that our theoretical insights translate directly into improved performance.
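
As a rough illustration of the SBL machinery, here is a condensed sketch of the standard EM-style hyperparameter updates (my own rendering, with the noise variance sigma2 assumed known for simplicity; not the authors' implementation):

```python
import numpy as np

# SBL sketch: y = Phi w + noise, with prior w ~ N(0, diag(gamma)). The EM
# update gamma_i = mu_i^2 + Sigma_ii drives most gamma_i toward zero, which
# prunes the corresponding basis vectors and yields a sparse representation.
def sbl(Phi, y, sigma2=1e-3, n_iter=200):
    m, n = Phi.shape
    gamma = np.ones(n)
    for _ in range(n_iter):
        B = Phi * gamma                      # Phi @ diag(gamma), column-scaled
        Sigma_y = sigma2 * np.eye(m) + B @ Phi.T
        K = np.linalg.solve(Sigma_y, B)      # Sigma_y^{-1} Phi diag(gamma)
        mu = K.T @ y                         # posterior mean of the weights
        Sigma_diag = gamma - np.einsum('ij,ij->j', B, K)  # posterior variances
        gamma = mu**2 + Sigma_diag           # EM update of the hyperparameters
    return mu, gamma
```
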

* David Wipf, Jason Palmer and Bhaskar Rao, Perspectives on Sparse Bayesian Learning.
Recently, relevance vector machines (RVM) have been fashioned from a sparse Bayesian learning (SBL) framework to perform supervised learning using a weight prior that encourages sparsity of representation. The methodology incorporates an additional set of hyperparameters governing the prior, one for each weight, and then adopts a specific approximation to the full marginalization over all weights and hyperparameters. Despite its empirical success, however, no rigorous motivation for this particular approximation is currently available. To address this issue, we demonstrate that SBL can be recast as the application of a rigorous variational approximation to the full model by expressing the prior in a dual form. This formulation obviates the necessity of assuming any hyperpriors and leads to natural, intuitive explanations of why sparsity is achieved in practice.
* David Wipf and Bhaskar Rao, An empirical Bayesian strategy for solving the simultaneous sparse approximation problem.
Given a large overcomplete dictionary of basis vectors, the goal is to simultaneously represent L > 1 signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where multiple responses exist that were putatively generated by the same small subset of features. Ideally, the associated sparse generating weights should be recovered, which can have physical significance in many applications (e.g., source localization). The generic solution to this problem is intractable and, therefore, approximate procedures are sought. Based on the concept of automatic relevance determination, this paper uses an empirical Bayesian prior to estimate a convenient posterior distribution over candidate basis vectors. This particular approximation enforces a common sparsity profile and consistently places its prominent posterior mass on the appropriate region of weight-space necessary for simultaneous sparse recovery. The resultant algorithm is then compared with multiple response extensions of matching pursuit, basis pursuit, FOCUSS, and Jeffreys prior-based Bayesian methods, finding that it often outperforms the others. Additional motivation for this particular choice of cost function is also provided, including the analysis of global and local minima and a variational derivation that highlights the similarities and differences between the proposed algorithm and previous approaches.
* Michael Lustig, Juan Manuel Santos, David Donoho, and John Pauly, k-t SPARSE: High frame rate dynamic MRI exploiting spatio-temporal sparsity.
Recently, rapid imaging methods that exploit the spatial sparsity of images using under-sampled randomly perturbed spirals and non-linear reconstruction have been proposed [1,2]. These methods were inspired by theoretical results in sparse signal recovery [1-5] showing that sparse or compressible signals can be recovered from randomly under-sampled frequency data. We propose a method for high frame-rate dynamic imaging based on similar ideas, now exploiting both the spatial and temporal sparsity of dynamic MRI image sequences (the dynamic scene). We randomly under-sample k-t space by random ordering of the phase encodes in time (Fig. 1). We reconstruct by minimizing the L1 norm of a transformed dynamic scene subject to data fidelity constraints. Unlike previously suggested linear methods [7, 8], our method does not require a known spatio-temporal structure nor a training set, only that the dynamic scene has a sparse representation. We demonstrate a 7-fold frame-rate acceleration both in simulated data and in vivo non-gated Cartesian balanced-SSFP cardiac MRI.
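
The reconstruction step is easy to mimic in one dimension. The toy below is my own simplification (the signal itself is sparse, the measurements are randomly chosen Fourier coefficients, and lam is an arbitrary choice): it minimizes an l1-penalized data-fidelity objective by iterative soft-thresholding, whereas the actual method applies the same idea in k-t space through a sparsifying transform of the dynamic scene.

```python
import numpy as np

# 1-D toy: recover a sparse signal from a random subset of its (unitary) DFT
# coefficients by ISTA, i.e. gradient steps on ||F x - y||^2 followed by
# soft-thresholding, which promotes sparsity of x.
rng = np.random.default_rng(3)
n, m = 256, 64
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)

idx = rng.choice(n, m, replace=False)          # random sampling pattern
F = np.fft.fft(np.eye(n), norm="ortho")[idx]   # undersampled unitary DFT rows
y = F @ x_true

lam = 0.01                                     # trades sparsity against bias
x = np.zeros(n, dtype=complex)
for _ in range(300):
    g = x - F.conj().T @ (F @ x - y)           # gradient step (step size 1)
    mag = np.abs(g)
    x = g * np.maximum(mag - lam, 0) / np.maximum(mag, 1e-12)  # soft-threshold
print("reconstruction error:", np.linalg.norm(x.real - x_true))
```
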

* Hong Jung, Jong Chul Ye, and Eung Yeop Kim, Improved k-t BLAST and k-t SENSE using FOCUSS.
The dynamic MR imaging of time-varying objects, such as beating hearts or brain hemodynamics, requires a significant reduction of the data acquisition time without sacrificing spatial resolution. The classical approaches for this goal include parallel imaging, temporal filtering, and their combinations. Recently, model-based reconstruction methods called k-t BLAST and k-t SENSE have been proposed which largely overcome the drawbacks of the conventional dynamic imaging methods without a priori knowledge of the spectral support. Another recent approach called k-t SPARSE also does not require exact knowledge of the spectral support. However, unlike k-t BLAST/SENSE, k-t SPARSE employs the so-called compressed sensing theory rather than using training. The main contribution of this paper is a new theory and algorithm that unifies the above mentioned approaches while overcoming their drawbacks. Specifically, we show that the celebrated k-t BLAST/SENSE are special cases of our algorithm, which is asymptotically optimal from the compressed sensing theory perspective. Experimental results show that the new algorithm can successfully reconstruct a high resolution cardiac sequence and functional MRI data even from severely limited k-t samples, without incurring the aliasing artifacts often observed in conventional methods.

* Jong Chul Ye, Compressed sensing shape estimation of star-shaped objects in Fourier imaging.
Recent theory of compressed sensing informs us that near-exact recovery of an unknown sparse signal is possible from a very limited number of Fourier samples by solving a convex L1 optimization problem. The main contribution of the present paper is a novel non-parametric shape estimation framework and a computational algorithm for binary star-shaped objects, whose radius functions belong to the space of bounded-variation functions. Specifically, in contrast with standard compressed sensing, the present approach involves directly reconstructing the shape boundary under a sparsity constraint. This is done by converting the standard pixel-based reconstruction approach into estimation of a non-parametric shape boundary on a wavelet basis. This results in an L1 minimization under a nonlinear constraint, which makes the optimization problem nonconvex. We solve the problem by successive linearization and application of one-dimensional compressed sensing, which significantly reduces the number of samples required as well as the computational burden. Fourier imaging simulation results demonstrate that high quality reconstruction can be quickly obtained from a very limited number of samples. Furthermore, the algorithm outperforms the standard compressed sensing reconstruction approach using the total variation norm.

* Joshua Trzasko, Armando Manduca, and Eric Borisch, Highly undersampled magnetic resonance image reconstruction via homotopic ell-0-minimization. The abstract reads:

In clinical Magnetic Resonance Imaging (MRI), any reduction in scan time offers a number of potential benefits ranging from high-temporal-rate observation of physiological processes to improvements in patient comfort. Following recent developments in Compressive Sensing (CS) theory, several authors have demonstrated that certain classes of MR images which possess sparse representations in some transform domain can be accurately reconstructed from very highly undersampled K-space data by solving a convex L1-minimization problem. Although L1-based techniques are extremely powerful, they inherently require a degree of over-sampling above the theoretical minimum sampling rate to guarantee that exact reconstruction can be achieved. In this paper, we propose a generalization of the Compressive Sensing paradigm based on homotopic approximation of the L0 semi-norm and show how MR image reconstruction can be pushed even further below the Nyquist limit and significantly closer to the theoretical bound. Following a brief review of standard Compressive Sensing methods and the developed theoretical extensions, several example MRI reconstructions from highly undersampled K-space data are presented.
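
To give a feel for the homotopy idea, here is a sketch close in spirit to smoothed-L0 schemes (not necessarily the authors' exact algorithm; all parameters are my own choices): replace the L0 count with a smooth surrogate, minimize it on the constraint set, and gradually tighten the surrogate toward the true semi-norm.

```python
import numpy as np

# Homotopic L0 sketch: minimize sum(1 - exp(-x^2 / (2*sigma^2))), a smooth
# surrogate for the L0 count, over {x : Ax = y}, shrinking sigma each round so
# the surrogate homotopically approaches the L0 semi-norm.
def homotopic_l0(A, y, n_outer=20, n_inner=10, mu=2.0, decay=0.7):
    pinvA = np.linalg.pinv(A)
    x = pinvA @ y                          # start at the minimum-norm solution
    sigma = 2.0 * np.max(np.abs(x))
    for _ in range(n_outer):
        for _ in range(n_inner):
            delta = x * np.exp(-x**2 / (2 * sigma**2))  # scaled surrogate gradient
            x = x - mu * delta
            x = x - pinvA @ (A @ x - y)    # project back onto {x : Ax = y}
        sigma *= decay                     # tighten the homotopy parameter
    return x
```
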

* Irina Gorodnitsky, J. George and Bhaskar Rao, Neuromagnetic source imaging with FOCUSS: A recursive weighted minimum norm algorithm. The abstract reads:

The paper describes a new algorithm for tomographic source reconstruction in neural electromagnetic inverse problems. Termed FOCUSS (FOCal Underdetermined System Solution), this algorithm combines the desired features of the two major approaches to electromagnetic inverse procedures. Like multiple current dipole modeling methods, FOCUSS produces high resolution solutions appropriate for the highly localized sources often encountered in electromagnetic imaging. Like linear estimation methods, FOCUSS allows current sources to assume arbitrary shapes and it preserves the generality and ease of application characteristic of this group of methods. It stands apart from standard signal processing techniques because, as an initialization-dependent algorithm, it accommodates the non-unique set of feasible solutions that arise from the neuroelectric source constraints. FOCUSS is based on recursive, weighted norm minimization. The consequence of the repeated weighting procedure is, in effect, to concentrate the solution in the minimal active regions that are essential for accurately reproducing the measurements. The FOCUSS algorithm is introduced and its properties are illustrated in the context of a number of simulations, first using exact measurements in 2- and 3-D problems, and then in the presence of noise and modeling errors. The results suggest that FOCUSS is a powerful algorithm with considerable utility for tomographic current estimation.

* Shane Cotter and Bhaskar Rao, Sparse channel estimation via matching pursuit. The abstract reads:
Channels with a sparse impulse response arise in a number of communication applications. Exploiting the sparsity of the channel, we show how an estimate of the channel may be obtained using a matching pursuit (MP) algorithm. This estimate is compared to thresholded variants of the least squares (LS) channel estimate. Among these sparse channel estimates, the MP estimate is computationally much simpler to implement and a shorter training sequence is required to form an accurate channel estimate leading to greater information throughput.
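
Matching pursuit itself fits in a few lines. Here is a minimal sketch in this setting (my toy formulation, not the paper's: y = A h + noise, with A built from the known training sequence and h the sparse impulse response):

```python
import numpy as np

# Matching pursuit sketch: greedily pick the dictionary column most correlated
# with the residual, update that coefficient, and deflate the residual.
def matching_pursuit(A, y, n_taps):
    r = y.astype(float).copy()
    h = np.zeros(A.shape[1])
    for _ in range(n_taps):
        c = A.T @ r                          # correlate residual with columns
        i = np.argmax(np.abs(c))             # strongest channel tap
        step = c[i] / (A[:, i] @ A[:, i])
        h[i] += step                         # update that coefficient
        r -= step * A[:, i]                  # deflate the residual
    return h
```
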
