
Thursday, April 15, 2010

CS: Block-Based Compressed Sensing of Images and Video, multiple findings, Physics of Algorithms '09

Jim Fowler let me know of his recent presentation entitled Block-Based Compressed Sensing of Images and Video, the abstract of which reads:
Compressed sensing has received significant attention in recent years, primarily as a mathematical phenomenon. There has been, on the other hand, significantly less attention paid towards incorporation of compressed-sensing methodology into practical signal-processing applications. In this talk, we consider compressed sensing in the context of several image and video applications. The foundations of our discussion rest on a recent block-based strategy for compressed-sensing recovery of a single still image. In our approach, block-based random image sampling is coupled with a projection-driven compressed-sensing recovery that encourages sparsity in the domain of an image transform simultaneously with a smooth reconstructed image. The proposed approach yields images with quality that matches or exceeds that produced by a popular, yet computationally expensive, technique which minimizes total variation. This still-image reconstruction is then extended to multiview image sets, incorporating inter-image disparity compensation into the image-recovery process in order to take advantage of the high degree of inter-image correlation common to multiview scenarios. Finally, a similar approach is adopted for the reconstruction of video in which each frame has been subject to block-based random projections, and motion estimation and motion compensation across an entire group of frames informs the compressed-sensing recovery process.
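Since the abstract stays at a high level, here is a minimal sketch of the block-based sampling step it describes (my own Python stand-in, not Fowler's code): every BxB block is vectorized and projected with the same small random matrix, which is what keeps the sensing operator manageable compared to a full-image measurement matrix. The block size and subsampling rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
B = 16                      # block size (an assumption; the papers vary this)
subrate = 0.3               # fraction of measurements kept per block
M = int(subrate * B * B)    # measurements per block

# A single random projection shared by every block: this is what keeps the
# sensing operator tiny compared to a full-image measurement matrix.
Phi = rng.standard_normal((M, B * B)) / np.sqrt(M)

def block_cs_measure(img):
    """Vectorize each BxB block and project it with the shared Phi."""
    H, W = img.shape
    ys = []
    for i in range(0, H, B):
        for j in range(0, W, B):
            ys.append(Phi @ img[i:i+B, j:j+B].reshape(-1))
    return np.array(ys)

img = rng.random((64, 64))             # stand-in for a real image
print(block_cs_measure(img).shape)     # (16 blocks, 76 measurements each)
```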


I just found an expository paper on an architecture for a random lens imager: Compressive measurement for target tracking in persistent, pervasive surveillance applications by Michael E. Gehm, and Michael D. Stenner. The abstract reads:
Motion tracking in persistent surveillance applications enters an interesting regime when the movers are of a size on the order of the image resolution elements or smaller. In this case, for reasonable scenes, information about the movers is a natively sparse signal—in an observation of a scene at two closely separated time-steps, only a small number of locations (those associated with the movers) will have changed dramatically. Thus, this particular application is well-suited for compressive sensing techniques that attempt to efficiently measure sparse signals. Recently, we have been investigating two different approaches to compressive measurement for this application. The first, differential Combinatorial Group Testing (dCGT), is a natural extension of group testing ideas to situations where signal differences are sparse. The second methodology is an l_1-minimization based recovery approach centered on recent work in random (and designed) multiplex sensing. In this manuscript we will discuss these methods as they apply to the motion tracking problem, discuss various performance limits, present early simulation results, and discuss notional optical architectures for implementing a compressive measurement scheme.
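The differential setting lends itself to a small illustration. The sketch below (a generic stand-in, not the authors' dCGT or solver code) shows why the problem is compressive-sensing-friendly: if the same linear sensor observes both time steps, the difference of the measurement vectors is exactly a compressed observation of the sparse change signal, which a plain l_1 solver (here ISTA) can recover.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 400, 80, 5                     # scene pixels, measurements, movers

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x1 = rng.random(n)                       # frame at time t (dense background)
d = np.zeros(n)
idx = rng.choice(n, k, replace=False)
d[idx] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))
x2 = x1 + d                              # frame at t+1: only k pixels changed

# The same sensor observes both frames, so subtracting the measurement
# vectors yields a compressive observation of the sparse difference alone.
y = Phi @ x2 - Phi @ x1                  # equals Phi @ d

# Plain ISTA for min 0.5 ||Phi d - y||^2 + lam ||d||_1, a generic stand-in.
lam = 0.02
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
d_hat = np.zeros(n)
for _ in range(500):
    g = d_hat - step * Phi.T @ (Phi @ d_hat - y)
    d_hat = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)

print("true movers:", sorted(idx), "found:", np.nonzero(np.abs(d_hat) > 0.5)[0])
```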
I mentioned it before, but it looks like it has been updated: Phase Transitions for Greedy Sparse Approximation Algorithms by Jeffrey Blanchard, Coralia Cartis, Jared Tanner and Andrew Thompson. The abstract reads:
A major enterprise in compressed sensing and sparse approximation is the design and analysis of computationally tractable algorithms for recovering sparse, exact or approximate, solutions of underdetermined linear systems of equations. Many such algorithms have now been proven to have optimal-order uniform recovery guarantees using the ubiquitous Restricted Isometry Property (RIP). However, it is unclear when the RIP-based sufficient conditions on the algorithm are satisfied. We present a framework in which this task can be achieved; translating these conditions for Gaussian measurement matrices into requirements on the signal's sparsity level, length, and number of measurements. We illustrate this approach on three of the state-of-the-art greedy algorithms: CoSaMP, Subspace Pursuit (SP), and Iterative Hard Thresholding (IHT). Designed to allow a direct comparison of existing theory, our framework implies that, according to the best known bounds, IHT requires the fewest number of compressed sensing measurements and has the lowest per iteration computational cost of the three algorithms compared here.
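For reference, the cheapest of the three algorithms is also the simplest to write down. Here is a hedged sketch of Iterative Hard Thresholding (the dimensions and the conservative step size are my own choices, not the paper's):

```python
import numpy as np

def iht(Phi, y, s, iters=500):
    """Iterative Hard Thresholding: a gradient step on ||y - Phi x||^2
    followed by keeping only the s largest-magnitude entries."""
    mu = 1.0 / np.linalg.norm(Phi, 2) ** 2   # conservative step for stability
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x + mu * Phi.T @ (y - Phi @ x)
        small = np.argsort(np.abs(x))[:-s]   # everything but the top s
        x[small] = 0.0
    return x

rng = np.random.default_rng(2)
n, m, s = 256, 100, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
print(np.linalg.norm(iht(Phi, Phi @ x0, s) - x0))
```

The per-iteration cost is a pair of matrix-vector products plus a partial sort, which is the reason IHT comes out cheapest in the comparison above.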
Of related interest: Matrix Compression using the Nystrom Method by Arik Nemtsov, Amir Averbuch, Alon Schclar. The abstract reads:
The Nyström method is routinely used for out-of-sample extension of kernel matrices. We describe how this method can be applied to find the singular value decomposition (SVD) of general matrices and the eigenvalue decomposition (EVD) of square matrices. We take as an input a matrix M ∈ R^(m×n), a user-defined integer s ≤ min(m, n) and A_M ∈ R^(s×s), a matrix sampled from the columns and rows of M. These are used to construct an approximate rank-s SVD of M in O(s^2(m + n)) operations. If M is square, the rank-s EVD can be similarly constructed in O(s^2 n) operations. In this sense, A_M is a compressed version of M. We discuss theoretical considerations for the choice of A_M and how it relates to the approximation error. Finally, we propose an algorithm that selects a good initial sample for a pivoted version of M. The algorithm relies on a previously computed approximate rank-s decomposition of M, termed M_decomp. We show that ||M_decomp − M||_2 is related to the Nyström approximation error when the selected sample is employed. Then, the user can choose a more computationally expensive algorithm for computing M_decomp when higher accuracy is required. We present experimental results that evaluate our sample selection algorithm for general matrices and kernel matrices. The algorithm is shown to work well for matrices whose spectra exhibit fast decay.
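As a rough illustration of the construction (a CUR-style sketch under my own assumptions, not the authors' exact algorithm), sampling s rows and columns and pivoting on the s×s intersection A_M gives a rank-s approximation M ≈ C A_M^+ R:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, s = 300, 200, 20

# A test matrix with fast spectral decay, the regime the paper targets.
U = rng.standard_normal((m, 40))
V = rng.standard_normal((40, n))
M = U @ np.diag(0.5 ** np.arange(40)) @ V

rows = rng.choice(m, s, replace=False)
cols = rng.choice(n, s, replace=False)
C = M[:, cols]                   # m x s sampled columns
R = M[rows, :]                   # s x n sampled rows
A_M = M[np.ix_(rows, cols)]      # s x s intersection, the "compressed" matrix

# Generalized-Nystrom / CUR-style rank-s approximation: M ~ C A_M^+ R.
M_hat = C @ np.linalg.pinv(A_M) @ R
print(np.linalg.norm(M - M_hat, 2) / np.linalg.norm(M, 2))
```

An SVD of the approximation within the quoted O(s^2(m + n)) budget then follows by orthogonalizing C and R with thin QR factorizations rather than forming the product explicitly.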

The next paper is behind a paywall: Resolution of crossing fibers with constrained compressed sensing using traditional diffusion tensor MRI by Bennett A. Landman, Hanlin Wan, John A. Bogovic, Pierre-Louis Bazin, and Jerry L. Prince. The abstract reads:
Diffusion tensor imaging (DTI) is widely used to characterize tissue micro-architecture and brain connectivity. Yet DTI suffers serious limitations in regions of crossing fibers because traditional tensor techniques cannot represent multiple, independent intra-voxel orientations. Compressed sensing has been proposed to resolve crossing fibers using a tensor mixture model (e.g., Crossing Fiber Angular Resolution of Intra-voxel structure, CFARI). Although similar in spirit to deconvolution approaches, CFARI uses sparsity to stabilize estimation with limited data rather than spatial consistency or limited model order. Here, we extend the CFARI approach to resolve crossing fibers through a strictly positive, parsimonious mixture model. Together with an optimized preconditioned conjugate gradient solver, estimation error and computational burden are greatly reduced over the initial presentation. Reliable estimates of intra-voxel orientations are demonstrated in simulation and in vivo using data representative of typical, low b-value (30 directions, 700 s/mm²) clinical DTI protocols. These sequences are achievable in 5 minutes at 3 T, and whole-brain CFARI analysis is tractable for routine analysis. With these improvements, CFARI provides a robust framework for identifying intra-voxel structure with conventional DTI protocols and shows great promise in helping to resolve the crossing fiber problem in current clinical imaging studies.
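The computational core here, as the abstract describes it, is a strictly positive sparse mixture fit. Below is a generic stand-in (synthetic dictionary, projected ISTA) rather than the authors' preconditioned conjugate gradient solver; note that for nonnegative weights the l_1 penalty becomes linear, so the proximal step is just a shift and a clip.

```python
import numpy as np

rng = np.random.default_rng(4)
n_dirs, n_atoms, k = 30, 90, 2   # gradient directions, candidate orientations

# Synthetic nonnegative dictionary standing in for tensor basis responses.
D = np.abs(rng.standard_normal((n_dirs, n_atoms)))
w0 = np.zeros(n_atoms)
w0[rng.choice(n_atoms, k, replace=False)] = rng.random(k) + 0.5
s = D @ w0                       # noiseless voxel signal

# Projected ISTA for min 0.5||D w - s||^2 + lam * 1'w subject to w >= 0;
# since w >= 0, the l1 penalty is linear and the prox is a shift plus clip.
lam = 0.01
step = 1.0 / np.linalg.norm(D, 2) ** 2
w = np.zeros(n_atoms)
for _ in range(3000):
    w = np.maximum(0.0, w - step * (D.T @ (D @ w - s) + lam))

# With a coherent dictionary the support can smear across nearby atoms,
# which is exactly why CFARI insists on parsimony.
print("true:", sorted(np.nonzero(w0)[0]), "top-2:", sorted(np.argsort(w)[-2:]))
```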
Finally, I found the following on arxiv: Matrix Coherence and the Nystrom Method by Ameet Talwalkar, Afshin Rostamizadeh. The abstract reads:
The Nystrom method is an efficient technique to speed up large-scale learning applications by generating low-rank approximations. Crucial to the performance of this technique is the assumption that a matrix can be well approximated by working exclusively with a subset of its columns. In this work we relate this assumption to the concept of matrix coherence and connect matrix coherence to the performance of the Nystrom method. Making use of related work in the compressed sensing and the matrix completion literature, we derive novel coherence-based bounds for the Nystrom method in the low-rank setting. We then present empirical results that corroborate these theoretical bounds. Finally, we present more general empirical results for the full-rank setting that convincingly demonstrate the ability of matrix coherence to measure the degree to which information can be extracted from a subset of columns.
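The coherence notion the paper connects to is easy to compute directly. A short sketch using the standard definition from the matrix completion literature (the test matrices are my own):

```python
import numpy as np

def coherence(M, r):
    """mu(U) = (n / r) * max_i ||U_i||^2 over rows U_i of an orthonormal
    basis U of the top-r left singular subspace; 1 <= mu <= n / r."""
    n = M.shape[0]
    U = np.linalg.svd(M, full_matrices=False)[0][:, :r]
    return (n / r) * np.max(np.sum(U ** 2, axis=1))

rng = np.random.default_rng(5)
n, r = 200, 5
incoherent = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
coherent = np.zeros((n, n)); coherent[:r, :r] = np.eye(r)
print(coherence(incoherent, r))   # small: mass spread across all rows
print(coherence(coherent, r))     # n / r = 40: all mass on r rows
```

Low coherence means no row of the singular basis dominates, so a random subset of columns is already representative; that is the regime in which the Nystrom bounds of the paper are informative.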


For some odd reason, I don't think I covered the Physics of Algorithms meeting that took place in September at Los Alamos. Here are some of the abstracts and slides of interest to us:





Fast algorithms for nonconvex compressive sensing by Rick Chartrand, LANL

Recent research in applied mathematics has shown that images and other signals can be reconstructed from many fewer data than previously thought possible. In this newly-emerged field of compressive sensing, the inherent compressibility (or sparsity) of real-world signals is exploited to allow reconstruction by means of convex optimization. In this talk, I will show that using a nonconvex approach instead has many advantages. Chief among these is the ability to reconstruct from still fewer data than the usual compressive sensing methods. Others include increased robustness to noise and signal nonsparsity, making the methods more widely applicable. By being able to reconstruct images and other signals from fewer data, we can reduce data collection time and cost, and ease the burden of data transmission or storage.

The apparent challenge of solving a nonconvex optimization problem is the enormous number of local minima to be avoided. Surprisingly, simple and efficient algorithms are able in practice to find the global minimum reliably. I will present a particular algorithmic framework that is especially fast, and scales well to large problems. It is best applied in cases of Fourier-domain measurements, as in the case of MRI, synthetic-aperture imaging (radar, sonar, or lidar), and others. The result is that we get the improved reconstruction performance of nonconvex compressive sensing, while maintaining the speed and reliability of convex optimization.



I like how Rick makes a case for the l_p quasi-norm (p < 1) and shows why both the l_2 AND the l_1 cases can fail. I like it because, in most presentations, the l_1 ball is always shown to produce the right intersection and yield a sparse solution :)
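For the curious, one well-known route to the nonconvex l_p problem is iteratively reweighted least squares, sketched below with my own epsilon-continuation schedule; I am not claiming this is the specific algorithmic framework of the talk:

```python
import numpy as np

def irls_lp(Phi, y, p=0.5, iters=60):
    """Iteratively reweighted least squares for min ||x||_p^p s.t. Phi x = y.
    Each iteration solves a weighted minimum-norm problem in closed form."""
    x = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)    # least-norm starting point
    eps = 1.0
    for _ in range(iters):
        w = (x ** 2 + eps) ** (1 - p / 2)          # inverse IRLS weights
        PhiW = Phi * w                             # Phi @ diag(w)
        x = w * (Phi.T @ np.linalg.solve(PhiW @ Phi.T, y))
        eps = max(0.5 * eps, 1e-8)                 # epsilon continuation
    return x

rng = np.random.default_rng(6)
n, m, k = 256, 60, 12
Phi = rng.standard_normal((m, n))
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(irls_lp(Phi, Phi @ x0) - x0))
```

The slowly decreasing epsilon is what smooths away the local minima early on, which is one intuition for why these nonconvex iterations find the global minimum reliably in practice.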

Message Passing Algorithms for Compressed Sensing by David Donoho, Department of Statistics, Stanford University

Compressed sensing aims to undersample certain high-dimensional signals, yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity-undersampling tradeoff is achieved when reconstructing by convex optimization -- which is expensive in important large-scale applications.

Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity-undersampling tradeoffs than convex optimization.

We introduce a simple costless modification to iterative thresholding making the sparsity-undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models.

Our empirical measurements of the sparsity-undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity-undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this new, apparently very different theoretical formalism.

Joint work with Arian Maleki (EE, Stanford) and Andrea Montanari (Statistics and EE, Stanford).

3.8 CPU-years, huh!
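To see what the "costless modification" amounts to, here is a hedged AMP sketch (my own threshold schedule, not the authors' tuned implementation). Compared to plain iterative soft thresholding, the only change is the Onsager correction term added to the residual:

```python
import numpy as np

def amp(A, y, iters=40, tau=1.5):
    """Approximate message passing with soft thresholding. Versus plain
    iterative soft thresholding, the only change is the Onsager correction
    added to the residual, which restores the l1 phase transition."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        pseudo = x + A.T @ z                     # effective observation of x
        theta = tau * np.sqrt(np.mean(z ** 2))   # threshold tracks residual
        x = np.sign(pseudo) * np.maximum(np.abs(pseudo) - theta, 0)
        z = y - A @ x + (z / m) * np.count_nonzero(x)   # Onsager term
    return x

rng = np.random.default_rng(7)
n, m, k = 1000, 300, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)     # AMP assumes this scaling
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(amp(A, A @ x0) - x0) / np.linalg.norm(x0))
```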


Matrix Completion from a Few Entries by Raghunandan Keshavan, Department of Electrical Engineering, Stanford University

Low-rank models are frequently used in machine learning and statistics. An important domain of application is provided by collaborative filtering, where one of the problems is to reconstruct a low-rank matrix from a sparse subset of its entries. Given M, an nα × n matrix of rank r, and a uniformly random subset E of its entries, we describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root-mean-square error RMSE ≤ C(α)(nr/|E|)^(1/2). Further, if r = O(1), M can be reconstructed exactly from |E| = O(n log n) entries. These results apply beyond random matrices to general low-rank incoherent matrices.

Joint work with Andrea Montanari and Sewoong Oh.
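A hedged sketch of the spectral flavor of this line of work (the authors' OptSpace algorithm additionally trims over-represented rows and columns and then refines on the Grassmann manifold; the refinement loop below is a generic rank-projection stand-in):

```python
import numpy as np

rng = np.random.default_rng(8)
n, r = 200, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = rng.random((n, n)) < 0.15                               # observed subset

# Spectral step: rescale the zero-filled observation so its expectation is M,
# then truncate its SVD at rank r.
M_hat = np.where(mask, M, 0.0) / 0.15
U, S, Vt = np.linalg.svd(M_hat, full_matrices=False)
M_hat = U[:, :r] @ np.diag(S[:r]) @ Vt[:r]

# Generic refinement: re-impose the observed entries, project back to rank r.
for _ in range(25):
    M_hat = np.where(mask, M, M_hat)
    U, S, Vt = np.linalg.svd(M_hat, full_matrices=False)
    M_hat = U[:, :r] @ np.diag(S[:r]) @ Vt[:r]

print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))
```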


Sequential Compressed Sensing by Dmitry Malioutov, DRW Trading Group, Chicago

Compressed sensing allows perfect recovery of sparse signals (or signals sparse in some basis) using only a small number of random measurements. Existing results in the compressed sensing literature have focused on characterizing the achievable performance by bounding the number of samples required for a given level of signal sparsity. However, using these bounds to minimize the number of samples requires a priori knowledge of the sparsity of the unknown signal, or the decay structure for near-sparse signals. Furthermore, there are some popular recovery methods for which no such bounds are known.

In this talk, we describe an alternative scenario where observations are available in sequence. For any recovery method, this means that there is now a sequence of candidate reconstructions. We propose a method to estimate the reconstruction error directly from the samples themselves, for every candidate in this sequence. This estimate is universal in the sense that it is based only on the measurement ensemble, and not on the recovery method or any assumed level of sparsity of the unknown signal. With these estimates, one can now stop observations as soon as there is reasonable certainty of either exact or sufficiently accurate reconstruction. They also provide a way to obtain "run-time" guarantees for recovery methods that otherwise lack a priori performance bounds.

We investigate both continuous (e.g. Gaussian) and discrete (e.g. Bernoulli or Fourier) random measurement ensembles, both for exactly sparse and general near-sparse signals, and with both noisy and noiseless measurements.
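The identity behind these error estimates is simple enough to demonstrate in a few lines. A toy sketch (the candidate reconstruction is a deliberately corrupted stand-in, not output from an actual solver):

```python
import numpy as np

rng = np.random.default_rng(9)
n, k = 500, 10
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# x_hat stands in for the current candidate reconstruction produced by any
# recovery method after some number of measurements (deliberately imperfect).
x_hat = x + 0.1 * rng.standard_normal(n) * (rng.random(n) < 0.05)

# For unit-variance Gaussian rows a, E[(a'(x - x_hat))^2] = ||x - x_hat||^2,
# so each fresh held-out measurement gives an unbiased estimate of the
# squared error of ANY candidate, independent of solver and sparsity.
T = 50
errs = []
for _ in range(T):
    a = rng.standard_normal(n)
    errs.append((a @ x - a @ x_hat) ** 2)

print("estimated ||x - x_hat||^2:", np.mean(errs))
print("true      ||x - x_hat||^2:", np.sum((x - x_hat) ** 2))
```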



Inferring Rankings Under Constrained Sensing by Srikanth Jagabathula, Devavrat Shah, Laboratory for Information and Decision Systems, Department of EECS, MIT

We consider the problem of recovering a function on the space of permutations of n elements, denoted by S_n, using various forms of partial information. This problem naturally arises in various applications such as ranked elections, ranking teams in a sports league, recommendation systems, etc. To begin with, we formulate the problem by classifying various forms of partial information through a natural relation to partitions of the integer n. This is inspired by the representations of the group S_n, and hence the resulting formal problem can be restated as that of recovering a function from partially available Fourier coefficients.

As the main result of the paper, we obtain a general condition under which it is possible to recover a function from its partial information. Under a natural random model, we quantify the recoverability conditions in terms of the sparsity (cardinality of the support) of the function. We propose a novel, iterative algorithm to recover the function for each form of partial information. We further establish that, under the general recoverability conditions, the function to be recovered has the sparsest support that is consistent with the given data. That is, it solves an appropriate l_0 optimization problem. And we establish that it is the unique solution. Effectively, under our recoverability conditions our simple algorithm ends up solving the l_0 optimization problem, which is hard in general. We discuss the fundamental limitations of this recovery problem using the classical information-theoretic framework.
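At toy scale the l_0 problem can be solved by brute force, which makes the setup concrete. The sketch below (my own illustration: n = 3 and first-order partial information only) builds the first-order marginal of a sparse function on S_n and searches for the sparsest consistent support:

```python
import numpy as np
from itertools import combinations, permutations

n = 3
perms = list(permutations(range(n)))            # S_3, six permutations

def first_order(f):
    """First-order partial information: Q[i, j] = sum of f(sigma) over all
    sigma with sigma(i) = j (one of the partition-indexed forms above)."""
    Q = np.zeros((n, n))
    for sigma, v in zip(perms, f):
        for i in range(n):
            Q[i, sigma[i]] += v
    return Q

f_true = np.zeros(len(perms))
f_true[[0, 4]] = [1.3, 0.7]                     # a 2-sparse function on S_3
Q = first_order(f_true)

# Brute-force l_0 recovery: the smallest support reproducing Q exactly.
A = np.stack([first_order(e).ravel() for e in np.eye(len(perms))], axis=1)
for k in range(1, len(perms) + 1):
    found = False
    for supp in combinations(range(len(perms)), k):
        cols = A[:, list(supp)]
        coef = np.linalg.lstsq(cols, Q.ravel(), rcond=None)[0]
        if np.linalg.norm(cols @ coef - Q.ravel()) < 1e-9:
            print("sparsest consistent support:", supp, "values:", coef)
            found = True
            break
    if found:
        break
```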
