
Monday, January 17, 2011

CS: Matrix probing, True-color CT, Matched Filtering from Limited Frequency Samples, A Compressive Sensing and Unmixing Scheme for Hyperspectral Data Processing, Tensor Sparse Coding, Program on Inverse Problems and Imaging at the Fields Institute


I just found a lecture by Alex Dimakis on compressed sensing, RIP and NSP.

There are also seven other papers of interest for the start of the week:

This one seems to be the latest version of a draft I've mentioned before: Compressive Sensing and Structured Random Matrices by  Holger Rauhut. The abstract reads:
These notes give a mathematical introduction to compressive sensing focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to providing conditions that ensure exact or approximate recovery of sparse vectors using ℓ1-minimization.
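
For readers who want to play with this at home, here is a small sketch of my own (not from the notes) of sparse recovery with a structured random matrix: a random subset of rows of the DFT as the sensing matrix, and the ℓ1-regularized least-squares problem solved by plain iterative soft-thresholding (ISTA). All sizes and parameter values are made up for illustration.

```python
import numpy as np

# Minimal sketch: recover a sparse vector from partial random Fourier
# measurements by iterative soft-thresholding (ISTA) applied to
#   min_x 0.5*||A x - y||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
n, m, s = 256, 80, 8                      # signal length, measurements, sparsity

x_true = np.zeros(n)
support = rng.choice(n, s, replace=False)
x_true[support] = rng.standard_normal(s)

rows = rng.choice(n, m, replace=False)    # random subset of DFT rows
F = np.fft.fft(np.eye(n)) / np.sqrt(n)    # unitary DFT matrix
A = F[rows, :]                            # structured random sensing matrix
y = A @ x_true

lam = 1e-3
L = np.linalg.norm(A.conj().T @ A, 2)     # Lipschitz constant of the gradient
x = np.zeros(n, dtype=complex)
for _ in range(500):
    grad = A.conj().T @ (A @ x - y)
    z = x - grad / L
    mag = np.maximum(np.abs(z) - lam / L, 0.0)
    x = mag * np.exp(1j * np.angle(z))    # complex soft-thresholding

print("relative error:", np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true))
```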

Here is one paper I have been waiting on for a little while now: Matrix probing: a randomized preconditioner for the wave-equation Hessian by Laurent Demanet, Pierre-David Letourneau, Nicolas Boumal, Henri Calandra, Jiawei Chiu, and Stanley Snelson. The abstract reads:
This paper considers the problem of approximating the inverse of the wave-equation Hessian, also called normal operator, in seismology and other types of wave-based imaging. An expansion scheme for the pseudodifferential symbol of the inverse Hessian is set up. The coefficients in this expansion are found via least-squares fitting from a certain number of applications of the normal operator on adequate randomized trial functions built in curvelet space. It is found that the number of parameters that can be fitted increases with the amount of information present in the trial functions, with high probability. Once an approximate inverse Hessian is available, application to an image of the model can be done in very low complexity. Numerical experiments show that randomized operator fitting offers a compelling preconditioner for the linearized seismic inversion problem.
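
To fix ideas, here is a toy version (my own, heavily simplified) of the probing idea: fit the coefficients of an expansion of an approximate inverse by least squares from the action of the operator on a few random trial vectors. The basis of matrix powers used below is purely illustrative; the paper expands the pseudodifferential symbol of the inverse Hessian in curvelet space.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, K = 64, 6, 4                      # problem size, basis size, probing vectors

# A stand-in for the "normal operator" whose inverse we want to precondition.
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.05 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T)

# A small basis {B_j}; here simply powers of A as an illustrative choice.
B = [np.linalg.matrix_power(A, j) for j in range(p)]

# Probe: apply A to random trial vectors, then fit C = sum_j c_j B_j so that
# C (A u) ~= u in the least-squares sense.
U = rng.standard_normal((n, K))
W = A @ U
M = np.column_stack([(Bj @ W).ravel(order="F") for Bj in B])
c, *_ = np.linalg.lstsq(M, U.ravel(order="F"), rcond=None)

C = sum(cj * Bj for cj, Bj in zip(c, B))
test = rng.standard_normal(n)
print("preconditioning residual:",
      np.linalg.norm(C @ (A @ test) - test) / np.linalg.norm(test))
```
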
Next is the true-color CT paper mentioned in the title of this entry. The abstract reads:
In this article, we propose a compressive sensing (CS) approach for true-color computed tomography (CT). This approach utilizes prior knowledge of the rank of a spatial-spectral matrix after a certain transform, energy-dependent intensity (linear attenuation coefficient) characteristics, namely spectral properties, of structures and/or base materials, and image sparsity after a sparsifying transform. The spatial-spectral matrix is formed with the row dimension in space and the column dimension in energy. Real images are represented as a mixture of a low-rank matrix and a sparse matrix, where the low-rank matrix corresponds to the background over energy under the aforementioned transform, while the sparse matrix stands for the distinct spectral features under the sparsifying transform. Also, the energy-dependent intensity information can be incorporated into the PRISM (prior rank, intensity and sparsity model) in terms of the spectral curves for base materials. In this case, we can transform the reconstruction of spectral images into the estimation of a material composition matrix, which is independent of energy. Furthermore, we develop an accurate and practical split Bregman method based on the PRISM, and numerically demonstrate its superior performance relative to other competing models.
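
The low-rank plus sparse split at the heart of this model can be illustrated in a few lines. The following is not the split Bregman solver of the paper, just an alternating singular-value-thresholding / soft-thresholding sketch on a synthetic spatial-spectral matrix, with made-up sizes and regularization weights.

```python
import numpy as np

# Illustrative sketch (not the PRISM algorithm itself): split a spatial-spectral
# matrix Y into a low-rank background L plus a sparse component S,
#   min_{L,S} 0.5*||Y - L - S||_F^2 + tau*||L||_* + lam*||S||_1,
# by exact block coordinate descent.
rng = np.random.default_rng(2)
npix, nE = 200, 16                        # pixels x energy bins

L_true = np.outer(rng.standard_normal(npix), rng.standard_normal(nE))   # rank-1 background
S_true = np.zeros((npix, nE))
idx = rng.choice(npix * nE, 60, replace=False)
S_true.flat[idx] = 3.0 * rng.standard_normal(60)
Y = L_true + S_true + 0.01 * rng.standard_normal((npix, nE))

tau, lam = 1.0, 0.3
L = np.zeros_like(Y); S = np.zeros_like(Y)
for _ in range(100):
    U, sig, Vt = np.linalg.svd(Y - S, full_matrices=False)
    L = (U * np.maximum(sig - tau, 0)) @ Vt                  # singular value thresholding
    R = Y - L
    S = np.sign(R) * np.maximum(np.abs(R) - lam, 0)          # entrywise soft thresholding

print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-3),
      " nnz(S) =", int(np.sum(np.abs(S) > 1e-6)))
```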

This abstract is from the matched filtering paper listed in the title. The abstract reads:
In this paper, we study a simple correlation-based strategy for estimating the unknown delay and amplitude of a signal based on a small number of noisy, randomly chosen frequency-domain samples. We model the output of this “compressive matched filter” as a random process whose mean equals the scaled, shifted autocorrelation function of the template signal. Using tools from the theory of empirical processes, we prove that the expected maximum deviation of this process from its mean decreases sharply as the number of measurements increases, and we also derive a probabilistic tail bound on the maximum deviation. Putting all of this together, we bound the minimum number of measurements required to guarantee that the empirical maximum of this random process occurs sufficiently close to the true peak of its mean function. We conclude that for broad classes of signals, this compressive matched filter will successfully estimate the unknown delay (with high probability, and within a prescribed tolerance) using a number of random frequency-domain samples that scales inversely with the signal-to-noise ratio and only logarithmically in the observation bandwidth and the possible range of delays.
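
Here is a toy illustration (mine, not the authors') of the compressive matched filter: take a handful of random frequency-domain samples of a delayed template and locate the delay by correlating the samples against phase-shifted copies of the template spectrum. Everything below (template, noise level, sample count) is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 1024, 100                           # signal length, random frequency samples
true_delay, amp = 301, 1.7

template = rng.standard_normal(n)          # known template signal
received = amp * np.roll(template, true_delay)

T = np.fft.fft(template)
R = np.fft.fft(received)
freqs = rng.choice(n, m, replace=False)    # random frequency-domain samples
samples = R[freqs] + 0.1 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# Compressive matched filter: for each candidate delay, correlate the noisy
# samples against the appropriately phase-shifted template spectrum and keep
# the delay with the largest real correlation.
delays = np.arange(n)
phases = np.exp(-2j * np.pi * np.outer(freqs, delays) / n)   # m x n phase shifts
corr = np.real(np.conj(T[freqs, None] * phases).T @ samples)
print("estimated delay:", int(np.argmax(corr)), " true delay:", true_delay)
```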

Next comes the hyperspectral unmixing paper listed in the title. The abstract reads:
Hyperspectral data processing typically demands enormous computational resources in terms of storage, computation and I/O throughputs, especially when real-time processing is desired. In this paper, we investigate a low-complexity scheme for hyperspectral data compression and reconstruction. In this scheme, compressed hyperspectral data are acquired directly by a device similar to the single-pixel camera [5] based on the principle of compressive sensing. To decode the compressed data, we propose a numerical procedure to directly compute the unmixed abundance fractions of given endmembers, completely bypassing high-complexity tasks involving the hyperspectral data cube itself. The reconstruction model is to minimize the total variation of the abundance fractions subject to a pre-processed fidelity equation with a significantly reduced size, and other side constraints. An augmented Lagrangian type algorithm is developed to solve this model. We conduct extensive numerical experiments to demonstrate the feasibility and efficiency of the proposed approach, using both synthetic data and hardware-measured data. Experimental and computational evidence obtained from this study indicates that the proposed scheme has high potential in real-world applications.
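
As a rough illustration of unmixing directly from compressive measurements, here is a toy sketch that estimates the abundance fractions of known endmembers from compressive spatial measurements, without ever reconstructing the data cube. A smoothed (Huber) total-variation penalty along a 1-D pixel ordering stands in for the paper's TV model, and a simple projected gradient loop stands in for its augmented Lagrangian algorithm; all sizes and weights are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
npix, nbands, p, m = 400, 32, 3, 120        # pixels (1-D line), bands, endmembers, measurements

# Piecewise-constant abundance maps along a 1-D pixel ordering.
E = np.abs(rng.standard_normal((p, nbands)))           # known endmember spectra
edges = np.sort(rng.choice(np.arange(20, npix - 20), 8, replace=False))
U_true = np.zeros((npix, p))
for a, b in zip(np.r_[0, edges], np.r_[edges, npix]):
    w = np.abs(rng.standard_normal(p)); U_true[a:b] = w / w.sum()
X = U_true @ E                                          # hyperspectral data, never formed below

Phi = rng.standard_normal((m, npix)) / np.sqrt(m)       # compressive spatial measurements
B = Phi @ X + 0.005 * rng.standard_normal((m, nbands))

# Projected gradient descent on a smoothed total-variation model for the abundances:
#   min_U 0.5*||Phi U E - B||_F^2 + lam * sum huber(D U),   U >= 0,
# where D takes differences between neighbouring pixels.
lam, eps = 0.05, 1e-3
D = np.diff(np.eye(npix), axis=0)                        # (npix-1) x npix difference operator
step = 1.0 / (np.linalg.norm(Phi, 2)**2 * np.linalg.norm(E, 2)**2 + 4 * lam / eps)
U = np.full((npix, p), 1.0 / p)
for _ in range(3000):
    G = D @ U
    grad_tv = D.T @ (G / np.sqrt(G**2 + eps))            # gradient of smoothed |.|
    grad = Phi.T @ (Phi @ U @ E - B) @ E.T + lam * grad_tv
    U = np.maximum(U - step * grad, 0.0)

print("abundance error:", np.linalg.norm(U - U_true) / np.linalg.norm(U_true))
```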

Tensor Sparse Coding for Region Covariances by Ravishankar Sivalingam, Daniel Boley, Vassilios Morellas, and Nikolaos Papanikolopoulos. The abstract reads:
Sparse representation of signals has been the focus of much research in recent years. A vast majority of existing algorithms deal with vectors, and higher-order data like images are dealt with by vectorization. However, the structure of the data may be lost in the process, leading to a poorer representation and overall performance degradation. In this paper we propose a novel approach for sparse representation of positive definite matrices, where vectorization will destroy the inherent structure of the data. The sparse decomposition of a positive definite matrix is formulated as a convex optimization problem, which falls under the category of determinant maximization (MAXDET) problems [1], for which efficient interior point algorithms exist. Experimental results are shown with simulated examples as well as in real-world computer vision applications, demonstrating the suitability of the new model. This forms the first step toward extending the cornucopia of sparsity-based algorithms to positive definite matrices.
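
Here is a small sketch of what such a determinant-maximization formulation can look like in a modeling language like CVXPY: a LogDet-divergence fit of a target covariance to a nonnegative, sparse combination of positive definite atoms. The exact objective and constraints of the paper may differ; this is only meant to show that the problem is convex and readily solvable with an off-the-shelf solver.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
d, k = 5, 12                                 # matrix dimension, dictionary size

# Dictionary of positive definite atoms and a target built from a few of them.
def random_spd(d):
    M = rng.standard_normal((d, d))
    return M @ M.T + 0.1 * np.eye(d)

atoms = [random_spd(d) for _ in range(k)]
w_true = np.zeros(k); w_true[[2, 7, 9]] = [0.5, 1.2, 0.8]
S = sum(w * A for w, A in zip(w_true, atoms))
Sinv = np.linalg.inv(S)

# Sparse coding over the SPD atoms: LogDet-divergence fit plus an l1 penalty;
# the -log_det term is what makes this a determinant-maximization problem.
x = cp.Variable(k, nonneg=True)
X = sum(x[i] * atoms[i] for i in range(k))
obj = cp.Minimize(cp.trace(Sinv @ X) - cp.log_det(X) + 0.05 * cp.sum(x))
prob = cp.Problem(obj)
prob.solve(solver=cp.SCS)

print("recovered nonzeros:", np.where(x.value > 1e-2)[0])
```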

The last paper is behind a paywall:

Privacy-Enabled Object Tracking in Video Sequences Using Compressive Sensing by M. Cossalter, M. Tagliasacchi, G. Valenzise

Thanks to Google, I found the following page:

Hardware efficient algorithms and circuit architectures for wireless sensor applications

Sponsors
MIT CICS
People
Fred Chen, Anantha Chandrakasan, Vladimir Stojanovic

In most wireless sensor applications ranging from environmental monitoring to medical implants, the utility of the wireless sensor node is limited by its finite energy source and the replacement cost of the node once the source has been exhausted. In this work, we examine this problem from the perspective of the sensor node's energy consumption and explore algorithms and circuit architectures that could significantly improve on the energy-efficiency of wireless sensor nodes and hence extend their lifetime and utility. Figure 1 shows the typical circuit blocks used in sensors for medical monitoring and their associated energy cost and power consumption at a given sample rate. As Fig. 1 shows, the radio power is typically dominant, so any reduction in the amount of data transmitted reduces the system power by nearly the same factor. In applications such as implantable neural recording arrays, the high energy cost to transmit a bit of information and the radio’s limited bandwidth actually necessitate data compression or filtering at the sensor in order to reduce energy consumption and data throughput.
In this work, we present the design and implementation of a new sensor system architecture based on the theory of compressed sensing that more efficiently reduces the number of bits transmitted while preserving the original signal information. As results from our first work show, this approach reduces the average radio power by exploiting signal sparseness to encode the data at a high compression factor [1]. The reconstruction process also enables power reduction in the frontend circuitry by relaxing the noise and resolution requirements of the AFE and ADC. Unlike event detection based data compression, this approach enables a faithful reconstruction of the entire original signal and is applicable across a variety of signals without knowing the signal details a priori. While most of the results and examples presented are in the context of medical applications, they can be generally applied to other fields as well. In conjunction with the compressed sensing work, a new DAC switching algorithm for SAR ADCs aimed at reducing area and enabling more parallel architectures has also been developed [2].
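
The encoder side of such a scheme is very light. Below is a hypothetical sketch of a sensor node compressing a window of samples with a sparse binary measurement matrix (each measurement is a sum of a few samples, so an adder-only digital block could compute it); reconstruction with an ℓ1 solver, such as the ISTA sketch earlier in this post, would happen at the receiver. None of this reflects the actual hardware described above.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 512, 64                                  # samples per window, measurements kept

# Sparse binary sensing matrix: each measurement sums a few input samples,
# which a low-power digital encoder can compute with adders only.
Phi = (rng.random((m, n)) < 8.0 / n).astype(float)

def encode_window(samples):
    """Compress one window of ADC samples before the radio: m numbers instead of n."""
    return Phi @ samples

window = np.sin(2 * np.pi * 7 * np.arange(n) / n) + 0.01 * rng.standard_normal(n)
y = encode_window(window)
print("compression factor:", n / m, " transmitted values:", y.size)
```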


And finally, there is a thematic Program on Inverse Problems and Imaging at the Fields Institute, Toronto:

April 30 – May 31, 2012
Program on Variational Methods and Compressive Sensing in Imaging

Organizers:
Tony Chan, Hong Kong University of Science and Technology
Adrian Nachman, University of Toronto
Luminita Vese, UCLA
Description of the Program
This program is intended to foster research, learning and collaboration between top people in the various areas of Image Analysis. There will be a month-long graduate course and five short courses devoted to various topics in Image Analysis and Compressed Sensing. These courses are aimed at attracting graduate students and postdoctoral fellows and at helping define the research questions to be investigated during the program.

In addition, there will be five seminar talks per week devoted to the different research topics of the program.
Preliminary topics for the program:
  1. L1 minimization and applications (including Total Variation minimization).
  2. Compressed Sensing by variational regularization methods.
  3. Proximal point methods and iterative methods for solving ill-posed inverse problems (including iterative Bregman methods, hierarchical decompositions, surrogate functionals).
  4. Geometric processing (denoising of surfaces, non-rigid shape processing and analysis).
  5. Optimal Transportation and Wasserstein Distance methods for registration and segmentation.
  6. PDE methods for image processing.
  7. Nonlocal methods (nonlocal means, nonlocal total variation, bilateral filtering).

Graduate Courses

Recent Advances in Variational Image Analysis
Luminita Vese, UCLA

Short Courses

1. Sparse Signal Processing and Compressive Sensing
Richard Baraniuk, Rice University.

2. Geodesic Methods in Image Analysis
Laurent Cohen, Université Paris-Dauphine.

3. Partial Differential Equation Methods in Image Processing
Selim Esedoglu, University of Michigan.

4. Numerical Methods for Sparse Recovery
Massimo Fornasier, Johann Radon Institute.

5. Numerical Geometry of Images
Ron Kimmel, Technion.





Inverted Topography near Juventae Chasma (ESP_020602_1755) on Mars.

Credit: NASA/JPL/University of Arizona
