Thursday, April 30, 2009

CS:Video rate spectral imaging, Photon-limited hyperspectral, MRI Bregman l_p, TVL1, Irregular sampling, Data streams, meetings and talks.

Today we have some interesting hyperspectral work from Duke as well as other contributions found through the interwebs:


We have previously reported on coded aperture snapshot spectral imagers (CASSI) that can capture a full frame spectral image in a snapshot. Here we describe the use of CASSI for spectral imaging of a dynamic scene at video rate. We describe significant advances in the design of the optical system, system calibration procedures and reconstruction method. The new optical system uses a double Amici prism to achieve an in-line, direct view configuration, resulting in a substantial improvement in image quality. We describe NeAREst, an algorithm for estimating the instantaneous three-dimensional spatio-spectral data cube from CASSI’s two-dimensional array of encoded and compressed measurements. We utilize CASSI’s snapshot ability to demonstrate a spectral image video of multi-colored candles with live flames captured at 30 frames per second.

A nice figure shows that the transfer function between a point of light and the camera is not translation invariant, which I think complicates many issues when one performs this kind of light multiplexing.
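For concreteness, here is a toy sketch (my own code, not the authors') of an idealized single-disperser CASSI forward model, assuming a binary coded aperture and a one-pixel-per-band shear; the shift-variant transfer function discussed in the paper is exactly what this idealization ignores:

    import numpy as np

    def cassi_forward(cube, mask):
        """Toy CASSI measurement: code every spectral band with the same
        aperture mask, shear band l by l detector pixels (idealized
        dispersion), then integrate onto a single 2-D detector.
        cube: (H, W, L) spatio-spectral cube; mask: (H, W) binary code."""
        H, W, L = cube.shape
        y = np.zeros((H, W + L - 1))
        for l in range(L):
            y[:, l:l + W] += mask * cube[:, :, l]
        return y

    # Example: a 64x64x16 cube collapses into a single 64x79 snapshot.
    rng = np.random.default_rng(0)
    cube = rng.random((64, 64, 16))
    mask = (rng.random((64, 64)) > 0.5).astype(float)
    snapshot = cassi_forward(cube, mask)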

This paper studies photon-limited, hyperspectral intensity estimation and proposes a spatially and spectrally adaptive, nonparametric method to estimate hyperspectral intensities from Poisson observations. Specifically, our method searches over estimates defined over a family of recursive dyadic partitions in both the spatial and spectral domains, and finds the one that maximizes a penalized log likelihood criterion. The key feature of this approach is that the partition cells are anisotropic across the spatial and spectral dimensions so that the method adapts to varying degrees of spatial and spectral smoothness, even when the respective degrees of smoothness are not known a priori. The proposed approach is based on the key insight that the spatial boundaries and singularities exist in the same locations in every spectral band, even though the contrast or perceptibility of these features may be very low in some bands. The incorporation of this model into the reconstruction results in significant performance gains. Furthermore, for hyperspectral intensities that belong to the anisotropic Hölder-Besov function class, the proposed approach is shown to be near-minimax optimal. The upper bounds on the risk function, which is the expected squared Hellinger distance between the true intensity and the estimate obtained using the proposed approach, match the best possible lower bound up to a log factor for certain degrees of spatial and spectral smoothness. Experiments conducted on realistic data sets show that the proposed method can reconstruct the spatial and spectral inhomogeneities very well even when the observations are extremely photon-limited (i.e., less than 0.1 photon per voxel).

Wow, "less than 0.1 photon per voxel".

We now also have a Bregman solver for l_p, as introduced in "Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data" by Rick Chartrand. The abstract reads:

Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
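For readers who want to see what the nonconvex twist amounts to in practice, the workhorse is a generalized shrinkage step; below is a hedged sketch of the commonly quoted p-shrinkage operator (the paper's exact operator and scaling may differ), which reduces to plain soft thresholding at p = 1:

    import numpy as np

    def p_shrink(x, lam, p):
        """Generalized (p-)shrinkage: soft thresholding when p = 1; for
        p < 1 large coefficients are penalized less, mimicking the l_p
        quasi-norm. The epsilon avoids 0**(p-1) blowing up at the origin."""
        mag = np.maximum(np.abs(x) - lam * (np.abs(x) + 1e-12) ** (p - 1.0), 0.0)
        return np.sign(x) * mag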





Here is an interesting geometric characterization of the TV solver using L1: The TVL1 Model: a Geometric Point of View by Vincent Duval, Jean-François Aujol, Yann Gousseau. The abstract reads:

The aim of this paper is to investigate the geometrical behavior of the TVL1 model used in image processing, by making use of the notion of Cheeger sets. This mathematical concept was recently related to the celebrated Rudin-Osher-Fatemi image restoration model, yielding important advances in both fields. We provide the reader with a geometrical characterization of the TVL1 model. We show that, in the convex case, exact solutions of the TVL1 problem are given by an opening followed by a simple test over the ratio perimeter/area. Shapes remain or suddenly vanish depending on this test. As a result of our theoretical study, we suggest a new and efficient numerical scheme to apply the model to digital images. As a by-product, we justify the use of the TVL1 model for image decomposition, by establishing a connection between the model and morphological granulometry. Finally, we propose an extension of TVL1 into an adaptive framework, in which we derive some theoretical results.
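To see how pretty the perimeter/area test is, take a disk of radius r: the ratio is (2*pi*r)/(pi*r^2) = 2/r, so small disks suddenly vanish while large ones survive. A tiny illustration (the exact threshold constant depends on how the TVL1 parameter is normalized, so treat this as a sketch of the dichotomy only):

    import math

    def disk_survives_tvl1(radius, lam):
        """A disk has perimeter 2*pi*r and area pi*r^2, so its
        perimeter/area ratio is 2/r; the shape remains only if the
        ratio is below the model parameter."""
        return (2.0 * math.pi * radius) / (math.pi * radius ** 2) <= lam

    print(disk_survives_tvl1(10.0, 0.5))  # True: the large disk remains
    print(disk_survives_tvl1(1.0, 0.5))   # False: the small disk vanishes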
Finally, of related interest to CS, we have: Irregular and multi-channel sampling of operators by Yoon Mi Hong and Götz E. Pfander. The abstract reads:

The classical sampling theorem for bandlimited functions has recently been generalized to apply to so-called bandlimited operators, that is, to operators with band-limited Kohn-Nirenberg symbols. Here, we discuss operator sampling versions of two of the most central extensions to the classical sampling theorem. In irregular operator sampling, the sampling set is not periodic with uniform distance. In multi-channel operator sampling, we obtain complete information on an operator by multiple operator sampling outputs.
and Revisiting Norm Estimation in Data Streams by Daniel M. Kane, Jelani Nelson, David P. Woodruff. The abstract reads:

The problem of estimating the pth moment F_p (p nonnegative and real) in data streams is as follows. There is a vector x which starts at 0, and many updates of the form x_i <- x_i + v come sequentially in a stream. The algorithm also receives an error parameter 0 < eps < 1. The goal is then to output an approximation with relative error at most eps to F_p = ||x||_p^p. Previously, it was known that polylogarithmic space (in the vector length n) was achievable if and only if p <= 2. We make several new contributions in this regime, including:
(*) An optimal space algorithm for 0 < p < 2, which, unlike previous algorithms which had optimal dependence on 1/eps but sub-optimal dependence on n, does not rely on a generic pseudorandom generator.
(*) A near-optimal space algorithm for p = 0 with optimal update and query time.
(*) A near-optimal space algorithm for the "distinct elements" problem (p = 0 and all updates have v = 1) with optimal update and query time.
(*) Improved L_2 to L_2 dimensionality reduction in a stream.
(*) New 1-pass lower bounds to show optimality and near-optimality of our algorithms, as well as of some previous algorithms (the "AMS sketch" for p = 2, and the L_1-difference algorithm of Feigenbaum et al.). As corollaries of our work, we also obtain a few separations in the complexity of moment estimation problems: F_0 in 1 pass vs. 2 passes, p = 0 vs. p > 0, and F_0 with strictly positive updates vs. arbitrary updates.
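As a point of reference for the "AMS sketch" cited in those lower bounds, here is a toy version for p = 2; I use explicit random signs instead of the 4-wise independent hash functions of the real algorithm, so this only illustrates the idea, not the space bound:

    import numpy as np

    rng = np.random.default_rng(0)

    def ams_f2(stream, n, reps=64):
        """Toy AMS sketch for F_2 = ||x||_2^2: each repetition keeps one
        counter sum_i s_i * x_i with random signs s_i in {-1, +1}; the
        squared counter is an unbiased estimate of F_2, and averaging
        over repetitions tames the variance."""
        signs = rng.choice([-1.0, 1.0], size=(reps, n))
        counters = np.zeros(reps)
        for i, v in stream:            # stream of updates x_i <- x_i + v
            counters += signs[:, i] * v
        return np.mean(counters ** 2)

    # x_3 ends up at 3, x_7 at -1, so F_2 = 10; the estimate hovers near it.
    print(ams_f2([(3, 1.0), (3, 2.0), (7, -1.0)], n=16))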


Laurent Jacques mentions that the poster presentation at ICASSP'09 of the work on CMOS Compressed Imaging by Random Convolution, by Laurent Jacques, Pierre Vandergheynst, Alexandre Bibet, Vahid Majidzadeh, Alexandre Schmid, and Yusuf Leblebici, is now available.

Holger Rauhut is organizing a special session on mathematical aspects of compressed sensing at SampTA09, May 18-22, 2009. He is also giving a short course on "Compressive sensing and structured random matrices" at the Summer School on "Theoretical Foundations and Numerical Methods for Sparse Recovery", Linz, August 31 - September 4, 2009.

At Rice, James H. McClellan will talk about 

Compressive Sensing Data Acquisition and Imaging for GPRs

on Monday, May 11, 2009 
from 3:00 PM to 4:00 PM
1070 Duncan Hall
Rice University
6100 Main St
Houston, Texas, USA


New data acquisition and imaging strategies are presented for ground penetrating radars (GPRs) used in subsurface imaging. Both stepped-frequency continuous-wave (SFCW) and impulse GPRs are considered. When the target space is sparse, i.e., a small number of point like targets, the theory of compressive sensing (CS) tells us that making measurements at only a small number of random frequencies (or times) is sufficient to construct an image of the target space by solving a convex optimization problem which enforces sparsity through L1 minimization. Random spatial sampling of the synthetic aperture can also be performed as part of this data acquisition method, which greatly reduces the GPR acquisition time at the expense of higher computational costs. Imaging results for both simulated and experimental GPR data exhibit less clutter and better resolution than the standard migration and backprojection methods.
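The core idea (point-like targets, a few random frequencies, L1 recovery) fits in a few lines; here is a hedged sketch with a random partial DFT standing in for the SFCW measurement operator and plain ISTA instead of whatever solver the speaker uses:

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 256, 64, 5                    # scene size, measurements, targets
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    rows = rng.choice(n, m, replace=False)  # random frequencies
    F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(m)
    y = F @ x                               # SFCW-like data at random frequencies

    # ISTA for min ||y - Fx||_2^2 + lam*||x||_1, with complex soft thresholding
    lam, L = 1e-3, np.linalg.norm(F, 2) ** 2
    xh = np.zeros(n, dtype=complex)
    for _ in range(500):
        g = xh + F.conj().T @ (y - F @ xh) / L
        xh = g * np.maximum(1 - (lam / L) / (np.abs(g) + 1e-12), 0)

    print(np.sort(np.abs(xh))[-k:])         # the k recovered target amplitudes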



Monday, April 27, 2009

CS: Minimum Sum of Distances Estimator, Iterative reweighted L1, CS with quantized measurements, GPU, Missing Data Imputation, Duke Workshop Posters

Here is a very nice approach to navigation in autonomous robots:
Minimum Sum of Distances Estimator: Robustness and Stability by Yariv Sharon, John Wright and Yi Ma. The abstract reads:

We consider the problem of estimating a state x from noisy and corrupted linear measurements y = Ax + z + e, where z is a dense vector of small-magnitude noise and e is a relatively sparse vector whose entries can be arbitrarily large. We study the behavior of the l1 estimator x^ = argmin_x ||y - Ax||_1, and analyze its breakdown point with respect to the number of corrupted measurements ||e||_0. We show that the breakdown point is independent of the noise. We introduce a novel algorithm for computing the breakdown point for any given A, and provide a simple bound on the estimation error when the number of corrupted measurements is less than the breakdown point. As a motivational example we apply our algorithm to design a robust state estimator for autonomous vehicles, and show how it can significantly improve performance over the Kalman filter.
They actually show an improvement over a non-linear Kalman filter. We observed this GPS error issue ourselves during our entry in DARPA's Grand Challenge.

Next, on the iterative reweighted l_1 front, the following abstract:

Compressed sensing has shown that it is possible to reconstruct sparse high dimensional signals from few linear measurements. In many cases, the solution can be obtained by solving an ℓ1-minimization problem, and this method is accurate even in the presence of noise. Recently, a modified version of this method, reweighted ℓ1-minimization, has been suggested. Although no provable results have yet been attained, empirical studies have suggested that the reweighted version outperforms the standard method. Here we analyze the reweighted ℓ1-minimization method in the noisy case, and provide provable results showing an improvement in the error bound over the standard bounds.
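For those who have never played with it, the reweighted scheme (in the Candes-Wakin-Boyd style the abstract refers to) is only a few lines once you have a weighted l_1 solver; here is a hedged sketch using ISTA as the inner solver, with arbitrary parameter values:

    import numpy as np

    def reweighted_l1(A, y, outer=5, eps=0.1, lam=0.05, inner=300):
        """Reweighted l1: solve a weighted l1 problem, reweight with
        w_i = 1/(|x_i| + eps) so that small coefficients are penalized
        more heavily on the next pass, and repeat."""
        n = A.shape[1]
        w, x = np.ones(n), np.zeros(n)
        L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
        for _ in range(outer):
            for _ in range(inner):          # ISTA on the weighted problem
                g = x + A.T @ (y - A @ x) / L
                x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0)
            w = 1.0 / (np.abs(x) + eps)
        return x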
Still in the robotics area, here is a new use for GPUs: learning dictionaries, in Large-scale Deep Unsupervised Learning using Graphics Processors by Rajat Raina, Anand Madhavan, Andrew Y. Ng. The abstract reads:
The promise of unsupervised learning methods lies in their potential to use vast amounts of unlabeled data to learn complex, highly nonlinear models with millions of free parameters. We consider two well-known unsupervised learning models, deep belief networks (DBNs) and sparse coding, that have recently been applied to a flurry of machine learning applications (Hinton & Salakhutdinov, 2006; Raina et al., 2007). Unfortunately, current learning algorithms for both models are too slow for large-scale applications, forcing researchers to focus on smaller-scale models, or to use fewer training examples. In this paper, we suggest massively parallel methods to help resolve these problems. We argue that modern graphics processors far surpass the computational capabilities of multicore CPUs, and have the potential to revolutionize the applicability of deep unsupervised learning methods. We develop general principles for massively parallelizing unsupervised learning tasks using graphics processors. We show that these principles can be applied to successfully scaling up learning algorithms for both DBNs and sparse coding. Our implementation of DBN learning is up to 70 times faster than a dual-core CPU implementation for large models. For example, we are able to reduce the time required to learn a four-layer DBN with 100 million free parameters from several weeks to around a single day. For sparse coding, we develop a simple, inherently parallel algorithm, that leads to a 5 to 15-fold speedup over previous methods.
We have had different papers on 1-bit or several-bit Compressive Sensing, here is a new one: Compressed sensing with quantized measurements by Argyrios Zymnis, Stephen Boyd, and Emmanuel Candes. The abstract reads:
We consider the problem of estimating a sparse signal from a set of quantized, Gaussian noise corrupted measurements, where each measurement corresponds to an interval of values. We give two methods for (approximately) solving this problem, each based on minimizing a differentiable convex function plus an ℓ1 regularization term. Using a first order method developed by Yin et al, we demonstrate the performance of the methods through numerical simulation. We find that, using these methods, compressed sensing can be carried out even when the quantization is very coarse, e.g., 1 or 2 bits per measurement.
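The flavor of that objective is easy to write down. Below is my own reading of the setup, with hypothetical interval bounds lo/hi for each measurement; the paper's exact formulation and the first order solver of Yin et al are not reproduced here:

    import numpy as np
    from scipy.stats import norm

    def quantized_cs_objective(x, A, lo, hi, sigma, lam):
        """Negative log-likelihood that each noisy measurement (Ax)_i + z_i
        lands inside its quantization interval [lo_i, hi_i] under Gaussian
        noise of standard deviation sigma, plus an l1 regularization term:
        a smooth convex data term plus a sparsity penalty."""
        z = A @ x
        p = norm.cdf((hi - z) / sigma) - norm.cdf((lo - z) / sigma)
        return -np.sum(np.log(p + 1e-300)) + lam * np.sum(np.abs(x))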
and finally, we have: Missing Data Imputation using Compressive Sensing techniques for connected digit recognition by Jort Gemmeke and Bert Cranen. The abstract reads:
An effective way to increase the noise robustness of automatic speech recognition is to label noisy speech features as either reliable or unreliable (missing) prior to decoding, and to replace the missing ones by clean speech estimates. We present a novel method based on techniques from the field of Compressive Sensing to obtain these clean speech estimates. Unlike previous imputation frameworks which work on a frame-by-frame basis, our method focuses on exploiting information from a large time-context. Using a sliding window approach, denoised speech representations are constructed using a sparse representation of the reliable features in an overcomplete dictionary of clean, fixed-length speech exemplars. We demonstrate the potential of our approach with experiments on the AURORA-2 connected digit database.


Larry Carin has updated the Duke Workshop page and included the abstracts of the posters shown there. Here is the list, lots of goodies; wish I had been there:

Poster Abstracts

Aswin Sankaranarayanan - Fast Compressive Acquisition of Reflectance Fields

University of Maryland

Acquiring the reflectance field of real objects is an important problem that is often encountered in solving several imaging, vision and graphics tasks such as relighting. Typical methods for capturing such reflectance fields are based on acquiring images with a single light source 'ON' in each image. Recently, it has been shown that using multiple light sources per acquired image and performing a linear inversion to recover the reflectance field results in higher signal to noise ratios for the captured reflectance fields. Nevertheless, the number of images to be acquired to infer the reflectance field remains identical to the number of illumination sources. Here, we describe a method for acquiring accurate reflectance fields of objects with a significantly smaller number of captured images than the number of illumination sources. This reduction in the number of required images is achieved by exploiting recent developments in compressive sensing, which essentially show that if the signal to be acquired is sparse in some basis, then the signal can be accurately reconstructed using sub-Nyquist linear samples of the signal. We empirically show that the reflectance fields are sparse in the Haar basis. We then develop a scheme for capturing reflectance fields using multiplexed illumination, thereby achieving the signal-to-noise ratio advantages of multiplexed illumination, and use a compressive sensing based recovery algorithm to infer the reflectance fields. This is joint work with Dr. Veeraraghavan (MERL), Prof. Chellappa (UMD) and Prof. Raskar (MIT).

Bin Dong, Eric Savitsky and Stanley Osher - A Novel Method for Enhanced Needle Localization Using Ultrasound-Guidance

UCLA

Image guided interventions via ultrasound imaging have become the standard of care for many surgical procedures. A majority of medical care-providers utilize low resolution ultrasound units. In addition, many office-based or emergency department procedures are performed using generic (non-specialized) needles. Therefore, the quality of the imagery obtained by most ultrasound units does not allow for clear and concise visualization of a regular needle during many needle-based procedures. The inability to clearly see the tip of a needle in relation to the object of interest (e.g., a vein, artery, or mass) makes such image guided interventions less accurate. In this paper, we propose a new and improved framework for tracking needles in ultrasound image frames. The problem can be easily transformed into a segmentation problem, which needs to be solved in real time. The core of our method is to employ the split Bregman iterations [T. Goldstein and S. J. Osher, 2008] to solve the weighted geometric active contour (WGAC) model [X. Bresson et al, 2007]. After a simple change of variables, the WGAC model can be converted to a standard L1-minimization problem, which can then be solved rather efficiently by Bregman iterations [W. Yin et al, 2008].

Christy Fernandez Cull - Image Segmentation with a Compressive Snapshot Spectral Imager

Duke University

This poster describes spectrally adaptive signal segmentation with a compressive sensing architecture. Total variation minimization is used to reconstruct a three-dimensional (3D) data cube from a two-dimensional (2D) snapshot. A priori knowledge of spectral signatures found in a sample is used for image segmentation. We apply this algorithm to fluorescence microscopy data acquired from a snapshot spectral imager. The instrument records 32 spectral channels that span the spectral range between 450nm and 750nm with 10nm spectral resolution. We report on reconstructions for both static and dynamic samples used in fluorescence microscopy.

David Mao - Equally-Sloped Tomographic Reconstruction and Its Application to Radiation Dose Reduction.

UCLA

The central problem of tomographic reconstruction is to reconstruct a clean and faithful image from very limited projections and thereby reduce the radiation dose. Since the Fourier slice theorem gives us a linear relationship between the projection data and the medical image, the reconstruction of the image becomes a standard compressive sensing type of problem; that is to say, we want to reconstruct the best result (defined by certain criteria) from a limited number of linear measurements of the image.

This leads to a constrained optimization problem and we will address a fast and efficient scheme to solve it. The numerical results demonstrate that this method strongly outperforms the conventional tomographic reconstruction in terms of the reduced radiation dose and the quality of the reconstruction.

Igal Bilik - Spatial Compressive Sensing Approach for Field Directionality Estimation Via Spatial Diversity.

University of Massachusetts, Dartmouth

This work addresses the problem of 2-D field directionality estimation using a short uniform linear array. The field directionality in the proposed model is represented as a compressible signal in the frequency-wavenumber domain, motivating the application of compressive sensing (CS) theory to the problem. The spatial interpretation of the CS approach, denoted spatial CS (SCS), provides high azimuth resolution for a fixed linear array. Moreover, the SCS provides an efficient way to exploit the spatial diversity achieved by changing array orientation or by distributed networks to resolve the left-right ambiguity of a linear array and to improve the estimation performance in the endfire regions. Simplicity of implementation compared to other methods is an additional advantage of the proposed approach. The performance of the SCS approach for field directionality estimation was evaluated via simulation, and it is shown that it outperforms the other tested methods.

Jort F. Gemmeke - Missing Data Imputation for Noise Robust Speech Recognition Using Compressive Sensing

Radboud University (The Netherlands)

An effective way to increase the noise robustness of automatic speech recognition is to label noisy speech features as either reliable or unreliable (missing) prior to decoding, and to replace (impute) the missing ones by clean speech estimates. Work in Compressive Sensing has shown signals can be reconstructed using relatively few measurements provided the signal is known to be sparse in an appropriate basis. We sparsely represent spectrographic representations of speech in an overcomplete basis of exemplar (clean) speech signals using only the reliable speech features. The missing elements of the speech signals are then obtained by projecting the sparse representation in the basis. Experiments on spoken digit recognition show that this imputation technique greatly outperforms other imputation techniques for noise robust speech recognition.

Lorne Applebaum - Chirp Sensing and A/D Conversion

Princeton University

We consider deterministic compressive sensing matrices where the columns are discrete chirp waveforms from an overcomplete dictionary, and we show that these matrices satisfy a statistical variant of the Restricted Isometry Property. We will then describe an algorithm for recovery of k-sparse signals that is resilient to noise, and has complexity kN logN where N is the number of measurements. The original motivation for our work was the application to active sensing, specifically radar applications. However, we have recently developed an interesting connection to A/D conversion through analysis of the NYFR receiver developed by L3 Communications as part of the A-to-I program at DARPA. When the NYFR receiver samples a wideband signal at the level crossings of a chirp waveform, the samples of continuous time sinusoids are discrete chirp sequences, and so the sparse narrowband recovery problem amounts to determining a small number of chirps that agree with the sampled data. This reduces to essentially the same problem that we explored in the original radar application.

Ivana Stojanovic and W. Clem Karl - An Overcomplete Dictionary Approach to Imaging of Moving Targets with Multistatic SAR

Boston University

This paper presents a method for imaging of moving targets with a SAR sensor by treating the problem as one of spatial reflectivity signal inversion over an overcomplete dictionary of target velocities. Since SAR sensor returns can be related to the spatial frequency domain projections of the scattering field, we exploit insights from compressed sensing theory to show that moving targets can be effectively imaged with a small number of transmitters and receivers randomly dispersed within a narrow forward cone around the scene of interest. Existing approaches to dealing with moving targets in SAR solve a coupled non-linear problem of target scattering and motion estimation, typically through matched filtering. In contrast, by using an overcomplete dictionary approach we effectively linearize the forward model and solve the moving target problem as a larger, regularized inversion problem subject to sparsity constraints.

Lam Nguyen - SAR Imaging Technique for Reduction of Sidelobes and Noise

Army Research Laboratory

The U.S. Army Research Laboratory (ARL), as part of a mission and customer funded exploratory program, has developed a new low-frequency, ultra-wideband (UWB) synchronous impulse reconstruction (SIRE) synthetic aperture radar (SAR). The radar has been used as a testbed to support proof-of-concept demonstration for several programs for detection of concealed targets.

We will present the recursive sidelobe minimization (RSM) technique, which is based on compressive sensing principles and which, when combined with standard SAR image formation, results in a significant reduction in sidelobes and noise in the resulting SAR imagery. Although the technique was developed using the ARL UWB SIRE radar in forward-looking mode, it is also applicable to other radar configurations such as side-looking SAR.

Sina Jafarpour - Compressed Sensing Using High-quality Expander Graphs: Optimal Measurements, Efficient Recovery, Explicit Construction, and Noise Tolerance.

Princeton University

We develop efficient compressive sensing algorithms based on expander graphs for which the number of measurements is optimal. The recovery algorithm is based on two key properties of expander graphs: first, the restricted isometry property with respect to the l1 norm (RIP-1), which is a geometric view of expander graphs; and second, the excessive unique neighborhood property, which is a combinatorial view of expander graphs. We show that these two properties imply not only that the basis pursuit algorithm works with expander graphs, but also that there exists a very simple combinatorial algorithm which recovers any k-sparse signal in about k (sparsity level) very simple iterations. We also show that by tolerating a small penalty on the number of measurements (and not on the number of recovery iterations) we can use known constructions of expander graphs to come up with explicit measurement matrices for this method. Finally, we show how the algorithm can be generalized to approximate compressible signals and noisy measurements, with very high precision with respect to the l1 metric.

Tom Goldstein - Split Bregman Schemes for L1 Regularized Problems

UCLA

The previously introduced Split Bregman method is an iterative scheme for solving L1 regularized problems, and is particularly well suited for problems involving total variation and Besov regularizers. This presentation will explore applications of this type of scheme, including compressed sensing MRI. We shall also discuss several novel applications of the Split Bregman method, including image segmentation and surface reconstruction from unorganized data points.

Vishal Patel - Compressive Synthetic Aperture Radar Imaging

University of Maryland

Synthetic Aperture Radar (SAR) is an imaging modality that provides a high resolution map of the spatial distribution of targets and terrain based on their response to transmitted electromagnetic waveforms. In this presentation, we will introduce a new SAR imaging scheme based on compressing the number of transmitted waveforms. We will show that if the target reflectivity function is assumed to be sparse in some domain, one can reconstruct a good estimate of the reflectivity profile using a new image formation algorithm that relies on far fewer waveforms than conventional systems use. Some applications of this compressed aperture radar will be discussed in the talk. This is joint work with Glenn Easley, Dennis Healy, and Rama Chellappa.

Weiyu Xu - The Balancedness Properties of Linear Subspaces and their Applications in Compressive Sensing

Caltech

In this talk, we will investigate the fundamental "balancedness" properties of linear subspaces and discuss the applications of these properties in the emerging field of compressive sensing. First, we will give the definitions of several "balancedness" properties for linear subspaces and show the respective connections between them and compressive sensing. Then using tools from high-dimensional integral geometry and geometric probability, we establish a unified framework for analyzing these "balancedness" properties, and give sharp performance bounds for the "balancedness" a linear subspace can achieve under these different notions of "balancedness". We will further discuss the applications of this analytical framework in the analysis and design of (new) sparse signal recovery algorithms for compressive sensing. This work concerns fundamental properties of linear subspaces and may be of independent mathematical interest.

Xing Tan - Computationally Efficient Sparse Bayesian Learning via Belief Propagation.

University of Florida

We present two belief propagation (BP) based sparse Bayesian learning (SBL) algorithms, referred to as the SBL-BP and the modified SBL-BP algorithm, to recover sparse transform coefficients in large scale compressed sensing problems. Both algorithms are based on a widely-used hierarchical Bayesian model, which is turned into a factor graph so that BP can be applied to achieve computational efficiency. We prove that the messages in BP are Gaussian probability density functions and therefore, we only need to update their means and variances when we update the messages. The computational complexity of both algorithms is proportional to the number of transform coefficients, allowing the algorithms to deal with large scale compressed sensing problems efficiently. Numerical examples are provided to demonstrate the effectiveness of the algorithms and to show that the modified SBL-BP algorithm performs similarly to the SBL-BP algorithm but the former is computationally faster than the latter.

Thong T. Do*, Yi Chen* , Dzung T. Nguyen* , Nam Nguyen* , Lu Gan** and Trac D. Tran** - Robust Video Transmission Using Layered Compressed Sensing

* Johns Hopkins University
** Brunel University (UK)

In this work, we propose a novel scheme of layered compressed sensing (LaCoS) for robust transmission of video signals over packet loss channels. In the proposed transmission system, the encoder consists of a base layer and an enhancement layer. The base layer is a conventionally encoded bitstream and transmitted without any error protection. The additional enhancement layer is a stream of compressed measurements taken across slices of video signals for error-resilience. The decoder generates side information that is a measurement vector of the corrupted base layer and then employs a sparse recovery with this side information to recover lost packets. By exploiting the side information at the decoder, the enhancement layer is required to transmit a minimal amount of compressed measurements that is only proportional to the amount of lost packets. Simulation results show that both compression efficiency and error-resilience capacity of the proposed scheme are competitive with those of other state-of-the-art robust transmission methods, in which Wyner-Ziv coders often generate an enhancement layer.

* This work has been supported in part by the National Science Foundation under Grant CCF-0728893.

Thong T. Do*, Yi Chen* , Dzung T. Nguyen* , Nam Nguyen* , Lu Gan** and Trac D. Tran** - Distributed Compressed Video Sensing

* Johns Hopkins University
** Brunel University (UK)

In this work, we propose a novel framework called Distributed Compressed Video Sensing (DisCoS). Like other Distributed Video Coding (DVC) schemes, in which video frames are often intraframe-coded and interframe-decoded to exploit the temporal correlation among them, the proposed scheme also samples each frame independently and recovers them by exploiting interframe sparsity with side information. In particular, the encoder acquires two types of compressed measurements: block-based measurements (or local measurements) and frame-based measurements (or global measurements). The decoder first generates a prediction of the current frame from previously reconstructed frame(s) and the local measurements. The key observation is that a block in the current frame can be sparsely represented by a linear combination of a few neighboring blocks in previously reconstructed frame(s), enabling it to be predicted from its local measurements by solving an l_1 minimization. The process of block prediction follows a similar idea to motion estimation at the decoder side in other state-of-the-art DVC schemes. Then, the decoder generates global measurements of the prediction error from those of the current frame and the predicted one. As the prediction error is often very sparse, it can be recovered from these measurements by solving an l_1 minimization. Simulation results show that DisCoS significantly outperforms the intraframe-sensing and intraframe-sparse recovery scheme. It is even comparable to a scheme of intraframe-sensing and interframe-sparse recovery with oracle motion vectors available at the decoder. In addition, unlike all other previous DVC schemes, DisCoS can perform the encoding operation in the analog domain with very low complexity, making it a promising candidate for applications where the sampling process is very expensive, e.g., in Terahertz imaging.

* This work has been supported in part by the National Science Foundation under Grant CCF-0728893.

Thong T. Do*, Yi Chen* , Dzung T. Nguyen* , Nam Nguyen* , Lu Gan** and Trac D. Tran** - Fast and Efficient Compressive Imaging Using Separable Measurement Ensembles

* Johns Hopkins University
** Brunel University (UK)

We extend our previous work on Structurally Random Matrices to propose a fast and efficient compressive sampling scheme for two-dimensional signals such as natural images. Unlike other previous compressive sampling algorithms that often regard sensed signals as 1-D signals, the proposed approach uses separable operators to sample the rows and columns of an image independently. Let X be the sensed image. Then, the separable measurement process can be mathematically represented as D_1 F_1 R_1 X R_2 F_2 D_2, where:

* This work has been supported in part by the National Science Foundation under Grant CCF-0728893.

Haichao Wang - Sequential Compressed Sensing

Vanderbilt University

There are two main approaches in compressed sensing: the geometric approach and the combinatorial approach. In this talk, we will introduce an information theoretic approach and construct a sequence of binary sampling vectors to determine a sparse signal. Unlike other approaches, ours is adaptive in the sense that each sampling vector depends on the previous sample. The number of measurements we need is no more than O(k log n) and the reconstruction is O(k).
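To get a feel for what adaptivity buys, consider the k = 1 toy case (my own illustration, not the authors' construction): a bisection locates a single nonzero entry with about log2(n) binary sampling vectors, each chosen from the previous answer:

    import numpy as np

    def locate_single_spike(x):
        """Adaptive binary sampling for a 1-sparse signal: each measurement
        is an inner product with a 0/1 vector supported on half of the
        remaining interval, so ceil(log2 n) measurements find the spike."""
        lo, hi = 0, len(x)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            mask = np.zeros(len(x))
            mask[lo:mid] = 1.0
            if mask @ x != 0.0:             # energy sits in the left half
                hi = mid
            else:
                lo = mid
        return lo

    x = np.zeros(1024); x[137] = 3.0
    print(locate_single_spike(x))           # 137, after 10 measurements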

Lawrence Carin, Bin Guo, Dehong Liu, and Wenbin Lin - On Compressive Sensing for the Random Sensor Array Design

Duke University

Random sensor arrays are an old problem that has been considered for decades. The main motivation for the use of random or non-uniform arrays has typically been the goal of achieving high-resolution sensing while reducing sensing costs. The recently developed compressive sensing (CS) is a framework in which one attempts to measure a signal in a compressive mode, implying that fewer total measurements are required. In this poster, random sensor arrays are examined from a CS perspective. It is demonstrated that the natural random-array projections manifested by the medium's Green's function are consistent with the projection-type measurements associated with CS. This linkage allows the use of existing CS theory to quantify the performance of random arrays, of interest for array design. The analysis as well as the numerical simulation examples demonstrate that CS theory is applicable to arrays in vacuum as well as in the presence of a surrounding medium; further, the presence of a surrounding medium with known properties may be used to improve array performance.

Yingying Li - Heat Source Identification Based on Compressed Sensing

UCLA

We consider the problem of inverting the heat equation, where known point-value samples at time T are used to estimate the initial condition. The initial condition is assumed to be sparse. We solve the problem using compressed sensing techniques, formulating it as an L1 optimization and solving it with the linearized Bregman method. Examples in both one and two spatial dimensions are shown. Finally, we show how our approach may be adapted to solve some similar problems.
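The linearized Bregman iteration mentioned here is remarkably compact. A hedged sketch (mu is illustrative, the step size is normalized for stability, and the heat-equation forward operator A is left abstract):

    import numpy as np

    def linearized_bregman(A, y, mu=5.0, iters=2000):
        """Linearized Bregman for basis pursuit, min ||x||_1 s.t. Ax = y:
        v accumulates correlations of the residual, and x is a soft-
        thresholded, scaled copy of v."""
        delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size for stability
        v = np.zeros(A.shape[1])
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            v += A.T @ (y - A @ x)
            x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0)
        return x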

Robert Muise - A Compressive Imaging Approach for Tracking Targets in Multiplexed Data

Lockheed-Martin

We consider the application of compressive imaging theory to increase the performance of current imaging sensors. Spatial multiplexing sensors can increase the effective field-of-view (FOV) of a sensor without loss of resolution. We present an optical architecture which is compressive in nature and results in a two-order-of-magnitude increase in FOV when compressive sensing theory is applied to detect moving targets. Building on this compressive imaging model, we then turn to the time multiplexing case. Smart Focal-Plane-Array (FPA) technology is assumed to integrate light at varying time-constants throughout the scene. The result is the ability to track a moving object during the integration time of the sensor. These simulated examples highlight the ability to increase current sensor performance significantly by designing sensors and exploitation tasks simultaneously, with compressive imaging as the underlying theory.

Xuejun Liao, Hui Li, and Lawrence Carin - Fast Projections onto the Set of Sparse Signals

Duke University

We propose a method for projecting N-dimensional vectors in Euclidean space onto the set of K-sparse vectors. The method is based on simple operations requiring time on the order of O(N*log(N)). We also consider the case of projecting onto the set of K-block-sparse vectors and extend the method to perform the projection in time O(B*log(B),N), where B is the number of blocks. Together with projections from linear algebra, the proposed methods yield efficient algorithms for CS reconstructions. We present experimental results on large images and compare to state-of-the-art algorithms.
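The plain, non-block version of this projection is the familiar keep-the-K-largest-magnitudes rule; a minimal sketch for reference (the poster's contribution lies in the block-sparse case and the speed, neither of which is reproduced here):

    import numpy as np

    def project_k_sparse(x, k):
        """Euclidean projection of x onto the set of k-sparse vectors:
        keep the k largest-magnitude entries, zero out the rest;
        np.argpartition makes the selection linear time."""
        out = np.zeros_like(x)
        idx = np.argpartition(np.abs(x), -k)[-k:]
        out[idx] = x[idx]
        return out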

Anwei Chai - Compressed Sensing and Imaging: a Comparative Study

Stanford University

Compressed sensing is a robust method for recovering sparse signals that can also be used in array imaging. In this work we present a framework to assess the results of imaging using compressed sensing and other previously developed approaches. We will show various numerical simulations and interpret those results with analytical ones.

Haojun Chen - Bayesian Group LASSO for Compressive Sensing

Duke University

We introduce a probabilistic formulation of group LASSO and employ it as a block-sparse prior for Bayesian compressive sensing (CS). The resulting Bayesian group LASSO algorithm retains the robustness of Bayesian CS and, in addition, exploits the inter-dependency between the sparse coefficients to achieve accurate signal reconstruction based on a smaller number of measurements. The algorithm effectively reduces the signal dimensionality by squeezing strongly dependent coefficients into a group, and achieves computational efficiency by performing calculation at the level of groups versus individual coefficients. We compare the proposed algorithm, in terms of reconstruction performance as well as time complexity, to state-of-the-art CS algorithms.

Maxim Raginsky and Rebecca M. Willett - Minimax Risk Bounds for Poisson Compressed Sensing

Duke University

We present performance bounds for compressed sensing in the presence of Poisson noise and show that, for sparse or compressible signals, they are within a log factor of known lower bounds on the risk. The signal-independent and bounded noise models used in the literature to analyze the performance of compressed sensing do not accurately model the effects of Poisson noise. However, Poisson noise is an appropriate noise model for a variety of applications, including low-light imaging, in which sensing hardware is large or expensive and limiting the number of measurements collected is important. We will show how a feasible, positivity-preserving sensing matrix can be constructed and prove a concentration-of-measure inequality for these matrices. We then show that minimizing an objective function consisting of a negative Poisson log likelihood term and a penalty term which could be used as a measure of signal sparsity results in near-minimax rates of error decay.

Zachary Harmany, Roummel Marcia, and Rebecca Willett - Compressive Coded Aperture Imaging

Duke University

A fundamental challenge in applying compressed sensing theory to practical imaging systems is that physical constraints typically make it infeasible to actually measure many of the random projections described in the literature, and therefore, innovative and sophisticated imaging systems must be carefully designed to effectively exploit CS theory. In this work, we propose compressive imaging techniques for improving the performance of video imaging systems in the presence of constraints on the focal plane array size. In particular, we describe a novel yet practical approach that combines coded aperture imaging to enhance pixel resolution with superimposing subframes of a scene onto a single focal plane array to increase field of view. Specifically, the proposed method superimposes coded observations and uses wavelet-based sparsity recovery algorithms to reconstruct the original subframes. We demonstrate the effectiveness of this approach by reconstructing with high resolution the constituent images of a video sequence.

Christian Austin - On the Relation Between Sparse Sampling and Parametric Estimation

Ohio State University

We consider the relationship between parameter estimation of an additive model and sparse inversion of an under-determined matrix (dictionary) in a linear system. The dictionary is constructed by sampling parameters of the additive model and does not satisfy the Restricted Isometry Property (RIP) of compressive sensing. Parameters and model order are estimated using regularized least-squares inversion with an l_p norm penalty, where p ≤ 1. We investigate equi-spaced and Fisher-information inspired parameter sampling methods for dictionary construction. Examples are presented quantifying parameter estimation performance for the different sampling methods and for different sampling densities.

Kerkil Choi - Coded Aperture Compressive Computed Tomography

Duke University

This paper analyzes the role of coded apertures and compressive-sensing based inference with x-ray transform measurements to improve data efficiency and reconstruction fidelity. While x-ray tomography is the canonical example, diverse physical measurements can be modeled by x-ray transform measurements. Our group has previously demonstrated the use of coding in reference structure tomography (RST) and coded aperture snapshot spectral imaging (CASSI). This paper investigates coding strategies in these applications and extends this approach in proposing compressive x-ray tomography based on the use of coding concepts.

Manqi Zhao, Venkatesh Saligrama - Thresholded Basis Pursuit: A Linear Programming Approach for Support Recovery

Boston University

We will describe a novel algorithm based on thresholding the solution to basis pursuit for support recovery and approximation. Our result offers a sharp characterization in that neither the SNR nor the sparsity ratio can be significantly improved in comparison to the information theoretic/Max-Likelihood bounds. Our idea is to adopt a perturbation approach to the noiseless problem. Keeping the number of measurements fixed (at the level achievable for noiseless recovery) we increase the noise and characterize the maximum noise level for which support recovery fails.

Wotao Yin - Enhanced Compressed Sensing Based on Iterative Support Detection

Rice University

We demonstrate that the recovery rate of l1-minimization on signals with fast decaying distribution of nonzero values can be enhanced by applying a simple, novel iterative support detection strategy. Preliminary theoretical and experimental results, as well as the limitation of the strategy, are presented.

Zaiwen Wen - A Fast Algorithm For Sparse Reconstruction Based On Shrinkage, Subspace Optimization and Continuation

Columbia University

We describe a fast algorithm for sparse reconstruction. The algorithm is divided into two stages that are performed repeatedly. In the first stage, "shrinkage" yields an estimate of the subset of variables likely to be nonzero in an optimal solution. Restricting the decision variables to this subset and fixing their signs at their current values results in a smooth quadratic problem that is solved in the second phase. Our method also embeds this basic two-stage algorithm in a continuation (homotopy) approach. Our implementation of this method exhibits state-of-the-art performance both in terms of its speed and its ability to recover sparse signals. It can even recover signals that are not as sparse as required by current compressive sensing theory.

Shiqian Ma - An Efficient Algorithm for Compressed MR Imaging using Total Variation and Wavelets

Columbia University

Because information such as boundaries of organs is very sparse in most MR images, compressed sensing makes it possible to reconstruct the same MR image from a very limited set of measurements, significantly reducing the MRI scan duration. In order to do that, however, one has to solve the difficult problem of minimizing nonsmooth functions on large data sets. To handle this, we propose an efficient algorithm that jointly minimizes the L1 norm, total variation, and a least squares measure, one of the most powerful models for compressive MR imaging. Our algorithm is based upon an iterative operator-splitting framework. The calculations are accelerated by continuation and take advantage of fast wavelet and Fourier transforms, enabling our code to process MR images from actual real-life applications. We show that faithful MR images can be reconstructed from a subset that represents a mere 20 percent of the complete set of measurements.

Hao Gao and Hongkai Zhao - Compressive Sensing in Bioluminescence Tomography Based on Radiative Transfer Equation

UC Irvine

In this work, we study compressive sensing (CS) in bioluminescence tomography (BLT), in particular based on the radiative transfer equation (RTE). BLT is an emerging and promising technique for molecular imaging in which one tries to reconstruct the distribution of bioluminescent sources through boundary measurements. Although the source localization problem can be cast as a linear problem, the reconstruction is still very challenging due to severe ill-posedness. Since sources in BLT are usually sparse in space, a compressive sensing method would be a natural solution. However, directly using the Green's function of the RTE as the basis and standard measurement data for reconstruction will not satisfy the restricted isometry property (RIP), which is crucial for the success of CS. We propose an algorithm to transform the standard setup into a new system that satisfies the RIP. Hence, with the compressive sensing method, we produce much improved results over the standard reconstruction method.

Friday, April 24, 2009

Sparsity In Everything: Ah-hah moments, Insight

How many times have you had one of these ah-hah moments? Not so many times, I am sure. How many times have your students gotten it? Insight sure qualifies as one of these sparse events for most people.




A collection of entries featuring Sparsity In Everything examples can be found here. The reason for this collection of entries is to make the case that techniques such as compressive sensing, devised to acquire signals/events that are rare in large datasets, are needed in a wide variety of everyday life examples.

Reference: The video for this entry came from this entry. The idea for this entry came from a comment made by Leon Palafox in a previous entry. Thanks Leon!

Thursday, April 23, 2009

CS: Clustered Sparse Signals, Matrix Completion code, FRI, Deterministic CS, Union of Frame

Here is the new batch today. Most were found on the interwebs, while others come from Rice:

Recovery of Clustered Sparse Signals from Compressive Measurements by Volkan Cevher, Piotr Indyk, Chinmay Hegde and Richard G. Baraniuk. The abstract reads:

We introduce a new signal model, called (K,C)-sparse, to capture K-sparse signals in N dimensions whose nonzero coefficients are contained within at most C clusters, with C < K << N. In contrast to the existing work in the sparse approximation and compressive sensing literature on block sparsity, no prior knowledge of the locations and sizes of the clusters is assumed. We prove that O(K + C log(N/C)) random projections are sufficient for (K,C)-model sparse signal recovery based on subspace enumeration. We also provide a robust polynomial time recovery algorithm for (K,C)-model sparse signals with provable estimation guarantees.

I mentioned this paper earlier: Matrix Completion from a Few Entries by Raghunandan Keshavan, Andrea Montanari, and Sewoong Oh. The software is now available here.



A Method for Generalized Sampling and Reconstruction of Finite-Rate-of-Innovation Signals by Chandra Sekhar Seelamantula, Michael Unser. The abstract reads:
We address the problem of generalized sampling and reconstruction of finite-rate-of-innovation signals. Specifically, we consider the problem of sampling streams of Dirac impulses and propose a two-channel method that enables fast, local reconstruction under some suitable conditions. We also specify some acquisition kernels and give the associated reconstruction formulas. It turns out that these kernels can also be combined into one kernel, which can be employed in the single-channel sampling scenario. The single-kernel approach carries over all the advantages of the two-channel counterpart. Simulation results are presented to validate the theoretical calculations.

Structured Variable Selection with Sparsity-Inducing Norms by Rodolphe Jenatton, Jean-Yves Audibert, Francis Bach. The abstract reads:

We consider the empirical risk minimization problem for linear supervised learning, with regularization by structured sparsity-inducing norms. These are defined as sums of Euclidean norms on certain subsets of variables, extending the usual $\ell_1$-norm and the group $\ell_1$-norm by allowing the subsets to overlap. This leads to a specific set of allowed nonzero patterns for the solutions of such problems. We first explore the relationship between the groups defining the norm and the resulting nonzero patterns, providing both forward and backward algorithms to go back and forth from groups to patterns. This allows the design of norms adapted to specific prior knowledge expressed in terms of nonzero patterns. We also present an efficient active set algorithm, and analyze the consistency of variable selection for least-squares linear regression in low and high-dimensional settings.

Found on the Rice Compressive Sensing repository:

Compressed Sensing aims to capture attributes of a sparse signal using very few measurements. Candes and Tao showed that random matrices satisfy the Restricted Isometry Property (RIP) with high probability. They showed that this condition is sufficient for sparse reconstruction and that random matrices, where the entries are generated by an iid Gaussian or Bernoulli process, satisfy the RIP with high probability. This approach treats all k-sparse signals as equally likely, in contrast to mainstream signal processing where the filtering is deterministic, and the signal is described probabilistically. This paper provides weak conditions that are sufficient to show that a deterministic sensing matrix satisfies the Statistical Restricted Isometry Property (StRIP), and to guarantee the uniqueness of k-sparse representations. The columns of the sensing matrix are required to form a group under pointwise multiplication, and it is this property that provides the concentration inequalities of McDiarmid and Bernstein with the leverage necessary to guarantee uniqueness of sparse representations. We provide a large class of deterministic sensing matrices for which Basis Pursuit is able to recover sparse Steinhaus signals. The new framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes. In these cases it provides theoretical guarantees on the performance of nonlinear reconstruction algorithms with complexity that is only quadratic in the dimension of the measurement domain.
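As a concrete instance of the deterministic families mentioned at the end, here is a sketch of an m x m^2 discrete-chirp dictionary; this is a common construction in that line of work, not necessarily the paper's exact matrices (the Delsarte-Goethals and extended BCH based ones are more elaborate):

    import numpy as np

    def chirp_matrix(m):
        """Columns are the m^2 discrete chirps exp(2*pi*1j*(r*t^2 + s*t)/m),
        t = 0..m-1, indexed by chirp rate r and base frequency s; m is
        usually taken prime. Returns an m x m^2 matrix with unit-norm
        columns."""
        t = np.arange(m)
        cols = [np.exp(2j * np.pi * (r * t ** 2 + s * t) / m) / np.sqrt(m)
                for r in range(m) for s in range(m)]
        return np.array(cols).T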

Sampling in a union of frame generated subspaces by Magali Anastasio and Carlos Cabrelli. The abstract reads:
A new paradigm in sampling theory has been developed recently by Lu and Do. In this new approach the classical linear model is replaced by a non-linear, but structured, model consisting of a union of subspaces. This is the natural approach for the new theory of compressed sampling, representation of sparse signals and signals with finite rate of innovation. In this article we extend the theory of Lu and Do to the case where the subspaces in the union are shift-invariant spaces. We describe the subspaces by means of frame generators instead of orthonormal bases. We show that the one-to-one and stability conditions for the sampling operator are valid in this more general case.


Image Credit: NASA/JPL/Space Science Institute, Saturn as seen two days ago by Cassini.

Tuesday, April 21, 2009

CS: Compressive Diffraction Tomography for Weakly Scattering, On Streaming, a course and some ML python code

The good stuff keeps on coming, this is very interesting:

Compressive Diffraction Tomography for Weakly Scattering by Lianlin Li, Wenji Zhang, Fang Li. The abstract reads:
An appealing requirement from the well-known diffraction tomography (DT) exists for successful reconstruction from few-view and limited-angle data. Inspired by the well-known compressive sensing (CS), the accurate super-resolution reconstruction from highly sparse data for weakly scattering obstacles is investigated in this paper. To realize the compressive data measurement, and in particular to obtain super-resolution reconstruction from highly sparse data, a compressive system realized by surrounding the probed obstacles with random media is proposed and empirically studied. Several interesting conclusions have been drawn: (a) if the desired resolution is within the range from to, the K-sparse N-unknowns imaging can be obtained exactly by measurements, which is comparable to the number of measurements required by the Gaussian random matrix in the compressive sensing literature. (b) By incorporating the random media, which is used to enforce the multi-path effect of wave propagation, the resulting measurement matrix is incoherent with the wavelet matrix; in other words, when the probed obstacles are sparse in the wavelet framework, the required number of measurements for successful reconstruction is similar to the above. (c) If the expected resolution is lower than, the required number of measurements of the proposed compressive system is almost identical to the free-space case. (d) There is also a tradeoff to be made between imaging resolution and the number of measurements. In addition, through the introduction of a complex Gaussian variable, a fast sparse Bayesian algorithm has been slightly modified to deal with complex-valued optimization with sparse constraints.


The approach parallels the Random Lens Imager of Bill Freeman et al [2] and the in-situ random medium scattering of Larry Carin et al [1]. I'll add this new technique to the Compressive Sensing Hardware page.

Angshul Majumdar has modified my implementation of the re-weighted L_p algorithm of Rick Chartrand and Wotao Yin [3] so that "it can handle operators instead of explicit matrices. It requires Sparco." It is here. Thanks Angshul!

Muthu Muthukrishnan has a post up on the release of some of the slides from the recent DIMACS workshop on streaming.



Laurent Duval pointed out to me that Ron DeVore will give a course in Paris. From the flyer:

High Dimensional Approximation: Theory and Algorithms

Ronald DeVore
Chaire d'excellence de la Fondation Sciences Mathématiques de Paris

Schedule: Tuesday, June 16 from 10:30 to 12:30; Thursday, June 18, Tuesday, June 23 and Thursday, June 25 from 10:00 to 12:30.

Location: Room 2E1, Laboratoire Jacques-Louis Lions, 175 rue du Chevaleret, 75013 Paris.

This course will study scientific problems that are challenged by the fact that they live naturally in a Euclidean space R^N with N very large. We mention three settings which will be considered in this short course.

Broad-banded signals: It is well known that a bandlimited signal (a function on R) can be captured exactly from its equispaced time samples taken at the Nyquist rate. However, when the bandwidth of the signal is large, the required sampling rate cannot be implemented accurately in circuitry. This leads us to consider other models for signals which are still realistic but enable capturing the signal with far fewer measurements. We shall study one such setting, called Compressed Sensing (CS), which models signals as having a sparse or compressible representation in a suitably chosen basis of waveforms. We shall develop the mathematical foundations of compressed sensing, culminating with a derivation of the optimal rate/distortion that can be achieved with CS and a discussion of the encoders/decoders which achieve this optimality.
Mathematical learning: The mathematical theory of data fitting in a stochastic setting is known as learning. Many interesting learning problems are set in high dimension and must engage the problem of approximating functions of a large number of variables. Classical approaches to approximation in several variables suffer from the curse of dimensionality: the approximation rate degrades severely as the dimension increases. We shall consider possible ways to avoid the curse of dimensionality through variable reduction and sparsity.
Stochastic PDEs: The classical approach to solving stochastic PDEs is Monte Carlo methods. While these methods are robust, they have severe limits in terms of their rate/distortion bounds. We shall study alternatives to Monte Carlo methods which utilize Wiener chaos expansions to convert the stochastic PDE into a deterministic parametric PDE. The number of parameters in this approach is infinite, and therefore its success depends on capturing a function of an infinite number of variables in a numerically realistic way. We shall derive properties of the solutions to such parametric problems in the elliptic case showing that these solutions have highly accurate sparse approximations. We shall then derive potential numerical approaches to capturing these sparse approximants. Theory and algorithms for high dimensional problems are in their infancy, and so this course will expose many interesting open questions.
Thanks, Laurent!

On a different note, the Python code implementing the examples of the new book Machine Learning: An Algorithmic Perspective by Stephen Marsland is here.

Reference:
[1] Lawrence Carin, Dehong Liu, Wenbin Lin and Bin Guo, Compressive Sensing for Numerical Multi-Static Scattering Analysis.
[2] Rob Fergus, Antonio Torralba and William T. Freeman, Random Lens Imaging.
[3] Rick Chartrand and Wotao Yin, Iteratively Reweighted Algorithms for Compressive Sensing

Monday, April 20, 2009

CS: NESTA, CS with Known Spectral Energy Density, CS based interior tomography, CS-UWB Comm system, a Summer School.

Here is the new batch of Compressive Sensing related papers:
NESTA: a fast and accurate first-order method for sparse recovery by Stephen Becker, Jerome Bobin, and Emmanuel Candès. The abstract reads:
Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov's smoothing technique, this paper introduces a fast and accurate algorithm for solving common recovery problems in signal processing. In the spirit of Nesterov's work, one of the key ideas of this algorithm is a subtle averaging of sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. This paper demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems as 1) it is computationally efficient, 2) it is accurate and returns solutions with several correct digits, 3) it is flexible and amenable to many kinds of reconstruction problems, and 4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization, and convex programs seeking to minimize the l_1 norm of Wx under constraints, in which W is not diagonal.

I like the fact that they address point 4), which seems to be an issue with other methods (see this entry). I am looking forward to an implementation of this new scheme.
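
In the meantime, here is a minimal numpy sketch of the two ingredients the abstract names: Nesterov-style smoothing of the l_1 norm (a Huber-like surrogate) driven by an accelerated gradient scheme with extrapolated iterates. This is a caricature under my own simplifications (an unconstrained penalized formulation, a fixed smoothing parameter), not the authors' NESTA, which solves the constrained problem and uses continuation.

```python
import numpy as np

def smoothed_l1_sketch(A, b, mu=0.01, rho=10.0, n_iter=300):
    """Sketch: min_x huber_mu(x) + (rho/2)*||A x - b||^2 via accelerated
    gradient descent, where huber_mu is the Nesterov-smoothed l_1 norm."""
    n = A.shape[1]
    # Lipschitz constant of the gradient: 1/mu from the smoothed l_1 term,
    # rho*||A||_2^2 from the quadratic data-fidelity term
    L = 1.0 / mu + rho * np.linalg.norm(A, 2) ** 2
    x = np.zeros(n); y = x.copy(); t = 1.0
    for _ in range(n_iter):
        # gradient of the Huber-smoothed l_1 norm at the extrapolated point y
        g = np.where(np.abs(y) <= mu, y / mu, np.sign(y))
        g += rho * (A.T @ (A @ y - b))
        x_prev, x = x, y - g / L
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)  # iterate averaging/extrapolation
        t = t_next
    return x
```

The smoothing is what makes a first-order method applicable at all: the l_1 norm has no gradient at zero, but its smoothed surrogate does, with a Lipschitz constant controlled by mu.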

Compressive Sampling with Known Spectral Energy Density by Andriyan Suksmono. The abstract reads:
A method to improve the L1 performance of CS (Compressive Sampling) for signals with known spectral energy density is proposed. Instead of random sampling, the proposed method selects the locations of the samples to follow the distribution of the spectral energy. Samples collected with three different measurement methods (uniform sampling, random sampling, and energy equipartition sampling) are used to reconstruct a given UWB (Ultra Wide Band) signal whose spectral energy density is known. Objective performance evaluation in terms of PSNR (Peak Signal to Noise Ratio) indicates that CS reconstruction from random sampling outperforms uniform sampling, while energy equipartition sampling outperforms both. These results suggest that similar performance improvements can be achieved for CS-based devices, such as compressive SFCW (Stepped Frequency Continuous Wave) radar and compressive VLBI (Very Long Baseline Interferometry) imaging, allowing even higher acquisition speed or better reconstruction results.
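
The selection rule is simple enough to sketch. Below is a toy numpy illustration of the idea as I read it: draw measurement locations with probability proportional to the known spectral energy density rather than uniformly. The density, sizes, and function name are hypothetical, not from the paper.

```python
import numpy as np

def energy_weighted_rows(psd, m, seed=0):
    """Pick m frequency bins with probability proportional to the known
    spectral energy density `psd` instead of uniformly at random."""
    rng = np.random.default_rng(seed)
    prob = psd / psd.sum()
    return np.sort(rng.choice(len(psd), size=m, replace=False, p=prob))

# hypothetical UWB-like density concentrated around bin 80
psd = np.exp(-0.5 * ((np.arange(256) - 80) / 20.0) ** 2) + 1e-3
rows = energy_weighted_rows(psd, m=64)   # DFT rows to actually measure
```

The chosen rows then replace the uniformly random rows of the sensing matrix; everything downstream, including the l_1 reconstruction, is unchanged.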

From the good folks at Rice we have:

Compressed sensing based interior tomography by Hengyong Yu and Ge Wang. The abstract reads:
While conventional wisdom is that the interior problem does not have a unique solution, by analytic continuation we recently showed that the interior problem can be uniquely and stably solved if we have a known sub-region inside a region of interest (ROI). However, such a known sub-region is not always readily available, and it is even impossible to find in some cases. Based on compressed sensing theory, here we prove that if an object under reconstruction is essentially piecewise constant, a local ROI can be exactly and stably reconstructed via total variation minimization. Because many objects in computed tomography (CT) applications can be approximately modeled as piecewise constant, our approach is practically useful and suggests a new research direction for interior tomography. To illustrate the merits of our finding, we develop an iterative interior reconstruction algorithm that minimizes the total variation of a reconstructed image and evaluate its performance in numerical simulation.
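
The computational ingredient here, total variation minimization for piecewise-constant images, is easy to sketch generically. The snippet below does plain gradient descent on a smoothed TV term plus a data-fidelity term; it is a sketch of TV-regularized reconstruction in general, not the authors' interior tomography algorithm, and `Aop`/`Atop` stand for a forward projector and its adjoint that I assume are given.

```python
import numpy as np

def tv_grad(u, eps=1e-6):
    """Gradient of a smoothed isotropic TV of a 2-D image: -div(grad u / |grad u|)."""
    dx = np.diff(u, axis=0, append=u[-1:, :])   # forward differences,
    dy = np.diff(u, axis=1, append=u[:, -1:])   # zero at the far boundary
    mag = np.sqrt(dx**2 + dy**2 + eps)          # eps smooths |grad u| near 0
    px, py = dx / mag, dy / mag
    div = (np.diff(px, axis=0, prepend=np.zeros((1, u.shape[1])))
           + np.diff(py, axis=1, prepend=np.zeros((u.shape[0], 1))))
    return -div

def tv_recon(Aop, Atop, b, shape, lam=0.1, step=1e-3, n_iter=200):
    """Gradient descent on 0.5*||A u - b||^2 + lam * TV_eps(u)."""
    u = np.zeros(shape)
    for _ in range(n_iter):
        u -= step * (Atop(Aop(u) - b) + lam * tv_grad(u))
    return u
```

For a truly piecewise-constant object the TV term concentrates the reconstruction effort on the few edges, which is exactly the sparsity the abstract invokes.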



Compressive Sensing Based Ultra-wideband Communication System by Peng Zhang, Zhen Hu, Robert C. Qiu and Brian M. Sadler. The abstract reads:
Sampling is the bottleneck for ultra-wideband (UWB) communication. Our major contribution is to exploit the channel itself as part of compressed sensing, through waveform-based pre-coding at the transmitter. We also have demonstrated a UWB system baseband bandwidth (5 GHz) that would, with conventional sampling technology, take decades for the industry to reach. The concept has been demonstrated, through simulations, using real-world measurements. Realistic channel estimation is also considered.
The attendant conference paper is here: Compressive Sensing Based Ultra-wideband Communication System by Peng Zhang, Zhen Hu, Robert C. Qiu and Brian M. Sadler; its abstract is essentially identical.
A presentation on the subject is here: Peng Zhang, "Compressive Sensing Based UWB System," Presentation on Compressive Sensing.
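
My reading of "the channel itself as part of compressed sensing" is that convolution with the multipath impulse response is a Toeplitz measurement operator, so subsampling the receiver output already yields compressive measurements. Here is a toy numpy sketch, with every parameter made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 512, 8, 64   # frame length, sparsity, number of kept samples

# hypothetical sparse UWB frame: k nonzero taps out of n
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# multipath channel: a decaying random impulse response; convolving with it
# applies a Toeplitz matrix, so the channel itself does the CS mixing
h = rng.standard_normal(40) * np.exp(-np.arange(40) / 10.0)
received = np.convolve(h, x)

# compressive step: the receiver keeps only m randomly chosen output samples
keep = np.sort(rng.choice(received.size, m, replace=False))
y = received[keep]   # these m samples are the measurements
```

Recovery would then hand y and the corresponding m rows of the Toeplitz convolution matrix to any sparse solver; the pre-coding described in the paper shapes what that effective matrix looks like.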

I'll be adding this to the compressive sensing hardware page.

There is also a Summer School: Theoretical Foundations and Numerical Methods for Sparse Recovery at RICAM in Linz, Austria on Aug. 31 - Sept. 4. I'll add it to the compressive sensing calendar.

Of related interest we also have:

EM type algorithms for likelihood optimization with non-differentiable penalties by Stephane Chretien, Alfred O. Hero and Herve Perdry. The abstract reads:
The EM algorithm is a widely used methodology for penalized likelihood estimation. Provable monotonicity and convergence are the hallmarks of the EM algorithm, and these properties are well established for smooth likelihoods and smooth penalty functions. However, many relaxed versions of variable selection penalties are not smooth. The goal of this paper is to introduce a new class of Space Alternating Penalized Kullback Proximal extensions of the EM algorithm for nonsmooth likelihood inference. We show that the cluster points of the new method are stationary points, even when they lie on the boundary of the parameter set. Special attention has been paid to the construction of a component-wise version of the method in order to ease the implementation for complicated models. Illustrations are presented for the problems of model selection for finite mixtures of regressions and of sparse image reconstruction.
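
To see why nonsmooth penalties require this kind of proximal machinery, consider the simplest case, the l_1 penalty: it has no gradient at zero, yet its proximal map has a closed form, the soft threshold. The sketch below shows that generic map, not the paper's Kullback proximal construction.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of the non-differentiable penalty t*||x||_1, i.e.
    argmin_x 0.5*||x - v||^2 + t*||x||_1, solved coordinate-wise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

Proximal-style extensions of EM replace an impossible gradient step on the penalty with exactly this kind of exact minimization step.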

And on arXiv: An alternating $\ell_1$ relaxation for compressed sensing by Stephane Chretien. I already talked about it before. The Scilab and Python codes are here.

Image Credit: NASA/JPL/Space Science Institute, Saturn rings as seen by Cassini five days ago.

Sunday, April 19, 2009

Sparsity In Everything: The Skyline of Paris


At different points in time, public policies and construction materials did not allow for buildings higher than about 16 meters. In other periods, experiments were attempted in which this limit was raised, and it was subsequently lowered again. The result: a sparse collection of tall buildings in a sea of four- or five-story buildings.

A collection of entries featuring Sparsity In Everything examples can be found here.

Photo credit: me; the photo was taken from one of the tallest buildings, the 210-meter-high Tour Montparnasse.

Saturday, April 18, 2009

CS: Compressive Sensing in the Dark ?

I was reading the summary of the following paper: The Nucleus Inside Out - Through a Rod Darkly by Tobias Ragoczy and Mark Groudine, whose abstract reads:

In the nuclei of eukaryotic cells, euchromatin is located at the center, whereas heterochromatin is found at the periphery and is interspersed in the nucleoplasm. Solovei et al. (2009) now reveal that this normal pattern is reversed in the retinal rod cells of mice. This inversion might serve to maximize light transmission to photoreceptors in nocturnal mammals.



In the paper (see the summary), the researchers did a computation showing how the location of the DNA acts as a focusing element, as in fiber optics (as shown in the figure above). What triggered my interest in this figure is the following: in the focusing case, an incoming pencil of light is transferred as two peaks, with the underlying assumption that two large peaks will trigger some type of clean response. In the case of mammals, the incoming pencil of light is spread out, with the underlying assumption that it will be drowned in noise and will not be detectable.

Here is my stupid question: you and I can see in the dark after some accommodation. Does this mean that our vision system switches to a compressive sensing system when in the dark? It would give a new meaning to the name of this blog!

Friday, April 17, 2009

CS: Herschel, Toeplitz Random Encoding for Reduced Acquisition Using Compressed Sensing

Following up on yesterday's entry (Godspeed Herschel), I went ahead and talked to the initiators of the CS scheme for this project on Herschel. Jean-Luc Starck mentioned to me that the initial study of the compressive sensing encoding was made with some idealized star fields, but that they now have access to a real sky simulator, which should improve their confidence in the scheme. Ultimately, the proof will be in the pudding with the real data from the spacecraft. Only then will we know whether the CS scheme was a good solution for transferring data to the ground. Data should stream from the spacecraft (with no encoding/compression) within the first three months after launch. A trial period will then take place to check the CS scheme. Eventually the hope is for the CS encoding scheme to be the preferred one for downloading data from the spacecraft. May 6th is our next date, and then they should go into this trial mode at the beginning of August.

In other news, Muthu's arms are up :-)



In a different area, I think this is the first time we have this sort of encoding in MRI:

Considerable attention has been paid to compressed sensing (CS) in the MRI community recently (1,2). CS theory allows exact recovery of a sparse signal from a highly incomplete set of samples (3,4), and thus has the potential for significant reduction in MRI scan time. While most existing work has focused on Fourier encoding, non-Fourier encoding has shown some promise (5,6). In this abstract, we design a pulse sequence to implement the Toeplitz random encoding method proposed earlier (6). The experimental results show that Toeplitz random encoding can be realized in practice as an alternative method for CS MRI.

The attendant poster is here.
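
For intuition, here is a minimal numpy/scipy sketch of what a Toeplitz random encoding looks like: one random generating sequence defines the matrix, every row is a shift of the same sequence (which is what makes the encoding realizable as a convolution, and hence as a pulse sequence), and only the first m rows are kept as the reduced acquisition. The sizes are illustrative, not from the abstract.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)
n, m = 256, 64   # object length and number of encoding steps (illustrative)

# a single random +/-1 sequence generates the whole encoding matrix
c = rng.choice([-1.0, 1.0], size=n)               # first column
r = rng.choice([-1.0, 1.0], size=n); r[0] = c[0]  # first row
Phi = toeplitz(c, r)[:m, :] / np.sqrt(m)          # keep m rows: the encoder

x = np.zeros(n); x[[10, 50, 130]] = [1.0, -0.5, 2.0]  # toy sparse object
y = Phi @ x   # the reduced set of encoded measurements fed to a CS solver
```

Because Phi is Toeplitz, applying it amounts to a truncated convolution, which is what lets a pulse sequence implement it; a fully random i.i.d. matrix has no such structure.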
