The Rice Compressed Sensing Repository has some new preprints/articles:
Sparse signal reconstruction from noisy compressive measurements using cross validation by Petros Boufounos, Marco Duarte, and Richard Baraniuk.
This paper sets up a cross-validation procedure that aims to remove the heuristic residual-based stopping criterion used in both Basis Pursuit and greedy algorithms. The abstract reads:
Compressive sensing is a new data acquisition technique that aims to measure sparse and compressible signals at close to their intrinsic information rate rather than their Nyquist rate. Recent results in compressive sensing show that a sparse or compressible signal can be reconstructed from very few incoherent measurements. Although the sampling and reconstruction process is robust to measurement noise, all current reconstruction methods assume some knowledge of the noise power or the acquired signal to noise ratio. This knowledge is necessary to set algorithmic parameters and stopping conditions. If these parameters are set incorrectly, then the reconstruction algorithms either do not fully reconstruct the acquired signal (underfitting) or try to explain a significant portion of the noise by distorting the reconstructed signal (overfitting). This paper explores this behavior and examines the use of cross validation to determine the stopping conditions for the optimization algorithms. We demonstrate that by designating a small set of measurements as a validation set it is possible to optimize these algorithms and reduce the reconstruction error. Furthermore we explore the trade-off between using the additional measurements for cross validation instead of reconstruction.
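The cross-validation idea in the abstract can be sketched numerically: hold out a few measurements, run a greedy solver, and keep the iterate with the smallest held-out error. This is only a minimal sketch under my own assumptions (sizes, noise level, a Gaussian matrix, and OMP as the greedy algorithm), not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (all sizes and the Gaussian matrix are illustrative
# assumptions): x0 is S-sparse in R^N, measured with additive noise.
N, M, S = 256, 96, 8
x0 = np.zeros(N)
x0[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x0 + 0.05 * rng.standard_normal(M)

# Designate a small subset of measurements as the validation set.
M_cv = 16
A_r, y_r = A[:-M_cv], y[:-M_cv]   # used by the reconstruction algorithm
A_v, y_v = A[-M_cv:], y[-M_cv:]   # held out for cross validation

def omp_iterates(A, y, n_iter):
    """Yield the OMP estimate after each greedy iteration."""
    support, r = [], y.copy()
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(A.T @ r)))   # most correlated atom
        if k not in support:
            support.append(k)
        x = np.zeros(A.shape[1])
        x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        r = y - A @ x
        yield x

# Instead of a heuristic residual threshold, stop at the iterate that
# minimizes the error on the held-out validation measurements.
best_x, best_err = np.zeros(N), np.inf
for x in omp_iterates(A_r, y_r, n_iter=3 * S):
    err = np.linalg.norm(y_v - A_v @ x)
    if err < best_err:
        best_x, best_err = x.copy(), err
```

Running past the true sparsity level lets the validation error reveal where overfitting to noise begins, which is exactly the trade-off the abstract discusses.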
Multiscale random projections for compressive classification by Marco Duarte, Mark Davenport, Michael Wakin, Jason Laska, Dharmpal Takhar, Kevin Kelly, and Richard Baraniuk
It looks to me like a very nice outgrowth of the smashed filter introductory paper mentioned here. In this paper, though, some thought is given to the fact that target identification must be done at different scales, because image appearance manifolds are not smooth manifolds, as initially shown by Donoho and Grimes. The authors use regularizing kernels of different supports to smooth out image manifolds, and then use nearest-neighbor approaches to compare unknown objects and poses against libraries of known objects and poses. No description of these kernels is given, though. The abstract reads:

We propose a framework for exploiting dimension-reducing random projections in detection and classification problems. Our approach is based on the generalized likelihood ratio test; in the case of image classification, it exploits the fact that a set of images of a fixed scene under varying articulation parameters forms a low-dimensional, nonlinear manifold. Exploiting recent results showing that random projections stably embed a smooth manifold in a lower-dimensional space, we develop the multiscale smashed filter as a compressive analog of the familiar matched filter classifier. In a practical target classification problem using a single-pixel camera that directly acquires compressive image projections, we achieve high classification rates using many fewer measurements than the dimensionality of the images.

Random projections for manifold learning [Technical Report] by Chinmay Hegde, Michael Wakin, and Richard Baraniuk.

This paper tries to answer a central problem in dimensionality reduction: what is the intrinsic dimension of a manifold, as estimated from random projections? This is a useful tool. The abstract reads:

We propose a novel method for linear dimensionality reduction of manifold-modeled data. First, we show that with a small number M of random projections of sample points in R^N belonging to an unknown K-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy. Second, we rigorously prove that using only this set of random projections, we can estimate the structure of the underlying manifold. In both cases, the number of random projections required is linear in K and logarithmic in N, meaning that K << N.
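The two claims above can be illustrated with a quick numerical sketch: project samples of a low-dimensional manifold with M ~ K log N random measurements, check that pairwise distances are roughly preserved, and estimate the intrinsic dimension from the projections alone. Everything here is an assumption of mine (a toy circle manifold, the constant in M, and a crude local-PCA estimator; the paper's estimator is different and comes with guarantees).

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

# Sample a K = 1 dimensional manifold (a closed curve) embedded in R^N.
N, n_pts, K = 200, 200, 1
t = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
frame, _ = np.linalg.qr(rng.standard_normal((N, 2)))  # orthonormal 2-frame
X = np.cos(t)[:, None] * frame[:, 0] + np.sin(t)[:, None] * frame[:, 1]

# Number of projections linear in K and logarithmic in N, as in the abstract.
M = int(10 * K * np.log(N))
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
Y = X @ Phi.T

# Stable embedding: pairwise distances between projected samples
# concentrate around the original distances (ratios near 1).
ratio = pdist(Y) / pdist(X)

# Crude intrinsic-dimension estimate from the projected data alone,
# via local PCA around one sample: count dominant singular values.
d = np.linalg.norm(Y - Y[0], axis=1)
nbrs = Y[np.argsort(d)[1:13]] - Y[0]     # 12 nearest neighbors
s = np.linalg.svd(nbrs, compute_uv=False)
id_est = int(np.sum(s > 0.3 * s[0]))     # expect about K = 1
```

The point of the experiment is that both the geometry check and the ID estimate use only the M-dimensional projections, never the original R^N coordinates.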
Multichannel image estimation via simultaneous orthogonal matching pursuit by Ray Maleh and Anna Gilbert.
The abstract reads:
In modern imaging systems, it is possible to collect information about an image on multiple channels. The simplest example is that of a color image which consists of three channels (i.e. red, green, and blue). However, there are more complicated situations such as those that arise in hyperspectral imaging. Furthermore, most of these images are sparse or highly compressible. We need not measure thoroughly on all the channels in order to reconstruct information about the image. As a result, there is a great need for efficient algorithms that can simultaneously process a few measurements on all channels. In this paper, we discuss how the Simultaneous Orthogonal Matching Pursuit (SOMP) algorithm can reconstruct multichannel images from partial Fourier measurements, while providing more robustness to noise than multiple passes of ordinary Orthogonal Matching Pursuit (OMP) on every channel. In addition, we discuss the use of SOMP in extracting edges from images that are sparse in the total variational sense and extend the ideas presented in this paper to outline how sparse-gradient multichannel images can be recovered by this powerful algorithm.

Sparse gradient image reconstruction done faster by Ray Maleh, Anna Gilbert, and Martin Strauss. The abstract reads:
In a wide variety of imaging applications (especially medical imaging), we obtain a partial set or subset of the Fourier transform of an image. From these Fourier measurements, we want to reconstruct the entire original image. Convex optimization is a powerful, recent solution to this problem. Unfortunately, convex optimization in its myriad of implementations is computationally expensive and may be impractical for large images or for multiple images. Furthermore, some of these techniques assume that the image has a sparse gradient (i.e., that the gradient of the image consists of a few nonzero pixel values) or that the gradient is highly compressible. In this paper, we demonstrate that we can recover such images with GRADIENTOMP, an efficient algorithm based upon Orthogonal Matching Pursuit (OMP), more effectively than with convex optimization. We compare both the qualitative and quantitative performance of this algorithm to the optimization techniques.
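The defining step of SOMP, which both abstracts build on, is to pick at each iteration the single atom whose correlations with the residual, summed across all channels, are largest, so that every channel shares one support. A minimal sketch, under my own assumptions (a small Gaussian matrix standing in for the papers' partial Fourier measurements, and illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(2)

# Three channels (think R, G, B) sharing one sparse support -- the joint
# structure SOMP exploits.
N, M, S, C = 128, 48, 6, 3
support = rng.choice(N, S, replace=False)
X0 = np.zeros((N, C))
X0[support] = rng.standard_normal((S, C))
A = rng.standard_normal((M, N)) / np.sqrt(M)
Y = A @ X0 + 0.01 * rng.standard_normal((M, C))

def somp(A, Y, n_iter):
    """Simultaneous OMP: one atom per iteration, chosen jointly."""
    sel, R = [], Y.copy()
    for _ in range(n_iter):
        # Sum correlation magnitudes across channels before selecting.
        scores = np.sum(np.abs(A.T @ R), axis=1)
        scores[sel] = 0.0                 # never pick an atom twice
        sel.append(int(np.argmax(scores)))
        coef = np.linalg.lstsq(A[:, sel], Y, rcond=None)[0]
        R = Y - A[:, sel] @ coef
    X = np.zeros((A.shape[1], Y.shape[1]))
    X[sel] = coef
    return X, sorted(sel)

X_hat, sel = somp(A, Y, S)
```

Averaging the selection statistic over channels is what gives SOMP its extra noise robustness compared with running plain OMP independently on each channel.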
Again, convexity is a good way of doing business because it currently comes with mathematically provable bounds. But there is also heuristic evidence that a non-convex formulation can provide faster solutions. I will have to come back to this at some point, because many methods seem to trade convexity for faster, non-convex, heuristically effective solution techniques.
Compressed sensing image reconstruction via recursive spatially adaptive filtering by
Karen Egiazarian, Alessandro Foi, and Vladimir Katkovnik, the abstract reads:
We introduce a new approach to image reconstruction from highly incomplete data. The available data are assumed to be a small collection of spectral coefficients of an arbitrary linear transform. This reconstruction problem is the subject of intensive study in the recent field of compressed sensing (also known as compressive sampling). Our approach is based on a quite specific recursive filtering procedure. At every iteration the algorithm is excited by injection of random noise in the unobserved portion of the spectrum and a spatially adaptive image denoising filter, working in the image domain, is exploited to attenuate the noise and reveal new features and details out of the incomplete and degraded observations. This recursive algorithm can be interpreted as a special type of the Robbins-Monro stochastic approximation procedure with regularization enabled by a spatially adaptive filter. Overall, we replace the conventional parametric modeling used in CS by a nonparametric one. We illustrate the effectiveness of the proposed approach for two important inverse problems from computerized tomography: Radon inversion from sparse projections and limited-angle tomography. In particular we show that the algorithm allows one to achieve exact reconstruction of synthetic phantom data even from a very small number of projections. The accuracy of our reconstruction is in line with the best results in the compressed sensing field.

In light of this entry, two other extended previews grabbed my interest:
- Mark A. Davenport, Chinmay Hegde, Michael B. Wakin and Richard G. Baraniuk, Manifold-Based Approaches for Improved Classification, To appear in NIPS Workshop on Topology Learning, December 2007.
- Chinmay Hegde, Mark A. Davenport, Michael B. Wakin and Richard G. Baraniuk, Efficient Machine Learning Using Random Projections, To appear in NIPS Workshop on Efficient Machine Learning, December 2007.
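Going back to the Egiazarian, Foi, and Katkovnik abstract: its excite-enforce-denoise loop can be sketched in 1-D. Note that everything below is my own illustrative stand-in — a sparse spike signal instead of a tomographic phantom, a plain soft threshold instead of their spatially adaptive filter, and an assumed decay schedule for the noise excitation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse 1-D "phantom" and a small random subset of its Fourier spectrum.
n, S = 256, 8
x_true = np.zeros(n)
x_true[rng.choice(n, S, replace=False)] = rng.choice([-1.0, 1.0], S)
X_true = np.fft.fft(x_true)
obs = np.zeros(n, dtype=bool)
obs[rng.choice(n, n // 4, replace=False)] = True   # observed coefficients

x = np.zeros(n)
for it in range(300):
    sigma = 2.0 * 0.97 ** it          # decaying excitation (Robbins-Monro)
    X = np.fft.fft(x)
    # Inject random noise into the unobserved portion of the spectrum ...
    noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    X[~obs] += sigma * noise[~obs]
    # ... enforce the observed spectral data ...
    X[obs] = X_true[obs]
    x = np.real(np.fft.ifft(X))
    # ... and attenuate the noise with a signal-domain denoiser
    # (soft threshold here; the paper uses a spatially adaptive filter).
    tau = 3.0 * sigma / np.sqrt(n) + 0.01
    x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```

With the decaying excitation the iteration settles down, and the denoiser is what carries the (nonparametric) signal model, which is the replacement for parametric CS modeling that the abstract emphasizes.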
Sparsity in Time-Frequency Representations by Götz Pfander and Holger Rauhut, the abstract reads:

We consider signals and operators in finite dimension which have sparse time-frequency representations. As main result we show that an S-sparse Gabor representation in C^n with respect to a random unimodular window can be recovered by Basis Pursuit with high probability provided that S ≤ Cn/log(n). Our results are applicable to the channel estimation problem in wireless communications and they establish the usefulness of a class of measurement matrices for compressive sensing.

Note on sparsity in signal recovery and in matrix identification by Götz Pfander, the abstract reads:
We describe a connection between the identification problem for matrices with sparse representations in given matrix dictionaries and the problem of sparse signal recovery. This allows the application of novel compressed sensing techniques to operator identification problems such as the channel measurement problem in communications engineering.
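Both abstracts ultimately rest on Basis Pursuit: recover a sparse vector by l1 minimization under linear constraints. A minimal real-valued sketch via the standard linear-programming reformulation, with a generic Gaussian matrix standing in for the structured Gabor / matrix-dictionary measurements of the papers (sizes are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

# Basis Pursuit:  min ||x||_1   subject to   A x = y.
n, m, S = 80, 40, 5
x0 = np.zeros(n)
x0[rng.choice(n, S, replace=False)] = rng.standard_normal(S)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0

# LP reformulation: write x = u - v with u, v >= 0 and minimize
# 1^T u + 1^T v subject to A u - A v = y.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_bp = res.x[:n] - res.x[n:]
```

For the operator-identification problem, the same recovery step applies once the action of the unknown matrix on a probing signal is rewritten as a linear map acting on its sparse coefficient vector.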
Credit Photo: NASA, the International Space Station as seen from the Shuttle (STS-120) shortly after 5:00 AM (CST) yesterday.