
Tuesday, August 05, 2008

CS: DNA Microarrays, Random Convolution, Counting Faces, Background Subtraction, How to not do RIP, Remote Sensing, a code and a video

In the past fifteen days, we discovered water and perchlorate salts on Mars, and the following papers have appeared on the net and on the Rice Compressive Sensing page. I am listing the ones I have not fully covered before:

  • Wei Dai, Mona Sheikh, Olgica Milenkovic, and Richard Baraniuk, Compressive sensing DNA microarrays. The abstract reads:
    Compressive Sensing Microarrays (CSM) are DNA-based sensors that operate using group testing and compressive sensing (CS) principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM each sensor responds to a set of targets. We study the problem of designing CSMs that simultaneously account for both the constraints from compressive sensing theory and the biochemistry of probe-target DNA hybridization. An appropriate cross-hybridization model is proposed for CSMs, and several methods are developed for probe design and CS signal recovery based on the new model. Our lab experiments suggest that, in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications for which only short hybridization times are allowed.
  • Here is an important paper; I'll come back to it later (a small numerical sketch of the idea appears after this list). Volkan Cevher, Aswin Sankaranarayanan, Marco Duarte, Dikpal Reddy, Richard Baraniuk, and Rama Chellappa, Compressive sensing for background subtraction. The abstract reads:
    Compressive sensing (CS) is an emerging field that provides a framework for image recovery using sub-Nyquist sampling rates. The CS theory shows that a signal can be reconstructed from a small set of random projections, provided that the signal is sparse in some basis, e.g., wavelets. In this paper, we describe a method to directly recover background subtracted images using CS and discuss its applications in some communication constrained multi-camera computer vision problems. We show how to apply the CS theory to recover object silhouettes (binary background subtracted images) when the objects of interest occupy a small portion of the camera view, i.e., when they are sparse in the spatial domain. We cast the background subtraction as a sparse approximation problem and provide different solutions based on convex optimization and total variation. In our method, as opposed to learning the background, we learn and adapt a low dimensional compressed representation of it, which is sufficient to determine spatial innovations; object silhouettes are then estimated directly using the compressive samples without any auxiliary image reconstruction. We also discuss simultaneous appearance recovery of the objects using compressive measurements. In this case, we show that it may be necessary to reconstruct one auxiliary image. To demonstrate the performance of the proposed algorithm, we provide results on data captured using a compressive single-pixel camera. We also illustrate that our approach is suitable for image coding in communication constrained problems by using data captured by multiple conventional cameras to provide 2D tracking and 3D shape reconstruction results with compressive measurements.
  • Marco Duarte, Shriram Sarvotham, Dror Baron, Michael Wakin, and Richard Baraniuk, Performance limits for jointly sparse signals via graphical models. I covered it here before but not the technical report.
  • Yonina Eldar and Moshe Mishali, Robust recovery of signals from a union of subspaces (a toy mixed ℓ2/ℓ1 recovery example appears after this list). The abstract reads:
    Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees a unique signal consistent with the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x is assumed to lie in a union of subspaces. An example is the case in which x is a finite length vector that is sparse in a given basis. In this paper we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a finite union of finite dimensional spaces and the samples are modelled as inner products with an arbitrary set of sampling functions. We first develop conditions under which unique and stable recovery of x is possible, albeit with algorithms that have combinatorial complexity. To derive an efficient and robust recovery algorithm, we then show that our problem can be formulated as that of recovering a block sparse vector, namely a vector whose non-zero elements appear in fixed blocks. To solve this problem, we suggest minimizing a mixed ℓ2/ℓ1 norm subject to the measurement equations. We then develop equivalence conditions under which the proposed convex algorithm is guaranteed to recover the original signal. These results rely on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. A special case of the proposed framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Specializing our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
  • Yin Zhang, On theory of compressive sensing via ℓ1-minimization: Simple derivations and extensions. As mentioned before here.
    Compressive (or compressed) sensing (CS) is an emerging methodology in computational signal processing that has recently attracted intensive research activities. At present, the basic CS theory includes recoverability and stability: the former quantifies the central fact that a sparse signal of length n can be exactly recovered from much less than n measurements via L1-minimization or other recovery techniques, while the latter specifies how stable a recovery technique is in the presence of measurement errors and inexact sparsity. So far, most analyses in CS rely heavily on a matrix property called Restricted Isometry Property (RIP). In this paper, we present an alternative, non-RIP analysis for CS via L1-minimization. Our purpose is three-fold: (a) to introduce an elementary treatment of the CS theory free of RIP and easily accessible to a broad audience; (b) to extend the current recoverability and stability results so that prior knowledge can be utilized to enhance recovery via L1-minimization; and (c) to substantiate a property called uniform recoverability of L1-minimization; that is, for almost all random measurement matrices recoverability is asymptotically identical. With the aid of two classic results, the non-RIP approach enables us to derive from scratch all basic results for the extended theory with short and simple derivations.
  • Jianwei Ma and Francois-Xavier Le Dimet, Deblurring from highly incomplete measurements for remote sensing. The abstract reads:
    When we take photos, we often get blurred pictures because of hand shake, motion, insufficient light, an unsuited focal length, or other disturbances. Recently, a compressed-sensing (CS) theorem which provides a new sampling theory for data acquisition has been applied to medical and astronomic imaging. CS makes it possible to take super-resolution photos using only one or a few pixels, rather than the millions of pixels of a conventional digital camera. Here we further consider a so-called compressed-sensing deblurring problem: can we still obtain clear pictures from highly incomplete measurements when blurring disturbances occur? A decoding algorithm based on the Poisson singular integral and iterative curvelet thresholding is proposed to solve the deblurring problem from surprisingly incomplete measurements. It permits the design of robust and practical compressed-imaging instruments involving less imaging time, less storage space, less power consumption, smaller size, and lower cost than currently used CCD (charge-coupled device) cameras, which matches effective needs, especially for probes sent very far away. It essentially shifts the on-board imaging cost to off-line recovery computational cost. Potential applications in aerospace remote sensing of the Chinese Chang'e-1 lunar probe are presented.
  • S. Dekel, Adaptive compressed image sensing based on wavelet-trees. The abstract reads:
    We present an architecture for an image acquisition process that makes it possible to acquire and compress high-resolution visual data, without fully sampling the entire data at its highest resolution, using significantly fewer measurements. In some cases, our approach simplifies and improves upon the existing methodology of the new and emerging field of compressed sensing, by replacing the ‘universal’ acquisition of pseudo-random measurements with a direct and fast method of adaptive wavelet coefficient acquisition. The main advantages of this direct approach are that the decoding algorithm is significantly faster and that it allows more control over the compressed image quality, in particular, the sharpness of edges.
  • David Donoho and Jared Tanner, Counting the faces of randomly-projected hypercubes and orthants, with applications (a tiny nonnegative-recovery experiment appears after this list). The abstract reads:
    Let A be an n by N real-valued random matrix, and let H^N denote the N-dimensional hypercube. For numerous random matrix ensembles, the expected number of k-dimensional faces of the random n-dimensional zonotope AH^N obeys the formula E f_k(AH^N)/f_k(H^N) = 1 − P_{N−n,N−k}, where P_{N−n,N−k} is a fair-coin-tossing probability: P_{N−n,N−k} = Prob{N − k − 1 or fewer successes in N − n − 1 tosses}. The formula applies, for example, where the columns of A are drawn i.i.d. from an absolutely continuous symmetric distribution. The formula exploits Wendel's Theorem [19]. Let R^N_+ denote the positive orthant; the expected number of k-faces of the random cone AR^N_+ obeys E f_k(AR^N_+)/f_k(R^N_+) = 1 − P_{N−n,N−k}. The formula applies to numerous matrix ensembles, including those with i.i.d. random columns from an absolutely continuous, centrally symmetric distribution. The probabilities P_{N−n,N−k} change rapidly from nearly 0 to nearly 1 near k ≈ 2n − N. Consequently, there is an asymptotically sharp threshold in the behavior of face counts of the projected hypercube; thresholds known for projecting the simplex and the cross-polytope occur at very different locations. We briefly consider face counts of the projected orthant when A does not have mean zero; these do behave similarly to those for the projected simplex. We consider non-random projectors of the orthant; the ‘best possible’ A is the one associated with the first n rows of the Fourier matrix. These geometric face-counting results have implications for signal processing, information theory, inverse problems, and optimization. Most of these flow in some way from the fact that face counting is related to conditions for uniqueness of solutions of underdetermined systems of linear equations. a) A vector in R^N_+ is called k-sparse if it has at most k nonzeros. For such a k-sparse vector x_0, let b = Ax_0, where A is a random matrix ensemble covered by our results. With probability 1 − P_{N−n,N−k} the inequality-constrained system Ax = b, x ≥ 0 has x_0 as its unique nonnegative solution. This is so even if n < N, so that the system Ax = b is underdetermined.
    b) A vector in the hypercube H^N will be called k-simple if all entries except at most k are at the bounds 0 or 1. For such a k-simple vector x_0, let b = Ax_0, where A is a random matrix ensemble covered by our results. With probability 1 − P_{N−n,N−k} the inequality-constrained system Ax = b, x ∈ H^N has x_0 as its unique solution in the hypercube.
  • Justin Romberg, Compressive sensing by random convolution (a toy numerical illustration of this sensing scheme appears after this list). The abstract reads:
    This paper outlines a new framework for compressive sensing: convolution with a random waveform followed by random time-domain subsampling. We show that sensing by random convolution is a universally efficient data acquisition strategy in that an n-dimensional signal which is S-sparse in any fixed representation can be recovered from m ≳ S log n measurements. We discuss two imaging scenarios (radar and Fourier optics) where convolution with a random pulse allows us to seemingly super-resolve fine-scale features, allowing us to recover high-resolution signals from low-resolution measurements.
  • Justin Romberg, Imaging via compressive sampling. (IEEE Signal Processing Magazine, 25(2), pp. 14 - 20, March 2008)
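
A few toy numerical sketches of the ideas above, in Python. First, the background-subtraction idea from the Cevher et al. paper: the difference between the compressive measurements of a frame and of the background is itself a compressive measurement of the (sparse) silhouette, so the silhouette can be recovered without reconstructing either full image. This is only a minimal sketch with a dense Gaussian matrix and a plain iterative soft-thresholding (ISTA) solver, not the convex-optimization/total-variation machinery of the paper; all sizes and parameters below are made up for illustration.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||A x - y||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x) / L             # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
n, m = 1024, 200                                  # n "pixels", m compressive measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

background = rng.random(n)                        # static background scene
frame = background.copy()
support = np.sort(rng.choice(n, size=10, replace=False))
frame[support] += 1.0                             # small, spatially sparse "object"

y_bg = Phi @ background                           # compressive measurement of the background
y_fr = Phi @ frame                                # compressive measurement of the current frame

# The measurement difference is a compressive measurement of the sparse silhouette,
# so the silhouette is recovered directly, without reconstructing either full image.
silhouette = ista(Phi, y_fr - y_bg)
print(np.flatnonzero(np.abs(silhouette) > 0.3))   # should equal `support`
```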
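
Second, the mixed ℓ2/ℓ1 recovery advocated by Eldar and Mishali for block-sparse signals: minimize the sum of the ℓ2 norms of the blocks subject to the measurement equations. The sketch below uses CVXPY on a made-up instance; it illustrates the convex program itself, not the equivalence conditions of the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m, blk = 120, 60, 4                         # 30 blocks of length 4, 60 measurements
blocks = [slice(i, i + blk) for i in range(0, n, blk)]

x0 = np.zeros(n)
for b in rng.choice(len(blocks), size=3, replace=False):
    x0[blocks[b]] = rng.standard_normal(blk)   # 3 active blocks

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0

x = cp.Variable(n)
mixed_norm = sum(cp.norm(x[b], 2) for b in blocks)   # l2 within blocks, l1 across blocks
cp.Problem(cp.Minimize(mixed_norm), [A @ x == y]).solve()
print(np.linalg.norm(x.value - x0))            # near zero when recovery succeeds
```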
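
Third, claim (a) of the Donoho-Tanner abstract: below the face-counting threshold, a k-sparse nonnegative vector is, with high probability, the unique point of {x : Ax = b, x ≥ 0}, so any linear program that returns a feasible point of that set recovers it. The sizes below are arbitrary but comfortably inside the recovery region.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, n, k = 200, 100, 20                         # ambient dimension, measurements, sparsity
A = rng.standard_normal((n, N))

x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.random(k) + 0.1   # k-sparse, nonnegative
b = A @ x0

# Any LP over {x : Ax = b, x >= 0} returns a feasible point; when x0 is the unique
# nonnegative solution (the regime described in the abstract), that point must be x0.
res = linprog(c=np.ones(N), A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print(np.linalg.norm(res.x - x0))              # ~0 in the uniqueness regime
```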
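
Finally, a toy version of Romberg's random convolution sensing: circular convolution with a random pulse (implemented in the Fourier domain), followed by random time-domain subsampling, then ℓ1 recovery. The pulse construction below (unit-magnitude, random-phase spectrum) follows the spirit of the paper but should be read as an illustrative assumption rather than the exact recipe.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, m, S = 256, 80, 8                           # n must be even for the symmetry step below

# Random pulse with a unit-magnitude, random-phase spectrum, made conjugate-symmetric
# so that the pulse (and hence the convolution output) is real-valued.
H = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
H[0] = 1.0
H[n // 2] = 1.0
H[n // 2 + 1:] = np.conj(H[1:n // 2][::-1])

def random_convolution(x):
    """Circular convolution with the random pulse, computed in the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(x) * H))

# Sensing operator: random convolution followed by keeping m random time samples.
keep = rng.choice(n, size=m, replace=False)
Phi = np.column_stack([random_convolution(e) for e in np.eye(n)])[keep, :]

x0 = np.zeros(n)
x0[rng.choice(n, size=S, replace=False)] = rng.standard_normal(S)
y = Phi @ x0

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [Phi @ x == y]).solve()
print(np.linalg.norm(x.value - x0))            # typically near zero for m ~ S log n
```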

Also I found this on the internets:

Simon Foucart has updated his code webpage to include the code implementing the algorithm in Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1 by himself and Ming-Jun Lai. The code is here. It is also listed in the Big Picture reconstruction section now.
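For readers who want a feel for what ℓq-minimization with 0 < q ≤ 1 does before downloading the code, here is a rough, generic iteratively reweighted ℓ1 sketch in Python/CVXPY. It is not Foucart and Lai's algorithm (their posted code is the reference implementation); the helper name lq_reweighted_l1 and all parameters below are made up, and the weights (|x_i| + ε)^(q−1) simply mimic the gradient of Σ|x_i|^q so that small entries get penalized more heavily.

```python
import numpy as np
import cvxpy as cp

def lq_reweighted_l1(A, y, q=0.5, eps=0.1, n_outer=8):
    """Heuristic lq-minimization (0 < q <= 1) by iteratively reweighted l1.

    Each outer iteration solves a weighted basis-pursuit problem; the weights
    (|x_i| + eps)^(q - 1) penalize small entries more and drive them to zero.
    """
    n = A.shape[1]
    w = np.ones(n)
    x_val = np.zeros(n)
    for _ in range(n_outer):
        x = cp.Variable(n)
        cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == y]).solve()
        x_val = x.value
        w = (np.abs(x_val) + eps) ** (q - 1.0)
    return x_val

# Toy usage: recover a 10-sparse vector from 50 random measurements in dimension 200.
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 200))
x0 = np.zeros(200)
x0[rng.choice(200, size=10, replace=False)] = rng.standard_normal(10)
x_hat = lq_reweighted_l1(A, A @ x0)
print(np.linalg.norm(x_hat - x0))              # typically near zero here
```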

There was a discussion on RIP a while back on this blog, and Alexandre d'Aspremont promised some results along those lines. It looks like this is the beginning of that process in Testing the Nullspace Property using Semidefinite Programming by Alexandre d'Aspremont and Laurent El Ghaoui. The abstract reads:

Recent results in compressed sensing show that, under certain conditions, the sparsest solution to an underdetermined set of linear equations can be recovered by solving a linear program. These results rely on nullspace properties of the system matrix. So far, no tractable algorithm is known to test these conditions and most current results rely on asymptotic properties of sparse eigenvalues of random matrices. Given a matrix A, we use semidefinite relaxation techniques to test the nullspace property on A and show on some numerical examples that these relaxation bounds can prove perfect recovery of sparse solutions with relatively high cardinality.
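The semidefinite relaxation itself is beyond a blog snippet, but the quantity being tested is easy to state: if every nullspace vector z of A satisfies ||z_S||_1 < ||z||_1 / 2 for every support S of size k, then ℓ1-minimization recovers every k-sparse vector exactly. A crude (and much weaker) computable certificate bounds ||z_S||_1 by k·α_1·||z||_1, where α_1 = max{||z||_∞ : Az = 0, ||z||_1 ≤ 1} can be computed with one linear program per coordinate. The sketch below does exactly that; it is emphatically not the SDP of d'Aspremont and El Ghaoui, just a minimal, hedged illustration of what testing a nullspace-type property can look like, with an arbitrary random matrix.

```python
import numpy as np
from scipy.optimize import linprog

def alpha_1(A):
    """max ||z||_inf over {z : A z = 0, ||z||_1 <= 1}, via one LP per coordinate.

    Write z = u - v with u, v >= 0; then sum(u + v) <= 1 enforces ||z||_1 <= 1,
    and maximizing z_i over this symmetric set also yields max |z_i|.
    """
    m, n = A.shape
    A_eq = np.hstack([A, -A])                  # A (u - v) = 0
    b_eq = np.zeros(m)
    A_ub = np.ones((1, 2 * n))                 # sum(u) + sum(v) <= 1
    b_ub = np.ones(1)
    best = 0.0
    for i in range(n):
        c = np.zeros(2 * n)
        c[i], c[n + i] = -1.0, 1.0             # minimize -(u_i - v_i) = -z_i
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=(0, None), method="highs")
        best = max(best, -res.fun)
    return best

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 80))
a1 = alpha_1(A)
# Sufficient condition: if k * alpha_1 < 1/2, the nullspace property of order k holds,
# so every k-sparse vector is recovered exactly by l1-minimization.
k_cert = int((0.5 - 1e-9) // a1)
print(f"alpha_1 = {a1:.3f}, certified sparsity k = {k_cert}")
```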

Finally, a video made at ITA 08, "A Short Introduction to Compressed Sensing" by Emmanuel Candès, can be watched at scivee.tv.

Photo credit: NASA/JPL-Caltech/University of Arizona/Texas A&M University
