
Wednesday, October 26, 2011

Compressive Sensing Literature This Week

I knew that one of the side effects of Napoleon was the invention of photography, but I did not realize that another was the Fourier transform. Napoleon removed Fourier from office in March 1815, which led him to take a position back in Paris.

Before we get to the important stuff, Yaroslav Bulatov has an internship opening at Google on Machine Vision.


Today we have several papers and a Master's thesis:


Anti-sparse coding for approximate nearest neighbor search by Hervé Jégou, Teddy Furon, Jean-Jacques Fuchs. The abstract reads:
This paper proposes a binarization scheme for vectors of high dimension based on the recent concept of anti-sparse coding, and shows its excellent performance for approximate nearest neighbor search. Unlike other binarization schemes, this framework allows, up to a scaling factor, the explicit reconstruction from the binary representation of the original vector. The paper also shows that random projections, which are used in Locality Sensitive Hashing algorithms, are significantly outperformed by regular frames for both synthetic and real data if the number of bits exceeds the vector dimensionality, i.e., when high precision is required.

Hervé let me know that the package which includes the L_infinity solver of Jean-Jacques Fuchs is now online here. Thanks Hervé.
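Out of curiosity, I put together a small numerical sketch of the binarization idea: the L_infinity (anti-sparse) coding step is written as a tiny linear program, signs are taken, and the vector is rebuilt up to a scale factor. The frame, the sizes, and the LP formulation are my own illustrative choices, not the authors' package:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, m = 8, 24                       # vector dimension, number of frame vectors
A = rng.standard_normal((d, m))    # random frame (columns = frame vectors)
x = rng.standard_normal(d)

# Anti-sparse coding: y = argmin ||y||_inf  subject to  A y = x.
# LP variables [y, t]: minimize t subject to -t <= y_i <= t and A y = x.
c = np.r_[np.zeros(m), 1.0]
A_ub = np.block([[ np.eye(m), -np.ones((m, 1))],
                 [-np.eye(m), -np.ones((m, 1))]])
b_ub = np.zeros(2 * m)
A_eq = np.c_[A, np.zeros(d)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=x,
              bounds=[(None, None)] * m + [(0, None)])
y = res.x[:m]

# Anti-sparsity spreads the |y_i| toward ||y||_inf, so sign(y) keeps most of
# the information and x is recoverable up to a scale factor.
b = np.sign(y)
scale = (A @ b) @ x / np.linalg.norm(A @ b) ** 2   # best least-squares scale
x_hat = scale * (A @ b)
print("relative reconstruction error:",
      np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```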


To lower the sampling rate in a spread spectrum communication system using Direct Sequence Spread Spectrum (DSSS), we show that compressive signal processing can be applied to demodulate the received signal. This may lead to a decrease in power consumption in wireless receivers using spread spectrum technology. In classical compressive sensing, the receiver must mix the received signal with a pseudo-random noise signal. Our receiver structure does not require this, as we exploit the spreading of the signal already done in the transmitter. Our theoretical work is exemplified with a numerical experiment using the IEEE 802.15.4 standard and the 2.4 GHz band specification. The numerical results support our theoretical findings and indicate that compressive sensing may be used successfully in spread spectrum communication systems. The penalty is the noise folding that occurs.
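To make the receiver structure concrete, here is a toy sketch of demodulation by correlating in the measurement domain; the spreading code, the 4x undersampling, and the noise level are arbitrary choices of mine, much simpler than the IEEE 802.15.4 setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                               # chips per symbol
m = 16                               # compressive measurements (4x undersampling)
code = rng.choice([-1.0, 1.0], n)    # spreading (PN) sequence, known at the RX

bits = rng.choice([-1.0, 1.0], 8)    # BPSK symbols to transmit
Phi = rng.standard_normal((m, n))    # measurement matrix (random projections)
proj_code = Phi @ code               # precomputed compressed template

rx_bits = []
for b in bits:
    chips = b * code                                  # transmitted chips
    y = Phi @ (chips + 0.3 * rng.standard_normal(n))  # compressed noisy RX
    rx_bits.append(np.sign(proj_code @ y))            # correlate in measurement domain
print("bit errors:", int(np.sum(np.array(rx_bits) != bits)))
```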


In this paper, we address the problem of stabilization in continuous time linear dynamical systems using state feedback when compressive sampling techniques are used for state measurement and reconstruction. In [5], we introduced the concept of using the l1 reconstruction technique, commonly used in sparse data reconstruction, for state measurement and estimation in a discrete time linear system. In this work, we extend the previous scenario to analyze continuous time linear systems. We investigate the effect of switching within a set of sparsifiers, introduced in [5], on the stability of a linear plant in continuous time settings. Initially, we analyze the problem of stabilization in low dimensional systems, following which we generalize the results to address the problem of stabilization in systems of arbitrary dimensions.
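As a back-of-the-envelope illustration of the loop, here is a sketch in discrete time (the paper works in continuous time): a sparse state is measured compressively, reconstructed by l1 minimization, and fed back. The diagonal plant, which keeps the state sparse, and the deadbeat gain are my own simplifying assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, m, k = 20, 10, 2                      # state dim, measurements, sparsity
A = 1.2 * np.eye(n)                      # unstable diagonal plant (keeps x sparse)
Phi = rng.standard_normal((m, n))        # compressive state measurement matrix

def l1_reconstruct(y):
    """min ||z||_1 s.t. Phi z = y, via the standard LP split z = z+ - z-."""
    res = linprog(np.ones(2 * n), A_eq=np.c_[Phi, -Phi], b_eq=y)
    return res.x[:n] - res.x[n:]

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # sparse initial state
for step in range(5):
    x_hat = l1_reconstruct(Phi @ x)      # state estimate from m << n measurements
    u = -A @ x_hat                       # deadbeat feedback on the estimate
    x = A @ x + u                        # x_{k+1} = A (x - x_hat) -> 0 if exact
    print(step, np.linalg.norm(x))
```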

(1+eps)-approximate Sparse Recovery by Eric Price and David P. Woodruff. The abstract reads:
The problem central to sparse recovery and compressive sensing is that of stable sparse recovery: we want a distribution of matrices A in R^{m\times n} such that, for any x \in R^n and with probability at least 2/3 over A, there is an algorithm to recover x* from Ax with ||x* - x||_p \le C min_{k-sparse x'} ||x - x'||_p for some constant C > 1 and norm p. The measurement complexity of this problem is well understood for constant C > 1. However, in a variety of applications it is important to obtain C = 1 + eps for a small eps > 0, and this complexity is not well understood. We resolve the dependence on eps in the number of measurements required of a k-sparse recovery algorithm, up to polylogarithmic factors for the central cases of p = 1 and p = 2. Namely, we give new algorithms and lower bounds that show the number of measurements required is (1/eps^{p/2}) k polylog(n). For p = 2, our bound of (1/eps) k log(n/k) is tight up to constant factors. We also give matching bounds when the output is required to be k-sparse, in which case we achieve (1/eps^p) k polylog(n). This shows the distinction between the complexity of sparse and non-sparse outputs is fundamental.

On the Power of Adaptivity in Sparse Recovery by Piotr Indyk, Eric Price, David P. Woodruff. The abstract reads:
The goal of (stable) sparse recovery is to recover a $k$-sparse approximation $x^*$ of a vector $x$ from linear measurements of $x$. Specifically, the goal is to recover $x^*$ such that ||x-x^*||_p \le C min_{k-sparse x'} ||x-x'||_q for some constant $C$ and norm parameters $p$ and $q$. It is known that, for $p=q=1$ or $p=q=2$, this task can be accomplished using $m=O(k \log (n/k))$ non-adaptive measurements [CRT06] and that this bound is tight [DIPW10,FPRU10,PW11]. In this paper we show that if one is allowed to perform measurements that are adaptive, then the number of measurements can be considerably reduced. Specifically, for $C=1+eps$ and $p=q=2$ we show: - A scheme with $m=O((1/eps) k \log \log (n eps/k))$ measurements that uses $O(\log^* k \log \log (n eps/k))$ rounds. This is a significant improvement over the best possible non-adaptive bound. - A scheme with $m=O((1/eps) k \log (k/eps) + k \log (n/k))$ measurements that uses two rounds. This improves over the best possible non-adaptive bound. To the best of our knowledge, these are the first results of this type. As an independent application, we show how to solve the problem of finding a duplicate in a data stream of $n$ items drawn from ${1, 2, ..., n-1}$ using $O(\log n)$ bits of space and $O(\log \log n)$ passes, improving over the best possible space complexity achievable using a single pass.
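To get a feel for what "adaptive measurements" means, here is a toy of mine, much simpler than the schemes of the paper: for a noiseless 1-sparse vector, each measurement is designed from the outcome of the previous one, and bisection pins down the support in about log2(n) measurements. (The paper's gains are in the stable, noisy setting.)

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1 << 10                              # ambient dimension
x = np.zeros(n)
x[rng.integers(n)] = 5.0                 # noiseless 1-sparse vector

lo, hi, used = 0, n, 0
while hi - lo > 1:
    mid = (lo + hi) // 2
    a = np.zeros(n)
    a[lo:mid] = 1.0                      # measurement designed from past outcomes
    used += 1
    lo, hi = (lo, mid) if a @ x != 0 else (mid, hi)
print("support found at", lo, "with", used, "adaptive measurements")  # ~log2(n)
```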

In this paper, we propose a clipping noise cancellation scheme using compressed sensing (CS) for orthogonal frequency division multiplexing (OFDM) systems. In the proposed scheme, only the data tones with high reliability are exploited in reconstructing the clipping noise, instead of all of the data tones. For reconstructing the clipping noise using a fraction of the data tones at the receiver, the CS technique is applied. The proposed scheme is also applicable to interleaved orthogonal frequency division multiple access (OFDMA) systems due to the decomposition of the fast Fourier transform (FFT) structure. Numerical analysis shows that the proposed scheme performs well for clipping noise cancellation in both OFDM and OFDMA systems.
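Here is a rough sketch of the main idea as I understand it: the clipping noise is sparse in the time domain, so its frequency-domain footprint on a subset of tones is enough to reconstruct it. I cheat with genie-aided "reliable" tones and use OMP for the reconstruction; all the parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 128                                        # number of tones
X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)  # QPSK data on all tones
x = np.fft.ifft(X) * np.sqrt(N)                # time-domain OFDM symbol (unitary)
thr = 1.6 * np.sqrt(np.mean(np.abs(x)**2))     # clipping level
xc = np.where(np.abs(x) > thr, thr * x / np.abs(x), x)
c = xc - x                                     # clipping noise: sparse in time

Yc = np.fft.fft(xc) / np.sqrt(N)               # received tones = X + C
tones = rng.choice(N, 64, replace=False)       # "reliable" tones (genie-aided here)
resid = Yc[tones] - X[tones]                   # equals C on those tones
F = np.fft.fft(np.eye(N))[tones] / np.sqrt(N)  # partial DFT: time -> chosen tones

k, r, supp = 12, resid.copy(), []              # OMP for the main clipping events
for _ in range(k):
    supp.append(int(np.argmax(np.abs(F.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(F[:, supp], resid, rcond=None)
    r = resid - F[:, supp] @ coef
c_hat = np.zeros(N, complex)
c_hat[supp] = coef
Y_clean = Yc - np.fft.fft(c_hat) / np.sqrt(N)  # subtract reconstructed noise
print("clipping noise power before/after:",
      np.linalg.norm(Yc - X)**2, np.linalg.norm(Y_clean - X)**2)
```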

Turan's method and compressive sampling by Jean-Pierre Kahane. The abstract reads:
Turán's method, as expressed in his books, is a careful study of trigonometric polynomials from different points of view. The present article starts from a problem asked by Turán: how to construct a sequence of real numbers x(j) (j = 1, 2, ..., n) such that the almost periodic polynomial whose frequencies are the x(j) and whose coefficients are 1 is small (say, its absolute value is less than nd, for a given d) for all integral values of the variable m between 1 and M = M(n,d)? The best known answer is a random choice of the x(j) modulo 1. Using the random choice as Turán (and before him Erdős and Rényi) did, we improve the estimate of M(n,d) and we discuss an explicit construction derived from another chapter of Turán's book. The main part of the paper deals with the corresponding problem when R/Z is replaced by Z/NZ, N prime, and m takes all integral values modulo N except 0. Then it has an interpretation in signal theory, when a signal is represented by a function on the cyclic group G = Z/NZ and the frequencies by the dual cyclic group G^: knowing that the signal is carried by T points, to evaluate the probability that a random choice of a set W of frequencies allows one to recover the signal x from the restriction of its Fourier transform to W by the process of minimal extrapolation in the Wiener algebra of G^ (the process of Candès, Romberg and Tao). Some random choices were considered in the original article of CRT and the corresponding probabilities were estimated in two preceding papers of mine. Here we have another random choice, associated with occupancy problems.
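The recovery problem on G = Z/NZ has a short numerical translation. Below is a toy version of the minimal extrapolation process of Candès, Romberg and Tao, written as a real linear program; the values of N and T and the size of W are arbitrary choices of mine:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
N, T = 101, 4                                  # N prime, signal carried by T points
x = np.zeros(N)
x[rng.choice(N, T, replace=False)] = rng.standard_normal(T)
W = rng.choice(N, 40, replace=False)           # random set of frequencies
F = np.fft.fft(np.eye(N))[W]                   # Fourier transform restricted to W
A = np.vstack([F.real, F.imag])                # real form of the constraint F x = y
b = np.r_[(F @ x).real, (F @ x).imag]

# Minimal extrapolation: min ||z||_1 s.t. A z = b, via the split z = z+ - z-.
res = linprog(np.ones(2 * N), A_eq=np.c_[A, -A], b_eq=b)
x_hat = res.x[:N] - res.x[N:]
print("max recovery error:", np.abs(x - x_hat).max())
```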

Recovering a Clipped Signal in Sparseland by Alejandro Weinstein, Michael Wakin. The abstract reads:
In many data acquisition systems it is common to observe signals whose amplitudes have been clipped. We present two new algorithms for recovering a clipped signal by leveraging the model assumption that the underlying signal is sparse in the frequency domain. Both algorithms employ ideas commonly used in the field of Compressive Sensing; the first is a modified version of Reweighted $\ell_1$ minimization, and the second is a modification of a simple greedy algorithm known as Trivial Pursuit. An empirical investigation shows that both approaches can recover signals with significant levels of clipping.
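Here is a small toy in the spirit of the paper (my own, not the authors' algorithms): keep the samples that survived clipping, estimate the sparse frequency support in one greedy shot, and least-squares fit; in easy cases this already fills the clipped peaks back in:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 256, 3
t = np.arange(n)
freqs = rng.choice(np.arange(1, n // 2), k, replace=False)
x = sum(np.cos(2 * np.pi * f * t / n + rng.uniform(0, 2 * np.pi)) for f in freqs)

thr = 0.6 * np.abs(x).max()                    # clip at 60% of the peak
y = np.clip(x, -thr, thr)                      # acquired, clipped signal
keep = np.abs(y) < thr                         # samples that survived clipping

grid = np.arange(1, n // 2)                    # candidate frequencies
C = np.cos(2 * np.pi * np.outer(t, grid) / n)  # cosine atoms
S = np.sin(2 * np.pi * np.outer(t, grid) / n)  # sine atoms
energy = (C[keep].T @ y[keep])**2 + (S[keep].T @ y[keep])**2
top = np.argsort(energy)[-k:]                  # one-shot support estimate

A = np.hstack([C[keep][:, top], S[keep][:, top]])
coef, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
x_hat = np.hstack([C[:, top], S[:, top]]) @ coef
print("max error on the clipped samples:", np.abs(x - x_hat)[~keep].max())
```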
Computational electromagnetic problems are becoming exceedingly complex, and traditional computation methods are simply no longer good enough for our technologically advancing world. Compressive sensing theory states that signals, such as those used in computational electromagnetic problems, have a property known as sparseness. It has been proven that, through undersampling, computation runtimes can be substantially decreased while maintaining sufficient accuracy. Lawrence Carin and his team of researchers at Duke University developed an in situ compressive sensing algorithm specifically tailored for computational electromagnetic applications. This algorithm is known as the discrete cosine transform (DCT) method. Using the DCT algorithm, I have developed a compressive sensing software implementation. Through the course of my research I have tested both the accuracy and runtime efficiency of this software implementation, demonstrating its potential for use within the electromagnetic modeling industry. This implementation, once integrated with a commercial electromagnetics solver, reduced the computation cost of a single simulation from seven days to eight hours. Compressive sensing is highly applicable to practical problems, and my research highlights both its benefits and areas for improvement and future research.
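The following is only a generic toy of the underlying principle, not the in situ DCT algorithm of the thesis: a field that is sparse in the DCT domain is reconstructed from a random subset of point samples with OMP:

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(7)
n, k, m = 200, 5, 60                           # grid size, sparsity, samples
coeffs = np.zeros(n)
coeffs[rng.choice(40, k, replace=False)] = rng.standard_normal(k)
field = idct(coeffs, norm='ortho')             # smooth field, sparse in DCT

rows = rng.choice(n, m, replace=False)         # measured sample locations
Psi = idct(np.eye(n), axis=0, norm='ortho')    # DCT synthesis basis
A, y = Psi[rows], field[rows]

r, supp = y.copy(), []
for _ in range(k):                             # OMP in the DCT domain
    supp.append(int(np.argmax(np.abs(A.T @ r))))
    c, *_ = np.linalg.lstsq(A[:, supp], y, rcond=None)
    r = y - A[:, supp] @ c
rec = np.zeros(n)
rec[supp] = c
print("relative error:",
      np.linalg.norm(idct(rec, norm='ortho') - field) / np.linalg.norm(field))
```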


Sparse coding, which is the decomposition of a vector using only a few basis elements, is widely used in machine learning and image processing. The basis set, also called a dictionary, is learned to adapt to specific data. This approach has proven to be very effective in many image processing tasks. Traditionally, the dictionary is an unstructured "flat" set of atoms. In this paper, we study structured dictionaries which are obtained from an epitome, or a set of epitomes. The epitome is itself a small image, and the atoms are all the patches of a chosen size inside this image. This considerably reduces the number of parameters to learn and provides sparse image decompositions with shift-invariance properties. We propose a new formulation and an algorithm for learning the structured dictionaries associated with epitomes, and illustrate their use in image denoising tasks.
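Here is a small sketch of the parameter saving that epitomes buy (with a random epitome rather than a learned one, and arbitrary sizes): every patch of the epitome is an atom, so the dictionary costs far fewer parameters than a flat one of the same size:

```python
import numpy as np

rng = np.random.default_rng(8)
E, p = 12, 6                                   # epitome size, patch size
epitome = rng.standard_normal((E, E))          # would be learned in practice

atoms = np.stack([epitome[i:i+p, j:j+p].ravel()
                  for i in range(E - p + 1) for j in range(E - p + 1)], axis=1)
atoms /= np.linalg.norm(atoms, axis=0)         # (E-p+1)^2 unit-norm atoms
print("epitome parameters:", E * E, "| atoms:", atoms.shape[1],
      "| flat dictionary would need:", atoms.shape[1] * p * p)

patch = rng.standard_normal(p * p)             # sparse-code a patch with OMP
r, supp = patch.copy(), []
for _ in range(4):
    supp.append(int(np.argmax(np.abs(atoms.T @ r))))
    c, *_ = np.linalg.lstsq(atoms[:, supp], patch, rcond=None)
    r = patch - atoms[:, supp] @ c
print("residual after 4 atoms:", np.linalg.norm(r) / np.linalg.norm(patch))
```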

Analyse et synthèse harmoniques (Harmonic analysis and synthesis) by Jean-Pierre Kahane. The abstract reads:
This is the material for two lectures given at the École Polytechnique in May 2011 for the math teachers of "classes préparatoires" (parallel to the undergraduate classes in universities). The introduction is a personal overview of Fourier analysis, its history, the terms in use, and a few references. The first part is also an overview, but on a restricted subject: the link between statistics and Fourier, from Legendre and the role of l^2 to Donoho and the role of l^1. The third part starts from a theorem of Candès, Romberg and Tao on compressive sampling and applies the basic idea to a quite different question: the reconstruction of a function whose unknown spectrum has large gaps from its restriction to an interval. The description and extension of the method of Candès, Romberg and Tao in Appendix 2 was developed later in a note in the Comptes Rendus and an article submitted to the Annales de l'Institut Fourier.



1 comment:

    Dick Gordon said...

    I replied to the Google job posting http://yaroslavvb.blogspot.com/2011/10/google-internship-in-visionml.html as follows:
    Seems a backwards approach to me: you are ignoring context. Even with the example of Chinese 1s, the observer picks up a few obvious cases and then can conclude that the rest are variants. Context means, for example, that in a string of characters likely to be a word, when non-letters are embedded in the string, they usually need to be replaced by letters that make the string make sense as a word. The sense comes from a dictionary at one level, and from the sentence the word is in at the next level making grammatical sense.

    There is also the challenge, which I heard a librarian express as a dream, of 3D imaging of books at sufficient resolution and contrast to machine read them without opening them and turning the pages. Soft x-ray microbeams (for scatter reduction) might work here, using computed tomography combined with compressive sensing, especially given the binary nature of most text (black ink on white paper). This would speed scanning of books by orders of magnitude, and eliminate the harm to old, brittle books.
    Yours, -Dick Gordon gordonr@cc.umanitoba.ca
