Thursday, January 24, 2008

Compressed Sensing: Links, ITA'08 at UCSD and EUSIPCO 2008 in Lausanne

One reader told me how difficult it was to find this blog when searching on the subject of compressed sensing. I checked with Google and other sites and found that there is indeed a small issue. It looks as though there is much spam on subjects that are very new, and, more specifically, that Google is unable to index the subject through the labels provided by the Blogger/Google system. Some of the difficulty may come from the fact that people tend to link to a specific entry rather than to the full list of entries. I think one way to solve this problem is for anyone who wants to link to this blog on the subject of Compressed Sensing, Compressive Sampling, or Compressive Sensing to use the address below:


I also use
http://nuit-blanche.blogspot.com/search/label/compressed%20sensing

so there is no need to change anything on the websites that already have that address. These addresses have the distinct appeal of removing all my other ramblings not related to the subject :-)

Jort mentions that the 2008 European Signal Processing Conference (EUSIPCO-2008), taking place in Lausanne, Switzerland, will feature a special session on Sparsity in Signal Processing organized by Holger Rauhut, Georg Tauboeck, and Franz Hlawatsch. The submission deadline is February 8th; the conference takes place August 25th-29th.

Another reader pointed out that several talks will be given at Information Theory and Applications '08 at UCSD that we have not yet seen in the Rice Repository. I am going to list the abstracts; I would be very interested in seeing preprints whenever they become available (there is also a post-workshop course taught by some of the leading people in the field).
Compressed sensing and linear codes over real numbers by Fan Zhang and Henry Pfister (Henry, I understand now why the preprint won't be available before the end of the month :-)).

Compressed sensing (CS) is a new area of signal processing and statistics that focuses on signal reconstruction from a small number of linear (e.g., dot product) measurements. In this paper, we analyze CS using tools from coding theory because CS can also be viewed as a syndrome-based source coding of sparse vectors using linear codes over real numbers. While coding theory does not typically deal with codes over real numbers, many results can be modified to apply in this case. In particular, there is a very close relationship between CS and error-correcting codes over large discrete alphabets.

Exploiting recent advances in capacity-approaching codes, we propose new approaches for CS of exactly sparse vectors. These methods are based on irregular low-density parity-check codes and result in provable oversampling thresholds with reconstruction complexities that are linear in the signal dimension $n$. Reconstruction succeeds with high probability (as $n\rightarrow\infty$) using a random signal and measurement model. We also discuss extensions to deterministic signal models and "approximately sparse" signals.
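
To make the syndrome view concrete, here is a minimal numpy sketch (my own illustration, not the authors' code): the measurements of an exactly sparse vector are precisely its syndrome under a sparse, LDPC-like parity-check matrix with real-valued entries.

```python
import numpy as np

# CS as syndrome-based source coding (a sketch; all parameters are mine):
# the measurement vector is the syndrome of a sparse vector under a
# sparse, LDPC-like real-valued parity-check matrix H.
rng = np.random.default_rng(0)
n, m, row_weight = 200, 60, 6   # signal length, measurements, nonzeros per row

# Sparse H: each row checks a few randomly chosen coordinates.
H = np.zeros((m, n))
for i in range(m):
    cols = rng.choice(n, size=row_weight, replace=False)
    H[i, cols] = rng.standard_normal(row_weight)

# Exactly sparse signal (the "error pattern" in coding language).
x = np.zeros(n)
support = rng.choice(n, size=5, replace=False)
x[support] = rng.standard_normal(5)

s = H @ x  # the syndrome = the compressed measurements
print("syndrome length:", s.shape[0], "vs signal length:", n)
# Recovering x from s is exactly syndrome decoding over the reals.
```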


Algorithms and Bounds for Sensing Capacity and Compressed Sensing by M. Zhao, S. Aeron, V. Saligrama

We consider the problem of recovering sparse phenomena from projections of noisy data, a topic of interest in compressed sensing. We consider a notion of Sensing Capacity, defined as the supremum of the number of signal dimensions that can be identified per projection. Sensing Capacity is a function of sensing diversity, SNR, and the sensed environment (sparsity), as well as of the desired distortion up to which the sensed phenomena must be reconstructed. In this paper we focus on the role of SNR in sensing capacity. First, we prove algorithm-independent results, i.e., we find necessary and sufficient conditions on the number of projections required for reconstruction to a given fidelity using information-theoretic inequalities. Next, we present linear programming (LP) methods for recovering sparse signals in the presence of noise. The proposed algorithm does not require tuning parameters, in contrast to other algorithms such as the Lasso, which use tuning as a means of incorporating prior knowledge of noise statistics. We show that $SNR = O(\log n)$ is a necessary and sufficient condition for perfect recovery of discrete signals and of the exact support of continuous signals.
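
The abstract does not spell out the authors' exact LP formulation; as a hedged illustration of LP-based sparse recovery in the same spirit, here is a standard basis-pursuit sketch, with all parameter choices mine.

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit as a linear program (a standard formulation, not
# necessarily the authors'): minimize ||x||_1 subject to A x = y,
# written as an LP by splitting x = u - v with u, v >= 0.
rng = np.random.default_rng(1)
m, n, k = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])          # constraint: A u - A v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```
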
Integer compressed sensing via superimposed coding by Wei Dai and Olgica Milenkovic.

We introduce a new family of codes, termed weighted Euclidean superimposed codes (WESCs). This family generalizes the class of Euclidean superimposed codes, used in multiuser identification systems. WESCs allow for deterministic discrimination of bounded, integer-valued linear combinations of real-valued codewords, and can therefore also be seen as a specialization of compressed sensing schemes. We first present lower and upper bounds on the largest size of a WESC, and show how to use classical coding-theoretic and new compressed sensing analytical tools to devise low-complexity decoding algorithms for these codes. Then we study sparse WESCs, in which the number of nonzero coordinates of codewords is significantly smaller than their length. Our results include a lower bound on the largest rate of sparse WESCs and a deterministic construction based on DeVore's method.
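
As a toy, brute-force illustration of the decoding problem (my own sketch; the paper's constructions and decoders are far more refined): given real-valued codewords, recover a bounded, integer-valued combination from their superposition.

```python
import itertools
import numpy as np

# WESC decoding by exhaustive search over a tiny instance (illustrative
# only; all sizes are mine): recover an integer weight vector w with
# |w_i| <= bound from the superposition y = C w of real codewords.
rng = np.random.default_rng(2)
n_codewords, length, bound = 6, 8, 1
C = rng.standard_normal((length, n_codewords))  # codewords are columns

w_true = np.array([1, 0, -1, 0, 1, 0])
y = C @ w_true

best = min(
    itertools.product(range(-bound, bound + 1), repeat=n_codewords),
    key=lambda w: np.linalg.norm(y - C @ np.array(w)),
)
print("decoded:", best, "true:", tuple(w_true))
```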


Combining geometry and combinatorics: A unified approach to sparse signal recovery by A.C. Gilbert, P. Indyk, H. Karloff, M.J. Strauss

There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix $\Phi$ and then uses linear programming to decode information about $x$ from $\Phi x$. The combinatorial approach constructs $\Phi$ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms.

The unifying elements are the adjacency matrices of high-quality {\em unbalanced expanders}. We generalize the notion of the {\em restricted isometry property} (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the $\ell_p$ norm for $p \approx 1$, and then show that unbalanced expanders are essentially equivalent to RIP-$p$ matrices. From known deterministic constructions for such matrices, we obtain new {\em deterministic} measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
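
To give a feel for such matrices, here is a small sketch (assumptions mine: a random left-$d$-regular bipartite graph, which is only an expander with high probability, whereas the talk's point is precisely about deterministic constructions).

```python
import numpy as np

# Adjacency matrix of a random left-d-regular bipartite graph, used as
# a 0/1 measurement matrix (a sketch; explicit expander constructions
# are what the talk advocates).
rng = np.random.default_rng(3)
n, m, d = 1000, 100, 8          # signal length, measurements, ones per column

Phi = np.zeros((m, n), dtype=np.int8)
for j in range(n):
    rows = rng.choice(m, size=d, replace=False)  # d neighbors per left node
    Phi[rows, j] = 1

# Each measurement is just a sum of a few signal entries: y = Phi @ x.
print("all columns have weight d:", (Phi.sum(axis=0) == d).all())
```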

Bayesian matching pursuit by Phil Schniter and Lee C. Potter.

We propose a Bayesian approach to the compressive sensing problem. In our formulation, the measurements are modeled as a known linear combination of the sparse coefficients plus additive white Gaussian noise of known variance, and the sparse coefficients are modeled as independent Gaussian random variables whose unknown means and variances are drawn independently from a finite set according to known priors. To ensure that the coefficient vector is sparse, the most a priori probable {mean, variance} combination can be set to {0, 0}. Leveraging the discrete nature of the Gaussian mixture parameters, we propose an efficient tree search that finds the mixture vectors accounting for the dominant posterior probabilities. The MMSE estimate of the sparse-coefficient vector then follows directly from an average over these dominant posteriors. A key feature of our tree search is a fast metric update which operates on the "matched filter outputs" (i.e., the inner products between the measurement vector and the columns of the combining matrix), hence the name "Fast Bayesian Matching Pursuit."
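
The tree search itself is beyond a short sketch; as a much simpler relative built on the same matched filter outputs, here is a plain orthogonal matching pursuit (my own baseline, not the authors' FBMP).

```python
import numpy as np

# Orthogonal matching pursuit driven by the "matched filter outputs"
# A.T @ r, the same quantities FBMP's metric updates operate on.
def omp(A, y, k):
    n = A.shape[1]
    support, r = [], y.copy()
    for _ in range(k):
        mf = A.T @ r                            # matched filter outputs
        support.append(int(np.argmax(np.abs(mf))))
        x_s = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        r = y - A[:, support] @ x_s             # new residual
    x = np.zeros(n)
    x[support] = x_s
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200); x_true[[3, 77, 150]] = [1.0, -2.0, 0.5]
y = A @ x_true + 0.01 * rng.standard_normal(50)
print("support found:", np.nonzero(np.abs(omp(A, y, 3)) > 1e-6)[0])
```
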
Compressive learning and inference by Richard Baraniuk, Chinmay Hegde, Marco Duarte, Mark Davenport, and Michael Wakin

Compressive Sensing (CS) is a new approach to data acquisition in which analog signals are digitized for processing not via uniform sampling but via measurements using more general, even random, test functions. In contrast with conventional wisdom, the new theory asserts that one can combine "low-rate sampling" with an optimization-based reconstruction process for efficient and accurate signal acquisition. While the growing CS literature has focused on reconstructing signals, in many applications, we are ultimately interested only in making inferences about the data. Examples include detecting tumors in medical images, classifying the type of vehicle in airborne surveillance, or estimating the trajectory of an object in a video sequence. In this talk we will review our recent work on new algorithms for compressive learning and inference that enjoy the same general benefits as CS. Specific techniques include compressive versions of the matched filter (dubbed the "smashed filter"), intrinsic dimension estimation for point clouds, and manifold learning algorithms. In our techniques, the number of measurements required is linear in the complexity of the data (sparsity in the case of a basis representation or dimension in the case of a manifold model) and only logarithmic in the ambient dimension.
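
As a toy rendition of the smashed filter idea (my own sketch, with made-up templates): classification can be done directly on the random measurements, by correlating the compressed signal against the compressed templates.

```python
import numpy as np

# A toy "smashed filter": matched filtering in the compressed domain.
# Templates and parameters are illustrative assumptions, not from the talk.
rng = np.random.default_rng(5)
n, m = 512, 32
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

templates = [np.sin(2 * np.pi * f * np.arange(n) / n) for f in (3, 7, 11)]
signal = templates[1] + 0.1 * rng.standard_normal(n)

y = Phi @ signal                                # compressive measurements
scores = [np.dot(y, Phi @ t) / np.linalg.norm(Phi @ t) for t in templates]
print("classified as template", int(np.argmax(scores)))  # expect 1
```
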
Compressed channel sensing by R. Nowak, W. Bajwa, and J. Haupt

Channel sensing and estimation is an important component of modern wireless communication systems, and it is a crucial element of emerging systems such as cognitive radio. Wireless channels are multidimensional; delay, Doppler, and spatial diversity provide rich signaling environments that can be exploited to enhance communication capabilities. The identification of such channels is usually carried out by fully exciting or illuminating all dimensions and then applying linear estimation techniques. For high-dimensional channels, this is prohibitive and often ineffective. Linear estimation methods are ineffective because they ignore a key feature of such channels: because of their high dimensionality and the physical causes underlying them, wireless channels are often quite sparse, with a small number of dominant modes characterizing their structure. Estimation methods that do not capitalize on this sparsity do not perform well. This paper addresses channel sensing and estimation from a new perspective that exploits the inherent sparsity in high-dimensional wireless communication channels. Adapting ideas from the theory of compressed sensing, we make two key observations: 1) sparse channels can be identified from a small number of properly chosen excitations or illuminations, and exhaustive probing is unnecessary; 2) nonlinear, sparse estimation methods such as the Lasso yield superior channel estimates compared to linear techniques. One especially novel mathematical aspect of our work is that we show that Toeplitz-structured sensing matrices, which naturally arise when pseudorandom excitations are employed, are very well suited to the identification of sparse channels.
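
A minimal sketch of that setup (all parameters and the solver choice are mine): a Toeplitz sensing matrix built from a pseudorandom ±1 probe, with the sparse channel recovered by iterative soft thresholding, a simple Lasso solver.

```python
import numpy as np
from scipy.linalg import toeplitz

# Toeplitz sensing matrix from a pseudorandom excitation, plus ISTA
# (iterative soft thresholding) as a basic Lasso solver. Sizes and the
# regularization weight are illustrative assumptions.
rng = np.random.default_rng(6)
n, m, k = 128, 40, 4
probe = rng.choice([-1.0, 1.0], size=n + m - 1)   # pseudorandom excitation
A = toeplitz(probe[n - 1:], probe[n - 1::-1]) / np.sqrt(m)  # m x n Toeplitz

h_true = np.zeros(n)                              # sparse channel
h_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ h_true + 0.01 * rng.standard_normal(m)

lam = 0.01
L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of grad
h = np.zeros(n)
for _ in range(500):
    z = h + (A.T @ (y - A @ h)) / L               # gradient step
    h = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
print("estimated support:", np.nonzero(np.abs(h) > 1e-3)[0])
print("true support:     ", np.sort(np.nonzero(h_true)[0]))
```
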
Iterative signal recovery from incomplete and inaccurate measurements by J. Tropp, D. Needell, and R. Vershynin

Compressive Sampling (CoSa) offers a new paradigm for acquiring signals that are compressible with respect to an orthobasis. The major algorithmic challenge in CoSa is to approximate a compressible signal from noisy samples. Until recently, all provably correct reconstruction techniques have relied on large-scale optimization, which tends to be computationally burdensome.

This talk describes a new iterative, greedy recovery algorithm, called CoSaMP, that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For some cases of interest, the running time is just $O(N \log^2 N)$, where $N$ is the length of the signal.
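
For reference, here is a compact CoSaMP sketch, simplified from the published description (the full algorithm adds a principled halting criterion and fast, matrix-free multiplies).

```python
import numpy as np

# CoSaMP: greedy recovery of an s-sparse x from y ~ A x (simplified sketch).
def cosamp(A, y, s, iters=20):
    n = A.shape[1]
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        proxy = A.T @ r                                   # form signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]        # largest 2s entries
        T = np.union1d(omega, np.nonzero(x)[0]).astype(int)  # merge supports
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0] # least squares on T
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-s:]                 # prune to s largest
        x[keep] = b[keep]
        r = y - A @ x                                     # update residual
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((60, 256)) / np.sqrt(60)
x_true = np.zeros(256); x_true[[10, 80, 200]] = [1.5, -1.0, 2.0]
y = A @ x_true + 0.01 * rng.standard_normal(60)
print("recovered support:", np.nonzero(np.abs(cosamp(A, y, s=3)) > 0.1)[0])
```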


Credit: NASA/JPL, Mars Rover Spirit on Sol 1002, or about 417 sols ago (one sol, a Martian day, is about 25 Earth hours).
