Wednesday, December 03, 2008

CS: GPUCamp, 2 comments, large scale version of the SaMP package, CS with Sequential Observations, Instance Optimal Decoding by Thresholding in CS




Stefano Marchesini has just put up a presentation shown at the UTK Colloquium. This is a 39 MB movie of the slides. For those of you not reading the comments, Stefano kindly mentioned the following after being featured here:

For fairness, the CS part was inspired by Matthew Moravec et al. "Compressive Phase Retrieval", Wavelets XII, SPIE 6701, No. 1. (2007)
It's funny, I did not see it listed on the Rice Repository. The closest I can find is this one.

Thong Do just released a large scale version of the Sparsity Adaptive Matching Pursuit package. More information can be found here. The link is also in the big picture reconstruction section.

Ramesh Raskar has updated his Camera Culture site. One can read more about his fascinating hardware development projects here. He is also recruiting.

Also found two new additions on the Rice Repository:

The first is Compressed Sensing with Sequential Observations. The abstract reads:
Compressed sensing allows perfect recovery of a signal that is known to be sparse in some basis using only a small number of measurements. The results in the literature have focused on the asymptotics of how many samples are required and the probability of making an error for a fixed batch of samples. We investigate an alternative scenario where observations are available in sequence and can be stopped as soon as there is reasonable certainty of correct reconstruction. For the random Gaussian ensemble we show that a simple stopping rule gives the absolute minimum number of observations required for exact recovery, with probability one. However, for other ensembles like Bernoulli or Fourier, this is no longer true, and the rule is modified to trade off delay in stopping and probability of error. We also describe a stopping rule for the near-sparse case which tells when enough observations are made to reach a desired tolerance in reconstruction. The sequential approach to compressed sensing involves the solution of a sequence of linear programs, and we outline how this sequence can be solved efficiently.
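For illustration purposes, here is a small Python sketch of the sequential idea under the Gaussian ensemble (my own toy code, not the authors'): solve the basis pursuit LP after each new measurement and stop as soon as two consecutive solutions agree. The function names (basis_pursuit, sequential_recovery) and the tolerance are mine, and this is only the simple version of the stopping rule described for the Gaussian case.

```python
# Sequential compressed sensing sketch: add Gaussian measurements one at a
# time, re-solve basis pursuit, and stop when the solution stabilizes.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. A x = y via the standard LP split x = u - v,
    with u, v >= 0 and objective sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

def sequential_recovery(x_true, tol=1e-6, max_m=None):
    """Take Gaussian measurements sequentially; stop once two consecutive
    basis pursuit solutions agree to within tol."""
    n = x_true.size
    max_m = max_m or n
    A = np.empty((0, n))
    y = np.empty(0)
    x_prev = None
    for m in range(1, max_m + 1):
        a = np.random.randn(1, n)              # new Gaussian measurement vector
        A = np.vstack([A, a])
        y = np.append(y, a @ x_true)
        x_hat = basis_pursuit(A, y)
        if x_prev is not None and np.linalg.norm(x_hat - x_prev) < tol:
            return x_hat, m                    # solution has stabilized
        x_prev = x_hat
    return x_prev, max_m

if __name__ == "__main__":
    np.random.seed(0)
    n, k = 50, 3
    x = np.zeros(n)
    x[np.random.choice(n, k, replace=False)] = np.random.randn(k)
    x_hat, m_used = sequential_recovery(x)
    print(f"stopped after {m_used} measurements, "
          f"error {np.linalg.norm(x - x_hat):.2e}")
```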
The second is Instance Optimal Decoding by Thresholding in Compressed Sensing by Albert Cohen, Wolfgang Dahmen, and Ronald DeVore. The abstract reads:
Compressed Sensing seeks to capture a discrete signal x \in R^N with a small number n of linear measurements. The information captured about x from such measurements is given by the vector y = \Phi x \in R^n where \Phi is an n x N matrix. The best matrices, from the viewpoint of capturing sparse or compressible signals, are generated by random processes, e.g. their entries are given by i.i.d. Bernoulli or Gaussian random variables. The information y holds about x is extracted by a decoder \Delta mapping R^n into R^N. Typical decoders are based on l1-minimization and greedy pursuit. The present paper studies the performance of decoders based on thresholding. For quite general random families of matrices \Phi, decoders \Delta are constructed which are instance-optimal in probability, by which we mean the following. If x is any vector in R^N, then with high probability applying \Delta to y = \Phi x gives a vector \bar x := \Delta(y) such that ||x - \bar x||_l2 \le C_0 \sigma_k(x)_l2 for all k \le a n / \log N, provided a is sufficiently small (depending on the probability of failure). Here \sigma_k(x)_l2 is the error that results when x is approximated by the k-sparse vector which equals x in its k largest coordinates and is otherwise zero. It is also shown that results of this type continue to hold even if the measurement vector y is corrupted by additive noise: y = \Phi x + e where e is some noise vector. In this case \sigma_k(x)_l2 is replaced by \sigma_k(x)_l2 + ||e||_l2.
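To give a feel for what a thresholding decoder does, here is a short Python sketch of plain iterative hard thresholding. This is a related thresholding scheme, not the specific instance-optimal decoder constructed in the paper, and the function names and parameter choices are purely illustrative.

```python
# Toy thresholding decoder: iterative hard thresholding on a Gaussian ensemble.
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht_decode(Phi, y, k, n_iter=300):
    """Iterate x <- H_k( x + step * Phi^T (y - Phi x) )."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # conservative step size
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * Phi.T @ (y - Phi @ x), k)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N, n, k = 256, 80, 5                             # length, measurements, sparsity
    Phi = rng.standard_normal((n, N)) / np.sqrt(n)   # Gaussian measurement matrix
    x = np.zeros(N)
    x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    e = 0.01 * rng.standard_normal(n)                # small additive noise
    y = Phi @ x + e
    x_hat = iht_decode(Phi, y, k)
    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```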

Finally, found on arXiv, the announcement of the L1Packv2 package. The abstract reads:


L1Packv2 is a Mathematica package that contains a number of algorithms that can be used for the minimization of an $\ell_1$-penalized least squares functional. The algorithms can handle a mix of penalized and unpenalized variables. Several instructive examples are given. Also, an implementation that yields an exact output whenever exact data are given is provided.
The Mathematica package is here.
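For those who do not have Mathematica handy, here is a minimal Python sketch of the kind of problem L1Packv2 tackles: minimizing the l1-penalized least squares functional ||A x - y||_2^2 + 2*lambda*||x||_1 by iterative soft thresholding. This is not the package's own implementation, just one standard algorithm for that functional, and all the names in the sketch are mine.

```python
# Iterative soft thresholding for the l1-penalized least squares functional.
import numpy as np

def soft_threshold(x, t):
    """Component-wise soft thresholding, the proximal map of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize ||A x - y||_2^2 + 2*lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz-related step scaling
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + (A.T @ (y - A @ x)) / L, lam / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    A = rng.standard_normal((30, 100))
    x_true = np.zeros(100)
    x_true[[3, 40, 77]] = [1.5, -2.0, 0.7]
    y = A @ x_true
    x_hat = ista(A, y, lam=0.05)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```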
