
Thursday, August 21, 2008

CS: Instance-optimality in probability with an ell-1 decoder


The Rice Compressed Sensing site has new entries. I'll list the only one I haven't covered yet. Ronald DeVore, Guergana Petrova, and Przemysław Wojtaszczyk extend a proof to Bernoulli matrices in Instance-optimality in probability with an ell-1 decoder. The abstract reads:

Let Φ(ω), ω ∈ Ω, be a family of n x N random matrices whose entries are independent realizations of a symmetric, real random variable η with expectation Eη = 0 and variance Eη^2 = 1/n. Such matrices are used in compressed sensing to encode a vector x ∈ R^N by y = Φ(ω)x ∈ R^n. The information y holds about x is extracted by using a decoder Δ: R^n → R^N. The most prominent decoder is the l_1-minimization decoder which gives for a given y ∈ R^n the element Δ(y) ∈ R^N which has minimal l_1-norm among all z ∈ R^N with Φ(ω)z = y. This paper is interested in properties of the random family Φ(ω) which guarantee that the vector x̄ := Δ(Φ(ω)x) will with high probability approximate x in l_2^N to an accuracy comparable with the best k-term error of approximation in l_2^N for the range k ≤ an/log(N/n). This means that for the above range of k, for each signal x ∈ R^N, the vector x̄ satisfies

|| x - x̄ ||_{l_2^N} ≤ C inf_{z ∈ Σ_k} || x - z ||_{l_2^N}

with high probability on the draw of Φ(ω). Here, Σ_k consists of all vectors with at most k nonzero coordinates. The first result of this type was proved by Wojtaszczyk [19] who showed this property when η is a normalized Gaussian random variable. We extend this property to more general random variables, including the particular case where η is the Bernoulli random variable which takes the values ±1/√n with equal probability. The proofs of our results use geometric mapping properties of such random matrices, some of which were recently obtained in [14].
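To make the setting concrete, here is a small numerical sketch (my own illustration, not the authors' code): it draws a Bernoulli ±1/√n matrix Φ as in the abstract, encodes a k-sparse signal x by y = Φx, and runs the l_1-minimization decoder by recasting it as a linear program solved with scipy.optimize.linprog. The dimensions n, N, k and the particular LP formulation are arbitrary choices for illustration.

```python
# Sketch of the setup in the abstract: Bernoulli +-1/sqrt(n) sensing matrix Phi
# and the l1-minimization decoder Delta(y) = argmin ||z||_1 subject to Phi z = y,
# solved here as a linear program. Sizes and solver are illustrative choices.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, N, k = 80, 256, 8                      # measurements, ambient dimension, sparsity

# Bernoulli random matrix: entries +-1/sqrt(n) with equal probability,
# so each entry has mean 0 and variance 1/n as in the abstract.
Phi = rng.choice([-1.0, 1.0], size=(n, N)) / np.sqrt(n)

# k-sparse test signal x in R^N and its encoding y = Phi x in R^n.
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)
y = Phi @ x

# l1 decoder as an LP: minimize sum(t) over (z, t) with -t <= z <= t and Phi z = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([Phi, np.zeros((n, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N, method="highs")
x_bar = res.x[:N]

print("relative l2 recovery error:", np.linalg.norm(x - x_bar) / np.linalg.norm(x))
```

With this many measurements the decoder should recover the sparse x essentially exactly; the paper's point is that, for k up to the order of n/log(N/n), the error stays comparable to the best k-term approximation error with high probability on the draw of Φ.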


Credit photo: NASA, Non-optimal outcome of the Orion Capsule Parachute Test. The video is here.
