Monday, September 20, 2010

CS: 1-bit Fest

I don't know whether this was a local expression or something common in student slang, but back when I was a student, we would sometimes unplug and do crazy things for a little while. We would generally call that stretch of time: 'name-of-person-involved-in-the-crazy-partying'-Fest. Today we have something that might not be that wild but is amazing nonetheless, so I am going to call it 1-Bit Fest, and it comes in the shape of three papers. Maybe I should have a static page on the subject after all. Enjoy.

This paper considers the problem of identifying the support set of a high-dimensional sparse vector from noise-corrupted 1-bit measurements. We present passive and adaptive algorithms for this problem, both requiring no more than O(d log(D)) measurements to recover the unknown support. The adaptive algorithm has the additional benefit of robustness to the dynamic range of the unknown signal.
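To make that measurement model concrete, here is a quick numerical sketch, assuming sign measurements of Gaussian random projections and a simple correlation decoder; this only illustrates the setting, it is not the paper's actual passive or adaptive algorithm, and the constant in front of d log(D) is my own guess.

# Sketch of 1-bit support recovery: y = sign(<a_i, x> + noise),
# decoded by correlating the sign measurements with the columns of A.
import numpy as np

rng = np.random.default_rng(0)

D, d = 1000, 10                   # ambient dimension, sparsity
m = int(40 * d * np.log(D))       # on the order of O(d log D); constant is arbitrary

# Unknown d-sparse signal
x = np.zeros(D)
support = rng.choice(D, size=d, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=d)

# Noise-corrupted 1-bit measurements
A = rng.standard_normal((m, D))
y = np.sign(A @ x + 0.1 * rng.standard_normal(m))

# Passive decoder: keep the d columns of A most correlated with y
scores = np.abs(A.T @ y)
est_support = np.argsort(scores)[-d:]

print(sorted(support) == sorted(est_support))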
The recently emerged compressive sensing (CS) framework aims to acquire signals at reduced sample rates compared to the classical Shannon-Nyquist rate. To date, the CS theory has assumed primarily real-valued measurements; it has recently been demonstrated that accurate and stable signal acquisition is still possible even when each measurement is quantized to just a single bit. This property enables the design of simplified CS acquisition hardware based around a simple sign comparator rather than a more complex analog-to-digital converter; moreover, it ensures robustness to gross non-linearities applied to the measurements. In this paper we introduce a new algorithm — restricted-step shrinkage (RSS) — to recover sparse signals from 1-bit CS measurements. In contrast to previous algorithms for 1-bit CS, RSS has provable convergence guarantees, is about an order of magnitude faster, and achieves higher average recovery signal-to-noise ratio. RSS is similar in spirit to trust-region methods for non-convex optimization on the unit sphere, which are relatively unexplored in signal processing and hence of independent interest.
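For the curious, here is a minimal sketch of the recovery problem that RSS addresses: enforce consistency with the sign measurements while staying on the unit sphere (the sign comparator destroys amplitude information, so only the direction of the signal can be recovered). The iteration below is a generic sign-consistency heuristic for illustration, not the RSS algorithm itself; the step size and iteration count are arbitrary assumptions.

# Toy 1-bit CS recovery: gradient step on the sign-consistency penalty,
# hard threshold to d terms, re-project onto the unit sphere.
import numpy as np

rng = np.random.default_rng(1)
D, d, m = 200, 5, 500

x = np.zeros(D)
x[rng.choice(D, d, replace=False)] = rng.standard_normal(d)
x /= np.linalg.norm(x)             # amplitude is lost; fix the norm to 1

Phi = rng.standard_normal((m, D))
y = np.sign(Phi @ x)               # 1-bit measurements

xh = np.zeros(D)
for _ in range(100):
    xh += (1.0 / m) * Phi.T @ (y - np.sign(Phi @ xh))
    idx = np.argsort(np.abs(xh))[:-d]   # zero out all but the d largest
    xh[idx] = 0.0
    xh /= np.linalg.norm(xh)

print(np.linalg.norm(x - xh))      # small residual = accurate recovery on the sphere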
I have seen this 'trust but verify' term used before; all y'all are reading too much Nuit Blanche and something sticks...

Scalar quantization is the most practical and straightforward approach to signal quantization. However, it has been shown that scalar quantization of oversampled or compressively sensed signals can be inefficient in terms of the rate-distortion trade-off, especially as the oversampling rate or the sparsity of the signal increases. In this paper, we modify the scalar quantizer to have discontinuous quantization regions. We demonstrate that with this modification it is possible to achieve exponential decay of the quantization error as a function of the oversampling rate instead of the quadratic decay exhibited by current approaches. We further demonstrate that the quantization error can be reduced by incorporating side information on the acquired signal, such as sparse signal models or signal similarity with known signals. Our approach is universal in the sense that prior knowledge of the signal model is not necessary in the quantizer design.
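To see what a scalar quantizer with discontinuous quantization regions can look like, here is a small sketch of a dithered, mod-2 "folding" quantizer: the set of measurement values mapping to each output bit is a union of intervals rather than a single half-line. The parameter names and values below are my own assumptions for illustration, not taken from the paper.

# 1-bit scalar quantizer with non-contiguous quantization regions:
# dither, scale by Delta, then keep only the parity of the cell index.
import numpy as np

rng = np.random.default_rng(2)

def folded_1bit_quantize(x, A, dither, Delta):
    """Quantize the measurements <a_i, x> to bits via mod-2 folding."""
    return np.floor((A @ x + dither) / Delta).astype(int) % 2

N, m = 50, 400                    # signal dimension, number of measurements
Delta = 0.5                       # quantization step; controls the folding

x = rng.standard_normal(N)
A = rng.standard_normal((m, N))
dither = rng.uniform(0, Delta, size=m)

bits = folded_1bit_quantize(x, A, dither, Delta)
print(bits[:20])

As a function of the measurement value, the preimage of each bit is a union of length-Delta intervals, which is exactly the discontinuity the abstract refers to.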
From the conclusion I note the following interesting tidbit:
Last, we should note that this quantization approach has very tight connections with locality-sensitive hashing (LSH) and ℓ2 embeddings under the Hamming distance (e.g., see [46] and references within). Specifically, our quantization approach effectively constructs such an embedding, some of the properties of which are examined in [47], although not in the same language. A significant difference is the objective. Our goal is to enable reconstruction, whereas the goal of LSH and randomized embeddings is to approximately preserve distances with very high probability. A rigorous treatment of the connections between quantization and LSH is quite interesting and deserves a publication of its own. A preliminary attempt to view LSH as a quantization problem is performed in [48].
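And the LSH connection is easy to see in a toy experiment: read the same folded quantizer as a binary embedding, and the Hamming distance between the codes of two signals grows with the distance between the signals. This is only an illustration under the assumptions of the previous sketch, not the paper's formal embedding result.

# The folded quantizer as a binary embedding: nearby signals get
# codes that are close in (normalized) Hamming distance.
import numpy as np

rng = np.random.default_rng(3)
N, m, Delta = 50, 2000, 0.5

A = rng.standard_normal((m, N))
dither = rng.uniform(0, Delta, size=m)

def code(x):
    return np.floor((A @ x + dither) / Delta).astype(int) % 2

x = rng.standard_normal(N)
for eps in (0.01, 0.1, 1.0):
    y = x + eps * rng.standard_normal(N)
    ham = np.mean(code(x) != code(y))
    print(f"perturbation {eps:4.2f} -> Hamming distance {ham:.3f}")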


If you think this blog provides a service, please support it by ordering through the Amazon - Nuit Blanche Reference Store
