
Tuesday, April 23, 2013

Bending JL: Efficient Coding of Signal Distances Using Universal Quantized Embeddings


Igor, thanks. 
An initial page on quantization is now live: 
I also have a quick post on it: 
Incidentally, I think our work on coding of signal distances (http://www.boufounos.com/2013/03/20/coding-of-signal-distances/ and http://boufounos.com/Publications/BR_DCC2013.pdf) might be interesting to the readers of Nuit Blanche. I am not sure if you have featured it yet. It is a continuation of previous work you have featured (http://boufounos.com/Publications/LRB_MMSP12.pdf and http://boufounos.com/Publications/B_UniversalScalarQuantization.pdf).

The paper analyzes the ambiguity of embeddings when used to estimate and/or encode signal distances. We also show that we can design randomized quantized embeddings that efficiently code only a small range of distances at a lower bit rate than quantized Johnson-Lindenstrauss embeddings, which encode all distance ranges. In many classification and detection problems you only care about distances close to a signal, and this kind of embedding provides a much more rate-efficient representation.

It turns out there is a great efficiency gain from that. In fact, we were quite surprised at how well it performed in practice: we expected more modest gains, but we managed to reduce the rate by 33% over our earlier efforts and by more than 50% over existing methods.
Let me know if you need more information.
Thanks,
Petros
Thanks, Petros, for the heads-up. The Resource on Quantization will be added to the list of highly technical reference pages. Who knew bending the JL lemma was key to privacy!




The first paper mentioned in Petros' email is Efficient Coding of Signal Distances Using Universal Quantized Embeddings by Petros T. Boufounos and Shantanu Rane. The abstract reads:
Traditional rate-distortion theory is focused on how to best encode a signal using as few bits as possible and incurring as low a distortion as possible. However, very often, the goal of transmission is to extract specific information from the signal at the receiving end, and the distortion should be measured on that extracted information. In this paper we examine the problem of encoding signals such that sufficient information is preserved about their pairwise distances. For that goal, we consider randomized embeddings as an encoding mechanism and provide a framework to analyze their performance. We also propose the recently developed universal quantized embeddings as a solution to that problem and experimentally demonstrate that, in image retrieval experiments, universal embedding can achieve up to 25% rate reduction over the state of the art.
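For readers who want to play with the idea, here is a minimal numerical sketch of the kind of construction described above: a 1-bit universal quantized embedding built from dithered Gaussian projections and the non-monotonic quantizer q(t) = ⌊t⌋ mod 2. The dimensions and the period Δ below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def universal_embedding(x, A, w, delta):
    """1-bit universal quantized embedding: q((Ax + w) / delta)
    with the non-monotonic quantizer q(t) = floor(t) mod 2."""
    return np.floor((A @ x + w) / delta).astype(int) % 2

# Illustrative parameters; none of these values come from the paper.
n, m = 256, 2048          # signal dimension, number of 1-bit measurements
delta = 1.0               # quantizer period; sets the coded distance range
A = rng.standard_normal((m, n))       # random Gaussian projections
w = rng.uniform(0.0, delta, size=m)   # dither, uniform over one period

x = rng.standard_normal(n)
for d in (0.05, 0.1, 0.2, 0.5, 1.0, 5.0):
    u = rng.standard_normal(n)
    y = x + d * u / np.linalg.norm(u)   # a point at Euclidean distance d
    dh = np.mean(universal_embedding(x, A, w, delta)
                 != universal_embedding(y, A, w, delta))
    print(f"Euclidean distance {d:4.2f} -> normalized Hamming distance {dh:.3f}")
```

Running this, the Hamming distance between embeddings grows with the Euclidean distance up to roughly Δ and then saturates near 1/2, which illustrates the "small range of distances" behavior Petros describes: no bits are spent resolving distances beyond the range of interest, whereas a quantized JL embedding keeps paying rate for all distance ranges.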

The second paper, Universal Rate-Efficient Scalar Quantization by Petros T. Boufounos, introduces the universal quantization scheme on which these embeddings are built. Its abstract reads:

Scalar quantization is the most practical and straightforward approach to signal quantization. However, it has been shown that scalar quantization of oversampled or compressively sensed signals can be inefficient in terms of the rate-distortion trade-off, especially as the oversampling rate or the sparsity of the signal increases. In this paper, we modify the scalar quantizer to have discontinuous quantization regions. We demonstrate that with this modification it is possible to achieve exponential decay of the quantization error as a function of the oversampling rate instead of the quadratic decay exhibited by current approaches. Our approach is universal in the sense that prior knowledge of the signal model is not necessary in the quantizer design, only in the reconstruction. Thus, we demonstrate that it is possible to reduce the quantization error by incorporating side information on the acquired signal, such as sparse signal models or signal similarity with known signals. In doing so, we establish a relationship between quantization performance and the Kolmogorov entropy of the signal model.
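As a concrete picture of what "discontinuous quantization regions" means, here is a sketch of the quantizer in its 1-bit form, contrasted with an ordinary uniform scalar quantizer. The cell width Δ and the printed interval range are arbitrary illustration choices.

```python
import numpy as np

delta = 1.0  # cell width; arbitrary, for illustration only

def uniform_quantizer(t, delta=delta):
    # Ordinary scalar quantizer: each output level corresponds to a
    # single contiguous interval of width delta.
    return int(np.floor(t / delta))

def universal_quantizer(t, delta=delta):
    # Modified 1-bit quantizer: q(t) = floor(t/delta) mod 2. The preimage
    # of each bit is a union of disjoint intervals, i.e. the quantization
    # regions are discontinuous.
    return int(np.floor(t / delta)) % 2

# Consecutive intervals of width delta alternate between the two bits:
for k in range(-3, 3):
    lo, hi = k * delta, (k + 1) * delta
    mid = (lo + hi) / 2
    print(f"[{lo:+.1f}, {hi:+.1f}) -> level {uniform_quantizer(mid)}"
          f", bit {universal_quantizer(mid)}")
```

Because far-apart inputs can map to the same bit, a single such measurement no longer pins the signal down; but across many dithered random measurements each new bit keeps carrying fresh information, which is roughly the intuition behind the exponential, rather than quadratic, error decay with oversampling.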
I also note: Privacy-Preserving Nearest Neighbor Methods by Petros Boufounos and Shantanu Rane.




