Monday, June 08, 2015

1-bit and Quantized Compressive Sensing

Here are a few preprints on the subject of 1-bit and quantized compressive sensing that appeared in the past few months. In particular, Laurent wrote about his latest preprint on his blog at Quasi-isometric embeddings of vector sets with quantized sub-Gaussian projections. The paper is here: Small width, low distortions: quasi-isometric embeddings with quantized sub-Gaussian random projections
Laurent Jacques

Under which conditions can a subset $K$ of $\mathbb{R}^N$ be embedded in another one of $\delta\mathbb{Z}^M$ for some resolution $\delta>0$? We address this general question through the specific use of a quantized random linear mapping $A:\mathbb{R}^N \to \delta\mathbb{Z}^M$ combining a linear projection of $\mathbb{R}^N$ in $\mathbb{R}^M$ associated to a random matrix $\Phi \in \mathbb{R}^{M\times N}$ with a uniform scalar (dithered) quantization $Q$ of $\mathbb{R}^M$ in $\delta\mathbb{Z}^M$. The targeted embedding relates the $\ell_2$-distance of any pair of vectors in $K$ with the $\ell_1$-distance of their respective mappings in $\delta\mathbb{Z}^M$, allowing for both multiplicative and additive distortions between these two quantities, i.e., describing an $\ell_2/\ell_1$-quasi-isometric embedding. We show that the sought conditions depend on the Gaussian mean width $w(K)$ of the subset $K$. In particular, given a symmetric sub-Gaussian distribution $\varphi$ and a precision $\epsilon>0$, if $M \geq C\epsilon^{-5}w(K)^2$ and if the sensing matrix $\Phi$ has entries i.i.d. as $\varphi$, then, with high probability, the mapping $A$ provides an $\ell_2/\ell_1$-quasi-isometry between $K$ and its image in $\delta\mathbb{Z}^M$. Moreover, in this embedding, the additive distortion is of order $\delta\epsilon$ while the multiplicative one grows with $\epsilon$. For non-Gaussian random $\Phi$, the multiplicative error is also impacted by the sparsity of the vectors' difference, i.e., being smaller for "not too sparse" differences. When $K$ is the set of bounded $K$-sparse vectors in any orthonormal basis, then only $M \geq C\epsilon^{-2}\log(cN/(K\epsilon^{3/2}))$ measurements suffice. Remark: all the values $C,c>0$ above only depend on $\delta$ and on the distribution $\varphi$.
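For readers who want to play with this, the quantized mapping $A(x) = Q(\Phi x + \xi)$ is straightforward to simulate. Here is a minimal NumPy sketch (the dimensions, the resolution $\delta$ and the Gaussian choice of $\varphi$ are mine, purely for illustration) comparing the $\ell_2$ distance of two vectors with the scaled $\ell_1$ distance of their quantized images:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, delta = 50, 2000, 0.5

Phi = rng.standard_normal((M, N))      # Gaussian (hence sub-Gaussian) sensing matrix
xi = rng.uniform(0.0, delta, size=M)   # uniform dither, one value per measurement

def quantized_map(x):
    # A(x) = Q(Phi x + xi), with Q a uniform scalar quantizer of resolution delta
    return delta * np.floor((Phi @ x + xi) / delta)

# Two unit-norm vectors: compare input l2 distance with scaled output l1 distance
x = rng.standard_normal(N); x /= np.linalg.norm(x)
y = rng.standard_normal(N); y /= np.linalg.norm(y)

d_in = np.linalg.norm(x - y)                                         # l2 in R^N
d_out = np.linalg.norm(quantized_map(x) - quantized_map(y), 1) / M   # scaled l1

# For Gaussian entries E|g| = sqrt(2/pi), so the rescaled output distance should
# track the input distance up to the distortions the theorem quantifies
print(d_in, d_out * np.sqrt(np.pi / 2))
```

With $M = 2000$ the two printed numbers should agree to within a few percent, illustrating the quasi-isometry.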

but there were others:

One-Bit Compressive Sensing with Partial Support by Phillip North, Deanna Needell

The Compressive Sensing framework maintains relevance even when the available measurements are subject to extreme quantization, as is exemplified by the so-called one-bit compressed sensing framework which aims to recover a signal from measurements reduced to only their sign-bit. In applications, it is often the case that we have some knowledge of the structure of the signal beforehand, and thus would like to leverage it to attain more accurate and efficient recovery. This work explores avenues for incorporating such partial-support information into the one-bit setting. Experimental results demonstrate that newly proposed methods of this work yield improved signal recovery even for varying levels of accuracy in the prior information. This work is thus the first to provide recovery mechanisms that efficiently use prior signal information in the one-bit reconstruction setting.
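As a toy illustration of the one-bit setting (not the paper's actual algorithms), here is a sketch in which half of the true support is assumed known in advance and a simple linear estimator $A^\top y$ is thresholded while always retaining the known indices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 200, 800, 10

# s-sparse unit-norm signal; assume half the support is known a priori
support = rng.choice(n, size=s, replace=False)
x = np.zeros(n); x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)
known = support[: s // 2]           # oracle prior: half of the true support

A = rng.standard_normal((m, n))
y = np.sign(A @ x)                  # one-bit (sign-only) measurements

# Linear estimate A^T y, keeping the s largest entries but always including the
# known indices -- a crude stand-in for the paper's proposed methods
z = A.T @ y
keep = set(known) | set(np.argsort(-np.abs(z))[:s])
idx = np.fromiter(keep, dtype=int)
xhat = np.zeros(n)
xhat[idx] = z[idx]
xhat /= np.linalg.norm(xhat)

print(float(xhat @ x))              # cosine similarity with the true signal
```

Even this naive estimator correlates strongly with the true (unit-norm) signal; one-bit measurements lose the amplitude, so only the direction can be recovered.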

Joint Sparsity Pattern Recovery with 1-bit Compressive Sensing in Sensor Networks
Vipul Gupta, Bhavya Kailkhura, Thakshila Wimalajeewa, Pramod K. Varshney

We study the problem of jointly sparse support recovery with 1-bit compressive measurements in a sensor network. Sensors are assumed to observe sparse signals having the same but unknown sparse support. Each sensor quantizes its measurement vector element-wise to 1-bit and transmits the quantized observations to a fusion center. We develop a computationally tractable support recovery algorithm which minimizes a cost function defined in terms of the likelihood function and the $l_{1,\infty}$ norm. We observe that even with noisy 1-bit measurements, jointly sparse support can be recovered accurately with multiple sensors each collecting only a small number of measurements.
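The $l_{1,\infty}$ norm in the cost function is the row-wise maximum summed over rows, which is small only when the per-sensor coefficient vectors share their support. A short illustration (the toy matrices are mine):

```python
import numpy as np

def l1_inf(X):
    # Mixed l_{1,inf} norm: sum over rows of the max magnitude across columns.
    # Rows index coefficients, columns index sensors; the norm is small only
    # when entire rows are (near) zero, i.e. when the supports are shared.
    return float(np.abs(X).max(axis=1).sum())

# Two coefficient matrices with the same total l1 mass (= 4) ...
joint = np.array([[1.0, 1.0], [1.0, 1.0], [0.0, 0.0]])     # shared support
disjoint = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # spread-out support

# ... but the mixed norm favors the jointly sparse one
print(l1_inf(joint), l1_inf(disjoint))
```

Minimizing this norm alongside the 1-bit likelihood therefore pushes the recovered vectors toward a common support.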

Amplitude-Aided 1-Bit Compressive Sensing Over Noisy Wireless Sensor Networks
Ching-Hsien Chen, Jwo-Yuh Wu

One-bit compressive sensing (CS) is known to be particularly suited for resource-constrained wireless sensor networks (WSNs). In this paper, we consider 1-bit CS over noisy WSNs subject to channel-induced bit flipping errors, and propose an amplitude-aided signal reconstruction scheme, by which (i) the representation points of local binary quantizers are designed to minimize the loss of data fidelity caused by local sensing noise, quantization, and bit sign flipping, and (ii) the fusion center adopts the conventional $\ell_1$-minimization method for sparse signal recovery using the decoded and de-mapped binary data. The representation points of the binary quantizers are designed by minimizing the mean square error (MSE) of the net data mismatch, taking into account the distributions of the nonzero signal entries, local sensing noise, quantization error, and bit flipping; a simple closed-form solution is then obtained. Numerical simulations show that our method improves the estimation accuracy when the SNR is low or the number of sensors is small, as compared to state-of-the-art 1-bit CS algorithms relying solely on the sign message for signal recovery.
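As a sanity check on the representation-point idea (a simplified, noise-free and flip-free stand-in for the paper's design), for a zero-mean Gaussian measurement the MSE-optimal representation point of a sign quantizer is the conditional mean $\mathbb{E}|U| = \sigma\sqrt{2/\pi}$:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.5
u = sigma * rng.standard_normal(100_000)   # measurements before quantization

# A one-bit quantizer keeps only sign(u); an amplitude-aware decoder maps each
# bit back to a representation point +/- r. The MSE-optimal r is the
# conditional mean E[|U|], which for a zero-mean Gaussian equals sigma*sqrt(2/pi).
r_empirical = np.mean(np.abs(u))
r_theory = sigma * np.sqrt(2.0 / np.pi)

recon = r_empirical * np.sign(u)
mse = np.mean((u - recon) ** 2)            # -> sigma^2 (1 - 2/pi) at the optimum
print(r_empirical, r_theory, mse)
```

The paper's closed-form solution generalizes this conditional-mean idea to account for sensing noise, quantization error, and channel-induced bit flips.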

Quantization of compressive samples with stable and robust recovery
Rayan Saab, Rongrong Wang, Ozgur Yilmaz

In this paper we study the quantization stage that is implicit in any compressed sensing signal acquisition paradigm. We propose using Sigma-Delta quantization and a subsequent reconstruction scheme based on convex optimization. We prove that the reconstruction error due to quantization decays polynomially in the number of measurements. Our results apply to arbitrary signals, including compressible ones, and account for measurement noise. Additionally, they hold for sub-Gaussian (including Gaussian and Bernoulli) random compressed sensing measurements, as well as for both high bit-depth and coarse quantizers, and they extend to 1-bit quantization. In the noise-free case, when the signal is strictly sparse we prove that by optimizing the order of the quantization scheme one can obtain root-exponential decay in the reconstruction error due to quantization.
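A first-order Sigma-Delta quantizer takes only a few lines; this sketch (scalar, real-valued, with parameters chosen arbitrarily) shows the hallmark noise-shaping property that the running sum of the quantization error stays bounded by half a quantization step:

```python
import numpy as np

def sigma_delta(y, delta=0.5):
    # First-order Sigma-Delta: quantize each sample while feeding back the
    # running quantization error. The state u obeys u_i = u_{i-1} + y_i - q_i,
    # and round-to-nearest keeps |u_i| <= delta/2 at every step.
    q = np.empty_like(y)
    u = 0.0
    for i, yi in enumerate(y):
        q[i] = delta * np.round((u + yi) / delta)
        u = u + yi - q[i]
    return q

rng = np.random.default_rng(3)
y = 0.2 * rng.standard_normal(1000)
q = sigma_delta(y)

# Noise shaping: the cumulative error sum_{j<=i} (y_j - q_j) equals the state
# u_i, so it never exceeds delta/2 in magnitude
print(np.max(np.abs(np.cumsum(y - q))))
```

It is this boundedness of the error's running sums (rather than of the per-sample error) that the reconstruction scheme exploits to get polynomial, and with higher-order schemes root-exponential, error decay.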

Poisson Matrix Recovery and Completion
We extend the theory of low-rank matrix recovery and completion to the case when Poisson observations for a linear combination or a subset of the entries of a matrix are available, which arises in various applications with count data. We consider the (now) usual matrix recovery formulation through maximum likelihood with proper constraints on the matrix $M$ of size $d_1$-by-$d_2$, and establish theoretical upper and lower bounds on the recovery error. Our bounds for matrix completion are nearly optimal up to a factor on the order of $\mathcal{O}(\log(d_1 d_2))$. These bounds are obtained by combining techniques for compressed sensing for sparse vectors with Poisson noise and for analyzing low-rank matrices, as well as adapting the arguments used for one-bit matrix completion (although these two problems are different in nature); the adaptation requires new techniques exploiting properties of the Poisson likelihood function and tackling the difficulties posed by the locally sub-Gaussian characteristic of the Poisson distribution. Our results highlight a few important distinctions of the Poisson case compared to the prior work, including having to impose a minimum signal-to-noise requirement on each observed entry and a gap in the upper and lower bounds. We also develop a set of efficient iterative algorithms and demonstrate their good performance on synthetic examples and real data.
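To make the observation model concrete, here is a small sketch (dimensions and rank are mine) of the Poisson likelihood over an observed subset of entries of a low-rank rate matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
d1, d2, r = 30, 40, 2

# Low-rank, entrywise-positive ground truth (the rates of the Poisson counts)
M = rng.uniform(1.0, 2.0, (d1, r)) @ rng.uniform(1.0, 2.0, (r, d2))

mask = rng.random((d1, d2)) < 0.5          # observed subset Omega
Y = rng.poisson(M)                         # count observations

def neg_loglik(X):
    # Poisson negative log-likelihood over observed entries (up to a constant):
    # sum over (i,j) in Omega of X_ij - Y_ij * log(X_ij)
    return float(np.sum(mask * (X - Y * np.log(X))))

# The likelihood is minimized (in expectation) near the true rate matrix;
# recovery then constrains the candidate X to be low-rank
print(neg_loglik(M), neg_loglik(M + 1.0))
```

The maximum-likelihood formulation in the paper minimizes this objective over a constrained set of low-rank, bounded matrices.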

One-Bit Massive MIMO: Channel Estimation and High-Order Modulations
We investigate the information-theoretic throughput achievable on a fading communication link when the receiver is equipped with one-bit analog-to-digital converters (ADCs). The analysis is conducted for the setting where neither the transmitter nor the receiver has a priori information on the realization of the fading channels. This means that channel-state information needs to be acquired at the receiver on the basis of the one-bit quantized channel outputs. We show that least-squares (LS) channel estimation combined with joint pilot and data processing is capacity achieving in the single-user, single-receive-antenna case.
We also investigate the achievable uplink throughput in a massive multiple-input multiple-output system where each element of the antenna array at the receiver base-station feeds a one-bit ADC. We show that LS channel estimation and maximum-ratio combining are sufficient to support both multiuser operation and the use of high-order constellations. This holds in spite of the severe nonlinearity introduced by the one-bit ADCs.
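A toy single-user, single-antenna version of the pilot-based estimation (with a fixed channel coefficient and an arbitrary noise level, a drastic simplification of the paper's setup) shows how the noise dithers the one-bit outputs so that LS estimation still recovers the channel phase:

```python
import numpy as np

rng = np.random.default_rng(5)

def adc_1bit(z):
    # One-bit ADC per I/Q rail: keep only the signs of real and imaginary parts
    return np.sign(z.real) + 1j * np.sign(z.imag)

h = 0.8 * np.exp(1j * 0.9)                  # fixed flat-fading channel coefficient
P = 64                                      # pilot length
p = rng.choice([-1.0, 1.0], size=P) + 0j    # known BPSK pilot sequence
noise = 0.3 * (rng.standard_normal(P) + 1j * rng.standard_normal(P))

r = adc_1bit(h * p + noise)                 # one-bit quantized received pilots
h_ls = (p.conj() @ r) / P                   # least-squares channel estimate

# The amplitude estimate is biased by the hard limiting, but the phase of the
# channel is recovered well because the noise acts as a dither
print(np.angle(h), np.angle(h_ls))
```

The printed phases should be close; the magnitude, by contrast, is not identifiable from sign-only outputs without further processing.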

Limited Feedback in Multiple-Antenna Systems with One-Bit Quantization
Jianhua Mo, Robert W. Heath Jr

Communication systems with low-resolution analog-to-digital-converters (ADCs) can exploit channel state information at the transmitter (CSIT) and receiver. This paper presents initial results on codebook design and performance analysis for limited feedback systems with one-bit ADCs. Different from the high-resolution case, the absolute phase at the receiver is important to align the phase of the received signals when the received signal is sliced by one-bit ADCs. A new codebook design for the beamforming case is proposed that separately quantizes the channel direction and the residual phase.
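The two-stage quantization is easy to mimic with a random direction codebook standing in for an optimized design (all sizes here are arbitrary): first pick the codeword with the largest phase-invariant correlation, then separately quantize the residual absolute phase:

```python
import numpy as np

rng = np.random.default_rng(6)
Nt, B_dir, B_ph = 4, 4, 3                   # antennas, direction bits, phase bits

h = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)

# Direction codebook: random unit vectors (a stand-in for an optimized design)
C = rng.standard_normal((2**B_dir, Nt)) + 1j * rng.standard_normal((2**B_dir, Nt))
C /= np.linalg.norm(C, axis=1, keepdims=True)

corr = C.conj() @ h
i = int(np.argmax(np.abs(corr)))            # step 1: quantize the channel direction

phases = 2 * np.pi * np.arange(2**B_ph) / 2**B_ph
res = np.angle(corr[i])                     # step 2: quantize the residual
k = int(np.argmin(np.abs(np.angle(np.exp(1j * (phases - res))))))  # absolute phase
f = np.exp(1j * phases[k]) * C[i]

# With 3 phase bits the residual phase error is at most pi/8, so f^H h is
# nearly real and positive -- the alignment that matters once the received
# signal is sliced by one-bit ADCs
print(np.vdot(f, h))
```

The point of the second stage is exactly the one the abstract makes: with one-bit ADCs the absolute phase is no longer a free parameter, so it must be fed back too.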

EXIT Chart Analysis of Turbo Compressed Sensing Using Message Passing De-Quantization
Amin Movahed, Mark C. Reed, Shahriar Etemadi Tajbakhsh

We propose an iterative decoding method, which we call turbo-CS, for the reception of concatenated source-channel encoded sparse signals transmitted over an AWGN channel. The turbo-CS encoder applies 1-bit compressed sensing as a source encoder, concatenated serially with a convolutional channel encoder. At the turbo-CS decoder, an iterative joint source-channel decoding method is proposed for signal reconstruction. We analyze, for the first time, the convergence of the turbo-CS decoder by determining an EXIT chart of the constituent decoders. We modify the soft outputs of the decoder to improve the signal reconstruction performance of the turbo-CS decoder. For a fixed signal reconstruction performance (RSNR of 10 dB), we achieve more than 5 dB of improvement in the channel SNR after 6 iterations of the turbo-CS decoder. Alternatively, for a fixed SNR of -1 dB, we achieve a 10 dB improvement in RSNR.

We give efficient protocols and matching accuracy lower bounds for frequency estimation in the local model for differential privacy. In this model, individual users randomize their data themselves, sending differentially private reports to an untrusted server that aggregates them.
We study protocols that produce a succinct histogram representation of the data. A succinct histogram is a list of the most frequent items in the data (often called "heavy hitters") along with estimates of their frequencies; the frequency of all other items is implicitly estimated as 0.
If there are $n$ users whose items come from a universe of size $d$, our protocols run in time polynomial in $n$ and $\log(d)$. With high probability, they estimate the frequency of every item up to error $O\left(\sqrt{\log(d)/(\epsilon^2n)}\right)$ where $\epsilon$ is the privacy parameter. Moreover, we show that this much error is necessary, regardless of computational efficiency, and even for the simple setting where only one item appears with significant frequency in the data set.
Previous protocols (Mishra and Sandler, 2006; Hsu, Khanna and Roth, 2012) for this task either ran in time $\Omega(d)$ or had much worse error (about $\sqrt[6]{\log(d)/(\epsilon^2n)}$), and the only known lower bound on error was $\Omega(1/\sqrt{n})$.
We also adapt a result of McGregor et al (2010) to the local setting. In a model with public coins, we show that each user need only send 1 bit to the server. For all known local protocols (including ours), the transformation preserves computational efficiency.
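The simplest 1-bit local protocol of this kind is classical randomized response for a single item; the paper's heavy-hitters protocols are far more involved, but this sketch (parameter choices mine) shows the $O(1/(\epsilon\sqrt{n}))$ flavor of the error:

```python
import numpy as np

rng = np.random.default_rng(7)
n, eps = 200_000, 1.0
p_true = 0.3                                  # true frequency of the target item

# Each user holds a bit (item present or not) and reports it via randomized
# response: tell the truth with probability e^eps/(1+e^eps), lie otherwise.
# This single-bit report is eps-differentially private for each user.
truth = rng.random(n) < p_true
p_keep = np.exp(eps) / (1 + np.exp(eps))
reports = np.where(rng.random(n) < p_keep, truth, ~truth)

# The server debiases the aggregate by inverting the known flipping probability
p_hat = (reports.mean() - (1 - p_keep)) / (2 * p_keep - 1)
print(p_true, float(p_hat))
```

The estimate is unbiased, with standard deviation on the order of $1/(\epsilon\sqrt{n})$; the succinct-histogram protocols combine such randomized bits with hashing to cover a universe of size $d$ in time $\mathrm{poly}(n, \log d)$.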

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
