Thomas Strohmer sent me an e-mail mentioning the following:

Maybe you want to include in the calendar of events that a few compressed sensing sessions will take place in connection with the SPIE Wavelets conference, see this link: http://spie.org//app/program/index.cfm?fuseaction=conferencedetail&conference=7446&jsenabled=1

I'll add those shortly to the Compressive Sensing Calendar. Thank you Thomas !

Hadi Zayyani asked me to host his paper on my site, which is something I don't do often:

Compressed Sensing Block Map-LMS Adaptive Filter for Sparse Channel Estimation and a Bayesian Cramer-Rao Bound by Hadi Zayyani, Massoud Babaie-Zadeh and Christian Jutten. The abstract reads:

This paper suggests using a Block MAP-LMS (BMAP-LMS) adaptive filter instead of the MAP-LMS adaptive filter for estimating sparse channels. Beyond converging faster than MAP-LMS, this block-based adaptive filter enables us to use a compressed sensing version of it, which exploits the sparsity of the channel outputs to reduce the sampling rate of the received signal and to alleviate the complexity of the BMAP-LMS. Our simulations show that our proposed algorithm has faster convergence and lower final MSE than MAP-LMS, at the cost of higher complexity. Moreover, some lower bounds for sparse channel estimation are discussed. In particular, a Cramer-Rao bound and a Bayesian Cramer-Rao bound are calculated.
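For readers who want to experiment with sparse channel estimation via an adaptive filter, here is a minimal sketch of a zero-attracting LMS, a generic technique that adds an l1 shrinkage term to the standard LMS update; it is not the BMAP-LMS algorithm of the paper, and the channel, step size and attractor strength below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def za_lms(x, d, n_taps, mu=0.01, rho=1e-4):
    """Zero-attracting LMS: standard LMS plus an l1 shrinkage term that
    pulls small taps toward zero, favoring sparse weight vectors.
    (A generic sparse adaptive filter, not the paper's BMAP-LMS.)"""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # regressor [x[k], ..., x[k-n_taps+1]]
        e = d[k] - w @ u                    # a priori estimation error
        w += mu * e * u - rho * np.sign(w)  # LMS step + zero attractor
    return w

# Synthetic sparse channel: 3 nonzero taps out of 32
h = np.zeros(32)
h[[2, 10, 25]] = [1.0, -0.5, 0.3]
x = rng.standard_normal(5000)                               # transmitted signal
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = za_lms(x, d, n_taps=32)                                 # w should approximate h
```

The zero attractor is what distinguishes this from plain LMS: taps that the data do not support are steadily shrunk toward zero instead of wandering around it.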

In light of this hosting, here is a paper with a quite fitting title (from the Rice repository)

Democracy in Action: Quantization, Saturation, and Compressive Sensing by Jason Laska, Petros Boufounos, Mark Davenport and Richard Baraniuk. The abstract reads:

Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems such as analog-to-digital converters and digital imagers in certain applications. The key hallmark of CS that has been the focus of the community so far is the fact that CS enables sub-Nyquist sampling for signals, images, and other data that have a sparse representation in some basis. In this paper, we explore and exploit another heretofore relatively unexplored hallmark: the fact that certain CS measurement systems are democratic, which means that each measurement carries roughly the same amount of information about the signal being acquired. Using the democracy property, we re-think how to quantize the compressive measurements in practical CS systems. If we were to apply the conventional wisdom gained from Shannon-Nyquist uniform sampling, then we would scale down the analog signal amplitude (and therefore increase the quantization error) to avoid the gross saturation errors that occur when the signal amplitude exceeds the quantizer’s dynamic range. In stark contrast, we demonstrate that a CS system achieves the best performance when we operate at a significantly nonzero saturation rate. We develop two methods to recover signals from saturated CS measurements. The first directly exploits the democracy property by simply discarding the saturated measurements. The second integrates saturated measurements as constraints into standard linear programming and greedy recovery techniques. Finally, we develop a simple automatic gain control system that uses the saturation rate to optimize the input gain.
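The first recovery strategy, discarding saturated measurements, is easy to try numerically. The sketch below clips noiseless compressive measurements to a dynamic range, drops the saturated ones, and recovers the signal from the survivors; the dimensions and the 0.8 saturation level are illustrative, and orthogonal matching pursuit stands in for the recovery techniques studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

n, m, k = 256, 128, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix
y = A @ x_true

T = 0.8 * np.abs(y).max()         # illustrative quantizer saturation level
keep = np.abs(y) < T              # indices of unsaturated measurements
x_hat = omp(A[keep], y[keep], k)  # recover from the surviving measurements only
```

Because each random measurement carries roughly the same information (the democracy property), throwing away the saturated rows still leaves enough measurements for exact recovery in this noiseless sketch.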

Also found on the interwebs, some of these items are a little "old" but I don't think I covered them before:

A Fast Posterior Update for Sparse Underdetermined Linear Models by Lee Potter, Phil Schniter, and Justin Ziniel. The abstract reads:

A Bayesian approach is adopted for linear regression, and a fast algorithm is given for updating posterior probabilities. Emphasis is given to the underdetermined and sparse case, i.e., fewer observations than regression coefficients and the belief that only a few regression coefficients are non-zero. The fast update allows for a low-complexity method of reporting a set of models with high posterior probability and their exact posterior odds. As a byproduct, this Bayesian model averaged approach yields the minimum mean squared error estimate of unknown coefficients. Algorithm complexity is linear in the number of unknown coefficients, the number of observations and the number of nonzero coefficients. For the case in which hyperparameters are unknown, a maximum likelihood estimate is found by a generalized expectation maximization algorithm.
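This is not the authors' fast update, but the quantity such schemes track is easy to write down: under a Gaussian prior on the active coefficients, the marginal likelihood of any candidate support is available in closed form, and the exact posterior odds of two models is the ratio of those likelihoods (times the prior odds). A minimal sketch, with illustrative dimensions and hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_marginal(y, A, support, prior_var=1.0, noise_var=1e-4):
    """log p(y | support): with x_S ~ N(0, prior_var I) and Gaussian noise,
    y ~ N(0, noise_var I + prior_var A_S A_S^T) after marginalizing out x_S."""
    As = A[:, list(support)]
    C = noise_var * np.eye(len(y)) + prior_var * As @ As.T
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (len(y) * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

# Underdetermined setup: 15 observations, 40 coefficients, 3 of them nonzero
n, m = 40, 15
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[[3, 17, 29]] = [1.2, -0.8, 1.0]
y = A @ x + 0.01 * rng.standard_normal(m)

# Exact log posterior odds of the true support vs. a wrong one (equal model priors)
log_odds = log_marginal(y, A, [3, 17, 29]) - log_marginal(y, A, [0, 1, 2])
```

Recomputing this from scratch for every candidate support is expensive; the point of the paper is a fast update of these posterior quantities as supports change.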

A Sparsity Detection Framework for On-Off Random Access Channels by Alyson Fletcher, Sundeep Rangan, Vivek Goyal. The abstract reads:

This paper considers a simple on–off random multiple access channel (MAC), where n users communicate simultaneously with a single receiver. Each user is assigned a single codeword, which it transmits with some probability \lambda over m degrees of freedom. The receiver must detect which users transmitted. We show that detection for this random MAC is mathematically equivalent to a standard sparsity detection problem. Using new results in sparse estimation, we are able to estimate the capacity of these channels and compare the achieved performance of various detection algorithms. The analysis provides insight into the roles of power control and multi-user detection.

Found on the Arxiv site:

Distributed MIMO radar using compressive sampling by Yao Yu, Athina Petropulu and H. Vincent Poor. The abstract reads:

A distributed MIMO radar is considered, in which the transmit and receive antennas belong to nodes of a small scale wireless network. The transmit waveforms could be uncorrelated, or correlated in order to achieve a desirable beampattern. The concept of compressive sampling is employed at the receive nodes in order to perform direction of arrival (DOA) estimation. According to the theory of compressive sampling, a signal that is sparse in some domain can be recovered based on far fewer samples than required by the Nyquist sampling theorem. The DOAs of targets form a sparse vector in the angle space, and therefore, compressive sampling can be applied for DOA estimation. The proposed approach achieves the superior resolution of MIMO radar with far fewer samples than other approaches. This is particularly useful in a distributed scenario, in which the results at each receive node need to be transmitted to a fusion center.

Presentations also found include:
- Andy Yagle, Non-Iterative Reconstruction of Sparse Images from Limited Data
- Phil Schniter, Lee Potter, and Subhojit Som, Sparse Reconstruction via Bayesian Variable Selection and Bayesian Model Averaging

Finally, Laurent Jacques mentions on his site that his recent paper entitled "Dequantizing Compressed Sensing with Non-Gaussian Constraints" co-written with D. K. Hammond and M. J. Fadili has been (slightly) updated.

Image Credit: NASA/JPL/Space Science Institute, image of Saturn taken on July 23 from Cassini.
