In yesterday's post I mentioned Bayesian Compressive Sensing via Belief Propagation by Dror Baron, Shriram Sarvotham, and Richard Baraniuk. Here are the Matlab files used to simulate the CS-BP algorithm. The code was rewritten by Danny Bickson in March 2009 and his implementation appears here. I note the interesting timing of this Bayesian algorithm relative to other reconstruction codes such as CoSaMP or IHT. Even though it is slower than IHT, I am not sure that other Bayesian approaches compare as favorably to those algorithms. This test also underlines the need for an all-out benchmarking exercise.
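For readers who want to see what the greedy baseline looks like, here is a minimal IHT (iterative hard thresholding) sketch in Python/NumPy. This is my own toy illustration, not any of the codes mentioned above: the step-size rule, iteration count, and random test problem are all illustrative choices.

```python
import numpy as np

def iht(y, Phi, k, n_iter=300, step=None):
    """Iterative Hard Thresholding: at each iteration, take a gradient
    step on ||y - Phi x||^2 and keep only the k largest entries."""
    if step is None:
        # conservative step size based on the spectral norm of Phi
        step = 0.9 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + step * Phi.T @ (y - Phi @ x)
        # hard threshold: zero out all but the k largest magnitudes
        x[np.argsort(np.abs(x))[:-k]] = 0.0
    return x

# toy example: recover a 3-sparse vector from 40 random measurements
rng = np.random.default_rng(0)
n, m, k = 100, 40, 3
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 17, 42]] = [1.0, -2.0, 1.5]
y = Phi @ x_true
x_hat = iht(y, Phi, k)
```

The appeal of IHT is exactly what makes it a good benchmarking reference: each iteration is one matrix-vector product pair plus a sort, so it is hard for a more elaborate (e.g. Bayesian) scheme to beat on raw speed.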
Today, we also have Rachel Ward's thesis entitled Freedom through Imperfection: Exploiting the flexibility offered by redundancy in signal processing. The abstract reads:
This thesis consists of four chapters. The first two chapters pertain to the design of stable quantization methods for analog to digital conversion, while the third and fourth chapters concern problems related to compressive sensing. In the first chapter, we study the \beta-encoder and golden ratio encoder, and show that these quantization schemes are robust with respect to uncertainty in the multiplication element of their implementation, whereby x \mapsto \beta x. In particular, we use a result from analytic number theory to show that the possibly unknown value of \beta can be reconstructed as the unique root of a power series with coefficients in \{-1, 0, 1\} obtained from the quantization output of test input x and its negative, -x. The focus of our attention in Chapter 2 is Sigma Delta (\Sigma \Delta) modulation, a coarse quantization method in analog to digital conversion that is widely used for its simplicity of implementation and robustness to component imperfections. A persistent problem in \Sigma \Delta quantization of audio signals is the occurrence of undesirable tones arising from periodicities in the bit output; these tones are particularly intolerable in the space between successive audio tracks. As one of the contributions of this thesis, we show that the standard second order 1-bit \Sigma \Delta scheme can be modified so as to eliminate such periodicities, and this modification can be achieved without sacrificing accuracy or introducing significant complexity. The emerging area of compressed sensing guarantees that sufficiently sparse signals can be recovered from significantly fewer measurements than their underlying dimension suggests, based on the observation that an N-dimensional real-valued vector can be reconstructed efficiently to within a factor of its best k-term approximation by taking m = O(k log N) measurements, or inner products. However, the quality of such a reconstruction is not assured if the underlying sparsity level of the signal is unknown.
In Chapter 3, we show that sharp bounds on the errors between a signal and p estimates (corresponding to p different sparsity hypotheses, say) can be achieved by reserving only O(log p) measurements of the total m measurements for this purpose. The proposed technique is reminiscent of cross validation in statistics and learning theory, and theoretical justification in the context of compressed sensing is provided by the Johnson-Lindenstrauss lemma. In Chapter 4, we study the relation between nonconvex minimization problems arising in compressed sensing and those known as free-discontinuity problems, which are often encountered in the areas of image segmentation and edge detection. We adapt recent thresholding techniques in the area of sparse recovery to a class of nonconvex functionals that contains both compressed sensing and free-discontinuity-type problems. Along the way, we establish a connection between these two types of problems, using the notion of Gamma-convergence, and gain new insights into the minimizing solutions of such functionals.
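To make the cross-validation idea of Chapter 3 concrete, here is a toy Python sketch. This is my own illustration, not the thesis code: a handful of held-out measurements score candidate reconstructions obtained under different sparsity hypotheses, and the candidate with the smallest held-out residual wins. The candidate sparsity levels and the IHT-style solver producing the p estimates are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, m_cv = 100, 60, 10          # m_cv measurements held out for validation

# random Gaussian measurements, split into recovery and validation parts
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 30, 71]] = [2.0, -1.0, 1.5]
y = Phi @ x_true
Phi_rec, y_rec = Phi[:-m_cv], y[:-m_cv]
Phi_cv,  y_cv  = Phi[-m_cv:], y[-m_cv:]

def best_k_term(y, Phi, k, n_iter=300):
    """Toy recovery under a sparsity-k hypothesis (plain IHT)."""
    step = 0.9 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + step * Phi.T @ (y - Phi @ x)
        x[np.argsort(np.abs(x))[:-k]] = 0.0      # keep k largest entries
    return x

# p candidate sparsity hypotheses, each scored on the held-out measurements
candidates = {k: best_k_term(y_rec, Phi_rec, k) for k in (1, 3, 10, 30)}
scores = {k: np.linalg.norm(y_cv - Phi_cv @ x) for k, x in candidates.items()}
k_best = min(scores, key=scores.get)
```

The point of the chapter, as I read the abstract, is that the held-out residual is a faithful proxy for the true reconstruction error with only O(log p) validation measurements, a fact guaranteed by the Johnson-Lindenstrauss lemma.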
Finally, I just found this on the interwebs: Lossy Speech Compression Via Compressed Sensing-Based Kalman Filtering by Avishy Carmi, Dimitri Kanevsky, and Bhuvana Ramabhadran. The abstract reads:
We present a new algorithm for lossy speech compression. The new algorithm is based on a simple technique for embedding a compressed sensing mechanism within a conventional Kalman filter. As such, it is capable of constructing compressed representations using significantly fewer samples than is usually considered necessary.
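The abstract does not say how the embedding works, so the following Python sketch is only my guess at the flavor of such a scheme, not the authors' algorithm: a standard linear Kalman update followed by a soft-threshold shrinkage of the state estimate to promote sparsity. All dimensions, noise levels, and the threshold are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink entries toward zero by t; promotes sparsity."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_kalman_step(x, P, y, H, R, Q, thresh=0.05):
    """One predict/update cycle of a linear Kalman filter (random-walk
    dynamics), followed by soft thresholding of the state estimate."""
    # predict step (identity dynamics): covariance inflates by Q
    P = P + Q
    # standard Kalman update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (y - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    # sparsity-promoting shrinkage of the updated estimate
    return soft_threshold(x, thresh), P

# toy example: track a sparse 50-dim state from 20 noisy projections
rng = np.random.default_rng(2)
n, m = 50, 20
H = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[4, 20]] = [1.0, -1.0]
x, P = np.zeros(n), np.eye(n)
Q, R = 1e-4 * np.eye(n), 1e-3 * np.eye(m)
for _ in range(30):
    y = H @ x_true + 0.03 * rng.standard_normal(m)
    x, P = sparse_kalman_step(x, P, y, H, R, Q)
```

The appeal of this kind of construction is that the Kalman machinery handles the temporal dynamics and noise while the shrinkage step keeps the representation compressible, which is presumably what lets the filter get away with fewer samples.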