Monday, December 14, 2009

CS: This week's long entry



It's Monday. Before you gulp that morning cup of joe, you need to read this Fake Steve blog entry first; otherwise you might spill all that coffee on your new LCD screen. Now that you are back, bada bing bada boom, here is the long post of the week. First, I'll be at MIA'09; the hashtag for it on Twitter will be #MIA09. Gabriel Peyre tells me there will be free wifi. Woohoo.

Second, a hat tip to readers of the blog: Daryl Lim, Ben Moran and Tanish Agrawal.

Daryl Lim (who, incidentally, is applying to graduate school in the U.S. for Fall 2010) let me know the following:
Just letting you know that I found a couple of errors in the CoSaMP algorithm on Nuit Blanche (cosamp2.m in the MMA14 archive)

Before being fixed, it only worked for signals that were positive (like the test images) because a modulus function was applied in the least squares estimation step (where it should not have been)

I have fixed it so it works for real valued signals now. Attached is the modified file.

By the way, your blog rocks.

The edited version is called cosamp3.m and is available here. Good call, Daryl; now that you have been featured on Nuit Blanche, I am sure people will look at your graduate application with additional interest. By the way, I agree with you, Daryl: Nuit Blanche rocks!
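For readers curious about what the fix amounts to, here is a minimal NumPy sketch of the CoSaMP iteration; it is an illustration in the spirit of Daryl's correction, not the actual cosamp3.m. The faulty version pruned the magnitudes of the least squares solution into the estimate, discarding the signs, which is why only nonnegative signals survived.

import numpy as np

def cosamp(Phi, y, K, n_iter=30):
    # CoSaMP (Needell & Tropp): minimal sketch, not the posted cosamp3.m.
    m, n = Phi.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_iter):
        proxy = Phi.T @ r                          # correlate residual with atoms
        omega = np.argsort(np.abs(proxy))[-2*K:]   # 2K largest correlations
        T = np.union1d(omega, np.flatnonzero(x))   # merge with current support
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]
        # The reported bug: pruning np.abs(b) into x discards the signs, so
        # only nonnegative signals were recovered.  Keep signed entries instead:
        keep = np.argsort(np.abs(b))[-K:]          # K largest magnitudes ...
        x = np.zeros(n)
        x[keep] = b[keep]                          # ... with their signs intact
        r = y - Phi @ x
        if np.linalg.norm(r) < 1e-9 * np.linalg.norm(y):
            break
    return x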

Ben Moran let me know that the Python code solving the Sudoku problem with l1-minimization is here. Thanks, Ben!
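For the curious, the l1 trick amounts to a linear program: encode the grid as a 729-entry indicator vector, write the Sudoku rules and the clues as linear equality constraints, and minimize the l1 norm, which for a nonnegative vector is just its sum. Here is a SciPy sketch of that formulation; it is not Ben's code, and the clue list below is a placeholder.

import numpy as np
from scipy.optimize import linprog

def idx(r, c, k):                     # x[idx] = 1  <=>  cell (r,c) holds digit k+1
    return 81 * r + 9 * c + k

rows = []
def constraint(cells):                # one row of A: the listed indicators sum to 1
    a = np.zeros(729)
    a[cells] = 1
    rows.append(a)

for r in range(9):
    for c in range(9):
        constraint([idx(r, c, k) for k in range(9)])           # one digit per cell
for k in range(9):
    for r in range(9):
        constraint([idx(r, c, k) for c in range(9)])           # digit once per row
    for c in range(9):
        constraint([idx(r, c, k) for r in range(9)])           # digit once per column
    for br in range(3):
        for bc in range(3):
            constraint([idx(3*br + i, 3*bc + j, k)
                        for i in range(3) for j in range(3)])  # digit once per box

clues = [(0, 0, 5), (1, 3, 2)]        # placeholder (row, col, digit), zero-based
for r, c, d in clues:
    constraint([idx(r, c, d - 1)])

A = np.vstack(rows)
b = np.ones(A.shape[0])
# l1 minimization reduces to an LP here because x >= 0: min sum(x) s.t. Ax = b.
# With enough clues the LP solution is integral and spells out the grid.
res = linprog(np.ones(729), A_eq=A, b_eq=b, bounds=(0, 1), method="highs")
grid = np.argmax(res.x.reshape(9, 9, 9), axis=2) + 1
print(grid)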

Tanish Agrawal asked a very interesting question (one that, in my view, is not asked often enough):
As part of my research work, I am working on compressed sensing for OFDM signals in UWB. I have been following some papers from the Rice University DSP repository.

In particular, I have some doubts regarding the following paper: 'Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals' by J. Tropp et al. They discuss implementing a random demodulator. In the part where they propose a matrix for the accumulate-and-dump sampler (page 5), it works by adding consecutive entries, for which one needs to have all the samples. So how does it work at a sub-Nyquist rate when we still need to collect all the samples in order to integrate them?

Could you please take some time to help me understand this?

To which I responded:
The number of eventual measurements, i.e. linear combinations of samples, is much smaller than what the Nyquist rate would dictate. I agree with you that the signal is being sampled at a high rate.

The issue that needs to be understood is that there is instrumentation for which acquiring those linear combinations of samples is easier than acquiring each and every one of those samples.

In this case, the random demodulator is used to acquire signals that are in the GHz range. While sampling at that high rate (i.e. taking each sample and storing it) is not currently feasible, it is possible to flip the polarity of the sensors at a fast enough rate (GHz) to implement the demodulator proposed in the paper. In effect, in this instance, compressive sensing enables a new capability that could not exist otherwise.

I think Tanish's question is a central one, in that using the terms subsampling or sub-Nyquist is an impediment to understanding that the new "samples" are actually linear combinations of "normal" samples, and there is nothing magic: we cannot have superresolution with compressive sensing if the hardware does not have the capability to "touch" that superresolution in the first place. Because there are few systems where this subsampling is an obvious enabler, we need to think about new technologies that have not been implemented yet. It is also important to have a living document on the current technologies implementing compressive sensing, explaining how different they are from the current state of the art. Thanks, Tanish!
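To make the point concrete, here is a small NumPy sketch of the accumulate-and-dump measurement described in the Tropp et al. paper: the signal is mixed with a ±1 chipping sequence and integrated over blocks, so the device outputs R numbers per frame even though the polarity flips happen at the Nyquist rate W. The parameter values below are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
W, R = 512, 64                 # Nyquist rate W, sampling rate R (R divides W)
L = W // R                     # chips accumulated per low-rate sample

eps = rng.choice([-1.0, 1.0], size=W)        # polarity-flipping chipping sequence
t = np.arange(W)
x = np.zeros(W)                              # toy sparse-in-frequency signal
for f in rng.choice(W // 2, 3, replace=False):
    x += np.cos(2 * np.pi * f * t / W)

# y[k] = sum of eps*x over the k-th block: integrate-and-dump after mixing.
# The hardware never stores the W Nyquist samples; it only flips polarity at
# rate W and integrates, outputting R numbers per frame.
y = (eps * x).reshape(R, L).sum(axis=1)

# Equivalently y = Phi @ x with an R-by-W matrix: each row sums L signed chips.
Phi = np.zeros((R, W))
for k in range(R):
    Phi[k, k*L:(k+1)*L] = eps[k*L:(k+1)*L]
assert np.allclose(y, Phi @ x)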

And finally, David Smith produced a blog entry entitled Because it's Friday: detecting Cylons ... with group testing. Let us recall that group testing is connected to compressive sensing.
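As a reminder of that connection: group testing recovers a sparse set of "defectives" from pooled (OR-ed) measurements, much as compressive sensing recovers a sparse vector from few linear ones. Here is a toy Python sketch for the simplest case, a single defective located with a logarithmic number of pooled tests (the function names are mine):

import math

def find_one_defective(is_defective, n):
    # Locate a single defective among n items with ceil(log2 n) pooled tests.
    # Test j pools every item whose index has bit j set; the pattern of
    # positive tests spells out the defective's index in binary.
    bits = math.ceil(math.log2(n))
    index = 0
    for j in range(bits):
        pool = [i for i in range(n) if (i >> j) & 1]
        if any(is_defective(i) for i in pool):   # one pooled test
            index |= 1 << j
    return index

# Toy run: item 11 of 16 is the lone "Cylon"; 4 tests instead of 16.
print(find_one_defective(lambda i: i == 11, 16))   # -> 11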

Now here is what I found on the interwebs:

Performance analysis for sparse support recovery by Gongguo Tang, Arye Nehorai. The abstract reads:
The performance of estimating the common support for jointly sparse signals based on their projections onto lower-dimensional space is analyzed. Support recovery is formulated as a multiple-hypothesis testing problem. Both upper and lower bounds on the probability of error are derived for general measurement matrices, by using the Chernoff bound and Fano's inequality, respectively. The upper bound shows that the performance is determined by a quantity measuring the measurement matrix incoherence, while the lower bound reveals the importance of the total measurement gain. The lower bound is applied to derive the minimal number of samples needed for accurate direction-of-arrival (DOA) estimation for a sparse representation based algorithm. When applied to Gaussian measurement ensembles, these bounds give necessary and sufficient conditions for a vanishing probability of error for majority realizations of the measurement matrix. Our results offer surprising insights into sparse signal recovery. For example, as far as support recovery is concerned, the well-known bound in Compressive Sensing with the Gaussian measurement matrix is generally not sufficient unless the noise level is low. Our study provides an alternative performance measure, one that is natural and important in practice, for signal recovery in Compressive Sensing and other application areas exploiting signal sparsity.
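The bounds themselves require the paper, but the quantity being bounded, the probability of failing to recover the support, is easy to estimate empirically. Here is a small Monte Carlo sketch with a Gaussian measurement ensemble, using OMP as a stand-in recovery algorithm; all parameter choices here are mine, not the paper's.

import numpy as np

rng = np.random.default_rng(1)

def omp_support(A, y, k):
    # Greedy OMP: return the k-atom support it selects.
    r, S = y.copy(), []
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.T @ r))))
        sol = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
        r = y - A[:, S] @ sol
    return set(S)

n, k, trials = 256, 8, 200
for m in (32, 64, 96, 128):
    errors = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        supp = rng.choice(n, k, replace=False)
        x = np.zeros(n)
        x[supp] = rng.standard_normal(k) + 2 * np.sign(rng.standard_normal(k))
        y = A @ x + 0.01 * rng.standard_normal(m)   # low-noise regime
        errors += (omp_support(A, y, k) != set(supp))
    print(m, "measurements -> empirical error rate", errors / trials)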

On Reducing the Complexity of Tone-Reservation Based PAPR Reduction Techniques by Compressive Sensing by Ebrahim B. Al-Safadi and Tareq Y. Al-Naffouri. The abstract reads:
In this paper, we describe a novel design of a Peak-to-Average-Power-Ratio (PAPR) reducing system, which exploits the relative temporal sparsity of Orthogonal Frequency Division Multiplexed (OFDM) signals to detect the positions and amplitudes of clipped peaks, by partial observation of their frequency content at the receiver. This approach uses recent advances in reconstruction of sparse signals from rank-deficient projections using convex programming collectively known as compressive sensing. Since previous work in the literature has focused on using the reserved tones as spectral support for optimum peak-reducing signals in the time-domain [5], the complexity at the transmitter was always a problem. In this work, we alternatively use extremely simple peak-reducing signals at the transmitter, then use the reserved tones to detect the peak-reducing signal at the receiver by a convex relaxation of an otherwise combinatorially prohibitive optimization problem. This in effect completely shifts the complexity to the receiver and drastically reduces it from a function of N (the number of subcarriers in the OFDM signal), to a function of m (the number of reserved tones) which is a small subset of N.
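A toy version of the idea, with the caveat that I am using a real-valued DCT in place of the paper's complex OFDM machinery: clip the time-domain symbol at the transmitter (the "extremely simple" peak reduction), and let the receiver recover the sparse clipping distortion by l1 minimization from its values on the reserved tones alone. All sizes below are my choices.

import numpy as np
from scipy.fft import dct, idct
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, m, A = 256, 48, 2.2                       # carriers, reserved tones, clip level

reserved = rng.choice(N, m, replace=False)   # reserved-tone indices carry no data
X = rng.choice([-1.0, 1.0], N)               # toy real "symbols"
X[reserved] = 0.0                            # transmitter sends nothing there
s = idct(X, norm="ortho")                    # time-domain symbol (high PAPR)

s_clip = np.clip(s, -A, A)                   # trivially simple peak reduction
c = s_clip - s                               # sparse clipping distortion

# On reserved tones the data is zero, so the receiver observes exactly the
# distortion's spectrum there: m linear measurements of the sparse vector c.
b = dct(s_clip, norm="ortho")[reserved]
B = dct(np.eye(N), norm="ortho", axis=0)[reserved, :]   # m-by-N sensing rows

# Basis pursuit as an LP via the standard positive/negative split c = u - v.
obj = np.ones(2 * N)
res = linprog(obj, A_eq=np.hstack([B, -B]), b_eq=b,
              bounds=(0, None), method="highs")
c_hat = res.x[:N] - res.x[N:]
print(np.max(np.abs(c_hat - c)))             # small when recovery succeeds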

On the Exact Space Complexity of Sketching and Streaming Small Norms by Daniel M. Kane, Jelani Nelson, David P. Woodruff. The abstract reads:
We settle the 1-pass space complexity of (1 ± ε)-approximating the Lp norm, for real p with 1 < p < 2, of a length-n vector updated in a length-m stream with updates to its coordinates. We assume the updates are integers in the range [-M, M]. In particular, we show the space required is O(ε^(-2) log(mM) + log log(n)) bits. Our result also holds for 0 < p < 1; although Lp is not a norm in this case, it remains a well-defined function. Our upper bound improves upon previous algorithms of [Indyk, JACM '06] and [Li, SODA '08]. This improvement comes from showing an improved derandomization of the Lp sketch of Indyk by using k-wise independence for small k, as opposed to using the heavy hammer of a generic pseudorandom generator against space-bounded computation such as Nisan's PRG. Our lower bound improves upon previous work of [Alon-Matias-Szegedy, JCSS '99] and [Woodruff, SODA '04], and is based on showing a direct sum property for the 1-way communication of the gap-Hamming problem.
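For context, the Lp sketch of Indyk that the paper derandomizes is built on p-stable distributions. Here is a tiny NumPy illustration for p = 1, where each sketch coordinate is a Cauchy projection and the median of the absolute sketch values estimates the l1 norm; this dense-matrix toy deliberately ignores the space issues that are the paper's whole point.

import numpy as np

rng = np.random.default_rng(3)
n, k = 10_000, 200                  # vector length, sketch size

R = rng.standard_cauchy((k, n))     # 1-stable (Cauchy) projections for p = 1
sketch = np.zeros(k)

# Stream of coordinate updates (i, delta): the sketch is maintained with O(k)
# work per update and never needs x itself.
x = np.zeros(n)
for _ in range(5000):
    i, delta = rng.integers(n), rng.integers(-10, 11)
    x[i] += delta                   # kept only to check the estimate below
    sketch += delta * R[:, i]

# By 1-stability, each sketch entry ~ ||x||_1 * Cauchy, and the median of
# |standard Cauchy| is 1, so the sample median estimates the l1 norm.
print(np.median(np.abs(sketch)), np.linalg.norm(x, 1))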


And after compressed sensing for vectors and matrices (low-rank), we now have compressed sensing for multilinear forms (tensor variables): Multiarray signal processing: tensor decomposition meets compressed sensing by Lek-Heng Lim and Pierre Comon. The abstract reads:
We discuss how recently discovered techniques and tools from compressed sensing can be used in tensor decompositions, with a view towards modeling signals from multiple arrays of multiple sensors. We show that with appropriate bounds on coherence, one could always guarantee the existence and uniqueness of a best rank-r approximation of a tensor. In particular, we obtain a computationally feasible variant of Kruskal’s uniqueness condition with coherence as a proxy for k-rank. We treat sparsest recovery and lowest-rank recovery problems in a uniform fashion by considering Schatten and nuclear norms of tensors of arbitrary order and dictionaries that comprise a continuum of uncountably many atoms.
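To see why coherence is a convenient proxy, recall Kruskal's condition k_A + k_B + k_C >= 2r + 2 for uniqueness of a rank-r decomposition: k-ranks are hard to compute, but for unit-norm columns the k-rank is at least 1/mu, so a coherence computation yields a checkable (if conservative) surrogate. Here is a small NumPy sketch of that check; the inequality used is the classical one, offered only as an illustration of the proxy idea, not as the paper's exact condition.

import numpy as np

def coherence(U):
    # Largest absolute cosine between distinct columns of a factor matrix.
    V = U / np.linalg.norm(U, axis=0)
    G = np.abs(V.T @ V)
    np.fill_diagonal(G, 0)
    return G.max()

rng = np.random.default_rng(4)
r = 4
A, B, C = (rng.standard_normal((20, r)) for _ in range(3))

# Kruskal: k_A + k_B + k_C >= 2r + 2 guarantees uniqueness of the rank-r CP
# decomposition.  Since k-rank >= 1/mu for unit-norm columns, the computable
# sum of 1/coherence terms gives a conservative sufficient check.
lhs = sum(1.0 / coherence(U) for U in (A, B, C))
print(lhs, ">=", 2 * r + 2, "?", lhs >= 2 * r + 2)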

Lek-Heng Lim made two presentations at NIPS 09: Part I, "Rank aggregation via Hodge theory," and Part II, "Rank aggregation via nuclear norm minimization."

I also found three presentations:

From the Rice repository, I seem to have missed the following:

In this work, we propose algorithms to recursively and causally reconstruct a sequence of natural images from a reduced number of linear projection measurements taken in a domain that is “incoherent” with respect to the image’s sparsity basis (typically wavelet) and demonstrate their application in real-time MR image reconstruction. For a static version of the above problem, Compressed Sensing (CS) provides a provably exact and computationally efficient solution. But most existing solutions for the actual problem are either offline and non-causal or cannot compute an exact reconstruction (for truly sparse signal sequences), except using as many measurements as those needed for CS. The key idea of our proposed solution (modified-CS) is to design a modification of CS when a part of the support set is known (available from reconstructing the previous image). We demonstrate the exact reconstruction property of modified-CS on full-size image sequences using much fewer measurements than those required for CS. Greatly improved performance over existing work is demonstrated for approximately sparse signals or noisy measurements.
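The modified-CS idea lends itself to a very compact sketch: with the known part T of the support exempted from the l1 penalty, the reconstruction is an ordinary linear program. Here is a SciPy toy; the dimensions and the way T is chosen are my illustrative choices, not the paper's.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, m, k = 200, 60, 12

supp = rng.choice(n, k, replace=False)
x = np.zeros(n)
x[supp] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

T = supp[: k // 2]                    # support known from the previous frame
w = np.ones(n)
w[T] = 0.0                            # no l1 penalty on the known part

# modified-CS: min ||x restricted to T-complement||_1 s.t. A x = y,
# written as an LP with the split x = u - v, u, v >= 0.
obj = np.concatenate([w, w])
res = linprog(obj, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x)))      # near zero when recovery is exact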
