
Wednesday, January 21, 2009

CS: KF-CS code, Weighted l1 with prior, Comment on CGD, Cognitive Radio, a Postdoc, TSW-CS, Impact and Trends and a Video.

Namrata Vaswani just released two papers on a Kalman filtered reconstruction solver. The first is Analyzing Least Squares and Kalman Filtered Compressed Sensing. The abstract reads:

In recent work, we studied the problem of causally reconstructing time sequences of spatially sparse signals, with unknown and slow time-varying sparsity patterns, from a limited number of linear “incoherent” measurements. We proposed a solution called Kalman Filtered Compressed Sensing (KF-CS). The key idea is to run a reduced order KF only for the current signal’s estimated nonzero coefficients’ set, while performing CS on the Kalman filtering error to estimate new additions, if any, to the set. KF may be replaced by Least Squares (LS) estimation and we call the resulting algorithm LS-CS. In this work, (a) we bound the error in performing CS on the LS error and (b) we obtain the conditions under which the KF-CS (or LS-CS) estimate converges to that of a genie-aided KF (or LS), i.e. the KF (or LS) which knows the true nonzero sets.
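To make the KF-CS/LS-CS idea concrete, here is a minimal sketch of one LS-CS iteration in Python/NumPy. This is not the released KF-CS code: the detection step below is a simple correlate-and-threshold proxy (my assumption, for brevity), whereas the paper runs an actual CS reconstruction on the filtering error, and support deletions are omitted.

```python
import numpy as np

def ls_cs_step(A, y, support, tau):
    """One illustrative LS-CS iteration.

    A       : (m, n) measurement matrix
    y       : (m,) current measurement vector
    support : list of currently estimated nonzero indices
    tau     : detection threshold (assumed; the paper runs CS on the residual)
    """
    n = A.shape[1]
    x_hat = np.zeros(n)

    if support:
        # Reduced-order estimate: least squares restricted to the current support.
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat[support] = sol

    # Filtering error (residual): carries the energy of any new additions to the set.
    r = y - A @ x_hat

    # Crude detection proxy: correlate the residual with the unused columns
    # and threshold. (KF-CS/LS-CS performs CS on r here instead.)
    corr = np.abs(A.T @ r)
    corr[support] = 0.0
    new_idx = np.flatnonzero(corr > tau)
    support = sorted(set(support) | set(new_idx.tolist()))

    # Re-estimate on the enlarged support.
    if support:
        x_hat = np.zeros(n)
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat[support] = sol
    return x_hat, support
```

In the full KF-CS algorithm, the least-squares step is replaced by a reduced-order Kalman filter over the current support, which exploits the slow time variation of the signal.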

The second is Real-time Dynamic MR Image Reconstruction using Kalman Filtered Compressed Sensing by Chenlu Qiu, Wei Lu and Namrata Vaswani. The abstract reads:
In recent work, Kalman Filtered Compressed Sensing (KF-CS) was proposed to causally reconstruct time sequences of sparse signals, from a limited number of “incoherent” measurements. In this work, we develop the KF-CS idea for causal reconstruction of medical image sequences from MR data. This is the first real application of KF-CS and is considerably more difficult than simulation data for a number of reasons, for example, the measurement matrix for MR is not as “incoherent” and the images are only compressible (not sparse). Greatly improved reconstruction results (as compared to CS and its recent modifications) on reconstructing cardiac and brain image sequences from dynamic MR data are shown.
The code implementation can be found on the KF-CS page. I wonder how this solver differs from the ones in the more recent papers from IBM featured last week. Kudos to Namrata Vaswani for releasing the implementation.

Here is an interesting new preprint entitled Weighted $\ell_1$ Minimization for Sparse Recovery with Prior Information by M. Amin Khajehnejad, Weiyu Xu, Salman Avestimehr and Babak Hassibi. The abstract reads:
In this paper we study the compressed sensing problem of recovering a sparse signal from a system of underdetermined linear equations when we have prior information about the probability of each entry of the unknown signal being nonzero. In particular, we focus on a model where the entries of the unknown vector fall into two sets, each with a different probability of being nonzero. We propose a weighted $\ell_1$ minimization recovery algorithm and analyze its performance using a Grassman angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the size of the two sets, the probabilities of being non-zero) so that an iid random Gaussian measurement matrix along with weighted $\ell_1$ minimization recovers almost all such sparse signals with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We also provide simulations to demonstrate the advantages of the method over conventional $\ell_1$ optimization.
I note the following interesting bit that gives some context on how this new approach differs:

The analysis uses the high-dimensional geometry techniques first introduced by Donoho and Tanner [1], [3] (e.g., Grassman angles) to obtain sharp thresholds for compressed sensing. However, rather than use the neighborliness condition used in [1], [3], we find it more convenient to use the null space characterization of Xu and Hassibi [4], [13]. The resulting Grassmanian manifold approach is a general framework for incorporating additional factors into compressed sensing: in [4] it was used to incorporate measurement noise; here it is used to incorporate prior information and weighted ℓ1 optimization. Our analytic results allow us to compute the optimal weights for any p1, p2, n1, n2. We also provide simulation results to show the advantages of the weighted method over standard ℓ1 minimization.
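For concreteness, weighted $\ell_1$ minimization, $\min_x \sum_i w_i |x_i|$ subject to $Ax = y$, can be cast as a linear program. The sketch below is my own toy formulation using scipy.optimize.linprog; the two-block weights mirror the paper's two-set model, but the values are placeholders, not the optimal weights the paper derives.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    """Solve min_x sum_i w_i |x_i|  s.t.  A x = y, as a linear program.

    Standard trick: introduce u >= |x| and minimize w^T u.
    LP variables are z = [x; u], of length 2n.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])            # objective: w^T u
    I = np.eye(n)
    # Inequalities  x - u <= 0  and  -x - u <= 0  encode |x_i| <= u_i.
    A_ub = np.block([[I, -I], [-I, -I]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])         # A x = y
    bounds = [(None, None)] * n + [(0, None)] * n   # x free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:n]

# Two-set prior (placeholder sizes and weights): entries in the first set
# are more likely to be nonzero, so they receive a smaller weight.
rng = np.random.default_rng(0)
n, m = 200, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(100, 15, replace=False)] = rng.standard_normal(15)
y = A @ x_true
w = np.concatenate([0.3 * np.ones(100), 1.0 * np.ones(100)])
x_rec = weighted_l1(A, y, w)
```

Setting all weights equal recovers standard $\ell_1$ minimization, which makes the comparison in the paper's simulations easy to reproduce on toy data.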

On a different note, Dirk Lorenz, a reader of this blog, kindly emailed me the following:
...when I checked your entry on "A Coordinate Gradient Descent Method for l_1-regularized Convex Minimization" on Nuit Blanche, I started to check the paper in detail. Well, I do not agree on your remark that it seems that CGD is faster than any other method. At least from their tables, they report always a quite large objective value for CGD. I guess that other methods may be much faster when solving the problem just to that accuracy.
After some back-and-forth e-mail discussion, he and I agree that the examples shown in the paper point to an anecdotal fact: the CGD solver seems to run much faster than some of the fastest reconstruction codes, such as GPSR_BB and FPC, on the examples featuring a compressed sensing setup (partial Fourier and Gaussian matrices, Tables 1-4). In some cases it is 10 times faster. At the same time, the same solver seems to do worse than the others on the blurring example. As I said, it is anecdotal, but for somebody who sees things through the prism of CS, it sure is intriguing. I realize that mathematicians hold other, stronger standards. Thanks Dirk for pointing this out!

On a related note, it's a hunch, but I would not be overly surprised if, once we do have CS imagers, they turn out to have an advantage over wavefront coding imaging systems: the mixing in compressive sensing imagers would be more like the Gaussian mixing mentioned above than the specific blurring kernel instantiated by the cubic-PM phase mask used in wavefront coding. But then again, I am no specialist like Ramesh.



Dirk Lorenz also released the slides of his presentation at the SIAM Conference on Imaging Science in San Diego entitled: Semismooth Newton and active set methods for sparse reconstruction.

In a different area, Martin Braun, Jens Elsner and Friedrich Jondral just released another use of compressive sensing in cognitive radios in Signal Detection for Cognitive Radios with Smashed Filtering. The abstract reads:
Compressed Sensing and the related recently introduced Smashed Filter are novel signal processing methods, which allow for low-complexity parameter estimation by projecting the signal under analysis on a random subspace. In this paper the Smashed Filter of Davenport et al. is applied to a principal problem of digital communications: pilot-based time offset and frequency offset estimation. An application, motivated by current Cognitive Radio research, is wide-band detection of a narrowband signal, e.g. to synchronize terminals without prior channel or frequency allocation. Smashed Filter estimation and maximum likelihood-based, uncompressed estimation for a signal corrupted by additive white Gaussian noise (Matched Filter estimation) are compared. Smashed Filtering adds a degree of freedom to signal detection and estimation problems, which effectively allows to trade signal-to-noise ratio against processing bandwidth for arbitrary signals.
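The smashed filter is, in essence, a matched filter run in the compressed domain: one correlates the compressed measurement against compressed versions of the candidate templates. Here is a minimal toy sketch of time-offset estimation along those lines (my own setup with assumed parameters, not the authors' estimator):

```python
import numpy as np

def smashed_filter_offset(y_comp, Phi, pilot, n):
    """Estimate a pilot's time offset from compressed measurements.

    y_comp : (m,) compressed received signal, y_comp = Phi @ s
    Phi    : (m, n) random projection matrix
    pilot  : known pilot sequence (length <= n)
    n      : length of the uncompressed signal window
    """
    scores = []
    for tau in range(n - len(pilot) + 1):
        template = np.zeros(n)
        template[tau:tau + len(pilot)] = pilot
        t_comp = Phi @ template
        # Normalized correlation in the compressed domain: the analogue
        # of sliding a matched filter over the uncompressed signal.
        scores.append(abs(y_comp @ t_comp) / np.linalg.norm(t_comp))
    return int(np.argmax(scores))

# Toy example: a pilot buried in additive white Gaussian noise.
rng = np.random.default_rng(1)
n, m, true_tau = 256, 64, 90
pilot = rng.choice([-1.0, 1.0], size=32)
s = 0.1 * rng.standard_normal(n)
s[true_tau:true_tau + 32] += pilot
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random subspace projection
tau_hat = smashed_filter_offset(Phi @ s, Phi, pilot, n)
```

The trade-off mentioned in the abstract is visible here: shrinking m lowers the processing bandwidth at the cost of correlation SNR.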
Other reports from the Karlsruhe group can be found here. Cognitive radios have been added to the Compressive Sensing Hardware page even though I do not know of a specific hardware implementation yet. Let us hope it finds its way as a module of some sort into the GNU Radio project. If you are interested in cognitive radios, compressive sensing and sensor networks, there is an announcement for a postdoc position at Computer Science & Engineering, Electrical Engineering, Mathematics & Statistics at the University of Vigo (Doctoral University) in Spain. More can be found on the Compressive Sensing Jobs pages.

The Bayesian Compressive Sensing page has been updated with a newer version of TSW-CS. It can be downloaded at tswcs.zip (last updated: Jan. 19, 2009). It would not hurt if a version number were associated with each of these releases. Again, all these solvers are listed in the reconstruction section of the Big Picture.

As some of you know, I am interested in the real impact of this blog. One way to gauge it is through "clean" statistics, i.e. statistics showing how this site (and no other site) brings folks to read a specific paper or download an application. About two weeks ago, Mark Iwen made available the code for his Ann Arbor Fast Fourier Transform, i.e. "a Fast Fourier Transform (FFT) method for frequency sparse functions. Recovers the k most energetic frequencies hidden in a bandwidth-N function in k*polylog(N) time."

This is an important algorithm. This "clean trial" yielded the following stats, in a week when most people had not yet gotten back to school (it was a low-stats period for the blog too): about 50 hits the first day and a sustained 20 hits per day for five days. As usual, the tail roughly counts, cumulatively, for as much as the first day. I think those numbers would be higher in a higher-traffic period.

The Ann Arbor Fast Fourier Transform is available for download at SourceForge. The implementation is described in Empirical Evaluation of a Sub-Linear Time Sparse DFT Algorithm.
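To see what the algorithm computes, here is the naive dense baseline it improves upon: take a full FFT and keep the k most energetic frequencies. AAFFT recovers (approximately) the same k (frequency, coefficient) pairs in k*polylog(N) time using sublinear sampling; this snippet is only a reference point, not their method.

```python
import numpy as np

def top_k_frequencies(x, k):
    """Dense O(N log N) baseline: the k most energetic frequencies of x.

    AAFFT returns (approximately) the same k pairs without ever
    touching all N samples of the signal.
    """
    X = np.fft.fft(x) / len(x)
    idx = np.argsort(np.abs(X))[-k:][::-1]   # k largest-magnitude bins
    return [(int(f), X[f]) for f in idx]

# A frequency-sparse test signal with k = 3 active tones.
N = 1 << 12
t = np.arange(N)
x = (2.0 * np.exp(2j * np.pi * 50 * t / N)
     + 1.5 * np.exp(2j * np.pi * 333 * t / N)
     + 1.0 * np.exp(2j * np.pi * 4000 * t / N))
print(top_k_frequencies(x, 3))
```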

The number of people reading this blog through an RSS feed is in the realm of 240. This is a little less than the number of people coming directly to the site every day (255/day), and while these two sets overlap, their intersection is not large compared to either set, since most of the folks coming to the site do so after a Google search or through Wikipedia.

Another stat that caught my eye is the large increase in people coming to check the videos / online talks: a five-fold increase in eight months and a near two-fold increase in December, thanks to the link put on the Rice repository. I think it says a lot about the future of information sharing, even in the intellectual realm. Next time you organize a workshop or a conference, you may want to think about setting aside some money for video production!
Right now, the latest videos are at the top, but I am not sure that is the best way of organizing them, as some of the earlier talks at IMA are still very important for newcomers to watch first. I should probably put more of them inside the big picture and the other static pages. YouTube also provides statistics for every video. I have looked at a small sample of them, and it looks like when a video is featured on the site, one can expect an average of 160 people to watch it. To the students who went through the pain of producing a video on the subject: please, pretty please, make sure your information on YouTube links back to your website.



To get a sense of the traffic: the video site currently sees one fifth the traffic of the big picture and one tenth that of the blog, but it is also picking up speed.


While we're at it, I mentioned before the 28 MB video presenting the paper entitled Compressive Structured Light for Recovering Inhomogeneous Participating Media by Jinwei Gu, Shree Nayar, Eitan Grinspun, Peter Belhumeur and Ravi Ramamoorthi. It is now on YouTube. Enjoy.

