
Sunday, August 31, 2008

CS Community: Impact Statistics and Related Numbers.

For those of you interested in readership statistics, I have tried to get my hands on as many numbers as possible. Without further ado, here they are:

Currently, in order to provide the content for this blog and its attendant pages:
  • there are about 500 pages being crawled every day,
  • I have, on average, one or two good discussions per week about compressive sensing with one of you or with a researcher outside this community.
  • There have been about 280 entries on Compressive Sensing, most of them featuring extremely recent preprints.
On the readership's side:
  • ~30 people are reading this blog directly in their e-mail inbox,
  • 102 people are reading this blog through Google Reader, while another 60 may be reading the RSS feed thanks to Feedburner.
  • ~180 visitors/day, or about ~5,000 visitors/month (not unique), are coming to the site, with about half of that traffic coming from search engines and from Wikipedia.
  • The Compressive Sensing LinkedIn group has 24 members since its inception a week ago.
  • The readership and linkage to Nuit Blanche have enabled it to reach a PageRank of 5, while the Big Picture site has a PageRank of 3.
  • Some people come back often to either the blog or the Big Picture site, and some stay for long periods of time (see the stats below, gathered over a year and a half).


On the impact side, I have had some widely differing accounts. This mostly stems from the fact that:
  • I link directly to papers, and Google Analytics or other counters do not do a good job of counting those hits,
  • Sometimes Nuit Blanche is not the only blog/site pointing to the page or paper,
  • People read the blog through e-mails, RSS readers, the Google cache, or the site itself.
For all these reasons, I currently estimate that about 60 to 100 people will click through to a paper listed on the blog the day it is posted. I also estimate that over time that number doubles. One thing is important to remember: once a paper is featured on the Rice Compressive Sensing repository page or on this blog, Google knows about it, and it becomes visible well before it is eventually published on a journal's site. Sometimes this may lead to a very large audience, as evidenced by the statistics of this video: it has now reached 2,950 viewers, up from 250 a month ago. I am not sure how that happened; if any of you know more about this, please let me know.

I have renovated the Big Picture on Compressive Sensing site but it needs additional cleaning and attention.

This week, LinkedIn sent me the following:

First, thank you for managing your group on LinkedIn. We sincerely appreciate the time and effort you devote to your members, and we know they value it. Together you have made Groups one of the top features on LinkedIn.

This Friday, we will be adding several much-requested features to your group:

  • Discussion forums: Simple discussion spaces for you and your members. (You can turn discussions off in your management control panel if you like.)
  • Enhanced roster: Searchable list of group members.
  • Digest emails: Daily or weekly digests of new discussion topics which your members may choose to receive. (We will be turning digests on for all current group members soon, and prompting them to set to their own preference.)
  • Group home page: A private space for your members on LinkedIn.

We're confident that these new features will spur communication, promote collaboration, and make your group more valuable to you and your members. We hope you can come by LinkedIn on Friday morning to check out the new functionality and get a group discussion going by posting a welcome message.

I am wondering how this can be used for the Compressive Sensing LinkedIn group. Initially, it would be a great way for companies and employers to advertise jobs that require an understanding of Compressive Sensing, as listed in CSjobs. Thoughts?

Saturday, August 30, 2008

CS: FRI, CS-based estimation of doubly selective channels, Deterministic Dictionaries, CC, Several talks/videos and an IEEE CfP on CS

Thanks to my webcrawler, here is what I have found in the past two days:

An article that might entice beginning electrical engineers into dabbling in compressive sensing:
A Generalized Sampling Method for Finite-Rate-of-Innovation-Signal Reconstruction by Chandra Sekhar Seelamantula, Michael Unser. The abstract reads:

The problem of sampling signals that are not admissible within the classical Shannon framework has received much attention in the recent past. Typically, these signals have a parametric representation with a finite number of degrees of freedom per time unit. It was shown that, by choosing suitable sampling kernels, the parameters can be computed by employing high-resolution spectral estimation techniques. In this paper, we propose a simple acquisition and reconstruction method within the framework of multichannel sampling. In the proposed approach, an infinite stream of nonuniformly-spaced Dirac impulses can be sampled and accurately reconstructed provided that there is at most one Dirac impulse per sampling period. The reconstruction algorithm has a low computational complexity and the parameters are computed on the fly. The processing delay is minimal—just the sampling period. We propose sampling circuits using inexpensive passive devices such as resistors and capacitors. We also show how the approach can be extended to sample piecewise-constant signals with a minimal change in the system configuration. We provide some simulation results to confirm the theoretical findings.
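For readers who want to see what the high-resolution spectral estimation step alluded to above looks like, here is a bare-bones annihilating-filter (Prony-type) toy in Python. It works on noiseless, made-up moments of two Diracs, with hypothetical locations and amplitudes, and ignores the acquisition kernel and circuit questions that are the actual subject of the paper.

```python
import numpy as np

# Toy annihilating-filter (Prony) step on noiseless, made-up "moments" of two
# Diracs; the paper's acquisition kernels and circuits are not modeled here.
K, M = 2, 8                                    # number of Diracs, number of moments
t = np.array([0.21, 0.64])                     # hypothetical Dirac locations in [0, 1)
a = np.array([1.0, -0.5])                      # hypothetical amplitudes
u = np.exp(-2j * np.pi * t)
s = np.array([np.sum(a * u**m) for m in range(M)])   # s[m] = sum_k a_k u_k^m

# Find the annihilating filter h (length K+1) such that sum_i h[i] s[m-i] = 0.
T = np.array([[s[m - i] for i in range(K + 1)] for m in range(K, M)])
h = np.linalg.svd(T)[2][-1].conj()             # nullspace (smallest right singular vector)
roots = np.roots(h)                            # the roots encode the Dirac locations
t_hat = np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))
print(t_hat)                                   # close to [0.21, 0.64] on this noiseless toy
```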
Here is an EUSIPCO paper just made available, entitled Compressed sensing based estimation of doubly selective channels using a sparsity-optimized basis expansion by Georg Tauböck and Franz Hlawatsch. The abstract reads:

Most transmission schemes for MIMO channels assume (i) a block fading channel without delay spread and (ii) availability of channel state information at the receiver. Here, we extend the space-time matrix modulation (STMM) scheme and the iterative demodulation algorithm that we introduced previously to unknown, doubly selective MIMO channels, i.e., delay-Doppler-spread MIMO channels that are unknown to the receiver. We show that the structure inherent in STMM allows perfect reconstruction of the data when transmitting over an unknown doubly selective channel (apparently, this is not currently possible with other transmission schemes). Numerical simulations demonstrate significant performance advantages of STMM over Cayley differential unitary space-time modulation.

Of relevance to dictionary building here is: On some deterministic dictionaries supporting sparsity by Shamgar Gurevich, Ronny Hadani, Nir Sochen. The abstract reads:

We describe a new construction of an incoherent dictionary, referred to as the oscillator dictionary, which is based on considerations in the representation theory of finite groups. The oscillator dictionary consists of order of p^5 unit vectors in a Hilbert space of dimension p, where p is an odd prime, whose pairwise inner products have magnitude of at most 4/sqrt(p). An explicit algorithm to construct a large portion of the oscillator dictionary is presented.
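If you want a feel for what an incoherence bound like 4/sqrt(p) means in practice, the snippet below computes the mutual coherence of an arbitrary dictionary with unit-norm columns and compares it to that bound. A random Gaussian dictionary is used as a stand-in, since I am not reproducing the oscillator construction itself.

```python
import numpy as np

# Mutual coherence check for a dictionary with unit-norm columns. A random
# Gaussian dictionary stands in for the oscillator dictionary; 4/sqrt(p) is
# the bound quoted in the abstract above.
rng = np.random.default_rng(0)
p, m = 97, 500                         # ambient dimension and number of atoms
D = rng.standard_normal((p, m))
D /= np.linalg.norm(D, axis=0)         # normalize the columns

G = np.abs(D.T @ D)                    # magnitudes of pairwise inner products
np.fill_diagonal(G, 0.0)
print("mutual coherence:", G.max(), "   4/sqrt(p):", 4 / np.sqrt(p))
```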
Here is a paper that keeps drawing my interest, i.e., I keep going back to it often. If you know how to count, you might be interested in the concept of compressed counting in A Very Efficient Scheme for Estimating Entropy of Data Streams Using Compressed Counting by Ping Li. The abstract reads:

Compressed Counting (CC) was recently proposed for approximating the αth frequency moments of data streams, especially for α ≈ 1. One direct application of CC is to estimate the entropy, which is an important summary statistic in Web/network measurement and often serves as a crucial "feature" for data mining. The Rényi entropy and the Tsallis entropy are functions of the αth frequency moments; and both approach the Shannon entropy as α → 1. Previous studies used the standard algorithm based on symmetric stable random projections to approximate the αth frequency moments and the entropy. Based on maximally-skewed stable random projections, Compressed Counting (CC) dramatically improves symmetric stable random projections, especially when α ≈ 1. This study applies CC to estimate the Rényi entropy, the Tsallis entropy, and the Shannon entropy. Our experiments on some Web crawl data demonstrate significant improvements over previous studies. When estimating the frequency moments, the Rényi entropy, and the Tsallis entropy, the improvements of CC, in terms of accuracy, appear to approach "infinity" as α → 1. When estimating Shannon entropy using Rényi entropy or Tsallis entropy, the improvements of CC, in terms of accuracy, are roughly 20- to 50-fold. When estimating the Shannon entropy from Rényi entropy, in order to achieve the same accuracy, CC only needs about 1/50 of the samples (storage space), compared to symmetric stable random projections. When estimating the Shannon entropy from Tsallis entropy, however, symmetric stable random projections cannot achieve the same accuracy as CC, even with 500 times more samples.
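The relation that makes this work, namely that both the Rényi and Tsallis entropies are simple functions of the αth frequency moments and converge to the Shannon entropy as α → 1, is easy to check numerically. The toy below is plain Python on a small hypothetical count vector; it has nothing to do with Ping Li's streaming algorithm, it only illustrates the limit.

```python
import numpy as np

# Numerical check that the Renyi and Tsallis entropies, both functions of the
# alpha-th frequency moments, converge to the Shannon entropy as alpha -> 1
# (the regime Compressed Counting is designed for).
counts = np.array([50.0, 30.0, 10.0, 5.0, 3.0, 2.0])   # hypothetical stream counts
p = counts / counts.sum()

shannon = -np.sum(p * np.log(p))
for alpha in [1.5, 1.1, 1.01, 1.001]:
    moment = np.sum(p ** alpha)                  # normalized alpha-th frequency moment
    renyi = np.log(moment) / (1.0 - alpha)
    tsallis = (1.0 - moment) / (alpha - 1.0)
    print(f"alpha={alpha}: Renyi={renyi:.4f}  Tsallis={tsallis:.4f}  Shannon={shannon:.4f}")
```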
The following is a list of talks/presentations and attendant videos relevant to compressive sensing, l1 minimization, and related subjects:

Lawrence Carin will talk about Exploiting Structure in Statistical Compressive-Sensing Inversion on Thursday, September 25, 2008, from 4:00 PM to 5:00 PM at Rice.
Location: 1070 Duncan Hall, Rice University, 6100 Main St, Houston, Texas, USA
The abstract of the presentation reads:
Compressive sensing involves performing sensing measurements based on projections, with the projection vectors typically constituted randomly. It is assumed that the underlying signal is compressible in a particular orthonormal basis, and the objective is to infer the sparse set of transform coefficients from the projection measurements. One therefore must solve an inverse problem to recover the underlying signal from the associated projection measurements. We examine this problem from a statistical standpoint, inferring a posterior distribution on the underlying signal. A focus is placed on non-parametric Bayesian formalisms, wherein several forms of structure within the data are exploited, to enhance inversion quality. This includes leveraging inter-relationships between multiple compressive-sensing measurements, as well as structure within the underlying data itself (e.g., structure within the transform coefficients, particularly for a wavelet basis). This talk will describe the underlying theory associated with the statistical methods, and will present several examples.

Videolectures.net seems to cover ICML '08, yet I could not find this presentation: Y. Qi, D. Liu, D.B. Dunson and L. Carin, Multi-Task Compressive Sensing with Dirichlet Process Priors, to appear in Int. Conf. Machine Learning, 2008. But Lawrence Carin's other ICML presentation, Hierarchical Kernel Stick-Breaking Process for Multi-Task Image Analysis, is there.

L1-based relaxations for sparsity recovery and graphical model selection in the high-dimensional regime by Martin J. Wainwright. The abstract of the talk reads:

The problem of estimating a sparse signal embedded in noise arises in various contexts, including signal denoising and approximation, as well as graphical model selection. The natural optimization-theoretic formulation of such problems involves l0 "norm" constraints (i.e., penalties on the number of non-zero coefficients), which lead to NP-hard problems in general. A natural approach is to consider the use of the l1-norm as a computationally tractable surrogate, as has been pursued in both signal processing and statistics.
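For readers who have never seen the l1 surrogate in action, here is a minimal sketch in Python: a plain ISTA (iterative soft-thresholding) loop solving a small Lasso instance with a random Gaussian matrix. All the dimensions and the penalty weight are hypothetical choices made up for this example.

```python
import numpy as np

# Minimal ISTA loop for the l1-penalized least-squares surrogate.
rng = np.random.default_rng(0)
n, d, k = 80, 200, 5                        # measurements, dimension, sparsity (made up)
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)

lam = 0.05                                  # hypothetical l1 penalty weight
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
x = np.zeros(d)
for _ in range(500):
    z = x - (A.T @ (A @ x - y)) / L         # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding step

print("relative error: %.3f" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```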

I recently came across the interesting talks of Roman Vershynin; they are very clear for an engineer like me:


Finally, here is a Call for Papers on the subject of Compressive Sensing.

Call for Papers, IEEE Signal Processing Society, IEEE Journal of Selected Topics in Signal Processing: Special Issue on Compressive Sensing. The introduction states:

At its core, the young field of compressive sensing (CS) is nothing more than the ability of simple algorithms to solve the seemingly intractable problem of finding the sparsest solution of a severely underdetermined linear system of equations, under realistic conditions. This leads immediately to new signal reconstruction methods that are successful with surprisingly few measurements, which in turn leads to signal acquisition methods that effect compression as part of the measurement process (hence “compressive sensing”). These recent realizations (though built upon prior work exploiting signal sparsity) have spawned an explosion of research yielding exciting results in a wide range of topics, encompassing algorithms, theory, and applications.
Original papers, previously unpublished and not currently under review by another journal, are solicited for this special issue. The scope of this special issue includes, but is not limited to:
  • algorithms for CS
  • CS methods that are tolerant to noise, signal non-sparsity, or measurement nonlinearity
  • measurement/sampling procedures
  • mathematical theory of CS
  • CS for multiple signals or with additional information
  • CS for analog signals
  • signal processing of compressive measurements
  • nonadaptive signal compression or streaming dataset reduction
  • hardware implementation of CS systems
  • applications of CS
Submission information is available at http://www.ece.byu.edu/jstsp. Prospective authors are required to follow the Author’s Guide for manuscript preparation of the IEEE Transactions on Signal Processing at http://ewh.ieee.org/soc/sps/tsp. Manuscripts will be peer reviewed according to the standard IEEE process.
  • Manuscript submission due: Feb. 20, 2009
  • First review completed: May 15, 2009
  • Revised manuscript due: Jul. 1, 2009
  • Second review completed: Sep. 15, 2009
  • Final manuscript due: Oct. 15, 2009

Credit: NASA/JPL/University of Arizona, Layered Deposits within Unnamed Crater in Arabia Terra (PSP_009180_1840).

Varying radioactive decay constants: Are we missing something?


From Wikipedia:
The current scientific explanation for the Earth's temperature gradient is a combination of the heat left over from the planet's initial formation, the decay of radioactive elements, and the freezing of the inner core. Another possible explanation includes the georeactor hypothesis, though this theory has not gained acceptance in the scientific community.
Hence, if the radioactive decay constants vary with location in the solar system, then the core of our planet is heating anisotropically: i.e., there are places that are colder than others, most probably inducing convection movements at the center of the Earth. Wouldn't that have been noticed by now? Let us also remember that the Radium-226 studied in this article is a decay product of U-238, and we know there is something fishy about the U-235/U-238 ratio.

A reader on Slashdot suggests that a bias might come from a shift in frequency due to the power load on the electrical grid:

...As someone who made the equipment that the scientist probably used to do the counting, I have one possible explanation. Most Multichannel Analyzers (MCAs) of the time used a line clock to determine the time. They assume that the power company delivered 60Hz power (or 50 Hz in Europe). This frequency was almost never precise but varied by .1 to .2% (one plant where I measured the frequency put out 58.8Hz for example, a real mess for us) from time to time. A systemic variation due to power loads (heating in winter/ AC in summer) could easily bias the power frequency by about the right amount with the right periodicity. The universe might well be safe...


This is not entirely far-fetched. One of the things one can notice in the graphs of the article is the apparent shift between the oscillations. In order to get a better idea, I went ahead and downloaded some of the data for the French electrical grid in 2007 to check the potential bias mentioned by the Slashdot commenter, and as one can see, there is indeed some peaking occurring away from January 1.
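The mechanism is easy to quantify: if a counting instrument defines its "second" by counting mains cycles, a seasonal deviation of the grid frequency from its nominal value translates one-for-one into a spurious seasonal modulation of the apparent count rate. The little sketch below assumes, purely for illustration, a plus-or-minus 0.1% seasonal swing around 60 Hz; the numbers are assumptions, not grid data.

```python
import numpy as np

# Illustration of the commenter's mechanism (assumed numbers, not real data):
# a counter that defines its "second" by counting 60 mains cycles converts a
# seasonal grid-frequency deviation into a spurious seasonal modulation of the
# apparent count rate.
days = np.arange(365)
f_nominal = 60.0
f_actual = f_nominal * (1.0 + 0.001 * np.cos(2 * np.pi * days / 365.0))  # assumed +/-0.1% swing

true_rate = 1000.0                               # true decays per real second
line_second = f_nominal / f_actual               # duration of a "line-clock second" in real seconds
apparent_rate = true_rate * line_second          # counts registered per line-clock second

swing = 100 * (apparent_rate.max() - apparent_rate.min()) / true_rate
print("peak-to-peak modulation of the apparent rate: %.2f%%" % swing)   # about 0.2% here
```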

Furthermore, in the article, one can see the following item:
This possibility is supported by the data we report in Ref. [18] in which we present evidence for the possible detection of a change in the decay rate of 54Mn during the solar flare of 13 December 2006. As noted in Ref. [18], the coincidence in time between the change in the 54Mn counting rate and the solar flare, along with other observations, is consistent with a mechanism based on a change in phi_nu during the solar flare.
Again, this is entirely consistent with how geomagnetic storms influence the power grid, and it is one of the reasons NOAA provides this kind of information to electric utilities.

Friday, August 29, 2008

CS: Insight in Performance limits for jointly sparse signals via graphical models.


Here is some additional insight on the paper entitled Performance limits for jointly sparse signals via graphical models by Marco Duarte, Shriram Sarvotham, Dror Baron, Michael Wakin, and Richard Baraniuk. I have covered it here and here before, but the poster is new.

Furthermore, Dror Baron, who is currently doing compressed sensing in Israel, wanted to point out the following insight, which will hopefully push more people into thinking of CS as a way of implementing distributed systems:

The interesting point is that in the world of noiseless measurement of strictly sparse signals, dimensionality plays a volumetric role analogous to entropy in the data compression world. We have shown a generic bound that applies in the following settings:
  1. Single signal CS: one signal being measured noiselessly.
  2. Joint CS: one encoder looks at an ensemble of signals and measures them jointly (together), the CS measurements are multiplication of a matrix by the vector of the entire ensemble. Not surprisingly, the bounds here are similar to single signal CS.
  3. Distributed CS: again there are multiple signals, but here they are measured in a distributed manner, and each is encoded (measured) differently by a different measurement matrix. We provide a region describing the number of noiseless measurements required for each of the signals. Surprisingly, the sum of the number of measurements is very similar to before.

In my view, the real contribution here is that in the idealized world of noiseless strictly sparse signals, the dimensionality of the signal ensemble under evaluation provides a precise characterization of the number of measurements required. This really enhances and magnifies the role that dimensionality plays in these systems. Although this is perhaps an idealized application, it can provide insights into noisy measurement systems, which is of interest to me.

Thursday, August 28, 2008

CS: Sparse MRI, Theory and Application of CS, Model-based CS

Michael Lustig just released his Ph.D. thesis entitled SPARSE MRI.

The Rice repository has two new entries:

Richard Baraniuk just made his "tutorial" presentation at EUSIPCO available. It is entitled Theory and applications of compressive sensing.

Here is a very noteworthy work: Model-based compressive sensing by Richard Baraniuk, Volkan Cevher, Marco Duarte, and Chinmay Hegde. The abstract reads:
Compressive sensing (CS) is an alternative to Shannon/Nyquist sampling for acquisition of sparse or compressible signals that can be well approximated by just K elements from an N-dimensional basis. Instead of taking periodic samples, we measure inner products with random vectors and then recover the signal via a sparsity-seeking optimization or greedy algorithm. The standard CS theory dictates that robust signal recovery is possible from M = O(K log(N/K)) measurements. The goal of this paper is to demonstrate that it is possible to substantially decrease M without sacrificing robustness by leveraging more realistic signal models that go beyond simple sparsity and compressibility by including dependencies between values and locations of the signal coefficients. We introduce a model-based CS theory that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees. A highlight is the introduction of a new class of model-compressible signals along with a new sufficient condition for robust model-compressible signal recovery that we dub the restricted amplification property (RAmP). The RAmP is the natural counterpart to the restricted isometry property (RIP) of conventional CS. To take practical advantage of the new theory, we integrate two relevant signal models — wavelet trees and block sparsity — into two state-of-the-art CS recovery algorithms and prove that they offer robust recovery from just M = O(K) measurements. Extensive numerical simulations demonstrate the validity and applicability of our new theory and algorithms.
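To make the "structure beyond sparsity" idea more concrete, here is a toy block-sparse recovery sketch in Python: a projected-gradient (IHT-style) loop whose thresholding step keeps whole blocks rather than individual coefficients. This is only an illustration of the model-based idea on an easy, noiseless synthetic instance; it is not the authors' algorithms nor their RAmP-based guarantees, and all the sizes are made up.

```python
import numpy as np

# Toy model-based recovery: projected gradient with a block-wise thresholding
# step. Easy noiseless synthetic instance; no general guarantee is implied.
rng = np.random.default_rng(1)
n, d, B = 60, 120, 4                     # measurements, dimension, block size (made up)
n_blocks, K_blocks = d // B, 2
active = rng.choice(n_blocks, K_blocks, replace=False)
x_true = np.zeros(d)
for b in active:
    x_true[b * B:(b + 1) * B] = rng.standard_normal(B)

A = rng.standard_normal((n, d)) / np.sqrt(n)
y = A @ x_true

def keep_largest_blocks(z, n_keep):
    """Model-based approximation step: zero all but the n_keep most energetic blocks."""
    energies = np.array([np.linalg.norm(z[b * B:(b + 1) * B]) for b in range(n_blocks)])
    out = np.zeros_like(z)
    for b in np.argsort(energies)[-n_keep:]:
        out[b * B:(b + 1) * B] = z[b * B:(b + 1) * B]
    return out

mu = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size
x = np.zeros(d)
for _ in range(300):
    x = keep_largest_blocks(x + mu * (A.T @ (y - A @ x)), K_blocks)

print("relative error: %.3e" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```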

Wednesday, August 27, 2008

CS: Literature Review on Sparse Optimization


After having made available a detailed explanation of Antonin Chambolle's algorithm for solving compressed sensing problems with TV regularization, Gabriel Peyre has written a very welcome literature review on some of the iterative algorithms used with L1/TV regularization, in a paper entitled Literature Review on Sparse Optimization. The abstract reads:
Some thoughts about algorithms for minimizing l1 and TV norms for signal and image recovery. In particular, I put the emphasis on the connections between Lagrangian and l2 constrained optimizations, and between analysis and synthesis priors.

This is a nice addition to guide some of us through the "new" techniques that are showing up (only three were directly downloadable a year and a half ago). One important fact is that this recent surge of new methods has not been matched by an actual quantification of convergence speed on known examples or benchmarks (see end of list for images), such as those featured in the easy-to-use, solver-agnostic Sparco framework. More importantly, the connection between these solvers and specific CS measurement matrices is still ripe for further investigation. In Compressive Sensing (CS), we are not only interested in convergence speed (which is of interest to numerical analysts looking at these schemes) but also in the minimum number of compressed measurements required to recover the original signal/image. Maybe some enterprising student would want to duplicate for images something like what David Mary did for 1-D signals in this post (A Comparison of the Reconstruction Capability of CoSaMP, OMP, Subspace Pursuit and Reweighted Lp) and help the community out? On top of being low-hanging fruit, I will make sure it gets advertised here, and it should receive some attention.


Images Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M, Image taken by the SSI instrument on the Phoenix lander on Sol 87.

Tuesday, August 26, 2008

CS: Interpolating solutions of the Helmholtz equation with compressed sensing


Here is a very nice talk by Evgeniy Lebed with Felix Herrmann, Yogi Erlangga and Tim Lin entitled Interpolating solutions of the Helmholtz equation with compressed sensing or, in other words, how to solve the Helmholtz equation with CS:

The introduction reads:

We present an algorithm which allows us to model wavefields with frequency-domain methods using a much smaller number of frequencies than that typically required by classical sampling theory in order to obtain an alias-free result. The foundation of the algorithm is recent results on compressed sensing, which state that data can be successfully recovered from an incomplete measurement if the data is sufficiently sparse. Results from numerical experiments show that only 30% of the total frequency spectrum is needed to capture the full wavefield information when working in the hard 2D synthetic Marmousi model.
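The flavor of the result is the standard compressed sensing one: a signal that is sparse in one domain can be recovered from a small random subset of its Fourier coefficients. The toy below is unrelated to the authors' Helmholtz code; it simply recovers a 4-sparse spike train from roughly 30% of its DFT samples using a simple OMP loop, with all sizes chosen arbitrarily for illustration.

```python
import numpy as np

# Toy analogue of "30% of the frequencies suffice": recover a sparse spike
# train from a random 30% subset of its Fourier coefficients via OMP.
rng = np.random.default_rng(2)
N, k = 128, 4
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

F = np.fft.fft(np.eye(N)) / np.sqrt(N)               # DFT matrix
rows = rng.choice(N, int(0.3 * N), replace=False)    # keep ~30% of the frequencies
A = np.vstack([F[rows].real, F[rows].imag])          # real-valued measurement operator
y = A @ x

residual, support = y.copy(), []                     # orthogonal matching pursuit
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    residual = y - A[:, support] @ coef

x_hat = np.zeros(N)
x_hat[support] = coef
print("relative error: %.3e" % (np.linalg.norm(x_hat - x) / np.linalg.norm(x)))
```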

Monday, August 25, 2008

CS: SPGL1, SPARCO, CVX news, updated nuclear norm paper, Bayesian NMF, Bregman for Deblurring and another 3D reconstruction from a single image paper.

A new version of the SPGL1 solver (version 1.6) was just released on August 20th, 2008. As gathered from the main page:

In version 1.6, the core SPGL1 function was generalized to solve the above three problems with the one-norm $\|x\|_1$ replaced by an arbitrary norm $\|x\|$. In order to do so, it requires that functions are provided for computing: (1) the (primal) norm $\|x\|$, (2) the corresponding dual norm $\|x\|^*$, and (3) the (possibly weighted) Euclidean projection

$$\mathrm{project}(x) := \arg\min_p \|x - p\|_2 \quad \text{subject to} \quad \|p\| \le \tau.$$

Using this framework we implemented two new solvers (for more information, see spgdemo.m and Extensions):

Multiple measurement vectors (MMV)

The MMV version of BPDN currently implemented solves

$$\text{(MMV)} \qquad \min_X \; \|X\|_{1,2} \quad \text{subject to} \quad \|AX - B\|_{2,2} \le \sigma,$$

where the mixed $(p,q)$-norm $\|X\|_{p,q}$ (with $p, q \ge 1$) is defined by

$$\|X\|_{p,q} := \Big( \sum_{i=1}^{m} \|X_i^T\|_q^p \Big)^{1/p},$$

with $X$ an $m \times n$ matrix and where $X_i$ denotes the $i$-th row of $X$. When weights are given, they apply to each row of $X$.

Group-sparse BPDN

In group-sparse BPDN each entry of $x$ is assigned a group. Denoting by $\mathcal{I}_i$ the indices of group $i$, the group-sparse BPDN problem, for $\sigma \ge 0$, is given by

$$\text{(Group)} \qquad \min_x \; \sum_i \|x_{\mathcal{I}_i}\|_2 \quad \text{subject to} \quad \|Ax - b\|_2 \le \sigma.$$

It is assumed that the $\mathcal{I}_i$ are disjoint and that their union gives the set $\{1, \dots, n\}$. When weights are given, they apply to each group.
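To make the two quantities above concrete, here is a tiny stand-alone Python transcription (illustrative only, not SPGL1 code) of the mixed (p,q)-norm and the group one-norm:

```python
import numpy as np

# Illustrative transcriptions of the two quantities above (not SPGL1 code).
def mixed_norm(X, p=1, q=2):
    """||X||_{p,q} = ( sum_i ||X_i||_q^p )^(1/p), with X_i the i-th row of X."""
    row_norms = np.linalg.norm(X, ord=q, axis=1)
    return np.sum(row_norms ** p) ** (1.0 / p)

def group_one_norm(x, groups):
    """sum_i ||x_{I_i}||_2 over disjoint index sets I_i."""
    return sum(np.linalg.norm(x[np.asarray(idx)]) for idx in groups)

X = np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]])
print(mixed_norm(X))                                             # 5 + 0 + 1 = 6
print(group_one_norm(np.array([3.0, 4.0, 1.0]), [[0, 1], [2]]))  # 5 + 1 = 6
```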



For more information you may want to check this page. This is a nice addition, as we have seen the use of mixed norms popping up several times before, particularly in:
Michael Friedlander, one of the authors of SPGL1, further pointed out the following when shown the SPGL1 and TV minimization post by Laurent Jacques:

A while ago I had wondered about extending SPGL1 to other norms, including TV. But I couldn't think of a good way to do projections onto the "TV-norm ball". (Projection onto the one-norm ball is a crucial part of the SPGL1 algorithm.)...The solver now handles "group" one-norms. These are useful when subsets of coefficients need to light up or shut off as a group. Complex-valued problems are a special case, where we expect that the real and complex components should be jointly sparse.
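Since Michael mentions that projection onto the one-norm ball is the crucial ingredient, here is the classical O(n log n) sort-based projection, written out in Python as an illustration. This is the standard textbook algorithm, not SPGL1's own implementation.

```python
import numpy as np

def project_l1_ball(x, tau):
    """Euclidean projection of x onto {p : ||p||_1 <= tau}, via the classical
    sort-based algorithm (an illustration, not SPGL1's actual code)."""
    if np.abs(x).sum() <= tau:
        return x.copy()
    u = np.sort(np.abs(x))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

x = np.array([0.8, -1.5, 0.2, 2.0])
p = project_l1_ball(x, 1.0)
print(p, np.abs(p).sum())                 # the projection has l1 norm 1.0
```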


SPARCO is still at version 1.2 but an overview of the Sparco toolbox is given in this technical report.

Way to go Michael and Ewout!


The latest news for CVX (Matlab Software for Disciplined Convex Programming) by Michael Grant, Stephen Boyd and Yinyu Ye is:
Fuller support for 64-bit platforms

CVX version 1.2 now supports 64-bit versions of Matlab on Windows, Linux, and Macintosh platforms; precompiled MEX files are supplied for each of these platforms. Previously, only SDPT3 was supported on 64-bit platforms; now SeDuMi is as well. However, due to the evolution of Matlab's MEX API, Matlab version 7.3 or later is required for 64-bit platforms; and SeDuMi requires version 7.5 or later. (Earlier versions of Matlab are still supported for 32-bit platforms.)

Support for the exponential family of functions


In unrelated news: