
Wednesday, September 30, 2009

CS: Some basic CS examples, C-SALSA, Distributed Sensing, Dictionary learning, Replica Method and Applications to CS, SLEP

Alejandro Weinstein, a PhD student in the Engineering Department at the Colorado School of Mines, sent me the following:

I just wrote some Matlab code with basic CS examples, using L1-Magic and CVX. I also wrote a document describing the code. They are both available at

http://control.mines.edu/mediawiki/index.php/CS_sample_code

I think they might be useful for people new to CS, so it will be great if you can mention this in your blog....
The four examples are described below:

Matlab Files Description

  • ex1.m: Sensing matrix phi is random, the representation basis Psi is the canonical basis (the signal is sparse in the time domain). The recovery is done using L1-magic.
  • ex2.m: Same signal and measurement as ex1.m, but the recovery is done using cvx.
  • ex3.m: Same signal as ex2, but the measurement is done using the Fourier basis.
  • ex4.m: Sensing matrix phi is random, the representation basis Psi is the Fourier basis (the signal is sparse in the frequency domain). The recovery is done using CVX.
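For readers who want the flavor of these examples before downloading the code, here is a minimal sketch in the spirit of ex1.m/ex2.m (random sensing matrix, signal sparse in the canonical basis, recovery with CVX); the dimensions and sparsity level below are my own arbitrary choices, not necessarily the ones used in Alejandro's files:

n = 512; m = 120; k = 20;               % signal length, number of measurements, sparsity
p = randperm(n);
x = zeros(n,1); x(p(1:k)) = randn(k,1); % k-sparse signal in the canonical (time) basis
Phi = randn(m,n)/sqrt(m);               % random sensing matrix
y = Phi*x;                              % compressive measurements
cvx_begin
    variable xhat(n)
    minimize( norm(xhat,1) )            % basis pursuit
    subject to
        Phi*xhat == y
cvx_end
fprintf('relative recovery error: %e\n', norm(x-xhat)/norm(x));

With m around six times the sparsity level as above, the recovery should typically be exact up to solver tolerance.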

Thank you Alejandro !




It looks like we have a contender to SPGL1 in the shape of C-SALSA, as witnessed in this paper: A fast algorithm for the constrained formulation of compressive image reconstruction and other linear inverse problems by Manya V. Afonso, J. Bioucas-Dias, and Mario Figueiredo. The abstract reads:
Ill-posed linear inverse problems (ILIP), such as restoration and reconstruction, are a core topic of signal/image processing. A standard approach to deal with ILIP uses a constrained optimization problem, where a regularization function is minimized under the constraint that the solution explains the observations sufficiently well. The regularizer and constraint are usually convex; however, several particular features of these problems (huge dimensionality, non-smoothness) preclude the use of off-the-shelf optimization tools and have stimulated much research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the category of augmented Lagrangian methods, can be used to deal with a variety of imaging ILIP, including deconvolution and reconstruction from compressive observations (such as MRI). Experiments testify for the effectiveness of the proposed method.
Mario let me know that the code should be made available online within a few weeks. Thanks Mario !
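In the meantime, the constrained problem C-SALSA targets (basis pursuit denoising in its constrained form) can be prototyped on small instances with CVX; this shows only the formulation, obviously without the speed of an augmented Lagrangian solver, and the noise bound epsilon below is a crude choice of mine:

m = 100; n = 400; k = 15; sigma = 0.01;
A = randn(m,n)/sqrt(m);
p = randperm(n); x0 = zeros(n,1); x0(p(1:k)) = randn(k,1);
y = A*x0 + sigma*randn(m,1);            % noisy compressive measurements
epsilon = sigma*sqrt(m);                % rough bound on the noise norm
cvx_begin
    variable x(n)
    minimize( norm(x,1) )               % the regularizer
    subject to
        norm(A*x - y, 2) <= epsilon     % "explains the observations sufficiently well"
cvx_end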


I also found the following on the interwebs:

Distributed Sensing of Signals under a Sparse Filtering Model, by Ali Hormati , Olivier Roy , Yue M. Lu and Martin Vetterli. The abstract reads:
We consider the task of recovering correlated vectors at a central decoder based on fixed linear measurements obtained by distributed sensors. Two different scenarios are considered: In the case of universal reconstruction, we look for a sensing and recovery mechanism that works for all possible signals, whereas in the case of almost sure reconstruction, we allow to have a small set (with measure zero) of unrecoverable signals. We provide achievability bounds on the number of samples needed for both scenarios. The bounds show that only in the almost sure setup can we effectively exploit the signal correlations to achieve effective gains in sampling efficiency. In addition, we propose an efficient and robust distributed sensing and reconstruction algorithm based on annihilating filters.
Dictionary learning and sparse coding for unsupervised clustering by Pablo Sprechmann and Guillermo Sapiro. The abstract reads:
A clustering framework within the sparse modeling and dictionary learning setting is introduced in this work. Instead of searching for the set of centroids that best fit the data, as in k-means type of approaches that model the data as distributions around discrete points, we optimize for a set of dictionaries, one for each cluster, for which the signals are best reconstructed in a sparse coding manner. Thereby, we are modeling the data as the union of learned low dimensional subspaces, and data points associated to subspaces spanned by just a few atoms of the same learned dictionary are clustered together. Using learned dictionaries makes this method robust and well suited to handle large datasets. The proposed clustering algorithm uses a novel measurement for the quality of the sparse representation, inspired by the robustness of the l_1 regularization term in sparse coding. We first illustrate this measurement with examples on standard image and speech datasets in the supervised classification setting, showing with a simple approach its discriminative power and obtaining results comparable to the state-of-the-art. We then conclude with experiments for fully unsupervised clustering on extended standard datasets and texture images, obtaining excellent performance.


I also found some news from the Rice Compressive Sensing repository:

Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing by Sundeep Rangan, Alyson K. Fletcher, Vivek K Goyal. The abstract reads:

The replica method is a non-rigorous but widely-accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an n-dimensional vector decouples as n scalar MAP estimators. The result is a counterpart to Guo and Verdu's replica analysis of minimum mean-squared error estimation.
The replica MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero norm-regularized estimation. In the case of lasso estimation the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation it reduces to a hard threshold. Among other benefits, the replica method provides a computationally-tractable method for exactly computing various performance metrics including mean-squared error and sparsity pattern recovery probability.
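As a reminder, the two scalar operators mentioned at the end of the abstract look like this in Matlab (a generic sketch, with the threshold left as a free parameter):

soft = @(u,t) sign(u) .* max(abs(u) - t, 0);   % soft thresholding (the lasso / l_1 case)
hard = @(u,t) u .* (abs(u) > t);               % hard thresholding (the zero norm-regularized case)
u = linspace(-3, 3, 601)';
plot(u, soft(u,1), u, hard(u,1)); legend('soft','hard');   % compare the two maps at threshold 1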

and a software package: SLEP, A Sparse Learning Package

Tuesday, September 29, 2009

CS: Compressive Sensing for Spectroscopy and Polarimetry, Q&A with Andres Asensio Ramos, Some news from HERSCHEL/PACS

Andres Asensio Ramos from the Instituto de Astrofísica de Canarias mentioned to me his recent preprint entitled Compressive Sensing for Spectroscopy and Polarimetry, by Andres Asensio Ramos and Arturo Lopez Ariste. The abstract reads:
We demonstrate through numerical simulations with real data the feasibility of using compressive sensing techniques for the acquisition of spectro-polarimetric data. This allows us to combine the measurement and the compression process into one consistent framework. Signals are recovered thanks to a sparse reconstruction scheme from projections of the signal of interest onto appropriately chosen vectors, typically noise-like vectors. The compressibility properties of spectral lines are analyzed in detail. The results shown in this paper demonstrate that, thanks to the compressibility properties of spectral lines, it is feasible to reconstruct the signals using only a small fraction of the information that is measured nowadays. We investigate in depth the quality of the reconstruction as a function of the amount of data measured and the influence of noise. This change of paradigm also allows us to define new instrumental strategies and to propose modifications to existing instruments in order to take advantage of compressive sensing techniques.

After reading his paper we started an impromptu discussion that I am editing and copying with permission. I initially asked:
...While you are discussing CS techniques as applied to spectrographs and spectropolarimeters, I did not see a deep discussion on the reason as to why a CS hardware would provide an edge as opposed to a simple hardware. Can you enlighten me on this ? Are you and your co-author currently designing hardware or is it just an exercise in evaluating potential CS technologies ?...
Andres replied with:
...It is true that there is not a deep discussion on why CS techniques can be of interest because I think it is clear in our community that going fast when acquiring data is good. So, any improvement leading to faster instruments without compromising too much (or even not compromising at all) spectral and spatial resolution will be welcomed (once they understand the maths behind CS techniques). At least in solar physics, it is impossible to measure 2D images with spectro-polarimetric information in each pixel with a spectral resolution of, say, 20 mA per pixel, and a polarimetric sensitivity of 10^-5 (detect one polarized photon per 100000). Either long integration times have to be used (very poor temporal resolution) or reduce the polarimetric sensitivity (weak magnetic fields cannot be detected). In my opinion, CS can lead to an improvement in this field because the acquisition times can be reduced by a large factor, thus leading to a boost in time resolution.
The main concern I have encountered in our community (I'm referring to solar and stellar physics) is more philosophical than practical. I have given some talks introducing to my colleagues the ideas of CS and how novel instruments can be built under this framework and I always find the same response: "I don't believe it. Of course, if you put a prior on the signal you measure, you will never find 'surprises' " (surprises in the sense of strange signals that do not follow the main trend). That's the main reason why my colleague and myself decided to publish a paper presenting the potential of CS for spectroscopy and spectropolarimetry, showing that signals are compressible and that 'surprises' can be, in principle, found if you measure enough times and propose a sufficiently complete basis set.
Concerning the development of the ideas we propose, we are now working on a modification to an instrument existing in the French telescope THEMIS (http://www.themis.iac.es/). This instrument is a high-spectral-resolution spectro-imager (it measures 2D images of the solar surface with each column of the image at a slightly different wavelength) but using multiplexing techniques. Our plans include doing l1 reconstructions of the multiplexed measurements and our first numerical tests indicate good behavior. I'm discussing with other colleagues about applying CS techniques to the Fabry-Perot interferometer of Solar Orbiter, but this is still in a very initial phase. With our paper I hope to convince our instrumental colleagues that it is worth pushing towards this new measuring paradigm....

Igor:
...With regards to THEMIS, you said "..This instrument is a high-spectral-resolution spectro-imager (it measures 2D images of the solar surface with each column of the image at a slightly different wavelength) but using multiplexing techniques...". Does that mean that it doesn't use multiplexing right now but that you will try multiplexing, now that you are interested in CS, to see if there is a better way of acquiring signals that way ?...
Andres:
...At the moment, it is working in "single wavelength" mode, meaning that there is only one wavelength per column in the image. The wavelength information is recovered by scanning either moving a slit or moving a grating. Our idea is to multiplex many wavelengths in each column, so that, assuming sparsity, we can recover the signal with a reduced amount of scan steps as compared with the single wavelength mode....
Igor:
...Have you considered instrumentation that could provide superresolution ?...
Andres:
...Not in the CS framework. I have considered doing Bayesian superresolution taking advantage of the natural jittering of some telescopes (like the Sunrise balloon and some wind-exposed coelostats) or inducing this jittering mechanically. Unfortunately, I haven't had time to push this approach to an end and the analytical calculations are still in my notebooks and some working numerical codes in my laptop. Are you aware of applications of CS to superresolution?...
Igor:
With regards to superresolution yes: check the blog for work by Roummel Marcia and Rebecca Willett at Duke on performing superresolution with coded aperture and also the work of Justin Romberg ... you should find traces of his imagers on the blog and the CS hardware page.... I am interested in hearing about how you guys would try to perform the (cheap) multiplexing in hardware so that you provide much better resolution than the cameras you currently have. I think it is an extremely hard sell to say that CS can provide faster acquisition at the cost of some accuracy when most people publish papers with accuracy in mind. Hence, CS has to go beyond providing the same data in order to become something that people want as opposed to something people would like to have because it is the new fad :-)
Andres:
I have to disagree :) For us doing solar physics, time resolution is crucial for detecting events happening at very short timescales. If these events happen in less than 1 second, we cannot spend 5 seconds integrating to gain signal-to-noise. Therefore, reducing the integration time by a factor of 5-10 while maintaining the spectral resolution and the polarimetric sensitivity at similar levels opens up the possibility of studying very fast events.....Obviously, it would be great to be able to go beyond the resolution imposed by the cameras, although our resolution is almost always limited by the atmospheric stability. Some present telescopes are equipped with adaptive optics systems so that diffraction limited images can be obtained in some moments of good atmospheric stability. Of course, I can imagine systems that could carry out superresolution during these moments of good stability, but they also need to be fast and accurate doing spectroscopy at a resolution of 10 mA per pixel and polarimetry better than 10^-4, something that I doubt can be done presently. If you tell me to choose between superresolution and accuracy, I clearly prefer to see things at a lower spatial resolution (limited by my telescope) but be able to follow fast events with great precision. That's my opinion and I'm sure if you ask other colleagues, you'll find a panoply of different opinions :)
Igor:
....Adaptive optics to me is like fighting nature not measuring with it....I have some views on this and this is why I recently mentioned this issue of imaging with nature...
Andres:
Yes, I've read the[se entries]. I have been thinking on the possibility of taking advantage of the atmosphere to improve the quality of our solar images, but I find it difficult mainly because we don't know the "atmospheric sensing matrix".
Igor:
Yes I absolutely agree but I think we need to be inventive on that front. Specifically:
  • Right now some people are using turbulence to perform what they call Lucky imaging or turbulence-aided micro-lensing; can we use that phenomenon to our advantage, especially in the context of time multiplexing ?
  • Do we have the means of figuring out atmospheric turbulence using other instruments/cameras not used for that purpose (specifically cameras on-board current satellites) ?
  • Since the Moon is a known object, can we probe the atmospheric turbulence by comparing what the Moon should look like with what it looks like on the sensor of interest ? What about doing that type of study with inter-star angles ?
  • Should we use interrogative means such as the laser beams used for computing the distance between the Moon and the Earth ?
  • Others.....
Andres:
....Some colleagues at the IAC have imaged stellar systems using lucky imaging cameras. Although the concept is simple, I have always considered that it is a waste of photons and something smarter can be found....The atmosphere's PSF varies typically on the order of milliseconds. Perhaps it's possible to estimate how good the atmosphere behaves for reconstructions of sparse signals... More things to do for the future...
Igor:
Absolutely, [in particular with regards to Lucky imaging], I think it is a waste to let the unfocused part of the image go unused. Connecting lucky imaging and the atmospheric PSF leads back to something called blind deconvolution; I mentioned something about it recently...
The discussion went on a little further. Thank you very much Andres for this thoughtful and insightful discussion!

For information, the Planetary Society blog has a small description of the difference between an image taken by the Hubble and adaptive optics on the Keck Observatory. If you want to know more about the Hubble, you may want to go to Rice University tomorrow for a presentation on "Servicing the Hubble Space Telescope" by Michael Massimino, one of the astronauts who did the recent retrofitting of the Space Telescope. Finally, Jean-Luc Starck tells me that unlike HIFI, PACS (IAC site) works wonderfully, that it is in calibration mode and that the CS mode should be tried in October. Woohoo, thanks Jean-Luc !


Finally, as an aside, I am glad that I am now on Andres' e-mail list because if the La Palma volcano slides into the ocean and triggers a mega-tsunami, I am sure he'll send an e-mail to all his contacts warning them about not being anywhere near a coast in Africa, Europe, the U.S., the Caribbean or South America.

Saturday, September 26, 2009

CS: A Compressive Sensing Newsletter ?

I was watching the stats of the blog recently and here is what I found:

One would think that, starting in 2009, interest stabilized except for the localized interest due to the SPARS'09 meeting and the Duke Workshop. Appearances are deceiving, though, as the readership of the blog turned to using the blog's feed. Here are the stats from Feedburner (one third of the full RSS readership):


The 200 mark on the left hand side refers to the number of readers per day. Since I generally post during the week, this translates into 200 x 22 = 4400 hits per month for Feedburner, or about 10,000 hits per month from all the feed readers. This is on top of the stats shown above of 10,000 hits per month. In all, about 20,000 hits per month (or a potential 20,000/22 = 900 readers/post). This number does not include the 214 people receiving every post by e-mail.

In all, while the hits on the blog are probably newcomers (half of the traffic comes from search engines), I am more concerned about the generic number of committed readers (feed or e-mail): this number is currently at 764 and I will refer to this one number from now on instead of hits/month, which in the end does not mean much.

After more than 500 posts on the subject, I am concerned that there is an information overload (see the result of the poll in the I-can't-keep-up post) and I am wondering if, strike that, I am certain that there is interest in a monthly or bi-monthly (?) newsletter. It would be a subscription-based newsletter (you'd have to pay to get it) and here is the pitch: there would be absolutely nothing new in that newsletter. However, it would NOT be a newsletter about nothing a la Seinfeld :-)



Rather, the newsletter would provide some sort of version control and some context about the current investigations/ideas happening in the field. Nothing in the newsletter would be new in the sense that the blog and the attendant pages (CS big picture, CS Jobs, CS Online Talks/Videos, CS Hardware) would function in exactly the same manner and remain open to everybody. The newsletter would provide some sort of summary for the folks who, for different reasons, cannot keep up with the daily flow of information in the field. Do you think there is an interest ? Please come to the site and let me know by answering the following poll (see below).





Thanks!

Friday, September 25, 2009

CS: ISD, A New Compressed Sensing Algorithm via Iterative Support Detection (and NSP vs RIP)

Some of you may recall my description of the DUMBEST algorithm. Well, it looks like Yilun Wang and Wotao Yin had a more powerful idea along similar lines of thought four months earlier and implemented it. Woohoo! This is likely to be an important line of work in Compressive Sensing as it focuses on the idea that once some data has been gathered and your reconstruction algorithm fails, there is still hope to recover something. Wotao Yin mentioned to me that they released ISD, A New Compressed Sensing Algorithm via Iterative Support Detection. From the website introduction:
ISD is as fast as L1-minimization but requires fewer measurements. ISD addresses wrong solutions of L1 reconstruction due to insufficient measurements. It will learn from such wrong solutions and solve new minimization problems that return a perfect or a better solution. A demo is given on Pages 4 and 5 of our report.

You can download an efficient implementation of ISD, called threshold-ISD, for recovering signals with fast decaying distributions of nonzeros from compressive measurements. The package includes numerical experiments showing that ISD has significant overall advantages over the classical L1 minimization approach, as well as two other state-of-the-art algorithms: the iterative reweighted L1 minimization algorithm (IRL1) and the iterative reweighted least-squares algorithm (IRLS).
The technical report is Compressed Sensing via Iterative Support Detection by Yilun Wang and Wotao Yin. The abstract reads:
We present a new compressive sensing reconstruction method "ISD", aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical l_1 minimization approach. ISD addresses failed cases of l_1 based reconstruction due to insufficient measurements, in which the returned signals are not equal or even close to the true signals. ISD will learn from such signals and solve new minimization problems that return a perfect or a better signal. Specifically, given an incorrect signal, ISD detects an index set I that includes components most likely to be true nonzeros and solves min { sum_{i not in I} |x_i| : Ax = b } for a new signal x, and it iterates these two steps alternately, using the latest x and I from one another, until convergence. ISD differs from the orthogonal matching pursuit (OMP) method, as well as its variants, in two aspects. First, although both methods generate index sets iteratively, the index set I in ISD is not necessarily nested or increasing over the iterations. Second, the OMP family methods at each of their iterations fix x_i, i in I, and update the remaining components of x, but the ISD minimization problem above updates all the components of x. To analyze the performance of ISD, we generalize the Null Space Property to the Truncated Null Space Property and present our analysis on the latter. We introduce an efficient implementation of ISD, called threshold-ISD, for recovering signals with fast decaying distributions of nonzeros from compressive measurements. Numerical experiments show that threshold-ISD has significant overall advantages over the classical l_1 minimization approach, as well as two other state-of-the-art algorithms such as the iterative reweighted l_1 minimization algorithm (IRL1) and the iterative reweighted least-squares algorithm (IRLS). MATLAB code is available for download from http://www.caam.rice.edu/~optimization/L1/ISD/.
From the paper:
ISD enhances sparse and compressible signal reconstruction by learning from failed BP-based reconstructions that arise when the number of linear measurements is not sufficient. In a failed reconstruction, the solution of BP is not equal or even close to the true signal x. One would normally give up, but ISD learns from the incorrect signal and solves a new optimization problem that returns a perfect or a better solution, requiring no additional measurements.
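To make the two alternating steps concrete, here is a naive sketch of the iteration described in the abstract, with CVX solving the truncated l_1 subproblem; the fixed support-detection threshold tau is a crude placeholder of mine, the actual threshold-ISD code uses a smarter rule tied to the fast decay of the nonzeros:

m = 80; n = 256; k = 30; tau = 1e-2;     % a regime where plain BP may fail
A = randn(m,n)/sqrt(m);
p = randperm(n); xtrue = zeros(n,1); xtrue(p(1:k)) = randn(k,1);
b = A*xtrue;
I = [];                                  % detected support, empty at first (so round 1 is plain BP)
for iter = 1:5
    Ic = setdiff(1:n, I);                % components that remain penalized
    cvx_begin quiet
        variable x(n)
        minimize( norm(x(Ic), 1) )       % truncated l_1: sum of |x_i| over i outside I
        subject to
            A*x == b
    cvx_end
    I = find(abs(x) > tau);              % detect components likely to be true nonzeros
end
norm(x - xtrue)/norm(xtrue)              % reconstruction error after support detection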
This is great. As a sub-issue, I also noticed the somewhat detailed discussion about NSP and RIP based sufficient conditions for recovery:

....We start with introducing the truncated null space property (t-NSP), a generalization of the null space property (NSP) studied in [35, 36, 17, 11, 13]. The NSP is used in slightly different forms and under different names in these papers....
and
...With γ < 1, the NSP says that any nonzero vector η in the null space of A cannot have its l_1-mass concentrated on any set of L or fewer elements. [12] proposes a semidefinite relaxation to test the NSP for general matrices. A sufficient exact reconstruction condition for BP is given in [13] based on the NSP: the true k-sparse signal x̄ is the unique solution of BP if A has the NSP of order L ≥ k and 0 < γ < 1. The more widely known restricted isometry property (RIP) is more restrictive than the NSP for establishing recoverability and stability results for BP [8, 7]. .... The NSP is more relaxed than the RIP. ....
Ah! I recoiled from this statement since it looks to me that an opposite statement was made earlier (see my comment in this entry). I fired off an e-mail to Wotao asking the following:
In your paper you mentioned that the NSP is more relaxed than the RIP-based sufficient property. Does this mean that there are cases where either:
  1. The NSP-based sufficient property can be fulfilled while at the same time the RIP-based sufficient property is not fulfilled, OR
  2. The RIP-based sufficient property can be fulfilled while at the same time the NSP-based sufficient property is not fulfilled.


To what Wotao kindly responded with:
1. The NSP-based sufficient property can be fulfilled while at the same time the RIP-based sufficient property is not fulfilled

This holds!

Suppose that A has both NSP and RIP (with certain parameters). Let B = PA where P is a non-singular matrix. Then, B has the same NSP but it is not clear whether it has RIP or not.
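To see why the NSP carries over, recall that in one common form the NSP of order L with constant γ < 1 requires every nonzero vector η in the null space of A to satisfy

||η_S||_1 ≤ γ ||η_{S^c}||_1 for every index set S with |S| ≤ L,

a statement that involves nothing but the null space. Since P is non-singular, B = PA has exactly the same null space as A, so the NSP is untouched, whereas the RIP depends on the norms ||Bx||_2 of sparse vectors themselves, which a non-singular P can distort arbitrarily.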
Thanks Wotao !

Thursday, September 24, 2009

CS: CS related ICML'09 videos, Distributed Spatio-Temporal Sampling of Diffusion Fields from Sparse Instantaneous Sources, Smart-Sample


The ICML'09 videos are out. Out of the ones relevant to some aspect of compressive sensing, here is a sample:
I am sure I am missing some. If so, please do let me know. In the meantime, I'll add them to the Compressive Sensing Videos/Online Talks page. I also found two papers of interest: Distributed Spatio-Temporal Sampling of Diffusion Fields from Sparse Instantaneous Sources by Yue M. Lu and Martin Vetterli. The abstract reads:
We study the spatio-temporal sampling of a diffusion field driven by K unknown instantaneous source distributions. Exploiting the spatio-temporal correlation offered by the diffusion model, we show that it is possible to compensate for insufficient spatial sampling densities (i.e. sub-Nyquist sampling) by increasing the temporal sampling rate, as long as their product remains roughly a constant. Combining a distributed sparse sampling scheme and an adaptive feedback mechanism, the proposed sampling algorithm can accurately and efficiently estimate the unknown sources and reconstruct the field. The total number of samples to be transmitted through the network is roughly equal to the number of degrees of freedom of the field, plus some additional costs for in-network averaging.

The second paper, on the smart-sample clustering algorithm, has the following abstract:
Finding useful related patterns in a dataset is an important task in many interesting applications. In particular, one common need in many algorithms is the ability to separate a given dataset into a small number of clusters. Each cluster represents a subset of data-points from the dataset which are considered similar. In some cases, it is also necessary to distinguish data points that are not part of a pattern from the other data-points. This paper introduces a new data clustering method named smart-sample and compares its performance to several clustering methodologies. We show that smart-sample successfully clusters large high-dimensional datasets. In addition, smart-sample outperforms other methodologies in terms of running-time.
A variation of the smart-sample algorithm, which guarantees efficiency in terms of I/O, is also presented. We describe how to achieve an approximation of the in-memory smart-sample algorithm using a constant number of scans with a single sort operation on the disk.

Wednesday, September 23, 2009

CS: Adaptive feature-specific imaging for recognition of non-Gaussian classes.

Pawan Baheti, now at Qualcomm, just mentioned to me his latest paper on the feature-specific imager we covered before (see item 1.1.3 of the compressive sensing hardware page). The paper is behind a paywall but you can ask Pawan directly for a preprint. Here it is:


We present an adaptive feature-specific imaging (AFSI) system for application to an M-class recognition task. The proposed system uses nearest-neighbor-based density estimation to compute the (non-Gaussian) class-conditional densities. We refine the density estimates based on the training data and the knowledge from previous measurements at each step. The projection basis for the AFSI system is also adapted based on the previous measurements at each step. The decision-making process is based on sequential hypothesis testing. We quantify the number of measurements required to achieve a specified probability of error (Pe) and we compare the AFSI system with an adaptive-conventional (ACONV) system. The AFSI system exhibits significant improvement compared to the ACONV system at low signal-to-noise ratio (SNR), and it is shown that, for M=4 hypotheses, SNR = -10 dB, and a desired Pe = 10^-2, the AFSI system requires 30 times fewer measurements than the ACONV system. Experimental results validating the AFSI system are presented.

They use an SLM to provide different random measurements and use their ability to change the SLM to devise an adaptive strategy. Nice. Thanks Pawan!

Tuesday, September 22, 2009

CS: Sapphire, CS with Probabilistic Measurements: A Group Testing Solution, Quantum state tomography, Rate-Distortion of CS, Guaranteed sparse recovery

This is interesting: Project Sapphire is now a public story. Back in the day (1995), we had a presentation on it at the Nuclear Engineering Department seminar about what happened, but then, for reasons not known to us, it was never mentioned in detail in a public setting again.



From the article, an action that reminded me of a scene in The Peacemaker (without the adversarial action!):

..."And these trucks were sliding all over the place, and I'm thinking, 'I don't want to make the call to Washington saying one of the trucks with highly enriched uranium went off the bridge into the river, and we're trying to locate it.' But somehow, miraculously, we made it all safely to the airport."

It took three hours to load the plane. But before it could take off, the runway had to be cleared of snow. Sleet, ice and rain blanketed the airfield, a pilot later recalled. There were no plows to be seen. Finally, airport workers brought out a truck with a jet engine mounted on the back. They fired up the engine and blasted the runway free of snow.

The article ends with:
In late 1994, the Joint Atomic Energy Intelligence Committee prepared a report about the extent of the Russian nuclear materials crisis. The top-secret document concluded that not a single facility storing highly enriched uranium or plutonium in the former Soviet Union had safeguards up to Western standards.

This exercise was really a lightning rod in our perception of how some of these materials had to be protected. It unleashed a series of efforts toward securing these materials and also opened the door to the disposition of excess weapons plutonium,

Finding all the nuclear materials with a detector in a closed area of the plant in Ust-Kamenogorsk could be a nice application of a new kind of group testing presented in the following eye-opening paper, Compressed Sensing with Probabilistic Measurements: A Group Testing Solution (yes, CS and group testing are close):

Detection of defective members of large populations has been widely studied in the statistics community under the name “group testing”, a problem which dates back to World War II when it was suggested for syphilis screening. There the main interest is to identify a small number of infected people among a large population using collective samples. In viral epidemics, one way to acquire collective samples is by sending agents inside the population. While in classical group testing, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in this work we assume that the decoder possesses only partial knowledge about the sampling process. This assumption is justified by observing the fact that in a viral sickness, there is a chance that an agent remains healthy despite having contact with an infected person. Therefore, the reconstruction method has to cope with two different types of uncertainty; namely, identification of the infected population and the partially unknown sampling procedure. In this work, by using a natural probabilistic model for “viral infections”, we design non-adaptive sampling procedures that allow successful identification of the infected population with overwhelming probability 1−o(1). We propose both probabilistic and explicit design procedures that require a “small” number of agents to single out the infected individuals. More precisely, for a contamination probability p, the number of agents required by the probabilistic and explicit designs for identification of up to k infected members is bounded by m = O(k^2 log(n/k)/(1−p)) and m = O(k^2 log^2 n/(1−p)^2), respectively. In both cases, a simple decoder is able to successfully identify the infected population in time O(mn).
Wow. I have not found a toy code implementing the algorithm but I am sure it is in the making as this paper goes beyond traditional group testing. From the paper:
The idea behind our setup is mathematically related to compressed sensing [17], [18]. Nevertheless, they differ in a significant way: In compressed sensing, the samples are gathered as linear observations of a sparse real signal and typically tools such as linear programming methods are applied for the reconstruction. To do so, it is assumed that the decoder knows the measurement matrix a priori. However, this is not the case in our setup. In other words, using the language of compressed sensing, in our scenario the measurement matrix might be “noisy” and is not precisely known to the decoder. As it turns out, by using a sufficient number of agents this issue can be resolved.
Let us thank the authors for making this point clear to their compressed sensing audience. It becomes clear that this approach opens the door to new kinds of intelligent sensors with logging capabilities. I also note the different applications of group testing listed in their paper:
This is the main conceptual idea behind the classical group testing problem which was introduced by Dorfman [1] and later found applications in a variety of areas. A few examples of such applications include testing for defective items (e.g., defective light bulbs or resistors) as a part of industrial quality assurance [2], DNA sequencing [3] and DNA library screening in molecular biology (see, e.g., [4], [5], [6], [7], [8] and the references therein), multiaccess communication [9], data compression [10], pattern matching [11], streaming algorithms [12], software testing [13], and compressed sensing [14]. See the books by Du and Hwang [15], [16] for a detailed account of the major developments in this area.
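Since no toy code seems to be available yet, here is a quick sketch of the sampling model described in the abstract, as I read it (only the agents' measurement process; the decoder, which is the clever part, is not shown):

n = 1000; k = 10; p = 0.3; nagents = 200;          % population size, infected count, miss probability, agents
perm = randperm(n); infected = false(n,1); infected(perm(1:k)) = true;
M = rand(nagents, n) < 0.05;                       % contact pattern: which agent meets which person
transmit = rand(nagents, n) < (1 - p);             % each contact with an infected person transmits with prob. 1-p
y = any(M & transmit & repmat(infected', nagents, 1), 2);   % outcome for each agent: infected or not
% the decoder is given M and y but NOT 'transmit', hence the partially unknown sampling procedure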
I will surely dig into these group testing references in the future. Also found on the interwebs:

Quantum state tomography via compressed sensing by David Gross, Yi-Kai Liu, Steven T. Flammia, Stephen Becker, Jens Eisert. The abstract reads:

We establish novel methods for quantum state and process tomography based on compressed sensing. Our protocols require only simple Pauli measurements, and use fast classical post-processing based on convex optimization. Using these techniques, it is possible to reconstruct an unknown density matrix of rank r using O(r d log^2 d) measurement settings, a significant improvement over standard methods that require d^2 settings. The protocols are stable against noise, and extend to states which are approximately low-rank. The acquired data can be used to certify that the state is indeed close to a low-rank one, so no a priori assumptions are needed. We present both theoretical bounds and numerical simulations.
On the Empirical Rate-Distortion Performance of Compressive Sensing by Adriana Schulz, Luiz Velho, Eduardo A. B. da Silva. The abstract reads:
Compressive Sensing (CS) is a new paradigm in signal acquisition and compression that has been attracting the interest of the signal compression community. When it comes to image compression applications, it is relevant to estimate the number of bits required to reach a specific image quality. Although several theoretical results regarding the rate-distortion performance of CS have been published recently, there are not many practical image compression results available. The main goal of this paper is to carry out an empirical analysis of the rate-distortion performance of CS in image compression. We analyze issues such as the minimization algorithm used and the transform employed, as well as the trade-off between number of measurements and quantization error. From the experimental results obtained we highlight the potential and limitations of CS when compared to traditional image compression methods.

The paper is not available, but the code used to produce it can be found here. It features a Noiselet code; I wonder how different it is from this one.

Finally, we have: A note on guaranteed sparse recovery via $\ell_1$-minimization by Simon Foucart. The abstract reads:
It is proved that every s-sparse vector x ∈ C^N can be recovered from the measurement vector y = Ax ∈ C^m via l_1-minimization as soon as the 2s-th restricted isometry constant of the matrix A is smaller than about 0.4627, or smaller than about 0.4679 for large values of s.
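For reference, the 2s-th restricted isometry constant δ_2s of A is the smallest δ ≥ 0 such that

(1 − δ) ||x||_2^2 ≤ ||Ax||_2^2 ≤ (1 + δ) ||x||_2^2 for every 2s-sparse vector x,

so the note guarantees exact l_1 recovery of every s-sparse vector once δ_2s drops below roughly 0.4627 (0.4679 for large s).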

Sunday, September 20, 2009

CS: SPAMS, SPArse Modeling Software, a dictionary building facility and more...

Following up on Friday's entry on dictionary learning, and as promised a month and a half ago on this blog, Julien Mairal, Francis Bach, Jean Ponce and Guillermo Sapiro came through with the release of the SPArse Modeling Software, or SPAMS (yes, I know, this message is going to do very well with spam filters), which, as you recall, seems to be very promising for the rapid construction of dictionaries. From the introduction:
SPAMS (SPArse Modeling Software) is an optimization toolbox. It is composed of a set of binaries implementing algorithms to address various machine learning and signal processing problems involving a large number of small/medium size sparse decompositions. Examples of such problems are presented in [14, 13].

The library is coded in C++, is compatible with Linux 64bits and 32bits Operating Systems and Mac OS, and is interfaced with Matlab. Other OS/architectures (Windows) or environments (R, Python, …) will eventually be included in future releases. It exploits multi-core CPUs when available.

Version 1.0 of the toolbox includes

  • The online dictionary learning technique of [14, 13] and its variants for various matrix factorization problems.
  • Orthogonal Matching Pursuit, also known as Forward Selection [22, 19]. The implementation is based on a Cholesky-decomposition and optimized for solving in parallel a large number of decomposition problems.
  • The LARS algorithm [6]. The implementation is also Cholesky-based and optimized for solving in parallel a large number of decomposition problems.
  • A weighted version of LARS.
  • A coordinate-descent approach for ℓ1-decomposition problems [8, 7, 23], also optimized for solving in parallel a large number of problems.
  • A first-order proximal method as described in [21] and its accelerated version for solving, Lasso, Elastic-Net and Fused Lasso problems.
  • A homotopy method for the Fused-Lasso Signal Approximation as defined in [7] with the homotopy method proposed in [13].
  • A tool for projecting efficiently onto a few convex sets inducing sparsity such as the ℓ1-ball using the method of [12, 5], and Elastic-Net or Fused Lasso constraint sets as proposed in [13].
  • A few tools for performing linear algebra operations such as a conjugate gradient algorithm.
The number of functions is intended to grow. Especially, the authors would like to include in future major releases image processing tools corresponding to the paper [16] and classification tools such as the ones from [15, 18, 17].

SPAMS uses at the moment a proprietary license, but its usage is free for non-profit research or test purposes. For other usage, please contact the authors (see license for more details).

The software uses function call to BLAS/LAPACK functions [4, 3, 10, 1]. It can use either the Intel Math Kernel Library (run-time files are included in the software package) or the free implementation ATLAS (files are included as well in the software package).

The toolbox is written by Julien Mairal (INRIA), with the collaboration of Francis Bach (INRIA), Jean Ponce (Ecole Normale Supérieure) and Guillermo Sapiro (University of Minnesota).

Julien tells me that a Windows version might be available in the future. Windows users are people too! It would be a shame if this capability were to follow the example of the Linux-only curvelet transform. Anyway, this software is evidently linked from the dictionary section of the Big Picture in Compressive Sensing page.
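For readers who want to see what "building a dictionary" amounts to before diving into the package, here is a bare-bones alternating-minimization sketch in plain Matlab (a few ISTA passes for the sparse coding step, a ridge-regularized least-squares update for the dictionary); this is a conceptual toy of mine, not the online algorithm implemented in SPAMS, and every parameter value below is arbitrary:

d = 64; N = 2000; K = 128; lambda = 0.1;           % patch size, training set size, atoms, l1 weight
X = randn(d, N);                                   % stand-in for real training patches
p = randperm(N); D = X(:, p(1:K));                 % initialize the atoms with data columns
D = D ./ repmat(sqrt(sum(D.^2)) + 1e-12, d, 1);    % normalize the atoms
A = zeros(K, N);
for iter = 1:20
    L = norm(D)^2;                                 % Lipschitz constant of the quadratic term's gradient
    for t = 1:30                                   % sparse coding: ISTA on 0.5*||X-D*A||_F^2 + lambda*||A||_1
        Z = A - (D' * (D*A - X)) / L;
        A = sign(Z) .* max(abs(Z) - lambda/L, 0);  % soft thresholding
    end
    D = (X * A') / (A * A' + 1e-6 * eye(K));       % dictionary update (MOD-style regularized least squares)
    D = D ./ repmat(sqrt(sum(D.^2)) + 1e-12, d, 1);
end

On real image patches, the learned columns of D are the atoms; SPAMS performs this kind of factorization online, in C++, and much faster.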


Image Credit: NASA/JPL/Space Science Institute, Saturn's ring on September 13th, 2009

Saturday, September 19, 2009

CS: Top 16 Thoughts/Ramblings on Compressive Sensing




To the question "What is Compressive Sensing ?" one could give a series of answers; here is one of them:


Compressive Sensing is the multiplexed acquisition of a sparse signal.


But sometimes a definition is not enough, so here are my top 16 random and naive thoughts (it's the weekend) on the matter, which may or may not bring some insight:



  • Most signals are sparse or compressible since power laws are pretty universal. Hence Compressive Sensing applies to almost all signals.
  • Most current sensing instrumentation is based on single signal sensing. The challenge of Compressive Sensing is to open the door of multiplexing to the sensing world.
  • Multiplexing used to be fashionable in transmission, Compressive Sensing brings it to bear to the acquisition process.
  • Compressive Sensing and Group Testing are very close subject areas.
  • Compressive Sensing acquisition can be performed through different types of multiplexing including linear combination of random samples (which I call random multiplexing).
  • Random multiplexing seems to have a large number of additional properties including the one of not having to care about the reconstruction step.
  • Dictionary building is important in the sensing of a signal only when the acquisition is non-random multiplexing. The converse of this statement is: One could care less about building dictionaries when acquiring a signal in a random multiplex fashion.
  • Decoding, demultiplexing or reconstructing a signal can be performed using a variety of techniques starting historically with Linear Programming but this technique has since been supplanted in speed with other faster techniques.
  • Different analyses do make a connection between the multiplexing process and the sparsity of the signal being sampled and attendant constraints.
  • The number of multiplexed measurements is less than the number of the naively sampled measurements, which makes people say it is Sub-Nyquist.
  • When acquired randomly, the number of measurements is smaller, yet these multiplexed measurements keep the information of interest. This is why some people want to do manifold signal processing without going through the reconstruction process.
  • Depending on the field you are working in, multiplexing can also mean encoding or sketching or being the embodiment of the measurement matrix.
  • What is now important in Compressive Sensing is finding or inventing hardware or sensing mechanisms that can acquire signals in a multiplexed fashion. It is also important for older multiplexed sensing techniques to see how compressive sensing and its mathematical machinery (reconstruction,..) can bring new insights or better results.
  • Trade studies are being performed to see how noise affects the multiplexed acquisition and eventual reconstruction.
  • If you have already acquired all your data why should you care about compressive sensing ? Maybe that data's dimensionality is too large and you want to reduce it and eventually play with it in a lower dimensional manifold
  • Compressive Sensing is already a breakthrough when hardware can be reprogrammed to handle multiplexing. Compressive Sensing will be a disruptive technology in areas where either single sample sensors do not or cannot exist or in areas where random materials can be rapidly evaluated.
More insights can be gained from watching any of the online talks on the subject.

Credit: NASA/JPL/Space Science Institute, Iapetus on September 14, 2009.

Friday, September 18, 2009

CS: Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation, CS Based Opportunistic Protocol in Wireless Networks

Remi Gribonval just reminded me about this important paper that I somehow missed: Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation by Remi Gribonval and Karin Schnass. The abstract reads:
This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via l_1-minimisation. The problem can also be seen as factorising a d x N matrix Y = (y_1, ..., y_N), y_n ∈ R^d, of training signals into a d x K dictionary matrix D and a K x N coefficient matrix X = (x_1, ..., x_N), x_n ∈ R^K, which is sparse. The exact question studied here is when a dictionary coefficient pair (D, X) can be recovered as a local minimum of a (nonconvex) l_1-criterion with input Y = DX. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialised to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows up to a logarithmic factor only linearly with the signal dimension, i.e. N ≈ C K log K, in contrast to previous approaches requiring combinatorially many samples.
I'll add this paper shortly to the big picture in compressive sensing page. Also found on arxiv:

The second preprint proposes a compressive sensing based opportunistic protocol in wireless networks. The abstract reads:
A key feature in the design of any MAC protocol is the throughput it can provide. In wireless networks, the channel of a user is not fixed but varies randomly. Thus, in order to maximize the throughput of the MAC protocol at any given time, only users with large channel gains should be allowed to transmit. In this paper, a compressive sensing based opportunistic protocol for throughput improvement in wireless networks is proposed. The protocol is based on the traditional protocol of R-ALOHA which allows users to compete for channel access before reserving the channel to the best user. We use compressive sensing to find the best user, and show that the proposed protocol requires less time for reservation and so it outperforms other schemes proposed in the literature. This makes the protocol particularly suitable for enhancing R-ALOHA in fast fading environments. We consider both analog and digital versions of the protocol where the channel gains sent by the user are analog and digital, respectively.


Credit: Tamas Ladanyi, Pee Over Hungary, By the Ruins of Essegvar.

Thursday, September 17, 2009

CS: Compressive sensing by white random convolution, a job


Here is a new paper I just found: Compressive sensing by white random convolution by Yin Xiang, Lianlin Li, Fang Li. The abstract reads:
A different compressive sensing framework, convolution with a white random waveform that has independent random entries subsampled at fixed (not randomly selected) locations, is studied in this paper. We show that it achieves a high recovery probability for signals that are sparse in a representation basis having small coherence, denoted by μ, with the Fourier basis. In particular, an n-dimensional signal which is S-sparse in such a representation basis can be recovered with probability exceeding 1−δ from any fixed m ~ O(μ^2 S log(n/δ)^{3/2}) samples gathered from the output of the random convolution, such as equal-interval down-samples.
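To fix ideas, the measurement operator studied here (circular convolution with a white random waveform, followed by subsampling at fixed, equispaced locations) takes only a few lines of Matlab; this is my reading of the scheme, not the authors' code:

n = 1024; m = 128;                            % signal length and number of measurements (n/m integer)
h = randn(n, 1);                              % white random waveform with independent entries
p = randperm(n); x = zeros(n,1); x(p(1:20)) = randn(20,1);   % a sparse test signal
conv_full = real(ifft(fft(h) .* fft(x)));     % circular convolution of h and x via the FFT
y = conv_full(1 : n/m : end);                 % fixed (not randomly selected) equal-interval down-sampling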

I also found a job announcement; it is in French and can be found here. The INRIA announcement does not mention compressed sensing, unlike this entry (it is the same job), as the position doesn't specifically require knowledge of CS.

Wednesday, September 16, 2009

CS: Bayesian Manifold based CS

Woohoo! As most of you know, one of the potentially interesting applications of Compressive Sensing is the ability to perform detection and other operations in the compressed measurement world as opposed to having to reconstruct the signal. As shown by many, CS measurements can form a manifold that is close to the full-fledged manifold. This was the subject of Mike Wakin's investigations in his thesis, for instance. It just so happens that in the Bayesian Compressive Sensing page, I caught the following new text:
Manifold based CS theory has shown that if a signal lives in a low-dimensional manifold, then the signal can be reconstructed using only a few compressed measurements. However, till now there is no practical algorithm to implement CS on manifolds. Our recent work fills the gap by employing a nonparametric mixture of factor analyzers (MFA) to learn the manifold using training data, and then analytically reconstructing testing signals with compressed measurements. We also give bounds of the required number of measurements based on the concept of block-sparsity. The proposed methodology is validated on several synthetic and real datasets.
Nonparametric Bayesian methods are employed to constitute a mixture of low-rank Gaussians, for data x element of R^N that are of high dimension N but are constrained to reside in a low-dimensional subregion of R^N. The number of mixture components and their rank are inferred automatically from the data. The resulting algorithm can be used for learning manifolds and for reconstructing signals from manifolds, based on compressive sensing (CS) projection measurements. The statistical CS inversion is performed analytically. We derive the required number of CS random measurements needed for successful reconstruction, based on easily computed quantities, drawing on block–sparsity properties. The proposed methodology is validated on several synthetic and real datasets.

And while I cannot seem to locate this MFA code within the Bayesian Compressive Sensing page, it is just a matter of time before it shows up. I note that the paper deals with smooth manifolds, whereas these manifolds are sometimes non-differentiable.

Tuesday, September 15, 2009

CS: The Compressed-Sampling Filter (CSF)

I've mentioned it before, but it looks like an improved version of The Design of Compressive Sensing Filter by Lianlin Li, Wenji Zhang, Yin Xiang and Fang Li is out. The newer publication presents two compressive sensing hardware implementations in The Compressed-Sampling Filter (CSF) by Lianlin Li, Wenji Zhang, Yin Xiang and Fang Li. The abstract reads:
The common approaches to sampling a signal generally follow the well-known Nyquist-Shannon theorem: the sampling rate must be at least twice the maximum frequency present in the signal. A new emerging field, compressed sampling (CS), has made a paradigmatic step to sample a signal with many fewer measurements than those required by the Nyquist-Shannon theorem when the unknown signal is sparse or compressible in some frame. We call a compressed-sampling filter (CSF) one for which the function relating the input signal to the output signal is pseudo-random. Motivated by the theory of random convolution proposed by Romberg (for convenience, called Romberg's theory) and the fact that the signal in a complex electromagnetic environment may be spread out due to the rich multi-scattering effect, two CSFs using microwave circuits to enable signal acquisition with sub-Nyquist sampling have been constructed, tested and analyzed. Afterwards, a CSF based on a surface acoustic wave (SAW) structure has also been proposed and examined by numerical simulation. The results have empirically shown that with the proposed architectures an S-sparse n-dimensional signal can be exactly reconstructed with O(S log n) real-valued measurements or O(S log(n/S)) complex-valued measurements with overwhelming probability.

As one can read in the paper, the detailed parameters of this CSF for reproducing the results can be downloaded from to-cs.blog.sohu.com or obtained by e-mail: lianlinli1980@gmail.com. This information can also be downloaded from here. Thanks Lianlin ! I note the following nuggets:
...Maybe, some defective product of SAW time delayer may be an excellent candidate for the purpose of compressed sampling measurement....
I've had similar thoughts about diverse systems and think it may be a good path toward starting disruptive technologies using unused assets or assets that people want to discard. Also in a different direction, we have:
...Of course, the general compressive sensing filter can be constructed along the identical idea by many other structures, the plasma with different electron density corresponding to different critical frequency. As a matter of fact, the ionosphere can be looked as the natural compressive sensing measurement system...
Emphasis is mine; this is the paper that made me think of the Imaging With Nature series of blog entries. The series won't stop at three entries and I am sure we won't stop hearing about it from Lianlin either (translation is here). Both hardware implementations will be added to the Compressive Sensing Hardware page.

Monday, September 14, 2009

CS: Optimally Tuned Iterative Reconstruction Algorithms for CS, Limits of Deterministic CS Considering Arbitrary Orthonormal Basis for Sparsity

I was watching Angels and Demons the other day (don't ask) and realized that CS could have shortened this less than optimal viewing experience. See, part of the plot is for the good guys to find out where the bad guys have hidden some anti-matter bomb (in catch my gamma ray if you can, it was evident that you could not use antimatter for rockets and the same reasoning applies to bombs) that is located in a place lit by a lightbulb which can also be viewed on a live webcam. It so happens that since we are in the Vatican, the good guys have the ability to shut the power off however they want. They (actually the clerics) devise a scheme to shut off portions of the power grid one subdivision at a time and hope that by doing so and watching the live feed from the anti-matter bomb they will be able to find the location of the device. Suffice to say, the process is too slow and methods in CS and attendant Group Testing could have been a life saver for some of the characters of the movie and more importantly would have saved the viewer two precious hours....Oh well....Let's go back to something more interesting.

I mentioned this paper before (CS: Let us define the Current State of the Art, shall we ?) but since it just popped up on Arxiv, this may be a new version. Anyway, it doesn't matter as this is an important paper: Optimally Tuned Iterative Reconstruction Algorithms for Compressed Sensing by Arian Maleki and David Donoho. The abstract reads:

We conducted an extensive computational experiment, lasting multiple CPU-years, to optimally select parameters for two important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations available at sparselab.stanford.edu; they run ‘out of the box’ with no user tuning: it is not necessary to select thresholds or know the likely degree of sparsity. Our class of algorithms includes iterative hard and soft thresholding with or without relaxation, as well as CoSaMP, subspace pursuit and some natural extensions. As a result, our optimally tuned algorithms dominate such proposals. Our notion of optimality is defined in terms of phase transitions, i.e. we maximize the number of nonzeros at which the algorithm can successfully operate. We show that the phase transition is a well-defined quantity with our suite of random underdetermined linear systems. Our tuning gives the highest transition possible within each class of algorithms. We verify by extensive computation the robustness of our recommendations to the amplitude distribution of the nonzero coefficients as well as the matrix ensemble defining the underdetermined system. Our findings include: (a) For all algorithms, the worst amplitude distribution for nonzeros is generally the constant amplitude random-sign distribution, where all nonzeros are the same amplitude. (b) Various random matrix ensembles give the same phase transitions; random partial isometries may give different transitions and require different tuning; (c) Optimally tuned subspace pursuit dominates optimally tuned CoSaMP, particularly so when the system is almost square.
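For readers who have not played with this family of algorithms, here is what a bare-bones iterative hard thresholding loop looks like; this is an untuned sketch of mine, not one of the optimally tuned implementations distributed at sparselab.stanford.edu:

m = 100; n = 400; s = 15;                     % measurements, signal length, target sparsity
A = randn(m,n)/sqrt(m);
p = randperm(n); x0 = zeros(n,1); x0(p(1:s)) = randn(s,1);
y = A*x0;
x = zeros(n,1); mu = 1/norm(A)^2;             % conservative step size
for t = 1:300
    x = x + mu * (A' * (y - A*x));            % gradient step on 0.5*||y - A*x||^2
    [~, idx] = sort(abs(x), 'descend');
    x(idx(s+1:end)) = 0;                      % hard thresholding: keep only the s largest entries
end
norm(x - x0)/norm(x0)                         % relative reconstruction error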

also found on the web:

It is previously shown that proper random linear samples of a finite discrete signal (vector) which has a sparse representation in an orthonormal basis make it possible (with probability 1) to recover the original signal. Moreover, the choice of the linear samples does not depend on the sparsity domain. In this paper, we will show that the replacement of random linear samples with deterministic functions of the signal (not necessarily linear) will not result in unique reconstruction of k-sparse signals except for k = 1. We will show that there exist deterministic nonlinear sampling functions for unique reconstruction of 1-sparse signals while deterministic linear samples fail to do so.

Friday, September 11, 2009

Spot: A Linear Operator Toolbox

Michael Friedlander and Ewout van den Berg have identified that one of the reasons Sparco was successful is its ability to handle linear operators nicely in Compressed Sensing examples. As a result, they decided to produce a new Matlab toolbox called Spot. Here is their introductory message:

Dear Colleagues,

We're pleased to announce the first release of the Spot toolbox, which aims to bring the expressiveness of Matlab’s built-in matrix notation to problems for which explicit matrices are not practical. The toolbox is available here:


If you've used the Sparco test problems, you may be familiar with the included linear-operator library used behind the scenes to create the test problems. We've completely rewritten Sparco's operator library using the more modern object-oriented functionality in recent versions of Matlab, and will distribute and maintain it as a stand-alone package.


n = 1000; x = (1:n)'; % the first column defines a circulant matrix
F = opDFT(n); % create a DFT operator
s = sqrt(n)*F*x; % eigenvalues of circulant operator
C = real( F'*opDiag(s)*F ); % the circulant operator
w = C*x; % apply C to a vector
y = C'*w; % apply the adjoint of C to a vector
z = C(end:-1:1,:)*y; % reverse the rows of C and apply to a vector
double( C(1:5,1) )' % extract a few elements of the first column
ans =
1 2 3 4 5

If you would like to give the package a whirl, download and unzip the distribution spotbox-1.0p.zip, then add the resulting directory to your path. Use the command "spottests" to make sure that everything is in working order (this command takes about 45 seconds to run on my machine).

We'd be delighted to hear from you if you find the toolbox useful for your own work, and if you have any suggestions or bug reports.

Best,
Michael and Ewout


Thanks Michael and Ewout !
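As an aside of mine (not part of the Spot distribution or of their message), the example above is simply the diagonalization of a circulant matrix by the DFT: with a unitary DFT operator F, the eigenvalues of the circulant matrix whose first column is x are sqrt(n)*F*x, i.e. fft(x). A plain-Matlab sanity check of the same computation, using only built-in functions:

n = 1000; c = (1:n)'; v = randn(n,1);
Cfull = toeplitz(c, [c(1); c(end:-1:2)]);         % explicit circulant matrix with first column c
err = norm(Cfull*v - real(ifft(fft(c).*fft(v)))); % circulant times vector = circular convolution
fprintf('difference: %g\n', err)                  % should be at round-off level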

Thursday, September 10, 2009

CS: CS Fall School near Paris, CS BAN, Spatio-Temporal Compressive Sensing and Internet Traffic Matrices


Djalil Chafaï, Olivier Guédon, Guillaume Lecué, Shahar Mendelson and Alain Pajor are organizing a School on Compressed Sensing near Paris. From the website:

Fall School Marne-la-Vallée , November 16-20, 2009, on

Compressed sensing - Random matrices - High dimensional geometry



This Fall School will discuss some interactions between compressed sensing, random matrices and high dimensional geometry. It will take place on the campus of Paris-Est Marne-la-Vallée from November 16 to 20, 2009. This school is addressed to non-specialists: participation of postdocs and PhD students is strongly encouraged.

Compressed Sensing is a quite new framework that enables approximate and exact reconstruction of sparse signals from incomplete measurements. It is strongly related to other problems of different fields such as approximation theory (diameter of sections), high dimensional geometry (neighborliness and asymptotic geometry of convex bodies), harmonic analysis (trigonometric diameter and selection of characters) and random matrices (asymptotic behavior of the largest and smallest singular values).

  • Gaussian ensembles and universality principles (Djalil Chafaï)
  • Empirical methods and selection of characters (Olivier Guédon)
  • Basic tools from empirical processes theory applied to the compressed sensing problem (Guillaume Lecué)
  • Applications of chaining methods (Shahar Mendelson)
  • The Restricted Isometry Property (RIP) of some models of random matrices and High dimensional Geometry (Alain Pajor)


The first talk will start on Monday, November 16 at 10:00 and the last talk will end on Friday, November 20 at 13:30.
You do recall the work mentioned by Pawan Baheti on using compressive sensing for bio-signal monitoring in what is called a Body Area Network (BAN). It is now making some headlines with the Silicon Valley public. I wonder if I should include it in the Compressive Sensing Hardware page (maybe I should, since the network of sensors will eventually be different in terms of power and data bandwidth requirements).

What are the odds ? There is another Yin Zhang publishing in the low rank / compressive sensing literature who also lives in Texas. So things become clear: there is the Yin Zhang at Rice University in Houston, known for his optimization work in compressive sensing, and the Yin Zhang at the University of Texas at Austin, a co-author of the paper below. Austin and Houston are 166 miles apart.

Many basic network engineering tasks (e.g., traffic engineering, capacity planning, anomaly detection) rely heavily on the availability and accuracy of traffic matrices. However, in practice it is challenging to reliably measure traffic matrices. Missing values are common. This observation brings us into the realm of compressive sensing, a generic technique for dealing with missing values that exploits the presence of structure and redundancy in many real-world systems. Despite much recent progress made in compressive sensing, existing compressive-sensing solutions often perform poorly for traffic matrix interpolation, because real traffic matrices rarely satisfy the technical conditions required for these solutions. To address this problem, we develop a novel spatio-temporal compressive sensing framework with two key components: (i) a new technique called SPARSITY REGULARIZED MATRIX FACTORIZATION (SRMF) that leverages the sparse or low-rank nature of real-world traffic matrices and their spatio-temporal properties, and (ii) a mechanism for combining low-rank approximations with local interpolation procedures. We illustrate our new framework and demonstrate its superior performance in problems involving interpolation with real traffic matrices where we can successfully replace up to 98% of the values. Evaluation in applications such as network tomography, traffic prediction, and anomaly detection confirms the flexibility and effectiveness of our approach.
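To make the matrix-interpolation idea concrete, here is a minimal Matlab sketch of a rank-r factorization fitted to the observed entries only, by regularized alternating least squares. It is a generic low-rank completion toy of my own, not the authors' SRMF, which additionally incorporates spatio-temporal constraint matrices and local interpolation procedures.

n = 50; m = 200; r = 3; lambda = 0.1;     % matrix size, rank, regularization
Xtrue = randn(n,r)*randn(r,m);            % synthetic low-rank "traffic matrix"
M = rand(n,m) > 0.9;                      % observe only about 10% of the entries
X = Xtrue.*M;                             % measured matrix with missing values set to 0
L = randn(n,r); R = randn(m,r);
for it = 1:50
    for i = 1:n                           % update each row of L from observed entries
        idx = M(i,:); A = R(idx,:); b = X(i,idx)';
        L(i,:) = ((A'*A + lambda*eye(r)) \ (A'*b))';
    end
    for j = 1:m                           % update each row of R from observed entries
        idx = M(:,j); A = L(idx,:); b = X(idx,j);
        R(j,:) = ((A'*A + lambda*eye(r)) \ (A'*b))';
    end
end
Xhat = L*R';
fprintf('relative error on missing entries: %.3f\n', ...
    norm((Xhat - Xtrue).*~M,'fro')/norm(Xtrue.*~M,'fro'))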


From this blog, here are some insights:


· How to fill missing values in a matrix (could be a traffic matrix, delay matrix, or social proximity matrix)

· They focus on traffic matrices.

· Missing values are quite common – direct measurement is infeasible, measurements are unreliable, anomalies appear, and future traffic has not yet appeared

· Many network tasks are sensitive to missing values

· Ideas: 1) Exploit the low-rank nature of Traffic Matrices (TM); 2) exploit spatio-temporal properties (TM rows and columns close to each other are often close in value); 3) exploit local structures in TMs

· Compressive sensing is used (generic technique for dealing with missing values that exploits the presence of structure and redundancy). However, existing compressive-sensing solutions perform poorly for traffic matrix interpolation.

· In the paper a novel spatio-temporal compressive sensing framework was developed. Key components: 1) a new technique called Sparsity Regularized Matrix Factorization (SRMF) that leverages the low-rank nature of traffic matrices and their spatio-temporal properties; 2) a mechanism for combining low-rank approximations with local interpolation procedures.

· They used real datasets from 3 networks.

· Compared to all other algorithms, this one is always better. Even with 98% missing values, the error is only about 20%.

Credit: Clair Perry, Charlottetown, Prince Edward Island, Canada. The International Space Station and the Shuttle as photographed from the ground on 9/9/9.