Wednesday, June 30, 2010

CS: A patent, DIY computational photography, NETRA

Daniel Reetz just sent the following:

I'm a CompPhoto enthusiast, and discovered your blog while looking for interviews with Ramesh Raskar. Anyway, I'm not sure if this is compressed sensing, exactly, but it's an interesting patent, and very fresh:

Daniel Reetz

The abstract of the patent reads as follows:
An imaging system is presented for imaging objects within a field of view of the system. The imaging system comprises an imaging lens arrangement, a light detector unit at a certain distance from the imaging lens arrangement, and a control unit connectable to the output of the detection unit. The imaging lens arrangement comprises an imaging lens and an optical element located in the vicinity of the lens aperture, said optical element introducing aperture coding by an array of regions differently affecting a phase of light incident thereon which are randomly distributed within the lens aperture, thereby generating an axially-dependent randomized phase distribution in the Optical Transfer Function (OTF) of the imaging system resulting in an extended depth of focus of the imaging system. The control unit is configured to decode the sampled output of the detection unit by using the random aperture coding to thereby extract 3D information of the objects in the field of view of the light detector unit....

I responded to Daniel that I would urge the patent holder to make sure there is no prior art on this. Patents are a natural subject for a new technology such as compressive sensing, but the point at which we consider something to be obvious is going to be really difficult to ascertain, as the field is ripe for many new insights (without too much thinking!). Daniel's e-mail was sent a day after the U.S. Supreme Court decided on Bilski. For some background material, you may recall how a CT inversion algorithm played a role in algorithms being removed from patentability. In that entry, I wrote:

The suggestion that ART is employed in current CT scanners seems at odds with the review entitled: Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? by Xiaochuan Pan, Emil Sidky and Michael Vannier. Also of interest, Richard Gordon, a co-inventor of ART for CT scanners (with Gabor T. Herman, an author of the paper above), responded to that review in this entry: MIA'09 and Richard Gordon's ART CT algorithm.

And if you think some of these patenting issues are germane, you might want to take a look at the Bilski case currently in front of the U.S. Supreme Court. From the Wikipedia entry, there is a mention of CT algorithms:

....Some Federal Circuit decisions, however, had held some transformations of signals and data patent-eligible. For example, the Abele decision approved a dependent claim to a method transforming X-ray attenuation data produced in a X-Y field by an X-ray tomographic scanner to an image of body organs and bones — while at the same time the Abele court rejected a more generic and abstract independent claim to a process of graphically displaying variances from their average values of unspecified data obtained in an unspecified manner.
From the Abele decision, there was the following determination:

....In Johnson, supra, the interrelationship of the algorithm to the remaining limitations of a claim was held to be determinative of whether the claim defined statutory subject matter. Relying on the same reasoning, in In re Walter, 618 F.2d 758, 205 USPQ 397 (CCPA 1980), the second part of the two-step analysis was defined as follows:

If it appears that the mathematical algorithm is implemented in a specific manner to define structural relationships between the physical elements of the claim (in apparatus claims) or to refine or limit claim steps (in process claims), the claim being otherwise statutory, the claim passes muster under §101. If, however, the mathematical algorithm is merely presented and solved by the claimed invention, as was the case in Benson and Flook, and is not applied in any manner to physical elements or process steps, no amount of post-solution activity will render the claim statutory; nor is it saved by a preamble merely reciting the field of use of the mathematical algorithm....

Is compressive sensing going to change patents?

It looks like the Supreme Court provided a narrow ruling that still does not help a lone inventor protect her ideas without access to a large amount of capital. A more erudite reading of what this means can be found here.

And while we are talking about CT reconstruction, Emil Sidky told us shortly after the post listed above that

While it may be true that there are specific protocols that use something other than FBP in CT, these are but a drop in the bucket. No one can really argue with the fact that FBP is the workhorse algorithm of medical CT, and considering the numbers of people that get scanned, there is certainly room for substantial exposure reduction to the general population by engineering a general-purpose CT scanner with advanced image-reconstruction algorithms.

Knowing all that, I was looking forward to the webinar Dick Gordon invited me to watch, on the subject of reconstruction algorithms used in clinical protocols. Thanks to the presentation, I learned that the big three in CT are providing new offerings. All of them use iterative algorithms aimed at reducing dose. Now the question is really figuring out what is being iterated, since the algorithms are closely held secrets. It is not clear whether they are iterating on some form of FBP or on something else.
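Since the vendors' algorithms are closely held, the best one can do is illustrate what "iterating" can mean. Below is a minimal numpy sketch of the classic ART (Kaczmarz) update discussed earlier, run on a toy random system standing in for a scanner's projection model; the dimensions and relaxation parameter are arbitrary choices of mine, not anything a vendor ships.

```python
import numpy as np

def art(A, b, sweeps=2000, relax=1.0):
    """Classic ART (Kaczmarz) iteration: cycle over the rows of A x = b,
    projecting the current estimate onto each row's hyperplane in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x = x + relax * (b[i] - a @ x) / (a @ a) * a
    return x

# Toy stand-in for a scanner: a random consistent system plays the role of
# the (secret) projection model; purely illustrative.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
x_hat = art(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))  # small: ART converges on consistent data
```

On noisy, inconsistent data one would stop early or shrink the relaxation parameter, which is where the engineering (and dose reduction) questions start.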

Last but not least, when I mentioned to Daniel that I liked his projects, he responded with:

You might also like some of the stuff I've done here:

(perhaps my translation of PP Sokolov's 1911 paper on integral imaging
would be more generally interesting: ) He made an integral camera with a
copper plate and some film!
I need to come back to that paper later. By the way, did you all know that this whole photography thing was born because of Napoleon and the British blockade?... muhhh... I thought so, but that's for a different blog entry. While we are on the subject of hacking materials to make them do something they were not intended to do, there is NETRA.

NETRA: Interactive Display for Estimating Refractive Errors and Focal Range by Vitor F. Pamplona, Ankit Mohan, Manuel M. Oliveira, Ramesh Raskar. The abstract reads:

We introduce an interactive, portable, and inexpensive solution for estimating refractive errors in the human eye. While expensive optical devices for automatic estimation of refractive correction exist, our goal is to greatly simplify the mechanism by putting the human subject in the loop. Our solution is based on a high-resolution programmable display and combines inexpensive optical elements, interactive GUI, and computational reconstruction. The key idea is to interface a lenticular view-dependent display with the human eye at close range - a few millimeters apart. Via this platform, we create a new range of interactivity that is extremely sensitive to parameters of the human eye, such as the refractive errors, focal range, focusing speed, lens opacity, etc. We propose several simple optical setups, verify their accuracy, precision, and validate them in a user study.

The website is here and the papers are here:

In a different direction, Brian Chesney asks a question on his blog (Compressive Sensing and Sparse in Time) and on Twitter, to which I commented the following:

I don’t understand your predicament. You say

“However, if your signal is sparse in time, you probably still care exactly when your signal did the thing that is of interest to you, when it did whatever it was that you couldn’t afford to miss by sampling randomly.”

If your signal is sparse in time, then you sample that signal in the Fourier domain and pick a random set of frequencies of that signal. That’s it: you don’t randomly sample in the time domain, so I don’t see where these issues of latency and group delay come about.


Did I miss something?
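To make the comment above concrete, here is a toy numpy sketch (my own construction, not Brian's setup): a time-sparse signal measured at a random subset of frequencies, decoded with a simple OMP routine standing in for the usual l_1 solver. The spike times fall out of the reconstruction, and at no point is the time axis sampled randomly.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select the k columns of A most
    correlated with the residual, then least-squares fit on that support."""
    r, support = y.astype(complex), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 48, 3
x = np.zeros(n)
spikes = rng.choice(n, size=k, replace=False)        # the spike *times*
x[spikes] = rng.uniform(1, 2, size=k) * rng.choice([-1.0, 1.0], size=k)

F = np.fft.fft(np.eye(n))                            # full DFT matrix
freqs = rng.choice(n, size=m, replace=False)         # random *frequencies*
y = F[freqs] @ x                                     # m Fourier-domain samples

x_hat = omp(F[freqs], y, k)
print(np.max(np.abs(x_hat - x)))                     # reconstruction error
```

With these dimensions (48 Fourier samples of a length-128 signal with 3 spikes), recovery is typically exact, spike locations included.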

Bill Harris asked a question on the issue of the sampling rate of human-scaled time series. He then responded on Andrew Gelman's blog, and this is what he had to say:
Igor and Hal, you've given me much to study (I've already printed off several articles and earmarked a few more). I knew of compressed sensing, but I didn't know much about it, and I didn't really know where to enter the literature on the economics side. Both tips help a lot. Thanks to both of you (and to Andrew for hosting the question).

(I also know that my statement of the Nyquist-Shannon sampling theorem is incomplete; what really counts is the bandwidth of the signal, not the highest frequency.)

Is it fair to think that most economic and social science time series analysis does not take advantage of such methodologies? If that be true, do you have any assessment of the magnitude or type of erroneous inferences that are or reasonably could be made from those analyses?

I recently looked at an undersampled (in the Nyquist sense) time series (unfortunately not something I can talk about yet) in which that data gave seriously erroneous insights. In that case, I fortunately also had integrated data available, and the integration played the role of an anti-aliasing filter.
Do any of you have an insight?

Tuesday, June 29, 2010

CS: Around the blogs in 80 hours, GraphLab, Compressive Direction Finding, Sparsity Pattern Recovery, a postdoc.

Alex Dimakis, who sometimes guest blogs on The Ergodic Walk, reminded me of a discussion we had a while back:

Hi Igor,
I thought you might find this interesting -- related to a conversation we had a while ago about checking RIP and expansion.

Thanks Alex!

I have already featured this entry before, but it is worth re-reading. Other blogs featured the following entries of interest:

Gonzalo Vazquez-Vilar

Hal Daume III
Andrew Gelman

David Brady

who talks about Ramesh Raskar's new project NETRA. I'll come back to this later.

In other news:
The technical report of Ewout van den Berg and Michael Friedlander entitled Sparse optimization with least-squares constraints is out.

and on Arxiv, we saw the following three papers:

GraphLab: A New Framework for Parallel Machine Learning by Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, Joseph M. Hellerstein. The abstract reads:
Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing. We show that using GraphLab we can achieve excellent parallel performance on large scale real-world problems.

Compressive Direction Finding Based on Amplitude Comparison by Ruiming Yang, Yipeng Liu, Qun Wan, Wanlin Yang. The abstract reads:
This paper exploits recent developments in compressive sensing (CS) to efficiently perform direction finding via amplitude comparison. The new method is proposed based on the unimodal characteristic of the antenna pattern and the sparse property of the received data. Unlike the conventional methods based on peak-searching and symmetric constraints, the sparse reconstruction algorithm requires fewer pulses and takes advantage of CS. Simulation results validate that the performance of the proposed method is better than that of the conventional methods.

Fundamental Tradeoffs for Sparsity Pattern Recovery by Galen Reeves and Michael Gastpar. The abstract reads:
Recovery of the sparsity pattern (or support) of a sparse vector from a small number of noisy linear samples is a common problem that arises in signal processing and statistics. In the high dimensional setting, it is known that recovery with a vanishing fraction of errors is impossible if the sampling rate and per-sample signal-to-noise ratio (SNR) are finite constants independent of the length of the vector. In this paper, it is shown that recovery with an arbitrarily small but constant fraction of errors is, however, possible, and that in some cases a computationally simple thresholding estimator is near-optimal. Upper bounds on the sampling rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector for two different estimators. The tightness of the bounds in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing necessary bounds. Near optimality is shown for a wide variety of practically motivated signal models.
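The "computationally simple thresholding estimator" the abstract alludes to can be sketched in a few lines of numpy. The dimensions and SNR below are my own arbitrary picks; the estimator simply keeps the k coordinates most correlated with the measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 1000, 600, 10                    # dimension, samples, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)   # normalized Gaussian sampling
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = 10.0 * rng.choice([-1.0, 1.0], size=k)   # strong nonzeros
y = A @ x + rng.normal(size=m)             # unit-variance noise

# Thresholding estimator: keep the k coordinates most correlated with y.
estimate = np.argsort(np.abs(A.T @ y))[-k:]
errors = len(set(support) - set(estimate))
print(errors, "support errors out of", k)
```

In line with the paper's framing, one asks not for exact support recovery but for a small fraction of errors; at this SNR the estimator typically misses at most a coordinate or two.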

Finally, there is a postdoc position in France:

Open Postdoc Positions in Bandits and Reinforcement Learning at INRIA Lille

The project team SEQUEL (Sequential Learning) of INRIA Lille, France, is seeking to appoint several Postdoctoral Fellows. We welcome applicants with a strong mathematical background who are interested in theory and applications of reinforcement learning and bandit algorithms.
The research will be conducted under the supervision of Remi Munos, Mohammad Ghavamzadeh and/or Daniil Ryabko, depending on the chosen topics.

The positions are research only and are for one year, with possibility of being extended.
The starting date is flexible, from the Fall 2010 to Spring 2011.

INRIA is France’s leading institution in Computer Science, with over 2800 scientists employed, of whom around 250 are in Lille. Lille is the capital of the north of France, a metropolis with 1 million inhabitants, with excellent train connections to Brussels (30 min), Paris (1h) and London (1h30).
The Sequel lab is a dynamic lab at INRIA with over 25 researchers (including PhD students) which covers several aspects of machine learning from theory to applications, including statistical learning, reinforcement learning, and sequential learning.

The positions will be funded by the EXPLO-RA project (Exploration-Exploitation for efficient Resource Allocation), a project in collaboration with ENS Ulm (Gilles Stoltz), Ecole des Ponts (Jean Yves Audibert), INRIA team TAO (Olivier Teytaud), Univ. Paris Descartes (Bruno Bouzy), and Univ. Paris Dauphine (Tristan Cazenave).
See: for some of our activities.

Possible topics include:

* In Reinforcement learning: RL in high dimensions. Sparse representations, use of random projections in RL.
* In Bandits: Bandit algorithms in complex environments. Contextual bandits, Bandits with dependent arms, Infinitely many arms bandits. Links between the bandit and other learning problems.
* In hierarchical bandits / Monte-Carlo Tree Search: Analysis and development of MCTS / hierarchical bandit algorithms, planning with MCTS for solving MDPs
* In Statistical learning: Compressed learning, use of random projections, link with compressed sensing.
* In sequential learning: Sequential prediction of time series

Candidates must have a Ph.D. degree (by the starting date of the position) in machine learning, statistics, or related fields, possibly with background in reinforcement learning, bandits, or optimization.

To apply please send a CV and a proposition of research topic to or, or

If you are planning to go to ICML / COLT this year, we could set up an appointment there.

Read more:

credit: The Aurora Australis, As Seen from the ISS on May 29 NASA

Sunday, June 27, 2010

Imaging With Nature: Cloud edition.

Every time I watch clouds, I don't see a simple mass of water moving slowly in the sky. I see a way for Mother Nature to provide us with some sort of coded aperture for Sun imaging in typical CS fashion, or for upper-atmosphere imaging through blind deconvolution, but that's just me.

Friday, June 25, 2010

Travel thoughts: Incoherent Seats and Pattern Matching on an un-developable surface

I thought this was funny, or I am becoming too much of a nerd.

As I was boarding a plane recently, the gate attendant took my boarding pass and passed it through some reader. The LCD screen buzzed and blinked with the mention "Incoherent Seat". So a seat number is now a projection, what's up with that? I thought.

I was also going through some pretty thorough fingerprinting exercises on the same journey and was thinking: what type of software does pattern matching for patterns that are inscribed on a non-solid and non-cylindrical (un-developable) surface? What are the chances this projection is different every time you take a new fingerprint? I'd like to know.

Thursday, June 24, 2010

CS: Around the blogs in 80 hours

On Twitter first:

Giuseppe asks:
in your surveys on CS have you ever seen a problem like min_X norm1(A*X-B) + k*nuclearnorm(inv(X)) ?

More info can be found here.

Pierre commented on a comment I added to a retweet:
The reason NB has such a high impact "About 200,000 academic journals are published, average number of readers per article is 5" NB it's 50+
For the blogs here is what we have this week:

Bob Sturm:

Djalil Chafai

Image Sensor World:

Anand Sarwate


Frank Nielsen

Gonzalo Vazquez Vilar

Lianlin Li is back

Tanya Khovanova

  • My First Polymath Project, on a subject that seems eerily close to the 12-ball problem I featured here before. The math is in trying to figure out the bounds for the number of measurements.

Andrew Gelman

James Lee

Wednesday, June 23, 2010

CS: CS IR and Video, One man's noise, Derivative CS, Aperture Synthesis Imaging

Jong Chul Ye just sent me the following:

Hi Igor,

I would like to point out our invited paper, which appeared in IJIST this month. The paper is a more detailed description of our previous contribution, compressive sensing dynamic MRI called k-t FOCUSS (, especially of how motion estimation and compensation can be implemented.
I believe that this concept can be used not only for dynamic MRI applications but also for compressive sensing video coding applications.
I would be happy to hear any feedback on our work.
Best regards,

Thanks Jong.

Compressed sensing has become an extensive research area in the MR community because of the opportunity for unprecedented high spatio-temporal resolution reconstruction. Because dynamic magnetic resonance imaging (MRI) usually has huge redundancy along the temporal direction, compressed sensing theory can be effectively used for this application. Historically, exploiting the temporal redundancy has been the main research topic in video compression techniques. This article compares the similarities and differences of compressed sensing dynamic MRI and video compression and discusses what MR can learn from the history of video compression research. In particular, we demonstrate that the motion estimation and compensation in video compression techniques can also be a powerful tool to reduce the sampling requirement in dynamic MRI. Theoretical derivation and experimental results are presented to support our view.

As some of you know, I am interested in compressive EEG, as shown in the discussion with Esther Rodriguez-Villegas a while back on the subject (CS: Q&A with Esther Rodriguez-Villegas on a Compressive Sensing EEG). But I have a hard time finding references on what type of inverse problem is really being solved. This week, the arxiv blog points to a new description of what EEG and ECG signals actually measure, in a paper entitled Electrophysiology of living organs from first principles by Gunter Scharf, Lam Dang and Christoph Scharf. The abstract of the paper reads:

Based on the derivation of the macroscopic Maxwell’s equations by spatial averaging of the microscopic equations, we discuss the electrophysiology of living organs. Other methods of averaging (or homogenization) like the bidomain model are not compatible with Maxwell’s theory. We also point out that modelling the active cells by source currents is not a suitable description of the situation from first principles. Instead, it turns out that the main source of the measured electrical potentials is the polarization charge density which exists at the membranes of active cells and adds up to a macroscopic polarization. The latter is the source term in the Laplace equation, the solution of which gives the measured far-field potential. As a consequence it is the polarization or dipole density which is best suited for localization of cardiac arrhythmia.

So the question becomes "is the dipole density sparse in some basis ?"

Recall the Heard Island Feasibility Test entry, where one man's noise is another man's signal?

Here is a new twist: the folks at Goddard found out that the noise background in the coded aperture X-ray cameras on board current spacecraft like XMM or Herschel can be used to detect space weather events (check Astrophysics Noise: A Space Weather Signal).

Finally, here is a small list of documents found on the interwebs:

Chien-Chia Chen has a report on a Compressed Sensing Framework on TinyOS. In it, one learns about a set of compressed sensing tools on both the sensor side, using TinyOS, and the host PC side, using Matlab.

Derivative Compressive Sampling with Application to Phase Unwrapping by Mahdi S. Hosseini and Oleg Michailovich. The abstract reads:

Reconstruction of multidimensional signals from the samples of their partial derivatives is known to be an important problem in imaging sciences, with its fields of application including optics, interferometry, computer vision, and remote sensing, just to name a few. Due to the nature of the derivative operator, the above reconstruction problem is generally regarded as ill-posed, which suggests the necessity of using some a priori constraints to render its solution unique and stable. The ill-posed nature of the problem, however, becomes much more conspicuous when the set of data derivatives occurs to be incomplete. In this case, a plausible solution to the problem seems to be provided by the theory of compressive sampling, which looks for solutions that fit the measurements on one hand, and have the sparsest possible representation in a predefined basis, on the other hand. One of the most important questions to be addressed in such a case would be of how much incomplete the data is allowed to be for the reconstruction to remain useful. With this question in mind, the present note proposes a way to augment the standard constraints of compressive sampling by additional constraints related to some natural properties of the partial derivatives. It is shown that the resulting scheme of derivative compressive sampling (DCS) is capable of reliably recovering the signals of interest from much fewer data samples as compared to the standard CS. As an example application, the problem of phase unwrapping is discussed.

Compressed Sensing For Aperture Synthesis Imaging by Stephan Wenger, Soheil Darabi, Pradeep Sen, Karl-Heinz Glaßmeier, and Marcus Magnor. The abstract reads:
The theory of compressed sensing has a natural application in interferometric aperture synthesis. As in many real-world applications, however, the assumption of random sampling, which is elementary to many propositions of this theory, is not met. Instead, the induced sampling patterns exhibit a large degree of regularity. In this paper, we statistically quantify the effects of this kind of regularity for the problem of radio interferometry where astronomical images are sparsely sampled in the frequency domain. Based on the favorable results of our statistical evaluation, we present a practical method for interferometric image reconstruction that is evaluated on observational data from the Very Large Array (VLA) telescope.

Tuesday, June 22, 2010

Compressed Sensing or Inpainting, part II

From Clients from Hell:

Client: “How come all the photos I took have the heads cut off?”

Me: “Hmm, Did you look though the view finder when you took them?”

Client: “I don’t know what that is. Can’t you just move the picture up so I can see their heads? I mean they’re digital pictures?”

Ever since the Wired article came out, it's been bothering me, to the point where I had to write the entry Why Compressed Sensing is NOT a CSI "Enhance" technology ... yet !. I am not alone; take for example Bob Sturm's recent entry entitled "Compressed Sensing" and Undersampled Audio Reconstruction, which talks about the compressive sensing audio example featured in the recent NPR piece. In both cases, there is a sense that what is being mentioned is at the limit of compressed sensing. Why are these examples at the limit of what compressive sensing can do? In Around the blogs in 80 hours, I mentioned:
....This is why I wrote the example in Compressed Sensing: How to wow your friends back in 2007 that features deltas and sines. Let us note that in compressive sensing audio, for the delta/sines assumption to hold, you really have to sample enough within time-localized phenomena. In other words, compressed sensing is not a license to make appear something you did not catch with the sampling process. This needs to be said often. In MRI or steady-state audio, the signal is being sampled with diracs in the appropriate phase space (Fourier for MRI, time for audio), which will get you a result directly applicable to a compressive sensing approach. In other fields like imagery, however, you do not sample directly in a good phase space, and you need new types of exotic hardware to perform these new types of incoherent measurements...
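The "not a license" point has a blunt numerical illustration (a toy of mine, not from the entry quoted above): a spike sitting between random time samples leaves no trace at all in the measurements, while random Fourier samples of the same spike see it in every measurement.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 256, 64
times = rng.choice(n, size=m, replace=False)      # random time-domain samples

# A unit spike sitting on an instant that was never sampled:
missed = np.setdiff1d(np.arange(n), times)[0]
spike = np.zeros(n)
spike[missed] = 1.0

# Random sampling *in time* carries no trace of it; no decoder can help:
print(np.abs(spike[times]).max())                 # 0.0

# Random sampling *in frequency* sees it in every measurement:
freqs = rng.choice(n, size=m, replace=False)
print(np.abs(np.fft.fft(spike)[freqs]).min())     # ≈ 1.0
```

The zero column in the first measurement operator is exactly why no reconstruction algorithm, l_1 or otherwise, can conjure up what the sampling never caught.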

But now, with the Wired piece, and as featured in Compressed Sensing or Inpainting? Part I, Jarvis Haupt showed us an example of compressive sensing that looked like an inpainting example. Within the framework of compressive sensing, we really need two elements:
  • an encoding mechanism whose purpose is to multiplex/mix different signals (sometimes in a random fashion)
  • a decoding mechanism (most probably a nonlinear one)
In Part I, an illustration of compressive sensing seemed to suggest that compressive sensing could omit the multiplexing/encoding part while relying exclusively on the second part. But instead of arguing unproductively and taking sides between inpainting and compressive sensing, I am wondering about the following question, and I am looking forward to folks in the community helping me out:

Is your average point and click camera with missing pixels a compressive sensing system?

To give some perspective, here is a presentation by Mark Neifeld given at the Duke-AFRL meeting a year ago, entitled Adaptation for Task-Specific Compressive Imaging, and especially this slide:

So some people seem to think that way as well. But let us come back to a more substantive explanation of a point and click camera being a compressive sensing system. Let us first figure out whether some mixing is occurring in this type of off-the-shelf camera by using a simple model of a "normal" camera and of a CS one.

Randomly sampling on the CCD is really equivalent to having a camera with missing pixels. What does a model for a missing-pixel camera look like? To a first approximation, one can think of it as a thick coded aperture camera. Talking about the thickness of a mask for coded aperture is new, as the thickness of the mask is not an issue in most current computational photography undertakings (it is an issue in coded aperture cameras used in X-ray astronomy, however). In the following figures, I graphed/drew the thick coded aperture mask and the CCD/CMOS planes sideways. In the thick configuration below, rays of light do not mix with each other, i.e., each CCD/CMOS pixel receives an unmixed portion of the scene of interest through the mask. Images taken with this camera model translate into the missing-pixel Obama image of the Wired piece:

However, if you recall how coded aperture works when used in the framework of compressive sensing, you'll realize that it is equivalent to a thin coded aperture mask, as shown in the following figure.

The red square on the CCD/CMOS receives light rays from many different holes in the mask: i.e., some mixing is occurring, and therefore, by having a random combination of holes on the mask, one can apply the compressive sensing framework.

The question now becomes: how does this apply to the Obama picture in the Wired piece, knowing it was taken with the equivalent of a thick coded aperture, not a thin one where compressive sensing would obviously work? And why does the l_1 minimization seem to work in getting back a useful image?

Some would say that there is enough redundant information in the image with missing pixels so that, with the right dictionary, the image can be almost perfectly reconstructed. However, I think (and this is the reason we shouldn't rush to judgment when saying that some systems are CS or not) that some mixing is occurring that we are somehow overlooking. So how is a point and click camera an actual "weak" compressive sensing system?
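The first view, redundancy plus the right dictionary, is easy to probe numerically. In the 1-D toy below (my own construction, with OMP standing in for the l_1 decoder), the "camera" is just a random subset of pixels, i.e., rows of the identity, with no mixing whatsoever; because diracs are maximally incoherent with the DCT basis, a DCT-sparse "image" is still recovered exactly.

```python
import numpy as np

def omp(A, y, k):
    """Greedy OMP decoder: pick the k atoms most correlated with the residual."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

rng = np.random.default_rng(4)
n, m, k = 256, 80, 5

# Orthonormal DCT-II basis (columns of Psi): a "smooth image content" model.
j, t = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
C = np.sqrt(2.0 / n) * np.cos(np.pi * j * (2 * t + 1) / (2 * n))
C[0] /= np.sqrt(2.0)
Psi = C.T

s = np.zeros(n)
supp = rng.choice(n, size=k, replace=False)
s[supp] = rng.uniform(1, 2, size=k) * rng.choice([-1.0, 1.0], size=k)
image = Psi @ s                                   # frequency-sparse "image"

pixels = rng.choice(n, size=m, replace=False)     # the pixels that survive
s_hat = omp(Psi[pixels, :], image[pixels], k)     # decode from kept pixels only
print(np.max(np.abs(Psi @ s_hat - image)))        # small: image recovered
```

This does not settle whether extra physical mixing is at play in a real camera, but it shows that missing pixels alone can already satisfy the incoherence requirement for frequency-sparse content.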

My take is that a point and click camera does not fit the simple coded aperture model. The optical engineering business is about designing systems so that a dirac in the world plane gets translated into as close to a dirac as possible on the CCD/CMOS plane, for all wavelengths and for all depths. The problem is, we can't do it, and this is why the optical folks are having a ball selling us lenses of different kinds in a clearly segmented market (different types of depth, focus, aberration, field of view, blah blah blah).

In a typical optical system, a dirac in the object plane gets translated into an Airy function like this one on the CCD/CMOS plane.

As one can see, the ripples near the peak allow some of the light to overflow into nearby pixels; i.e., even in a simple system, some mixing occurs with neighboring pixels, if not at one wavelength then definitely at others. Other types of mixing in your average point and click system include:
  • Transport medium (air, ...)
  • JPEG transform
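The amount of mixing contributed by diffraction alone can be estimated from the Airy pattern, whose normalized intensity is I(v) = (2 J1(v) / v)^2. The numpy-only sketch below (using an integral representation of J1 and a truncated radial grid, both my own choices) integrates the encircled energy and finds that a sizable fraction of a point source's light lands outside the central lobe.

```python
import numpy as np

def bessel_j1(x):
    """J1 via its integral representation (numpy-only midpoint rule):
    J1(x) = (1/pi) * integral_0^pi cos(t - x sin(t)) dt."""
    t = (np.arange(4000) + 0.5) * np.pi / 4000
    return np.cos(t[None, :] - np.outer(x, np.sin(t))).mean(axis=1)

# Normalized Airy intensity I(v) = (2 J1(v) / v)^2 for a circular pupil.
v = np.linspace(0.0, 30.0, 3001)
I = np.ones_like(v)                          # limit value 1 at v = 0
I[1:] = (2.0 * bessel_j1(v[1:]) / v[1:]) ** 2

ring = 3.8317                                # first zero of J1: Airy disk edge
dv = v[1] - v[0]
inside = np.sum((I * v)[v <= ring]) * dv     # encircled energy, dA ~ v dv
total = np.sum(I * v) * dv                   # (truncated at v = 30)
print(1.0 - inside / total)                  # ~0.14 here; ~0.16 on the full plane
```

Roughly a sixth of the light from a single point source ends up outside the central Airy disk, spread over neighboring pixels: mixing by construction, before JPEG or the transport medium even enters the picture.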
The next issue then becomes: how come the l_1 minimization can work, since the measurement matrix does not take this mixing into account? I am not sure, but maybe the results on multiplicative noise or Random Matrix Theory could help (arxiv: Mixed Operators in Compressed Sensing by Matthew Herman and Deanna Needell).

Eventually, the question is: are lower-quality cameras or microphones "weak" compressive sensing systems? More to come later...

Monday, June 21, 2010

CS: Linkedin Discussions, a Correction and Two Papers

Two discussions on LinkedIn have been started and are looking for answers:
The group now has 450 members.

I also found this interesting blog entry: What I Don’t Know About Compressive Sensing by Brian Chesney

In a recent entry (CS: CS SPDE, Cramér-Rao Bound In Noisy CS, Compressive Fourier Transform Spectroscopy, compressive terahertz imaging and a job at Exxon), I featured a paper entitled On the Achievability of Cramér-Rao Bound In Noisy Compressed Sensing by Rad Niazadeh, Massoud Babaie-Zadeh, and Christian Jutten, where one could read in the abstract that

...In this correspondence, we will first explain the mistake in the mentioned lemma in [Akackaya et. al., 2008] and will then state a new correct form of it...

Mehmet Akçakaya responded with the following in the comment section:

Dear Igor,

While reading your blog, I came across the paper "On the Achievability of Cramér-Rao Bound In Noisy Compressed Sensing," which claimed there was a mistake in one of our earlier works. I have contacted the authors of the work and clarified their source of confusion.

Our proof IS correct as it was published. Just wanted to let the community know :)

Thanks for running this blog, so that we become aware of such misunderstandings early on.


Thanks Mehmet for the heads-up. Let us now wait for a new version of On the Achievability of Cramér-Rao Bound In Noisy Compressed Sensing.

Finally, I found two papers on CS:

A review of the compressive sampling framework in the lights of spherical harmonics: applications to distributed spherical arrays by Bruno Masiero and Martin Pollow. The abstract reads:

Compressive Sampling proposes a new framework on how to effectively sample information (signals or any other physical phenomena) with a reduced number of sensors. The main idea behind this concept is that if the information to be sampled can be sparsely described in a space that is incoherent to the measurement space, then this information can be restored by minimization. In this paper we describe the Compressive Sampling framework and present one example of application, namely, to sample an outgoing acoustic field with a distributed spherical array composed of a reduced number of sensing microphones without suffering from aliasing errors.

Compressive Sensing based Lightweight Sampling for Large Robot Groups by Sameera Poduri , Garrett Marcotte , Gaurav S. Sukhatme. The abstract reads:
This paper presents a lightweight method for spatial sampling of environmental phenomenon using a large group of robots. Data is gathered using simple random walks, aggregated centrally, and the field is reconstructed using tools from compressive sensing. Key to our approach is a recent remarkable result that it is possible to reconstruct compressible fields using O(log n) (where n is the dimension of the field) nonadaptive measurements in a basis that is ‘incoherent’ to the representation basis and, under certain conditions, random measurements are as ‘good’ as any other measurements [1]. We show that random walk sampling with large groups of robots can effectively reconstruct natural fields that are typically compressible in the frequency domain. The method is described analytically and demonstrated in simulation with real and simulated data. We also study the tradeoff between number of robots required and length of the random walk.
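The core idea, recovering a frequency-sparse field from a few random point samples, can be sketched in a few lines. The following toy example (my own construction, not the authors' code) builds an orthonormal DCT synthesis basis, samples the field at random "robot" locations, and recovers the coefficients with a greedy solver:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick k columns of A to explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # most correlated atom
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, k, m = 256, 4, 80

# Orthonormal DCT-II synthesis basis: column j is the j-th cosine atom.
i = np.arange(n)
Psi = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i[:, None] + 1) * i[None, :] / (2 * n))
Psi[:, 0] /= np.sqrt(2.0)

c = np.zeros(n)                                     # sparse DCT coefficients
c[rng.choice(20, k, replace=False)] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))
field = Psi @ c                                     # spatial field, sparse in frequency

rows = rng.choice(n, m, replace=False)              # random sample locations
c_hat = omp(Psi[rows, :], field[rows], k)           # recover coefficients from samples

err = np.linalg.norm(Psi @ c_hat - field) / np.linalg.norm(field)
print(f"relative reconstruction error: {err:.2e}")
```

Point sampling is incoherent with the DCT basis, which is why far fewer samples than field dimensions typically suffice.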

Image Credit: NASA/JPL/Space Science Institute, N00155237.jpg was taken on June 05, 2010 and received on Earth June 07, 2010. The camera was pointing toward TITAN at approximately 279,287 kilometers away, and the image was taken using the CL1 and CB3 filters.

Sunday, June 20, 2010

One more word on monitoring the Deepwater Horizon well.

Following up on the recent entries on BP's Deepwater Horizon well and the need for its monitoring, it occurred to me that most petroleum engineers would recognize the words "compressive sensing" as the technique that uses the l_1 sparsity-inducing norm to reconstruct signals from seismic probing. It isn't so. Compressive sensing is really about mixing the right type of probing waves (seismic, ultrasonic, ...) or nuclear sources (neutrons, gammas, ...). By this I mean several sources located in different places being fired in parallel (not sequentially). The way you mix them is what compressive sensing is really about. Much of the work on the subject since 2004 should allow interested parties to get the information they are looking for much faster. In effect, if the mixing is done right, compressive sensing should provide good resolution in space or time of the changes happening under the seabed floor while the relief well is being dug.

To whom it may concern: this blog is read every day by more than a thousand specialists from universities and industry. If there is interest, I am sure that some of them could sign an NDA to provide guidance on how to do this right. Also, since the summer session is coming up, it is a good opportunity to get academic folks working full time on this.

Image of burning methane at the Deepwater site courtesy of David Valentine (via Scientific American)

Saturday, June 19, 2010

Monitoring the Deepwater Horizon well.

Following up on yesterday's entry (Deepwater Horizon should not become our Event Horizon), here is one set-up that could be used to monitor the conditions of the Deepwater Horizon well and its surroundings as the relief well gets close to the main well. In this configuration, the probing wave is seismic, but the approach can be adapted to nuclear and other types of probing measurements. Compressive Sensing provides a nice way to demultiplex all these signals, but more importantly, it can detect changes in the configuration of the soil with few measurements.

Credit: I am taking as an example a figure found in one of Felix Herrmann's presentations, SLIM/UBC. More can be found here.

Friday, June 18, 2010

Deepwater Horizon should not become our Event Horizon

If the situation is as bad as some people describe, then it looks like it could be important to know when the whole two-billion-barrel reservoir will be wide open, gushing into the Gulf of Mexico ... and/or whether there is a plan F that can be developed if the seabed floor collapses. BP's current approach of drilling a relief well also requires having a good idea of how the seabed conditions change with time as the relief drill bit gets closer to the primary broken well. In both instances, BP needs the means of monitoring and making sense of the very rapid changes taking place underneath the seafloor. So let me make a statement:

I am sure BP has the right people to do it with either nuclear, seismic, electromagnetic or sonic measurement capabilities; I am sure BP can perform these inverse problems with the best codes/algos there are.

However, if there are any doubts, or if there is a need for innovative and robust ways to monitor the situation at a higher sampling rate or with higher resolution using the current assets, then I suggest some of the BP folks get in touch with some of the people in the community reading this blog. It is read by some of the best minds in the world when it comes to sampling issues using any kind of probe (particles, electromagnetic, acoustic, seismic...). I am sure they can also sign NDAs if the situation requires it.

Thursday, June 17, 2010

CS: Extreme Sampling

The generic story for using compressive sensing is that it can provide sampling at lower cost and lower complexity. While reading the news this week, I was made aware of several extreme sampling exercises, and I am wondering how compressive sensing could help, or could have helped, in these endeavors. The fascinating part is how extreme these sampling exercises were. Sometimes, the main constraints are:

Do you have any other example of extreme sampling ? I'd like to hear about them.

Credit photo: JAXA.

Wednesday, June 16, 2010

CS: CS SPDE, Cramér-Rao Bound In Noisy CS, Compressive Fourier Transform Spectroscopy, compressive terahertz imaging and a job at Exxon

Here are four papers that showed up on my radar screen and finally a job from ExxonMobil:

A non-adapted sparse approximation of PDEs with stochastic inputs by Alireza Doostan, Houman Owhadi. The abstract reads:
We propose a method for the approximation of solutions of PDEs with stochastic coefficients based on the direct, i.e., non-adapted, sampling of solutions. This sampling can be done by using any legacy code for the deterministic problem as a black box. The method converges in probability (with probabilistic error bounds) as a consequence of sparsity and a concentration of measure phenomenon on the empirical correlation between samples. We show that the method is well suited for truly high-dimensional problems (with slow decay in the spectrum).

On the Achievability of Cramér-Rao Bound In Noisy Compressed Sensing by Rad Niazadeh, Massoud Babaie-Zadeh, Christian Jutten. The abstract reads:
Recently, it has been proved in [Babadi et. al., 2009] that in noisy compressed sensing, a joint typical estimator can asymptotically achieve the Cram\'er-Rao lower bound of the problem. To prove this result, [Babadi et. al., 2009] used a lemma, which is provided in [Akackaya et. al., 2008], that comprises the main building block of the proof. This lemma contains a mathematical mistake in its statement and proof which should be corrected. One wonders then whether or not the main results of [Babadi et. al., 2009] are correct. In this correspondence, we will first explain the mistake in the mentioned lemma in [Akackaya et. al., 2008] and will then state a new correct form of it. Then we re-study the main results of [1], and we will show that fortunately they remain valid, that is, the Cram\'er-Rao bound in noisy compressed sensing is achievable and a joint typical estimator can achieve it.

Compressive Fourier Transform Spectroscopy by Ori Katz, Jonathan M. Levitt, Yaron Silberberg. The abstract reads:
We describe an approach based on compressive-sampling which allows for a considerable reduction in the acquisition time in Fourier-transform spectroscopy. In this approach, an N-point Fourier spectrum is resolved from much less than N time-domain measurements using a compressive-sensing reconstruction algorithm. We demonstrate the technique by resolving sparse vibrational spectra using less than 25% of the Nyquist rate samples in single-pulse CARS experiments. The method requires no modifications to the experimental setup and can be directly applied to any Fourier-transform spectroscopy measurement, in particular multidimensional spectroscopy.

And behind a paywall: Image reconstruction using spectroscopic and hyperspectral information for compressive terahertz imaging by Zhimin Xu and Edmund Y. Lam. The abstract reads:
Terahertz (THz) time-domain imaging is an emerging modality and has attracted a lot of interest. However, existing THz imaging systems often require a long scan time and sophisticated system design. Recently, a new design incorporating compressed sensing (CS) leads to a lower detector cost and shorter scan time, in exchange for computation in an image reconstruction step. In this paper, we develop two reconstruction algorithms that can estimate the underlying scene as accurately as possible. First is a single-band CS reconstruction method, where we show that by making use of prior information about the phase and the correlation between the spatial distributions of the amplitude and phase, the reconstruction quality can be significantly improved over previously published methods. Second, we develop a method that uses the multi-frequency nature of the THz pulse. Through effective use of the spatial sparsity, spectroscopic phase information, and correlations across the hyperspectral bands, our method can further enhance the recovered image quality. This is demonstrated by computation on a set of experimental THz data captured in a single-pixel THz system.

A job at Exxon:

AutoReqId 9655BR
Job or Campus Folder Research Scientist-Inversion Methods
Job Description ExxonMobil’s Corporate Strategic Research laboratory is seeking applications from talented individuals in physics, applied mathematics, or engineering with a strong record of achievements in fields related to non-linear inversion, compressive sensing, and their associated mathematical and numerical methods. Specific position is in the following areas of interest:

Research Scientist – Inversion methods and compressive sensing. Position involves developing algorithms involved in large-scale non-linear inversion. Candidates with experience with the mathematics of compressive sensing, signal/image processing, denoising, sub-Nyquist signal reconstruction, and sparse representation of data desired.

Job Requirements:

Applicants should have a Ph.D. in applied mathematics, physics, engineering, geophysics, or a related field, with a strong ability in their field of expertise. Proficiency with scientific programming languages and experience with large-scale, parallel, numerical simulations are definite advantages. The ability to communicate and interact with internal and external groups will be an important selection criterion. Candidates should have a strong publication record, excellent oral presentation and writing skills, and show a desire and ability to grow into new science areas.

The successful candidate will join a dynamic, multi-disciplinary group of world-class scientists who focus on performing breakthrough research and creating new approaches to solve our most challenging problems. Technical staff members in this position implement and report on independent research, participate in program development, as well as collaborate internationally with leading engineers and scientists from industry, universities, and other technical institutions.

ExxonMobil’s Corporate Strategic Research (CSR) laboratory is a powerhouse in energy research focusing on fundamental science that can lead to technologies having a direct impact on solving our biggest energy challenges. Our facilities are centrally located in scenic Annandale, New Jersey, approximately one hour from both New York City and Philadelphia.

ExxonMobil offers an excellent working environment and a competitive compensation and benefits package.

ExxonMobil is an Equal Opportunity Employer
Job Location Clinton, NJ

Tuesday, June 15, 2010

CS: RIP redux, Around the blog in 80 hours, Subspace Evolution and Transfer (SET) for Low-Rank Matrix Completion

From yesterday's presentation we had two comments. One from Laurent Jacques:

Thank you to Jared Tanner for these important references.

I think it is useful to point out that, even if Foucart and Lai's results do not provide the best achievable bounds compared to the recent ones obtained by Blanchard and Thompson, they are not wrong (for BP), just weaker.

Moreover, the inequality

b * lower_rip_{(b+1)k} + upper_rip_{bk} \lt b - 1

(e.g. with b=11) is of course still invariant under scaling of the matrix (i.e., A -> cA), since it is equivalent to

(1 + upper_rip_{bk})/(1 - lower_rip_{(b+1)k}) \lt b

It is nevertheless true that the insightful relations provided in the other referenced Blanchard et al. paper for different reconstruction methods (IHT, ROMP, ...) are not scale invariant, except perhaps for CoSaMP.
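Laurent's scale-invariance remark is easy to check numerically. The sketch below (function names are mine) brute-forces the restricted extremal eigenvalues of a tiny Gaussian matrix; since both the largest and smallest restricted eigenvalues scale by c^2 under A -> cA, any condition written as a ratio of the two is unaffected. Note it checks the mechanism at a single sparsity level, not the exact mixed-order inequality above:

```python
import numpy as np
from itertools import combinations

def restricted_extremes(A, k):
    """Brute-force min/max eigenvalues of A_S^T A_S over all size-k supports,
    i.e. the quantities (1 - lower_rip_k) and (1 + upper_rip_k)."""
    lo, hi = np.inf, -np.inf
    for S in combinations(range(A.shape[1]), k):
        w = np.linalg.eigvalsh(A[:, list(S)].T @ A[:, list(S)])
        lo, hi = min(lo, w[0]), max(hi, w[-1])
    return lo, hi

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 12)) / np.sqrt(8)
k = 2

lo, hi = restricted_extremes(A, k)
ratio = hi / lo                          # (1 + upper_rip_k) / (1 - lower_rip_k)

c = 3.7                                  # arbitrary rescaling A -> cA
lo_c, hi_c = restricted_extremes(c * A, k)

print(np.isclose(hi_c / lo_c, ratio))    # ratio-form conditions survive rescaling
```

By contrast, a condition that bounds lower_rip and upper_rip separately (as for IHT or ROMP) picks out a preferred scale for the matrix.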


and the other from Salman:

When talking about RIP, it is very important to see the relationship between the dimensions of the matrix (say MxN) and the sparsity of the signal (say K). If the RIP constant \delta = O(1/sqrt(K)), then the number of measurements scales as M ~ K^2 poly(log N). And in this range there is little difference (if any) between coherence-based and RIP-based results.
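To put rough numbers on Salman's remark, here is a quick back-of-the-envelope sketch; the constant C is an unspecified assumption, so only the scaling, not the absolute counts, is meaningful:

```python
import math

def measurements_small_delta(K, N, C=1.0):
    """Measurements needed when the proof requires delta_K = O(1/sqrt(K)):
    m on the order of C * K^2 * log(N), a coherence-like scaling."""
    return math.ceil(C * K**2 * math.log(N))

def measurements_const_delta(K, N, C=1.0):
    """Measurements needed with a constant delta: m ~ C * K * log(N / K)."""
    return math.ceil(C * K * math.log(N / K))

for K in (5, 10, 20):
    print(K, measurements_small_delta(K, 10**6), measurements_const_delta(K, 10**6))
```

The gap between K^2 log N and K log(N/K) is exactly why constant-delta RIP results are so much stronger than coherence-style arguments.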

I also had to apologize to Ray for butchering the end of his message. You see, Blogger, the service that runs this blog, has a real problem with the \lt and \gt signs.

Other blog entries relevant to some of the subjects covered here include:

Djalil Chafai:
Terry Tao
Gonzalo Vazquez Vilar
John Langford

Jason Moore has a take on the failure of GWAS, Seth has another.

Finally, there is a preprint on arxiv:

We describe a new algorithm, termed subspace evolution and transfer (SET), for solving low-rank matrix completion problems. The algorithm takes as its input a subset of entries of a low-rank matrix, and outputs one low-rank matrix consistent with the given observations. The completion task is accomplished by searching for a column space on the Grassmann manifold that matches the incomplete observations. The SET algorithm consists of two parts -- subspace evolution and subspace transfer. In the evolution part, we use a gradient descent method on the Grassmann manifold to refine our estimate of the column space. Since the gradient descent algorithm is not guaranteed to converge, due to the existence of barriers along the search path, we design a new mechanism for detecting barriers and transferring the estimated column space across the barriers. This mechanism constitutes the core of the transfer step of the algorithm. The SET algorithm exhibits excellent empirical performance for both high and low sampling rate regimes.

Monday, June 14, 2010

CS: Providing insight on RIP by Jared Tanner and Ray Maleh

We have some communications from two people today. First, Jared Tanner adds some additional insight on a question about the RIP. Then, Ray Maleh wanted me to write a small piece on what is known with regard to RIP and the recoverability of greedy algorithms, but since I am not a specialist, I yielded the floor to him on the subject. Please note that the underlying reason for making a result known on this blog stems from the inability of current journals to publish fast enough in a field that is growing as rapidly as compressive sensing. I guess the blog provides an avenue to make some results known more quickly to this community.

Thanks Jared and Ray, I always look forward to contributions that provide some context on our current state of knowledge. With no further babbling, here first is Jared's contribution:

Dear Igor,

I read today a question/comment about the RIP and its lack of scale invariance. It was remarked that the important quantity is the ratio, which is scale invariant. This was interpreted from Foucart and Lai's work. However, this is not the case. For most algorithms, the best RIP-based proof involves a condition that depends on some function of the upper and lower RIP constants, often of different orders. For details on this see:


In this second letter, you will read on pages 5 and 6 that, as best we know, the best known RIP condition for l_1 is: 11 * lower_rip_{12k} + upper_rip_{11k} less than 10, and not the more recent result of Foucart and Lai.

There is a lot of confusion about the RIP. It would be advisable for people to look at the bounds we provided in:
so that they have a better idea of how the RIP constants behave, at least for Gaussian matrices.

All the best,

and then Ray's contribution:

Orthogonal Matching Pursuit (OMP) has long been considered a versatile and robust heuristic for solving compressive sensing problems. Until recently, the only known theoretical results governing OMP's performance depended on the notion of dictionary coherence (see Tropp 04 and Gilbert et al. 03). It was not until last year that it was first shown that OMP can recover a $K$-sparse vector $x$ from measurements $y = \Phi x$ if the measurement matrix $\Phi$ satisfies a restricted isometry property (RIP).

In August 2009, Mark Davenport and Michael Wakin released a pre-print of the paper "Analysis of Orthogonal Matching Pursuit using the Restricted Isometry Property," where they show that given a measurement matrix $\Phi$ with a restricted isometry constant $\delta_K$ satisfying $$\delta_K < \frac{1}{3\sqrt{K}},$$ OMP will recover any signal that is K-sparse from its measurements. They further conjecture the following unachievable lower bound

$$\delta_K < \frac{1}{\sqrt{K}}.$$

In January 2010, Entao Liu and V. N. Temlyakov released the preprint "Orthogonal Super Greedy Algorithm and Applications in Compressed Sensing," where they improve the result by showing that OMP can recover any K-sparse signal using a measurement matrix satisfying $$\delta_{K+1} < \frac{1}{(1+\sqrt{2})\sqrt{K}}.$$ They mention that achieving a smaller restricted isometry bound is an interesting open problem.

In August 2009, while a Ph.D. student at the University of Michigan, Ray Maleh defended his dissertation "Efficient Sparse Approximation Methods for Medical Imaging." In Chapter 2 of his work, he shows that OMP can recover any K-sparse signal provided that

$$\delta_{K+1} \lt \frac{1}{1+\sqrt{K}}.$$

This is an improvement over the results of Davenport et al. and Liu et al. Furthermore, his result very closely approaches the unachievable lower bound $\delta_K \lt 1/\sqrt{K}$. In addition, Maleh proves RIP-based performance error bounds that characterize OMP's ability to recover non-sparse vectors, possibly in the presence of measurement noise. While still forthcoming as a journal paper, his dissertation can be found at:

The author can be reached at
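To see concretely the kind of guarantee these conditions are about, here is a toy OMP implementation (mine, not Ray's) recovering a K-sparse vector from Gaussian measurements; at this sparsity and measurement count recovery typically succeeds, though it is not guaranteed for any single draw:

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal Matching Pursuit: K greedy steps with least-squares refits."""
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
m, n, K = 120, 256, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # columns have roughly unit norm

x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.choice([-1.0, 1.0], K) * (1 + rng.random(K))
y = Phi @ x                                      # noiseless K-sparse measurements

x_hat = omp(Phi, y, K)
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative recovery error: {err:.2e}")
```

The RIP results above are what turn this typical-case behavior into a uniform guarantee over all K-sparse signals.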

Credit: JAXA, The last view of Hayabusa before it crashed. Via the planetary society blog.

Sunday, June 13, 2010

The good and the bad this weekend.

The good: Hayabusa's return capsule, carrying soil samples from the asteroid Itokawa, detached from the spacecraft. The capsule was caught on video by a NASA plane over Australia as it reentered the atmosphere. Some people have called the mission the robotic equivalent of Apollo 13.

The bad: " is worse". It looks like the well is damaged (read here for more technical details). I am sure many excellent engineers at BP are working overtime, but maybe they need some outside views on this. I mean, for one, we developed a technology that seems eerily similar to the one that Kevin Costner has been selling to BP, because in space we have devised systems that separate liquids of different densities.

and see how it works in microgravity:

But the issue here is not water-oil separation, it's really about determining how the subsurface flow extends under the sea floor. And if you ask me, it really looks like an interesting inverse problem. I don't care much about the politics underlying all this, but as far as I can tell, it sure qualifies as the starting point of a catastrophe. Can we rise to this challenge and have a similar outcome to that of Apollo 13?