Friday, August 31, 2007

Compressed Sensing: Map of entries from this blog


I just used Freemind to produce a map of most of the entries on Compressed Sensing I have written so far. It should help with navigation, and I'll update it as we go. It is here. The Freemind file is here. The reverse chronological order is shown here. Please note that I made it a permanent link in the right sidebar so that one can jump to it right away when coming to this blog. The map of entries is here: http://igorcarron.googlepages.com/compressedsensing
while the blog entries labeled Compressed Sensing are here: http://nuit-blanche.blogspot.com/search/label/compressed%20sensing

Thursday, August 30, 2007

Compressed Sensing Videos: Where Is My Popcorn ?


 [Update: since 2007 when this entry was written, there have been numerous talks put on video on the matter of Compressed Sensing. All the videos are included on this page, including the ones mentioned here.]

I just found out about the presentations, and the accompanying videos, from the Summer School on Compressive Sampling and Frontiers in Signal Processing that took place at IMA on June 4-15, 2007. They are all in Real format. This is a very nice initiative because there had not been many videos on the subject of compressed sensing before. If you like these, I would suggest that you send feedback to the organizers of the meeting and thank them for putting these videos online.

Richard Baraniuk
Emmanuel J. Candès
  • Sparsity, Talks(A/V) (ram)
    • After a rapid and glossy introduction to compressive sampling–or compressed sensing as this is also called–the lecture will introduce sparsity as a key modeling tool; the lecture will review the crucial role played by sparsity in various areas such as data compression, statistical estimation and scientific computing.
  • Sparsity and the l1 norm, Talks(A/V) (ram)
    • In many applications, one often has fewer equations than unknowns. While this seems hopeless, we will show that the premise that the object we wish to recover is sparse or compressible radically changes the problem, making the search for solutions feasible. This lecture discusses the importance of the l1-norm as a sparsity promoting functional and will go through a series of examples touching on many areas of data processing. [A minimal numerical sketch of this idea follows the lecture list below.]
  • Compressive sampling: sparsity and incoherence, Talks(A/V) (ram)
    • Compressed sensing essentially relies on two tenets: the first is that the object we wish to recover is compressible in the sense that it has a sparse expansion in a set of basis functions; the second is that the measurements we make (the sensing waveforms) must be incoherent with these basis functions. This lecture will introduce key results in the field such as a new kind of sampling theorem which states that one can sample a spectrally sparse signal at a rate close to the information rate---and this without information loss.
  • The uniform uncertainty principle, Talks(A/V) (ram)
    • We introduce a strong form of uncertainty relation and discuss its fundamental role in the theory of compressive sampling. We give examples of random sensing matrices obeying this strong uncertainty principle; e.g. Gaussian matrices.
  • The role of probability in compressive sampling, Talks(A/V) (ram)
    • This lecture will discuss the crucial role played by probability in compressive sampling; we will discuss techniques for obtaining nonasymptotic results about extremal eigenvalues of random matrices. Of special interest is the role played by high-dimensional convex geometry and techniques from geometric functional analysis such as Rudelson's selection lemma, and the role played by powerful results in the probabilistic theory of Banach spaces such as Talagrand's concentration inequality.
  • Robust compressive sampling and connections with statistics, Talks(A/V) (ram)
    • We show that compressive sampling is–perhaps surprisingly–robust vis-à-vis modeling and measurement errors.
  • Robust compressive sampling and connections with statistics (continued), Talks(A/V) (ram)
    • We show that accurate estimation from noisy undersampled data is sometimes possible and connect our results with a large literature in statistics concerned with high dimensionality; that is, situations in which the number of observations is less than the number of parameters.
  • Connections with information and coding theory, Talks(A/V) (ram)
    • We morph compressive sampling into an error correcting code, and explore the implications of this sampling theory for lossy compression and some of its relationship with universal source coding.
  • Modern convex optimization, Talks(A/V) (ram)
    • We will survey the literature on interior point methods which are very efficient numerical algorithms for solving large scale convex optimization problems.
  • Applications, experiments and open problems, Talks(A/V) (ram)
    • We discuss several applications of compressive sampling in the areas of analog-to-digital conversion and biomedical imaging and review some numerical experiments in new directions. We conclude by exposing the participants to some important open problems.
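As a companion to the lecture on sparsity and the l1 norm above, here is a minimal numerical sketch (mine, not from the lectures) of the phenomenon those abstracts describe: a system with fewer equations than unknowns, recovered exactly by l1 minimization because the unknown is sparse. The l1 problem is cast as a linear program; all sizes and names below are illustrative.

```python
# Recover a sparse x from underdetermined measurements y = A x by solving
# min ||x||_1 s.t. A x = y, written as an LP with x = u - v, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 48, 5                  # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                                 # m < n: fewer equations than unknowns

c = np.ones(2 * n)                             # minimize sum(u) + sum(v) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))   # essentially zero
```

With k small enough relative to m, the l1 solution coincides with the sparsest one, which is the point of the lecture.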
Ronald DeVore

Anna Gilbert

  • Algorithms for Compressed Sensing, I, Slides (pdf), Talks(A/V) (ram)
    • What algorithmic problem do we mean by Compressed Sensing? There are a variety of alternatives, each with different algorithmic solutions (both theoretical and practical). I will discuss some of the different types of results from the combinatorial to the probabilistic.
  • Algorithms for Compressed Sensing, II, Lecture notes (pdf), Talks(A/V) (ram)
    • What do these algorithms all have in common? What are the common goals of the problems and how do they achieve them? I will discuss several known techniques and open problems.
Douglas N. Arnold

Leon Axel (New York University), Steen Moeller (University of Minnesota)

Short presentations by participants (A/V) (ram)
Discussion (A/V) (ram)
Discussion (A/V) (ram)



Wednesday, August 29, 2007

Compressed Sensing: Streaming School in Denmark


A summer school report on streaming algorithms was filed by Piotr Indyk on the Geomblog.

The presentations are available on the MADALGO summer school website, a very nice initiative:

Enjoy.

Rice Compressed Sensing Site Update.

[Update: The Compressive Sensing Blog]

We all know about the Rice Compressed Sensing library that gathers links to all the papers relevant to Compressed Sensing. I just found out they also have a chronological listing. The Overview has also changed to reflect a better definition of Compressed Sensing. It includes:
Compressive sensing is also referred to in the literature by the terms: compressed sensing, compressive sampling, and sketching/heavy-hitters.
and features not only papers but also links to (mostly Matlab) toolboxes for Compressed Sensing.

Tuesday, August 28, 2007

Deadly resonance

Anne Fouillet, Grégoire Rey, Eric Jougla, Philippe Frayssinet, Pierre Bessemoulin and Denis Hémon just released a paper [1] on the correlation between temperature and mortality rates. The work was prompted in part by the abnormal death rate France sustained during the 2003 heat wave, which killed upward of 18,000 people more than would have been expected. In recent memory, there have been four such hot-weather episodes in France: the summers of 1975, 1976, 1983 and 2003. The figures shown below are from the paper, and they tell a story:





When I saw these graphs, I could not help but notice the following: there seems to be a resonance when the maximum temperature goes above 35 °C (95 °F). The average body temperature is 37 °C (98.6 °F), average skin temperature is about 34 °C, and we also know that water is a very anomalous substance; in particular, its specific heat capacity (Cp) has a minimum at about 36 °C.
One can always imagine the following: between 34 °C and 37 °C, the body (made mostly of water) starts taking in heat from its surroundings (mainly from radiation), and it does so more quickly at around 36 °C where Cp is lowest. Since we are only accustomed to heat warming the body slowly at lower temperatures (higher Cp at temperatures below 30 °C), there is a compounding effect above 34 °C that yields catastrophes in populations not accustomed to these heat fluxes. One also wonders whether a lower-tech version of the DARPA cooling glove would not be an efficient tool in these situations (besides drinking and air conditioning).
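To make that compounding argument concrete, here is a toy numerical illustration (my assumptions, not the paper's): with a fictitious Cp(T) curve dipping at 36 °C, a constant inbound heat flux warms a body fastest right at the dip.

```python
# Toy illustration only: a made-up Cp(T) with a minimum at 36 C, showing that
# a fixed heat flux raises temperature fastest where Cp is smallest.
def cp(temp_c):
    # fictitious specific heat in J/(kg K); only the qualitative dip at 36 C matters
    return 4178.0 + 2.0 * (temp_c - 36.0) ** 2

mass_kg = 70.0   # nominal body mass (assumption)
q_in_w = 500.0   # assumed constant net inbound heat flux, in watts

for temp_c in (30.0, 34.0, 36.0, 37.0):
    rate_c_per_h = q_in_w / (mass_kg * cp(temp_c)) * 3600.0
    print(f"T = {temp_c:4.1f} C -> warming rate {rate_c_per_h:.4f} C/h")
```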

[1] A predictive model relating daily fluctuations in summer temperatures and mortality rates.

Sunday, August 26, 2007

Hard Problems: Walking Dinosaurs wearing make-up while sleeping.

I am always a little astonished that some things are not implemented because they are deemed too hard; indeed, I don't even see them being attempted in the first place, even though there is a very large market for each of them. Here they are:
  • How come we don't have a Jurassic Park with walking dinosaurs? Every time an animatronics show comes to town, you have lines of kids waiting to see those things, even if the time spent waiting is longer than the time spent watching them, and yet we still don't have walking dinosaurs (except when a human is inside). How come? (My interests here lie in muscles and autonomy.) It looks as though we have only recently been able to devise their gait. Some people are already making a business case that building them will get people to come.
  • Knowing that some women spend as much as two hours every day on their make-up, how come there is no (autonomous) make-up robot? This is all the more interesting given how much technology goes into changing the face/shape of women in magazines. How come there isn't a similar technology to evaluate whether the make-up is good enough? Think of Snow White's mirror.
  • People spend an average of 8 hours sleeping, yet there is no really good technology to improve sleep. How come there isn't an autonomous pillow that shapes itself around one's head over the course of the night? And since a 32 GB SD card lets people record an entire night's sleeping patterns over more than 8 hours (a quick sanity check of that claim follows this list), what is the software that would check whether a pattern is a good one or a detrimental one?
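As a quick sanity check of the storage claim in the last item (my arithmetic, not the post's), here is the sustained data rate a 32 GB card supports over an 8-hour night:

```python
# Back-of-the-envelope: what average data rate fills 32 GB in 8 hours?
capacity_bytes = 32e9
duration_s = 8 * 3600
rate_mb_s = capacity_bytes / duration_s / 1e6
print(f"{rate_mb_s:.2f} MB/s sustained")   # about 1.1 MB/s, ample for many sensor channels
```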

Friday, August 24, 2007

Compressed Sensing: Why does Rice Play Texas or How is CS a disruptive technology ? Part I


For those of you who do not know much about Texas, the question "Why does Rice play Texas?" was asked rhetorically by the late President John F. Kennedy at Rice University Stadium in his famous Moon speech:

... Why, 35 years ago, fly the Atlantic? Why does Rice play Texas? We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too. It is for these reasons that I regard the decision last year to shift our efforts in space from low to high gear as among the most important decisions that will be made during my incumbency in the Office of the Presidency... But if I were to say, my fellow citizens, that we shall send to the moon, 240,000 miles away from the control station in Houston, ... to an unknown celestial body, and then return it safely to earth, reentering the atmosphere at speeds of over 25,000 miles per hour, causing heat about half that of the temperature of the sun--almost as hot as it is here today--and do all this, and do it right, and do it first before this decade is out, then we must be bold.
The more complete question should be "Why does the Rice University football team play the University of Texas team when the odds are so much in favor of the University of Texas?" In effect, Rice University (located in Houston) has always had a much weaker (American) football team than its rival at the University of Texas (located in Austin). In short, "Why does Rice play Texas?" is reminiscent of "Why does David fight Goliath?" To most foreigners, the sentence sounds odd because the word "university" has been removed.



The parallel between this speech on space exploration and a disruptive technology like Compressed Sensing is apt (let us note that Rice is now the center of a new technology: the one-pixel camera). The big idea that gets the most press is how the single-pixel camera works, but it takes some amount of explaining to see the real difference between a normal camera and the Rice one-pixel camera. In effect, as I mentioned previously, the real difference between Rice's camera and a normal camera is the number of samples the device takes (the Rice camera could be run in a normal raster mode exactly like a normal camera). The Rice camera is a breakthrough not because of the way it is designed but because of the lower number of samples required to achieve the same image quality as a normal camera. A schematic sketch of that distinction follows.
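Here is that sketch (a toy model of my own, not the Rice team's code): the same photodiode-plus-DMD hardware yields either a raster scan (N one-mirror patterns) or a compressive acquisition (M ≪ N random binary patterns); only the patterns and their number change.

```python
# Toy model of the single-pixel camera's two modes. The photodiode reading
# for a mirror pattern phi is the inner product <phi, scene>.
import numpy as np

rng = np.random.default_rng(1)
n = 64 * 64                                   # a 4096-pixel scene, as at Rice
scene = rng.random(n)                         # placeholder image, flattened

# Raster mode: n patterns, each turning on a single mirror (identity rows).
raster_y = np.eye(n) @ scene                  # 4096 readings, one per pixel

# Compressive mode: m << n random 0/1 mirror patterns.
m = 500
patterns = rng.integers(0, 2, size=(m, n)).astype(float)
cs_y = patterns @ scene                       # only 500 readings

print(len(raster_y), "raster samples vs", len(cs_y), "CS samples")
```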

And so the questions arising about the blurriness of the reconstructed images from the CS camera (posted by rif in the comment section of this entry) are justified. I asked the question directly to Rich Baraniuk and Justin Romberg:

Do you have any good feel as to why the TV reconstruction of the images featured on this webpage and in the attendant publications is still blurry with 40% of the coefficients, specifically for the mug and the soccer ball?

My current guesses are as follows:
  • the camera is obtaining compressed measurements of a 3-D object, but a dictionary of 2-D functions is used for the reconstruction?
  • the pseudo-random family used for the projection is not optimally incoherent with the Haar wavelets, as opposed to, say, noiselets?
  • exposure time is limited between different mirror configurations?

Justin first responded:

1) The optical experiments the Rice team is running right now are probably "low resolution" in the following way. If they took many measurements (say, tens of thousands, in order to reconstruct the 4096-pixel image) and then just reconstructed using least squares (in effect averaging together a bunch of noisy, fully sampled observations), the final image would still probably be a bit blurry. I don't know if they've tried this; I'll let Kevin and Rich comment.

2) I am not sure what values of epsilon they used in the recovery program min TV(x) s.t. ||Ax - y||_2 <= epsilon. But if this value was more than something like 1-5% of ||y||_2, the TV recovery would start to get blurry even from "perfect" measurements.

Rich then responded by saying:

the reason for the blurriness is most likely due to misalignment of the optics; ie: even if we put a regular CCD array where our DMD was located the result would be blurry.

your guesses are good ones, but i'm not so sure they could have caused this issue. but we'll keep pondering them.

It is good to see that the issue is being addressed by the folks involved in this research; the hope is to eventually obtain non-blurry images with fewer than the 40% of coefficients currently being used. (A sketch of the noise-aware recovery program Justin mentions follows.) But as we can see, this technology has to find an area where it is the only one that can flourish in order to become one of the disruptive technologies of the future. The formidable opponent that is the current CMOS camera, sold at a high-tech store near you, has to have a shortfall that only Compressed Sensing can address.
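For the curious, here is a hedged sketch of the noise-aware recovery idea in Justin's answer. It uses the l1 analogue of the TV program (basis pursuit denoising) in its unconstrained form, solved by plain iterative soft thresholding; the weight lam plays the role epsilon plays in the constrained program, and making it too large blurs or over-smooths the reconstruction in the same way.

```python
# Minimal ISTA solver for min_x 0.5*||A x - y||_2^2 + lam*||x||_1, the l1
# analogue of the TV program quoted above. Illustrative only.
import numpy as np

def ista(A, y, lam, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2                  # 1/L, L the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))                  # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    return x
```

Increasing lam trades data fidelity for sparsity, which is exactly the trade-off behind Justin's remark about epsilon being set too large.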

In order to give oneself some guidance, let us look at the definition of disruptive technologies as viewed by Todd Proebsting when he was giving a talk on innovation in programming languages.

A “disruptive” technology:
  • Disadvantage in the primary market
  • Advantage in a secondary market
  • Sold in a small, low-margin market
  • Established companies concentrate and innovate on the primary market and ignore the secondary one
  • Timely improvements lessen the disruptive technology’s liabilities, increasing markets, market share, margins, etc.


A “disruptive” language: safe, garbage-collected interpreters
  • Disadvantage: SLOW
  • Advantage: Rapid Application Development
  • Sold in a small, low-margin market: web developers, ISVs (a market the established competitors ignored)
  • Established companies concentrate on their primary differentiator: SPEED
  • Timely improvements lessen the disruptive technology’s liabilities, increasing markets, market share, margins, etc.: Moore’s Law (for free!), RAD enhancements

My criteria: the technology must
  • Have disadvantages: be mostly ignored by recent PLDI and POPL conferences
  • Alleviate real problems: “What does it do?”

For each candidate technology: 2 slides
  • Opportunity: what’s the issue?
  • Current solutions: what’s done now
  • Proposal: the new disruptive technology
  • Disadvantages: why some (many?) will scoff
  • Unmet needs: benefits to adopters
What are the 2 slides for Compressed Sensing?

Thursday, August 23, 2007

Shadowing the ISS

The latest photographs of the ISS and Endeavour can be found here; well, most of them anyway.


Please note the shadowing we computed here, and a similar occurrence below.

The express pallet should have been located next to the astronaut below.


We yawn because we care


Hayabusa may be coming home thanks to the hard work of Japanese engineers at JAXA. There is now a layer for gigapixel images in Google Earth. This is going to come in handy when we retrieve our camera (Geo-CAM R) from HASP (a high-altitude balloon to be flown in September) and make large maps like we did last year.
In other news, it also looks like we yawn because we care:
Current results suggest that contagious yawning is impaired in ASD, which may relate to their impairment in empathy. It supports the claim that contagious yawning is based on the capacity for empathy.

Wednesday, August 22, 2007

Compressed Sensing: It's not just L1 anymore, greed can get you there too.

With all the talk about L1 minimization being the only way to reconstruct signals in Compressed Sensing, here is an important landmark: Deanna Needell and Roman Vershynin just submitted this paper on Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit. The commentary on the paper is here.

The abstract is pretty self explanatory:

This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements – L1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of the Orthogonal Matching Pursuit (ROMP) which has advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of the L1-minimization. Our algorithm ROMP reconstructs a sparse signal in a number of iterations linear in the sparsity (in practice even logarithmic), and the reconstruction is exact provided the linear measurements satisfy the Uniform Uncertainty Principle.


A basic code for ROMP can be found on Deanna Needell's page.

The difference between greedy algorithms like ROMP and L1 minimization (which requires a more cumbersome optimization / linear programming approach) is touched on in this 2005 Q&A at IMA by Emmanuel Candès, and it all comes down to reducing the number of samples needed to obtain a good reconstruction. A sketch of the greedy core these methods share follows.
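To make the "greed" in the title concrete, here is a minimal sketch of plain Orthogonal Matching Pursuit, the greedy core that ROMP builds on. Note this is not ROMP itself: ROMP adds the regularization step of Needell and Vershynin (keeping a subset of newly selected coordinates of comparable magnitude), which is omitted here.

```python
# Plain OMP: greedily recover a k-sparse x from y = A x.
import numpy as np

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # best-matching column
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)            # re-fit on support
        residual = y - As @ coef                                 # update residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Example use: A = randn(48, 128)/sqrt(48), x 5-sparse, x_hat = omp(A, A @ x, 5)
```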

Thursday, August 16, 2007

CS is not just Compressed Sampling nor Compressed Sensing.

[ The Compressed Sampling blog is here ]

Moving forward from the recent Synthesis and L1 entries, I am now trying to review some of the new work on compressed sensing that does not fall squarely within the traditional subjects of compressive sampling.
  • Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization by Benjamin Recht, Maryam Fazel and Pablo A. Parrilo. This is the most intriguing paper I have seen so far because it promises to build a bridge between compressed sensing and other areas such as operator/matrix compression; examples include minimum-order linear system realization, low-rank matrix completion, low-dimensional Euclidean embedding problems, and image compression. The authors make the case that the L1 reconstruction capability of compressed sensing has also been observed, as a heuristic, in these other areas, and that compressed sensing is now bringing the theoretical tools to explain why these heuristics actually work. The objects of interest are now matrices, and the issue at hand is not the cardinality (sparsity) of a vector but the rank of a matrix. The Hilbert-space norm that was Euclidean in CS becomes the Frobenius norm, and the sparsity-inducing norm is not L1 but the nuclear norm. The tools used to investigate these phenomena, convex optimization by linear programming in CS, become semidefinite programming in the context of this article. Lots of interesting and fascinating potential applications are listed at the end of the paper. The figure on the right shows the same phase transition observed in CS, but here the x-axis and y-axis represent measures of the rank of the matrix being tested for recovery. Fascinating. (A small numerical sketch of this nuclear-norm heuristic follows this list.) Maybe SeDuMi should be part of the standard CS packages from now on? I cannot wait to see how this parallel will play out with Action Respecting Embeddings and Nonlinear Dimension Reduction with Side Information, since we were interested in this at some point; but right now it seems it is really a question of skillfully using the Matlab reshape function. As it turns out, CS may indeed have some influence on Machine Learning, but in a way I did not anticipate: the Maximum Margin Matrix Factorization algorithm by Jason Rennie and Nathan Srebro may make big headway in enabling the nuclear-norm approach developed in this paper. And yes, Pierre, the resolution of the Netflix prize may, in that sense, use some of the concepts of CS. Furthermore, while reconstruction is indeed very interesting and satisfying, I am awaiting what their counterparts in detection will produce (Richard Baraniuk and Michael Wakin in Random Projections of Smooth Manifolds).
  • One of the hot issues in CS is how to quantify the amount of sparsity one can get away with when using random or deterministic projections. As we all know, Compressed Sensing does not require random projections to work; the bases just need to be incoherent. But when you do use them, what is the least sparse decomposition you can get away with? Or, given a random matrix, how can one figure out whether it is useful for CS before it breaks down? Terry Tao explains what the challenge is here. Anatoly Eydelzon, a doctoral candidate at Rice, will present his thesis on the subject on August 30th (Deterministic Conditions for Solution Recovery in Compressive Sensing and Error Correction).
  • Another way to go about this is to address the issue with a deterministic algorithm. This is what Mark Iwen does in his paper A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods. First, it is nice to see another CS paper that specifically refers to group testing, here in the form of the Chinese Remainder Theorem. Mark Iwen modifies an algorithm developed by Graham Cormode and Muthu Muthukrishnan. One of the interesting things about new papers is how Compressed Sensing is defined, which is always a little different from before. For instance, he gives a very compact definition of CS that I cannot recall seeing so clearly expressed before.
    The general CS setup is as follows: Let A be an N-length signal/vector with complex-valued entries and Ψ be a full rank N × N change of basis matrix. Furthermore, suppose that Ψᵀ A is sparse (i.e., only k ≪ N entries of Ψᵀ A are significant/large in magnitude). CS methods deal with generating a K × N measurement matrix, M, with the smallest number of rows possible (i.e., K minimized) so that the k significant entries of Ψᵀ A can be well approximated by the K-element vector result of M · Ψᵀ A. (1) Note that CS is inherently algorithmic since a procedure for recovering Ψᵀ A's largest k entries from the result of Equation (1) must be specified.
    At the end of the paper, it also becomes clear how CS is going to affect linear algebra:
    Compressed Sensing (CS) methods provide algorithms for approximating the result of any large matrix multiplication as long as it is known in advance that the result will be sparse/compressible.
    This is very reminiscent of the Fast Multipole Method, which was initially thought of as a mere improvement to some electrostatic or gravitational computations but eventually yielded major advances in many other areas. Mark used the approach of Cormode and Muthukrishnan (Combinatorial algorithms for compressed sensing, G. Cormode and S. Muthukrishnan, SIROCCO 2006, DIMACS TR 2005-25).
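As promised above, here is a small numerical sketch of the nuclear-norm heuristic from the Recht, Fazel and Parrilo paper. It uses the modern cvxpy package rather than the SeDuMi/Matlab toolchain mentioned in this entry, and all sizes are illustrative.

```python
# Recover a low-rank matrix from a few random linear measurements by
# minimizing the nuclear norm (the sum of singular values), the rank
# analogue of the l1 norm. Sketch only; assumes numpy and cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, r, m = 10, 1, 50                   # 10x10 matrix of rank 1, 50 measurements

X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A_ops = rng.standard_normal((m, n, n))                    # random measurement matrices
y = np.tensordot(A_ops, X_true, axes=([1, 2], [0, 1]))    # y_i = <A_i, X_true>

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(A_ops[i], X)) == y[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()

err = np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true)
print(f"relative recovery error: {err:.2e}")   # typically near zero in this regime
```

The phase-transition behavior the paper reports appears when one sweeps the rank r and the number of measurements m in this kind of experiment.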

Reference: Rice

Tuesday, August 14, 2007

Looking at beautiful things


Damaris has a small entry on her current work looking at the Shuttle's damaged tiles. We had a similar project at STC in which we would look at the belly of the Shuttle with an HD camera at 60 frames per second and eventually provide 3D photogrammetry of the belly of the Orbiter from several hundred meters away.




One of the theses on the subject was written by Paul Gersting at Texas A&M University under Johnny Hurtado and Mark Lemmon, and was entitled A photogrammetric on-orbit inspection for orbiter thermal protection system. When the Shuttle performed the RPM (Rendezvous Pitch Maneuver, or Shuttle back flip) below the International Space Station, our HEOCam system would have been able to evaluate impact depth, as Paul shows in his work.


The IPAM/UCLA Graduate Summer School on Probabilistic Models of Cognition: The Mathematics of Mind has finished. The presentations and webcasts can be found here.

Saturday, August 11, 2007

A bridge too far. It's over.

As hinted at previously and on John Wiseman's blog, we are not semi-finalists in the Urban Challenge. We did not make that bridge. Thank you to all the well-wishers. At some point, we will try to write and talk more about the system of omnidirectional cameras using compressed sensing, and the dimensionality reduction in policies, that we intended to demonstrate during the site visit but did not have time to.



In the meantime, our payload on HASP was integrated; we are looking forward to a second flight of GeoCam and a new flight of Hyper-GeoCam, a hyperspectral random lens imager (following in the footsteps of the MIT Random Lens Imaging system by Rob Fergus, Antonio Torralba and William Freeman).
