Wednesday, July 13, 2011

Nuit Blanche Reader's Mailbag

Bob Sturm provided a new correction to his implementation of the CoSaMP and Subspace Pursuit algorithms. If this is not a huge justification for having written Nobody cares about you and your algorithm, I don't know what is. Currently I am playing with some of the sparse PCA and low-rank codes around and keep getting MEX errors from a few of them. So even when an algorithm is written in Matlab, you are not communicating with your target audience if it cannot be instantiated on the user's computer.

An anonymous commenter mentioned, about Facing Mona Lisa:

The discussion on this blog is very "scientific" and I haven't started to read up on it yet. I was imagining that it would be possible to use these methods to create a new compression algorithm for images, like a new JPEG, ECW or MrSID format. What is your idea on this?
Thanks for a nice blog and I will try to read up on compressed sensing!

To which I responded:
Compressive sensing provides a simple way of compressing data; however, it is also known that a naive approach to it yields schemes that are not as efficient as JPEG or wavelet-based compression encodings. Less naive schemes, which include adaptive compressive sampling and/or structured-sparsity-based decoders, are likely to yield better results than the naive approach. However, if one looks at the semi-success of JPEG 2000, a wavelet-based encoding system, it looks like the era of better compression systems does not have a bright future.

In the case of this sensor, however, a JPEG or JPEG 2000 encoding system would not make sense, as the compressed data is already gathered through the voltage readings of the chip. What is needed is a better reconstruction solver that takes these voltage readings to an image. The point I made in this entry is that there are better reconstruction solvers than the least-squares solution (which seems to be what the authors of the paper used), especially for a system that looks underdetermined, as happens in compressive sensing....
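To make that solver point concrete, here is a minimal sketch (not the authors' method, since the paper is unavailable; the matrix sizes, seed, and the choice of ISTA as the l1 solver are all my own assumptions) comparing the minimum-norm least-squares solution with a sparse reconstruction on an underdetermined system:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                     # 40 measurements, 100 unknowns, 5-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                           # the "voltage readings"

# Minimum-norm least squares: spreads energy over all coordinates
x_ls = np.linalg.pinv(A) @ y

def ista(A, y, lam=0.01, steps=2000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - (A.T @ (A @ x - y)) / L                        # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

x_l1 = ista(A, y)
```

On instances like this, the l1 estimate essentially recovers the sparse signal while the least-squares solution does not, which is the whole argument for using a better solver on an underdetermined sensor.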

Laurent Duval added:

You said "Compressive sensing provides a simple way of compressing data however, it is also known that a naive approach". I would not be in complete accordance with that. Compressive is (still IMHO) somehow an adjective to sensing, as (seemingly) pointed out by Bob Sturm here: "I agree that compressed sensing is much more a method of acquisition than it is a method for compression.", more than a way to compress stuff. Actual compression generally needs some amount of engineering wizardry that goes far beyond any "sparsity" assumptions. In a word: Fourier + Huffman < local Fourier (DCT) + Huffman, but local (8x8) wavelet + Huffman < local Fourier (DCT) + Huffman. Yet wavelet + zerotree > local Fourier (DCT) + Huffman. Again, wavelet + zerotree ~ DCT + zerotree, both < windowed Fourier (GenLOT-like) + trellis. And the story may not be over yet. There is some intertwining between the sparsity yielded by the transform and the coding, AND expectation. Poor sparsity + black-magic coding may perform better than highest sparsity + second-order entropy. As for: "However, if one looks at the semi-success of jpeg2000, a wavelet based encoding system, it looks like the era of better compression systems does not have bright future." I believe this assertion deserves some further historical, time-to-market and corporate weighting, and a reference to Sentience is indescribable by Daniel Lemire. When we can describe the forest... but progress on NP or PCP is needed first.

Eric Tramel replied with

WRT the "next era" of coding methods, I think we're still on the lookout for the next wave. Currently, advanced coders, like Laurent mentioned, are mostly hyper-engineered, 100-mode contraptions that can squeeze compression out of rocks. H.264 for video is a hairy beast of corner cases, but it is my understanding that H.265 makes it look simple by comparison, all for a ~30% rate-distortion performance gain.

CS is attractive for its simplicity, and for the fact that, despite its simplicity, it has passable performance in practice (in some applications). In order for CS to become the next thing in coding, measurement quantization will need to be fully understood and we'll need more accurate solvers (i.e. ones that make the most out of signal models & priors). I don't see this happening at a rate that would put CS on par with anything but last-gen traditional coding techniques. But of course, CS isn't intended for these purposes :P

TL;DR: Still looking for something elegant for the next gen of coders.

and then added

Another comment that is more on the topic of what you talked about in the post, Igor:

You'd be surprised what you can recover from. Well, maybe you wouldn't be, but others might. We talk a lot about making sure that the projector (\Phi, A, etc.) satisfies bounds that ensure performance. But legibility can be achieved with even really, really, really crappy and poorly conditioned projectors and a handful of measurements. This is potentially useful for some very simple vision applications.

So, it is very possible that, as you are suggesting, an L1 solver approach could provide some kind of performance gain. They just have to make sure that they characterize the effective projection accurately :P Also remember that they are dealing with a single, non-reconfigurable shot. So, they are also limited by the number of sensors on the chip. It is possible that for a low sensor resolution and a single capture, a maximum likelihood approach might make more sense than an L1 solution.
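On the "characterize the effective projection accurately" point: if one can drive the chip with known calibration patterns and record the resulting voltages, the effective linear map can be estimated from the input/output pairs by least squares. A minimal sketch, under purely hypothetical sizes (20 voltages, 50 scene pixels) and an assumed noiseless, linear device:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 50                            # hypothetical: 20 sensor voltages, 50 scene pixels
Phi_true = rng.standard_normal((m, n))   # the unknown effective projection of the chip

# Drive the device with n linearly independent calibration patterns
# (columns of X) and record the corresponding voltage readings (columns of Y).
X = rng.standard_normal((n, n))          # full rank with probability 1
Y = Phi_true @ X

# Least-squares estimate of the effective projection: Phi_est = Y X^+
Phi_est = Y @ np.linalg.pinv(X)
```

With a full-rank set of calibration patterns this recovers the projection exactly; with noisy voltages one would average over repeated patterns instead.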

But, I'm having the same problem you stated at the beginning of the article. I can't find the paper! I went to look for it right after reading the press release but came up empty. It could be that they are waiting for some kind of patent application to go through...

No paper makes it hard to tell what they're doing exactly for recovery or what kind of trade-offs they're making in design :'(

As I have said before, I have been here before. The main difficulty for another paper to be a compressive sensing paper based on this detector design would be to have a good characterization of its transfer function and to see whether it fits the Donoho-Tanner phase transition. Also, I think I made my views clear that, in some circumstances, we really are performing sparsity sensing as opposed to compressive sensing. For the rest, I am a bit more optimistic than Laurent because, for examples such as image processing, we have roughly six billion entities on Earth with a capable processor, even though none of them grew in the same fashion as the others.

Finally, Stephen Becker sent me the following:

Hi Igor, I found a bad link for Regularized OMP (ROMP) on your list of solvers (section 4.1 of )
The bad link is:
an updated link is:
While I was thinking about this, I realized there are caltech links that need to be updated.  We merged departments, so the "acm" website is permanently down, and hence our solver NESTA
has moved to a new home:

Also, my website (formerly ) is now . Our TFOCS solver URL remains the same. On a related note, I leave Caltech soon, and have written a thesis that you or your readers may find interesting: "Practical compressed sensing: modern data acquisition and signal processing".
And some old websites of (former) Caltech folks are being removed, so I've listed them below, along with the new links:
While I was at it, I checked other links on this site ( using a validator ( ) since I'd noticed that other links were old as well.  Turns out a lot of them are old! Here's a list:
-- Missing targets -- (one meant to be http, not ftp???; one appears twice and returns "forbidden" — link to index.html instead?)
-- Other links (permanently gone, or servers temporarily down?) -- several return "forbidden" (link to index.html instead?); one has a stray apostrophe in the link (probably not right); a couple redirect to pages which aren't found either; one is broken due to some punctuation in the link; two hosts can't be found (one of these sites is gone); one can't connect.
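For readers who want to run a similar audit on their own pages, here is a small sketch of a link checker whose labels mirror the categories above (the User-Agent string, timeout, and label wording are my own arbitrary choices, not anything Stephen's validator does):

```python
import urllib.request
import urllib.error

def classify(code):
    """Map an HTTP status code (or None for a network failure) to an audit label."""
    if code is None:
        return "can't connect"
    if code == 403:
        return "forbidden"
    if code == 404:
        return "missing target"
    return "ok" if code < 400 else "server error (%d)" % code

def check(url, timeout=10):
    """HEAD a URL and return its audit label."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "link-audit/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as e:
        return classify(e.code)
    except (urllib.error.URLError, OSError):
        return classify(None)
```

HEAD requests keep the audit cheap, though a few servers reject HEAD and would need a GET fallback.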

Thanks for all your work indexing these pages and the almost daily updates! We appreciate it.

Thanks Stephen, I needed work :-) I'll try to focus on the big ones; other sites may become permanently silent as a result of this audit, though.

Image Credit: NASA/JPL/Space Science Institute
W00067992.jpg was taken on July 10, 2011 and received on Earth July 11, 2011. The camera was pointing toward SATURN at approximately 291,728 kilometers away, and the image was taken using the CB2 and CL2 filters. 
