
Monday, December 15, 2008

CS: Very High Speed Incoherent Projections for Superresolution, and a conference.

Stephane Mallat made a presentation at ETCV'08 (I mentioned it before). As some of you may know, Stephane Mallat is known for his contributions to the development of the wavelet framework. He has also been involved in a start-up for the past few years. The start-up, initially called Let It Wave, devised technologies around the bandelet families of functions (in particular in collaboration with Gabriel Peyre). It looks as though the latest development of this start-up has been the conversion of today's videos into High Definition video. In that presentation, entitled Sparse Geometric Superresolution, Stephane mentions that the challenge of up-conversion is producing roughly 20 times the amount of information initially present in the low resolution video.




As he explains, the chip that is supposed to produce this feat should cost $5, and it also has to use fast algorithms; matching pursuit doesn't do well at that very high conversion speed. The presentation in the video is very interesting in detailing the method. The thing I note is what he doesn't say in these exact words: in order to do a fast search in a dictionary, he projects the low resolution video onto an incoherent basis. He then compares that projection against the projections of a very large dictionary onto that same incoherent basis and does pattern matching.
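To make the idea concrete, here is a minimal sketch, not Mallat's actual algorithm and certainly not what runs on the chip: a random Gaussian matrix plays the role of the incoherent projection, the dictionary is projected once offline, and an incoming low resolution patch is matched by nearest neighbor in the projected (much smaller) space. All names and dimensions (patch_dim, proj_dim, dict_size, best_match) are illustrative assumptions on my part.

```python
# Sketch of dictionary matching through an incoherent (random) projection.
import numpy as np

rng = np.random.default_rng(0)

patch_dim = 64        # e.g. 8x8 low-resolution patches, flattened
proj_dim = 16         # dimension after the incoherent projection
dict_size = 100_000   # number of exemplar patches in the dictionary

# Dictionary of low-resolution exemplars (each paired, elsewhere, with a
# high-resolution counterpart used for the actual up-conversion).
dictionary = rng.standard_normal((dict_size, patch_dim))

# Incoherent measurement matrix: a random Gaussian projection.
Phi = rng.standard_normal((proj_dim, patch_dim)) / np.sqrt(proj_dim)

# Project the whole dictionary once, offline.
projected_dict = dictionary @ Phi.T

def best_match(low_res_patch: np.ndarray) -> int:
    """Return the index of the dictionary patch whose projection is
    closest to the projection of the incoming low-resolution patch."""
    y = Phi @ low_res_patch
    distances = np.linalg.norm(projected_dict - y, axis=1)
    return int(np.argmin(distances))

# Example query with a synthetic patch.
query = rng.standard_normal(patch_dim)
print("closest dictionary entry:", best_match(query))
```

The point of the projection is that the pattern matching is done against proj_dim numbers per entry instead of patch_dim, which is what makes a very large dictionary searchable at video rates.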


A person has asked me before about doing CS with JPEGs: I guess one can do that for that purpose (superresolution). In a different area, we have seen this type of procedure being undertaken by Jort Gemmeke in speech problems (Using sparse representations for missing data imputation in noise robust speech recognition by Jort Gemmeke and Bert Cranen). At 7.5 Gbit/s, I wonder how we could use the LB-101M chip to do all kinds of superresolution beyond images... I think it also fits the description of hardware performing CS, so I will include it in the CS hardware page. [Update: I am told by Stephane Mallat that the LB-101M chip does not implement this algorithm. It implements a somewhat smarter algorithm given the hardware constraints.]

I also found it funny that Stephane mentioned the testing procedure used by experts to gauge whether a new imaging technology is acceptable. They simply go into a room, watch images and movies for hours with the new technology, and provide some feedback on how good or bad it is.


That reminded me of how Larry Hornbeck described to us how he had to make the DMD technology acceptable for imaging/projector applications. The technology is now integrated in DLP projectors, but it took a while before the algorithm commanding the mirrors on the DMDs (the DMD controller) got good feedback from the professionals. Larry had become an expert over time, and his team at Texas Instruments was constantly checking with him while they were improving the algorithm. At one point, the team seemed a little miffed that Larry would always find problems where none seemed to exist. They then switched technologies in the projection room, but Larry would still find the same problems. At that point, Larry and his team decided the DMD technology had indeed matured to the point that it would be hard for experts to find flaws. Larry received an Emmy Award in 1998 for the development of this technology. The DMD is also at the heart of the Rice single pixel camera.

Larry gave me one of his demonstration DMDs at the end of his presentation! Woohoo.


On a totally unrelated note, a conference that features Compressed Sensing as a subject of interest is CHINACOM 2009: Int'l Conference on Communications and Networking in China,  Information and Coding Theory Symposium,  August 26-28, 2009, Xi'an, China.
