Wednesday, December 31, 2008

CS: Thank you!


While 2008 is about to close out, I wanted to say thank you for your help. For the past two years, an increasing number of people have contributed directly to this blog and enriched it wonderfully. Some of them have even created their own research blogs, which is outstanding!
Since I am slow, my understanding of some of the CS papers or subject areas could not have occurred were it not for a kind person answering my questions. I got lucky often, and most of the time an e-mail conversation ensued that I eventually felt was important for everybody else to read. The following people kindly agreed to have our offline discussions turned into a full post, or part of one:
Thank you!

As a matter of general policy, if you are having a detailed technical e-mail discussion with me, I generally ask if it is OK to publish it afterwards. I usually go through some proof-reading and remove non-technical items. The final draft is eventually OK'ed by you.

One of the annoying things that occurs when I go to conferences is that I also bring along a bunch of folks who relentlessly ask questions. Thank you to Ron DeVore, Jean-Luc Guermond and Bojan Popov for allowing us to disrupt their smooth meeting at Texas A&M. The same goes for Frank Nielsen at ETVC'08.

Much of the referral traffic to this blog was not self-generated! Terry Tao's, Andrew Gelman's and John Langford's blogs provide much of that traffic. Thank you guys.

While I see less of this nowadays, I still feel strongly that, as the author of your own paper, you should go through the pain of building a website, which takes about five minutes using Google Sites these days. If you go through the effort of providing additional online information or adding your code, I generally try to make sure you get the maximum exposure as a result. I always prefer reproducible research results to hard-to-reproduce papers. Please also consider that even if an algorithm looks easy for you to implement, that may not be the case for most people. In particular, I intend to remain slow and dumb for the coming year (no good resolution for 2009 on that front).

I call a greater web presence a pain because most of you are currently not being judged by your peers on these "details". You eventually will be! If not by your peers, then by the engineers in companies that want to do business with you, either as a future employee or as a consultant. There is no better time to start than today.

Recently, I had a discussion with one of you about the unavailability of preprints on his site. The e-mail discussion that ensued revealed that the lab policy had not been updated in a long time. The folks at Sherpa provide a database for checking whether a specific publisher's copyright policy allows you to perform self-archiving. Please use this tool rather than relying on outdated policies or information.

As an aside, if you believe this blog provides a valuable utility, consider donating to the webcrawler engine that makes it all possible at WatchThatPage. If you are using this service for other purposes, please consider donating money to them as well. The PayPal system accepts credit cards, and additional information can be added in a message form, where you might want to let them know about their huge contribution to the Nuit Blanche blog. The kids implementing this tool told me that it is a typical start-up with two guys working in a dorm. Again, this blog and its community of readers would not be what they are without them, so please spare a buck, or twenty, or a hundred; some of you can think of it as just a business expense. I do not own stock in that venture.

Last but not least, the folks behind the codes and websites providing a resource to the CS community should be commended for doing an extraordinary job. They are listed on the Compressive Sensing 2.0 page. In particular, a big kudos goes to Mark Davenport for attending to and regularly updating the Rice Compressive Sensing site.

To the ones I forgot to list, please forgive me and do tell me. To all, thank you and Happy New Year. Enjoy watching the finale of Sydney's fireworks.




CS: A Python Alternating L_1 Implementation


Stephane Chretien just let me know that he and Alexis Flesch (a Ph.D. student working with Jean-Yves Dauxois) have produced a Python implementation of the Alternating L_1 method. The implementation uses cvxopt, a Python package for convex optimization. The graphs show perfect recovery with the alternating L_1 algorithm and the same non-recovery for the plain L_1 algorithm.
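I have not seen Stephane and Alexis's code yet, but for readers who want to play with cvxopt, here is a minimal sketch of the plain L_1 step (basis pursuit) that an alternating L_1 scheme builds on. This is my own toy code, not theirs; the function name and the small example are mine.

```python
import numpy as np
from cvxopt import matrix, solvers

solvers.options['show_progress'] = False

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y as a linear program.

    Variables are z = [x; t] with constraints -t <= x <= t and A x = y,
    and the objective is sum(t)."""
    m, n = A.shape
    c = matrix(np.hstack([np.zeros(n), np.ones(n)]))
    I = np.eye(n)
    G = matrix(np.vstack([np.hstack([I, -I]),      #  x - t <= 0
                          np.hstack([-I, -I])]))   # -x - t <= 0
    h = matrix(np.zeros(2 * n))
    Aeq = matrix(np.hstack([A, np.zeros((m, n))]).astype(float))
    sol = solvers.lp(c, G, h, Aeq, matrix(y.astype(float)))
    return np.array(sol['x']).ravel()[:n]

# toy check: recover a 2-sparse vector of length 20 from 10 Gaussian measurements
rng = np.random.RandomState(0)
A = rng.randn(10, 20)
x0 = np.zeros(20)
x0[[3, 11]] = [1.5, -2.0]
print(np.round(basis_pursuit(A, A @ x0), 3))
```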

Tuesday, December 30, 2008

CS: Compressive Sensing Background Subtraction Homework




I found on YouTube this video illustrating an example of CS background subtraction, which I think is an implementation of Compressive Sensing for Background Subtraction by Volkan Cevher, Aswin Sankaranarayanan, Marco Duarte, Dikpal Reddy, Richard Baraniuk, and Rama Chellappa. From the accompanying text of the video:

Compressive Sensing based background subtraction.
This video contains four parts: top-left is the original sequence, bottom-left is the reconstructed (using l1-minimization) difference image, top-right is the silhouette (estimated from bottom-left by median-based thresholding) and bottom-right is the foreground reconstruction.
----
This was part of course project for Video Processing (Dr. Baoxin Li) in Fall 2008. 
Team: Raghavendra Bhat
Rita Chattopadhyay
This is funny, as it reminded me a little bit of the saliency maps in Laurent Itti's work. At the very least, it could be used as a first iteration of these saliency maps, as shown in this video.
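For the curious, here is a toy sketch of how I read the video description above: the difference between the current frame and the background is sparse, so it is recovered by l1-minimization from the difference of the compressive measurements and then thresholded. This is not the authors' code; `basis_pursuit` is the hypothetical cvxopt helper sketched earlier on this page, and `Phi` and the frame vectors are placeholders.

```python
import numpy as np
# basis_pursuit is the l1 solver from the cvxopt sketch earlier on this page (assumed helper)

def cs_foreground(Phi, y_frame, y_background, shape):
    """Toy compressive background subtraction.

    Phi          : m x n measurement matrix (m << n = number of pixels)
    y_frame      : compressive measurements of the current frame, Phi @ frame
    y_background : compressive measurements of the background,   Phi @ background
    Returns the reconstructed difference image and a median-thresholded silhouette."""
    diff = basis_pursuit(Phi, y_frame - y_background)      # sparse difference image
    silhouette = np.abs(diff) > np.median(np.abs(diff))    # median-based thresholding
    return diff.reshape(shape), silhouette.reshape(shape)
```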

Monday, December 29, 2008

CS: Terence Tao Video presentation of Compressed Sensing

As mentioned earlier, Terry Tao made a presentation at NTNU on Compressed Sensing. The fine folks at NTNU made a movie of it and put it on YouTube. Because of its length, it is cut into seven parts. I'll add it to the CS video section later. Enjoy.


The same lecture is available in full here. However, I could not watch it using Chrome or Opera; it looks like, once again, though in a less drastic way, the lecture can only be viewed with Internet Explorer or Firefox.

Friday, December 26, 2008

CS: Group Testing in Biology, Tiny Masks for Coded Aperture and Linear Reconstruction, Andrew Ng


Anna Gilbert, in her presentation entitled "Applications of Compressed Sensing to Biology", shows some of the most recent results obtained with colleagues (one of them is Raghu Kainkaryam, mentioned previously) performing group testing using compressed sensing ideas. The idea is to reduce the number of SNP chips needed for testing. The results are currently so-so. The video is here.

Until I watched Assaf Naor's video at the intractability center, I did not realize that there were that many negative results for the Johnson-Lindenstrauss approach in other spaces.

At the Advanced Light Source at Lawrence Berkeley National Laboratory, we are investigating how to increase both the speed and resolution of synchrotron infrared imaging. Synchrotron infrared beamlines have diffraction-limited spot sizes and high signal to noise, however spectral images must be obtained one point at a time and the spatial resolution is limited by the effects of diffraction. One technique to assist in speeding up spectral image acquisition is described here and uses compressive imaging algorithms. Compressive imaging can potentially attain resolutions higher than allowed by diffraction and/or can acquire spectral images without having to measure every spatial point individually thus increasing the speed of such maps. Here we present and discuss initial tests of compressive imaging techniques performed with ALS Beamline 1.4.3's Nic-Plan infrared microscope, Beamline 1.4.4 Continuum XL IR microscope, and also with a stand-alone Nicolet Nexus 470 FTIR spectrometer.
They are devising coded apertures with micrometer-scale masks. The reconstruction is a traditional linear transform. I am sure they will eventually look at a nonlinear reconstruction technique to get better spatial resolution, as Roummel Marcia and Rebecca Willett did in Compressive Coded Aperture Superresolution Image Reconstruction (the slides are here). I am also told by Ori Katz, one of the folks at Weizmann performing ghost imaging (as mentioned here), that their reconstruction scheme is also linear. This is also the case for the coded aperture in the IR range that could be implemented in the LACOSTE program (as can be seen here and here). I think one of the main challenges of CS in the near future will be to characterize more straightforwardly how, in imaging (i.e. positive functions in 2D), nonlinear reconstruction techniques provide more information than simpler and older linear-algebra-based reconstruction techniques.
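To make that last point concrete, here is a toy one-dimensional comparison of a minimum-norm (pseudoinverse) linear reconstruction against a nonlinear l1 reconstruction of a sparse positive signal. The dimensions are arbitrary, and `basis_pursuit` refers to the hypothetical cvxopt helper sketched earlier on this page, not to any of the reconstruction codes mentioned above.

```python
import numpy as np
# basis_pursuit is the l1 solver from the cvxopt sketch earlier on this page (assumed helper)

rng = np.random.RandomState(1)
n, m, k = 128, 40, 5                          # ambient dimension, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.rand(k) + 0.5   # positive spikes
A = rng.randn(m, n) / np.sqrt(m)
y = A @ x

x_linear = np.linalg.pinv(A) @ y              # minimum-l2-norm (linear) reconstruction
x_l1 = basis_pursuit(A, y)                    # nonlinear (l1) reconstruction

print("relative error, linear:", np.linalg.norm(x_linear - x) / np.linalg.norm(x))
print("relative error, l1    :", np.linalg.norm(x_l1 - x) / np.linalg.norm(x))
```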
 
Andrew Ng presents his work in this video. I have mentioned him several times over these past two years.

Monday, December 22, 2008

CS: Another Nuit Blanche: Geometry and Algorithms Workshop Videos

The Center for Computational Intractability has just released some of the videos of the Geometry and Algorithms workshop. Several talks are of interest to the compressive sensing community (labeled with *). If you watch all of them, it's gonna be a long night again:
Credit: NASA/ESA, SOHO

Sunday, December 21, 2008

CS: Watching Videos on The Longest Night.

Tonight is the longest night in the northern hemisphere. Let's watch some videos till the end of this nuit blanche...

First, Dror Baron has given some context to his compressed sensing research. The reason for this blog is to put some context around the amount of raw information accumulating on the subject. I always appreciate it when researchers go beyond their expected output and try to put their research in perspective. Thanks, Dror!

At the Center for Computational Intractability, Adi Akavia made a presentation on the SFT algorithm for "Deterministically and Locally Finding Significant Fourier Transform Coefficients". The video of the talk is here.


Also of interest are the two talks by Sanjeev Arora and Moses Charikar entitled "Semidefinite programming and approximation algorithms: a survey" (video_part_1 and video_part_2) and the tutorial by Assaf Naor on "An Introduction to Embedding Theory" (video part 1, video part 2). The abstract of the tutorial is:
This talk is intended as an introductory talk for non-experts in preparation for the upcoming Princeton workshop on metric embeddings. This workshop will focus on new directions in this area, so the purpose of the tutorial is to explain some of the relevant background. We will explain the basic definitions, show some important examples, discuss some of the fundamental theorems (with a few complete proofs), show some lower bound techniques, and describe a few applications of this mathematical discipline to computer science. No prerequisites will be assumed.
Finally, there is a more hardware-focused talk by Ramesh Raskar at ETVC'08. If you haven't had time to read his papers yet, watching the video is a good way to get interested.


Thursday, December 18, 2008

CS: Ain't the Power of Interwebs Great or What ?


Five examples of why the interwebs is great: a reader provides a CS solution to the 12-balls problem, a commenter on a blog pinpoints an obvious fact, another reader shares his paper before it can be hosted on its home site, and some authors provide access to a free book and a recent presentation.

* Marc Bernot, a reader of this blog, who works in the Observation Systems Research Division of the Thales Alenia Space Company, wrote the following:
I was not completely surprised by the title of your last post (the one about the 12 balls problem) because I recently came up with a solution/explanation with a small taste of CS. Maybe this will be of interest to you (or your readers if you think this is sufficiently understandable). Let us consider the following matrix :             
          M = [ 1  0  0 -1  0 -1  1 -1  0  1 -1  1;
                0  1  0 -1 -1  0 -1  0  1  1  1 -1;
                0  0  1  0 -1 -1  0  1 -1 -1  1  1 ]

The three rows correspond to the 3 weighings and the 12 columns to the twelve balls. A 1 means putting the corresponding ball on the right plate of the balance, and a -1 on the left. For example, the first row means that balls number (1,7,10,12) are put on the right and (4,6,8,11) on the left. The output of these three weighings is enough to determine which ball is different and whether it is lighter or heavier than the rest. Here is why: let x be the vector of masses of the 12 balls, all weighing m but one that weighs m'. We have x = m*(1 ... 1) + (m'-m)*u where u is a vector with only one nonzero coordinate (with value 1), i.e. it is 1-sparse. Because M has as many 1s as -1s in each row, we have:

                                     Mx = (m'-m) Mu

What we have access to by weighing masses according to M, is not exactly Mx, but the sign of Mx. For example, if ball 1 is heavier, the left plate is up during the first weighing and the sign of the first coordinate of Mx is +1. So, the outputs of the weighings are converted this way : left plate up = 1, left plate down = -1, even = 0. The output is thus

                             sign(Mx)=sign[(m'-m)Mu].  

Since u is of the form (0 .. 0 1 0 .. 0), sign(Mx) is either one of the columns of M or the opposite of one of the columns of M.

Because of the way M was constructed, there is one and only one column that corresponds to sign(Mx) (or its opposite). The index of this column is the number of the faulty ball. If sign(Mx) equals the column itself (i.e. has the same sign as Mu), the ball is heavier; it is lighter otherwise. About the particular construction of M:

Excluding (0,0,0)', (1,1,1)' and (-1,-1,-1)', there are 24 vectors in {-1,0,1}^3. If we consider two vectors with opposite signs to be equivalent, then there are only 12 equivalence classes left. M was constructed so that its 12 columns represent these 12 equivalence classes (and the sum of the elements of each row is 0). Conclusion: we were able to identify any 1-sparse vector in {-1,0,1}^12 with 3 "sign measurements".
Kudos to Marc. If I am following the description of how he scales the matrix M, then the number of balls that could be checked with n = 20 weighings is (3^20 - 3)/2 = 1,743,392,199. Twenty measurements to find one unfit ball: it's like finding a needle in a haystack. The problem's resolution reminds me of the 1-Bit Compressive Sensing paper by Petros Boufounos and Richard Baraniuk (related talks are "L1 Minimization Without Amplitude Information" and 1-Bit Compressive Sensing). Of related interest is the Group Testing in the Life Sciences document, one of whose authors, Nicolas Thierry-Mieg, is the author of the STD algorithm mentioned in the Shifted Transversal Designs and Expander Graphs entry of Raghu Kainkaryam's blog. To give you an idea of the power of the web, look at the exchange in the comment section between Raghu and Radu Berinde. Radu is one of the authors of the Sparse Recovery Experiments with Sparse Matrices wiki.
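As a quick sanity check of Marc's construction, here is a small numerical verification of the decoding rule he describes above. The weighing matrix is copied from his e-mail; the simulated masses m and dm are my own arbitrary choices.

```python
import numpy as np

# Marc's weighing matrix: +1 = right plate, -1 = left plate, 0 = ball not weighed
M = np.array([[1, 0, 0, -1,  0, -1,  1, -1,  0,  1, -1,  1],
              [0, 1, 0, -1, -1,  0, -1,  0,  1,  1,  1, -1],
              [0, 0, 1,  0, -1, -1,  0,  1, -1, -1,  1,  1]])

def decode(faulty, heavier, m=1.0, dm=0.1):
    """Simulate the three weighings and decode the faulty ball from sign(Mx)."""
    x = np.full(12, m)
    x[faulty] += dm if heavier else -dm
    s = np.sign(M @ x).astype(int)                # the three balance readings
    for j in range(12):
        if np.array_equal(s, M[:, j]):            # matches a column: ball j is heavier
            return j, "heavier"
        if np.array_equal(s, -M[:, j]):           # matches its opposite: ball j is lighter
            return j, "lighter"

# exhaustive check of all 24 possible cases
assert all(decode(i, h) == (i, "heavier" if h else "lighter")
           for i in range(12) for h in (True, False))
```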

* In the category "...if it walks like a duck and quacks like a duck...", I found an entry on the arxivblog that features the following paper: Ghost imaging with a single detector by Yaron Bromberg, Ori Katz and Yaron Silberberg (also here). The abstract reads:
We experimentally demonstrate pseudothermal ghost-imaging and ghost-diffraction using only a single single-pixel detector. We achieve this by replacing the high resolution detector of the reference beam with a computation of the propagating field, following a recent proposal by Shapiro [J. H. Shapiro, arXiv:0807.2614 (2008)]. Since only a single detector is used, this provides an experimental evidence that pseudothermal ghost imaging does not rely on non-local quantum correlations. In addition, we show the depth-resolving capability of this ghost-imaging technique.
If you look at the comment section, you'll notice a commenter named Wim making the connection to CS. As I replied in the comment section, if you:

  • reconstruct a 300 x 300 image from 16,000 measurements (about 18% of the original number of pixels),
  • obtain each measurement by applying a random mask,
  • need a specific nonlinear algorithm to do the reconstruction, and
  • use a single-pixel detector (this is not an actual condition, but it clearly points to a similarity to the Rice camera),

then one can safely say that what you are doing is compressed sensing. It would be fascinating to see how the Gerchberg-Saxton phase-retrieval algorithm mentioned in the paper compares to the recently featured Compressed Sensing Phase Retrieval by Matthew Moravec, Justin Romberg and Richard Baraniuk. Let us also note the authors' interest in detecting depth, as well as in investigating nonlinear adaptive measurements, as witnessed in the conclusion section of the paper:

Furthermore, the use of a SLM enables the implementation of closed-loop feedback schemes, potentially reducing the number of realizations needed for image reconstruction.
More information and the movie of the reconstruction can be found here.
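As a toy illustration of the measurement model listed above (random masks, one bucket detector), here is a small acquisition sketch. It is emphatically not the authors' setup or their Gerchberg-Saxton reconstruction; the scene, mask statistics and sizes are placeholders (the experiment in the paper uses a 300 x 300 image and 16,000 realizations), and the measurements would then be fed to a nonlinear solver such as the l1 sketch earlier on this page.

```python
import numpy as np

def single_detector_measurements(image, n_measurements, seed=2):
    """Toy single-pixel / ghost-imaging style acquisition.

    Each measurement is the total intensity collected by a single detector after
    the scene is multiplied by a random binary mask (one mask per row of Phi)."""
    rng = np.random.RandomState(seed)
    n = image.size
    Phi = rng.randint(0, 2, size=(n_measurements, n)).astype(float)   # random 0/1 masks
    y = Phi @ image.ravel()                                           # one number per mask
    return Phi, y

# small 64 x 64 toy scene with a bright square, sampled with 800 masks
scene = np.zeros((64, 64))
scene[28:36, 28:36] = 1.0
Phi, y = single_detector_measurements(scene, 800)
print(Phi.shape, y.shape)
```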

* The papers accepted at ICASSP'09 are listed here. We have covered some of them before. Hadi Zayyani forwarded me this one: Bayesian Pursuit Algorithm for Sparse Representation by Hadi Zayyani, Massoud Babaie-Zadeh and Christian Jutten. The abstract reads:
In this paper, we propose a Bayesian Pursuit algorithm for sparse representation. It uses both the simplicity of the pursuit algorithms and optimal Bayesian framework to determine the active atoms in the sparse representation of the signal. We show that using the Bayesian Hypothesis testing to determine the active atoms from the correlations, leads to an efficient activity measure. Simulation results show that our suggested algorithm has the best performance among the algorithms which are implemented in our simulations.




If one were to decompose the signal into wavelets, it looks as though this approach would be very similar to the recent BOMP algorithm featured in Block-Sparsity: Coherence and Efficient Recovery by Yonina Eldar and Helmut Bolcskei. Hadi tells me that he is going to post an implementation of his algorithm on his page a little later. I will add the algorithm to the Big Picture page as soon as it is available.

* The book Natural Image Statistics by Aapo Hyvärinen, Jarmo Hurri, and Patrik Hoyer is released free for the time being.

* Piotr Indyk and Milan Ruzic just released the slides of their FOCS presentation entitled Near-Optimal Sparse Recovery in the L1 norm, featuring some of the results of the Sparse Recovery Experiments with Sparse Matrices.



Credit Photo: Igor Carron. Coast of Greenland.

Wednesday, December 17, 2008

CS: Group Testing and Sparse Signal Recovery, A linear solution to the 12-balls problem


The first time I became aware of group testing was when I was confronted with the problem described here by Robert H. Thouless in a paper entitled "The 12-Balls Problem as an Illustration of the Application of Information Theory" [1]:
You have 12 balls of the same size. 11 of these balls are the same weight, one is different; you do not know whether this ball is heavier or lighter than the others. You have also a balance on which any group of the balls can be weighed against any other group. How can you discover by means of three weighings which is the different ball and whether it is heavier or lighter than the other balls ?
I was slow and young (I am now slow and older :-)) and it took me a whole day to figure it out. Ever since I started looking into compressed sensing, I have been looking for an introductory paper connecting group testing and compressed sensing. That paper has come out and is entitled Group Testing and Sparse Signal Recovery by Anna Gilbert, Mark Iwen and Martin Strauss. The abstract reads:
Traditionally, group testing is a design problem. The goal is to design an optimally efficient set of tests of items such that the test results contain enough information to determine a small subset of items of interest. It has its roots in the statistics community and was originally designed for the Selective Service during World War II to remove men with syphilis from the draft [5]. It appears in many forms, including coin-weighing problems, experimental designs, and public health. We are interested in both the design of tests and the design of an efficient algorithm that works with the tests to determine the group of interest because many of the same techniques that are useful for designing tests are also used to solve algorithmic problems in compressive sensing, as well as to analyze and recover statistical quantities from streaming data. This article is an expository article, with the purpose of examining the relationship between group testing and compressive sensing, along with their applications and connections to sparse function learning.
Until today, I thought that the solution to the 12-balls problem was an adaptive one, as my solution had the second weighing depend on the results of the first weighing. It turns out that there is also a non-adaptive solution, as described here (my solution is the tree-like solution below the non-adaptive one; ain't the interwebs great?). In other words, a CS-like fixed set of linear measurements seems to do as well as an adaptive scheme!


Reference:
[1] The 12-Balls Problem as an Illustration of the Application of Information Theory, Robert H. Thouless, The Mathematical Gazette, Vol. 54, No. 389 (Oct., 1970), pp. 246-249

Credit: NASA/JPL/Space Science Institute, Enceladus

Tuesday, December 16, 2008

CS: Comments on the LB-101M chip, NIPS'08, Compressed sensing in MRI with non-Fourier encoding

Yesterday, I jumped to conclusions in saying that the LB-101M chip was performing the algorithm mentioned in the presentation of Stephane Mallat at ETVC'08. After I asked him specifically about it, he answered back stating that the LB-101M chip did not implement the algorithm of the presentation, as I had initially stated. He also stated that the chip implements a somewhat smarter algorithm given the hardware constraints. One can see these constraints in the description: the chip "was implemented on a small Altera™ Cyclone-II 70 FPGA, with only 70000 logic elements." If you have ever worked with an FPGA, you know how much space a multiplication takes, and one can only be amazed at what this chip is doing.

The NIPS '08 online papers are out. From the Rice Compressive Sensing Repository, there is a new paper I have not covered before:



Compressed sensing (CS) has inspired significant interest because of its potential to reduce data acquisition time. There are two fundamental tenets to CS theory: (1) signals must be sparse or compressible in a known basis, and (2) the measurement scheme must satisfy specific mathematical properties (e.g., restricted isometry or incoherence properties) with respect to this basis. While MR images are often compressible with respect to several bases, the second requirement is only weakly satisfied with respect to the commonly used Fourier encoding scheme. This paper explores the possibility of improved CS-MRI performance using non-Fourier encoding, which is achieved with tailored spatially-selective RF pulses. Simulation and experimental results show that non-Fourier encoding can significantly reduce the number of samples required for accurate reconstruction, though at some expense of noise sensitivity. Theoretical guarantees and their limitations in MRI applications are also discussed.


Rachel Ward renamed "Cross validation in compressed sensing via the Johnson-Lindenstrauss Lemma" to Compressed sensing with cross validation.

Monday, December 15, 2008

CS: Very High Speed Incoherent Projections for Superresolution, and a conference.

Stephane Mallat made a presentation at ETVC'08 (I mentioned it before). As some of you may know, Stephane Mallat is known for his contribution to the development of the wavelet framework. He has also been involved in a start-up for the past few years. The start-up, initially called Let It Wave, devised technologies around the bandelet families of functions (in particular in collaboration with Gabriel Peyre). It looks as though the latest development of this start-up has been the conversion of today's videos into high-definition video. In that presentation, entitled Sparse Geometric Superresolution, Stephane mentions that the challenge of up-conversion involves producing about 20 times more information than is present in the low-resolution video.




As he explains, the chip that is supposed to produce this feat should cost about $5, so it has to use fast algorithms, and matching pursuits do not do well in that very-high-speed conversion setting. The presentation in the video is very interesting in detailing the method. The thing I note is something he doesn't say in these exact words: in order to do a fast search in a dictionary, he projects the low-resolution video onto an incoherent basis. He then compares that projection with the projections of a very large dictionary onto the same incoherent basis and does pattern matching.
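Here is a toy sketch of that idea as I understand it, and explicitly not the algorithm in the chip (which, as the update below notes, is smarter): a random projection approximately preserves distances (Johnson-Lindenstrauss), so nearest-patch matching against a large dictionary can be done in a low-dimensional projected space. All names and sizes are placeholders.

```python
import numpy as np

def nearest_atoms(patches, dictionary, d=32, seed=0):
    """Approximate nearest-dictionary-patch search after an incoherent projection.

    patches    : q x n array, one low-resolution patch per row
    dictionary : p x n array, one example patch per row
    d          : projection dimension (d << n); distances are roughly preserved
    Returns, for each input patch, the index of the closest dictionary patch."""
    n = patches.shape[1]
    rng = np.random.RandomState(seed)
    P = rng.randn(n, d) / np.sqrt(d)        # random (incoherent) projection
    proj_dict = dictionary @ P              # projected once, reused for every query
    proj_patches = patches @ P
    d2 = ((proj_patches[:, None, :] - proj_dict[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# toy usage: 10 noisy queries drawn from a dictionary of 500 random 8x8 patches
rng = np.random.RandomState(1)
D = rng.randn(500, 64)
idx = rng.randint(0, 500, 10)
Q = D[idx] + 0.01 * rng.randn(10, 64)
print(np.array_equal(nearest_atoms(Q, D), idx))   # True with overwhelming probability
```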


A person asked me before about doing CS with JPEGs: I guess one could do that for this purpose (superresolution). In a different area, we have seen this type of procedure undertaken by Jort Gemmeke for speech problems (Using sparse representations for missing data imputation in noise robust speech recognition by Jort Gemmeke and Bert Cranen). At 7.5 Gbit/s, I wonder how we could use the LB-101M chip to do all kinds of superresolution beyond images... I think it also fits the description of hardware performing CS, so I will include it in the CS hardware page. [Update: I am told by Stephane Mallat that the LB-101M chip does not implement this algorithm. It implements a somewhat smarter algorithm given the hardware constraints.]

I also found it funny that Stephane mentioned the testing procedure used by experts to gauge whether a new imaging technology is acceptable. They simply go into a room, watch images and movies for hours with the new technology, and provide some feedback on how good or bad it is.


That reminded me of how Larry Hornbeck described to us how he had to make the DMD technology acceptable for imaging/projector use. The technology is now integrated in DLP projectors, but it took a while before the algorithm commanding the mirrors on the DMDs (the DMD controller) got good feedback from the professionals. Larry had become an expert over time, and his team at Texas Instruments was constantly checking with him while they were improving the algorithm. At one point, the team seemed a little miffed that Larry would always find problems when they did not seem to exist. They then switched technologies in the projection room, but Larry would still find the same problems. At that point, Larry and his team decided the DMD technology had indeed matured to the point that it would be hard for experts to find flaws. Larry received an Emmy Award in 1998 for the development of this technology. The DMD is also at the heart of the Rice single-pixel camera.

Larry gave me one of his demonstration DMDs at the end of his presentation! Woohoo.


On a totally unrelated note, a conference that features Compressed Sensing as a subject of interest is CHINACOM 2009: Int'l Conference on Communications and Networking in China,  Information and Coding Theory Symposium,  August 26-28, 2009, Xi'an, China.

Saturday, December 13, 2008

CS: Freefalling, Hardware at 130,000 feet, Statistics on a cartoon, and a Silicon detector.

Here is a fantastic rendering of what it would look like to fall from Space (for those of you reading this through e-mail or an RSS feed, here is the direct link)



Credit: Kyle Botha.
This reminds me that you don't need a rocket to go up. Stratospheric balloons do that very well as can be seen from the view of a webcam that was on-board the HASP last year.


Credit: CosmoCam

For those of you who would like to fly some hardware at those altitudes, you may be interested in the new request for proposals from the HASP folks. The deadline is December 19th.


The first cartoon video on CS has reached more than 1,000 views (950 of them being me hitting the replay button). The statistics provided by YouTube give some insight into when viewer interest was highest. I am not sure how they compute this, but the first bump in this video is when CS is connected to underdetermined systems and linear algebra 101.



The video can be viewed here:




Finally, here is a video on a type of photon detector made out of silicon that seems to have reached a breakthrough. More pixels at lower cost is our future; how can we integrate this into new types of CS cameras?

Friday, December 12, 2008

CS: Greedy Signal Recovery Review, Motion Segmentation, Efficient Sampling and Stable Reconstruction,

I found three papers related to CS:

The two major approaches to sparse recovery are L1-minimization and greedy methods. Recently, Needell and Vershynin developed Regularized Orthogonal Matching Pursuit (ROMP) that has bridged the gap between these two approaches. ROMP is the first stable greedy algorithm providing uniform guarantees. Even more recently, Needell and Tropp developed the stable greedy algorithm Compressive Sampling Matching Pursuit (CoSaMP). CoSaMP provides uniform guarantees and improves upon the stability bounds and RIC requirements of ROMP. CoSaMP offers rigorous bounds on computational cost and storage. In many cases, the running time is just O(N log N), where N is the ambient dimension of the signal. This review summarizes these major advances.
ROMP and CoSaMP codes are accessible from the reconstruction section of the Big Picture. Let us also note the importance of the Restricted Isometry Constant. 
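For readers new to the greedy side, here is a minimal sketch of plain Orthogonal Matching Pursuit, the baseline method that ROMP and CoSaMP refine; it is neither of those two algorithms, and the variable names and the toy check are mine.

```python
import numpy as np

def omp(A, y, k):
    """Plain Orthogonal Matching Pursuit for y = A x with x assumed k-sparse.

    ROMP and CoSaMP refine the atom selection step (regularized groups of atoms,
    or 2k atoms at a time followed by pruning); this is just the baseline."""
    n = A.shape[1]
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))              # most correlated atom
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs                   # re-fit on the support
    x = np.zeros(n)
    x[support] = coeffs
    return x

# toy check: recover the support of a 4-sparse vector from 50 Gaussian measurements
rng = np.random.RandomState(0)
A = rng.randn(50, 100)
x0 = np.zeros(100)
x0[[5, 17, 42, 77]] = [1.0, -2.0, 1.5, 3.0]
print(np.nonzero(omp(A, A @ x0, 4))[0])
```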


Periodic nonuniform sampling is a known method to sample spectrally sparse signals below the Nyquist rate. This strategy relies on the implicit assumption that the individual samplers are exposed to the entire frequency range. This assumption becomes impractical for wideband sparse signals. The current paper proposes an alternative sampling stage that does not require a full-band front end. Instead, signals are captured with an analog front end that consists of a bank of multipliers and lowpass filters whose cutoff is much lower than the Nyquist rate. The problem of recovering the original signal from the low-rate samples can be studied within the framework of compressive sampling. An appropriate parameter selection ensures that the samples uniquely determine the analog input. Moreover, the analog input can be stably reconstructed with digital algorithms. Numerical experiments support the theoretical analysis.




The accompanying talk is here. In a different direction, Shankar Rao, Roberto Tron, Rene Vidal, and Yi Ma just released the following:

In this paper, we study the problem of segmenting tracked feature point trajectories of multiple moving objects in an image sequence. Using the affine camera model, this problem can be cast as the problem of segmenting samples drawn from multiple linear subspaces. In practice, due to limitations of the tracker, occlusions, and the presence of nonrigid objects in the scene, the obtained motion trajectories may contain grossly mistracked features, missing entries, or corrupted entries. In this paper, we develop a robust subspace separation scheme that deals with these practical issues in a unified mathematical framework. Our methods draw strong connections between lossy compression, rank minimization, and sparse representation. We test our methods extensively on the Hopkins155 motion segmentation database and other motion sequences with outliers and missing data. We compare the performance of our methods to state-of-the-art motion segmentation methods based on expectation-maximization and spectral clustering. For data without outliers or missing information, the results of our methods are on par with the state-of-the-art results, and in many cases exceed them. In addition, our methods give surprisingly good performance in the presence of the three types of pathological trajectories mentioned above. All code and results are publicly available at http://perception.csl.uiuc.edu/coding/motion/.

Yi Ma and his team continue to do wonders. Here they use results in rank minimization and CS to provide robust tracking over missing data.


And finally, here is a selection of papers and presentations not strictly focused on compressed sensing, but that use or reference it as a neighboring technique. There is the presentation entitled Pseudospectral Fourier reconstruction with IPRM by Karlheinz Gröchenig and Tomasz Hrycak.

Also, arXiv now allows full-text search of the papers in its library. Here is a list of papers that mention compressive sensing without it being the main subject of the paper:

Thursday, December 11, 2008

CS: Compressed Sensing Phase Retrieval, Deterministic SFT, A generalization of Kolmogorov’s theory of n-widths for infinite dimensional spaces, a job.


From Wikimization, here is a newer version of Toward 0-norm Reconstruction, and a Nullspace Technique for Compressive Sampling as presented by Christine Law with Gary Glover at the Linear Algebra and Optimization Seminar (CME510), iCME, Stanford University, November 19, 2008.

Contrary to what I said earlier, the paper entitled Compressed Sensing Phase Retrieval by Matthew Moravec, Justin Romberg and Richard Baraniuk can be found here.

Adi Akavia spoke on Monday, December 8, 2008 at MIT. The talk abstract reads:
Computing the Fourier transform is a basic building block used in numerous applications. For data intensive applications, even the O(N log N) running time of the Fast Fourier Transform (FFT) algorithm may be too slow, and sub-linear running time is necessary. Clearly, outputting the entire Fourier transform in sub-linear time is infeasible, nevertheless, in many applications it suffices to find only the \tau-significant Fourier transform coefficients, that is, the Fourier coefficients whose magnitude is at least \tau-fraction (say, 1%) of the energy (ie, the sum of squared Fourier coefficients). We call algorithms achieving the latter SFT algorithms. 

In this work we present a *deterministic* algorithm that finds the $\tau$-significant Fourier coefficients of functions f over *any finite abelian group G* in time polynomial in log|G|, 1/\tau and L_1(f) (for L_1(f) denoting the sum of absolute values of the Fourier coefficients of f). Our algorithm is robust to random noise. 

Our algorithm is the first deterministic and efficient (ie, polynomial in log|G|) SFT algorithm to handle functions over any finite abelian groups, as well as the first such algorithm to handle functions over Z_N that are neither compressible nor Fourier-sparse. Our analysis is the first to show robustness to noise in the context of deterministic SFT algorithms. Using our SFT algorithm we obtain (1) deterministic (universal and explicit) algorithms for sparse Fourier approximation, compressed sensing and sketching; (2) an algorithm solving the Hidden Number Problem with advice, with cryptographic bit security implications; and (3) an efficient decoding algorithm in the random noise model for polynomial rate variants of Homomorphism codes and any other concentrated and recoverable codes.

Hum... (the emphasis in the text is mine) here is another statement along the lines that sparsity is not the main prerequisite for subsampling. I look forward to seeing more of that paper.


On December 17-19, 2008, at the Workshop on Optimization and Applications, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences in Sofia, Bulgaria, Ognyan Kounchev will give a talk entitled A generalization of Kolmogorov’s theory of n-widths for infinite dimensional spaces: applications to Compressive sensing. The abstract reads:
Recently, the theory of widths has got high interest due to its importance for the so-called Compressive sensing, see the works of D. Donoho, E. Candes, T. Tao, R. Baraniuk and others. On the other hand, the theory of n-widths of Kolmogorov (a kind of Optimal recovery theory) is not appropriate for the spaces of functions depending on several variables - this is seen from the simplest examples which one would like to have treated by a theory of Optimal recovery. We generalize the theory of n-widths of Kolmogorov by considering approximation by infinite-dimensional spaces of functions which have so far ”harmonic dimension n” in some sense. A large class of such spaces having ”harmonic dimension n” is provided by the solutions of elliptic differential operators of order 2n. We indicate possible applications to Compressive sensing.

Here is a new job posting for a post-doc in the group of Jean-Luc Starck at CEA near Paris. The description reads:

JOB OPPORTUNITY
Title of the job : Post-doctoral position (3 years), Signal/Image processing at CEA Saclay, Service d’Astrophysique. The Service d'Astrophysique (SAp) at CEA Saclay invites applications for a postdoctoral appointment in the area of data analysis/image processing of astronomical data to work with Jean-Luc Starck.
The CEA Saclay is a government research center situated 40 minutes from central Paris, France. The SAp has a wide interest in astrophysics ranging from planets to cosmology, with a specialisation in space missions (eg. Euclid, XMM-Newton, Herschel, PLANCK, JWST, Integral etc) and instrumentation (eg. Megacam on the Canada-France-Hawaii Telescope). The position is to work on sparse representation of astronomical data for different applications such as non-Gaussianity detection, inverse problem and compressed sensing.
Candidates should have a PhD in Image processing, Physics or Astronomy.
Previous experience in sparse representations (wavelet, curvelets, etc) and the development of data analysis methods is preferred, but experience in related areas is also suitable.
The position, funded for at least 3 years (up to 5 years), will be renewed on a yearly basis depending on scientific progress and achievement. The gross minimum salary will be 34,000€ annually (~2,260€ net per month), and will be adjusted according to experience and family situation. A minimum of 5,000 € per year of travel money for each position will also be provided, in addition to the usual funding support of any French institution (medical insurance, etc). Applicants are requested to send a CV, a list of publications, and a brief statement of research interests. This material together with three letters of reference should be sent by the closing date to
Jean-Luc Starck
Laboratoire Astrophysique des Interactions Multi-échelles
Irfu, Service d'Astrophysique, CEA Saclay,
F-91191 GIF-Sur-YVETTE CEDEX, France.
Email: check the flyer for more information.
The closing date for receipt of applications : February 28, 2009
The job posting has been added to the CSJobs page.

The ICPR 2010 conference has Compressed Sensing as a subject of interest.

Image Credit: NASA/JPL/Space Science Institute, image of Tethys taken on December 9th, 2008.

Wednesday, December 10, 2008

CS: Compressive Sensing Technology Watch (Part Two)


Here is a presentation on the influence of GPUs in Compressive Sensing. It is located at: 



I also had to reframe the cultural references. Instead of Coluche, a well-known French comedian, I used a more U.S.-centered cultural reference: Seinfeld's George Costanza. I explained why previously in CS: Coded Mask Imagers: What are they good for? The George Costanza "Do the Opposite" Sampling Scheme. Here is the video.


