Lena's prisoners, I described a community where one could safely assume that the proposals being made were not bold enough. Let me detail the situation a little while remaining vague on the specifics, as I think it transfers to many other fields. In this community, the traditional problem revolves around a single source that is modulated in some fashion. As we all know, compressive sensing is really about multiplexing several sources together and then performing a nonlinear demodulation. In the traditional setting, where one deals with a single source, they have this nice measurement matrix that is nowhere near random: any RIP- or NSP-based argument to justify its use with compressed sensing is bound to fail.

But then, you are going to tell me: why don't they do a random multiplexing with several sources? They are no dummies. No, they are not. The answer lies in the fact that the traditional problem of modulating one single source is a linear problem, whereas if you were to go the compressive sensing route with multiple sources, the new problem set-up becomes nonlinear. On top of that, one source costs a million dollars to build, two sources cost two million dollars, and... you get my drift. Any sane person would ask: why in the hell would you want to go for a thousand sources and end up with a nonlinear problem? People in that community won't say it to your face, but what you are proposing is similar to walking up to Buzz Aldrin, telling him he never landed on the Moon and that he is a coward for lying to the American people.
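To see numerically why RIP/NSP-style arguments fail for such structured hardware, here is a toy sketch comparing the mutual coherence of a smooth, single-source-style modulation matrix against a random multiplexing matrix. The sizes and the Gaussian-bump modulation are my own illustrative choices, not a model of any actual instrument.

```python
import numpy as np

rng = np.random.default_rng(0)

def coherence(A):
    """Mutual coherence: largest |inner product| between distinct unit-norm columns."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

m, n = 64, 128  # m detectors, n candidate unknowns (illustrative sizes)

# Stand-in for a traditional, structured modulation: each unknown produces a
# smooth bump on the detector array, so neighboring columns are nearly identical.
rows = np.arange(m)[:, None]
centers = np.arange(n)[None, :] * (m / n)
structured = np.exp(-0.5 * ((rows - centers) / 4.0) ** 2)

# Random multiplexing of all n unknowns into the same m measurements.
random_mux = rng.standard_normal((m, n))

print(f"structured coherence: {coherence(structured):.3f}")  # close to 1: bad for sparse recovery
print(f"random coherence:     {coherence(random_mux):.3f}")  # much smaller
```

A coherence near 1 means two columns are nearly indistinguishable, so sparse-recovery guarantees collapse; the random matrix keeps all pairwise correlations small, which is exactly what the standard compressed sensing results rely on.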
...You're gonna get punched one way or another. Incidentally, it's not like having a linear system makes you understand all the results obtained in this field over the past thirty years. Ask any of the number crunchers in the trenches: when it comes to some computations, they have seen things they can't explain. They even have methods that are crude yet work as well as more sophisticated ones. They're happy some of these solvers work, but they don't know, nor care, why they work.

In the new nonlinear problem, we now have multiple sources being randomly multiplexed as a group. For the first time (you could not do that in the traditional case), we get to care about the relative size of each element with respect to the others; we can also think rigorously about adaptive sampling, because undersampling no longer equates to 'magic inpainting' but rather to a conscientious exploration of your world. The fascinating part of this new nonlinear problem is that if you assume the unknowns to be both sparse and small, then the problem becomes linear again: the measurement matrix is now a random matrix. Everything you have known since 2004 about compressive sensing applies directly. Furthermore, the number of rows is now something you can play with, i.e. no more severe undersampling. In the traditional linear problem, some unknowns are dummy variables that disappear from the problem set-up. In the new nonlinear problem, these unknowns are constrained through a nonlinear inequality. The assumption of sparsity and small size for the unknowns yields a solution that seems to have the same support as the solution of the full nonlinear problem. This may shed some light on why some traditional, unexplained schemes work, and may even relate to the 1-bit compressed sensing approach. In short, it's a whole new world: forget about past Moon landings, you just landed on Mars. It costs more to get there, for sure, but will you ever come back?
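Once the unknowns are assumed sparse and small and the measurement matrix is a plain random matrix, the standard post-2004 machinery applies directly. A minimal sketch of that linearized regime, using iterative soft thresholding (ISTA) as one generic sparse solver; the dimensions, sparsity level, regularization and solver are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: m measurements, n unknowns, k of them nonzero.
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)      # sparse (and here, modest-sized) unknowns
y = A @ x_true                                # linear measurements

def ista(A, y, lam=0.01, n_iter=3000):
    """Iterative soft thresholding for min 0.5*||Ax-y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x + (A.T @ (y - A @ x)) / L            # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.4f}")
```

With far fewer measurements than unknowns (here 80 versus 200), the sparse vector is recovered essentially exactly, which is the whole point of the linearization: the hard nonlinear multiplexing problem inherits all of the usual compressed sensing guarantees.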
Boufounos, P. T. and Baraniuk, R. G., "1-Bit Compressive Sensing," Proceedings of the 42nd Annual Conference on Information Sciences and Systems (CISS 2008), Princeton, NJ, March 19-21, 2008.
Credit: NASA/JPL-Caltech, Opportunity, Navigation Camera, Sol 2444