This is the answer I gave on Quora to the question "Has Compressed Sensing peaked?"
I wrote the more verbose answer about two months ago, but if you have been reading the blog during this time period, you may have seen some of the arguments expanded one way or another. Without further ado, here it is:
First, let me just say that about every two weeks, on average, I find a good paper that is either strictly CS or an extension of it (low-rank problems, ...) and that clearly advances the subject in one way or another. One recent example [as of January 12th, 2011] is the paper confirming that a CS encoding could be used in a critical system on a space mission (the encoding of the PACS camera on the Herschel Space Telescope). This had been in the pipeline since the Herschel spacecraft launched, but it took some time to figure out the quirks of how to restore the images (colored noise, ...). In effect, even though the hardware was ready to use, it took a while to get good data out of that system. But Herschel is a peculiar example of a scientific instrument that was built before the CS algorithm was devised (they were initially expecting to use a lesser encoding algorithm). [Is it a paradigm shift when sensors are built first and the encoding is designed later?]
Second, let me give some context as to why the field seems to have peaked and why questions such as the one I am responding to get asked.
The initial success of CS came from its use in hardware that could already produce samples in the right space (Fourier space in MRI). Once again, even if the hardware is spitting out the right type of data, it took some good algorithm development (mostly on the reconstruction side) to get to the point where CS-based MRI is now being fast-tracked by most MRI manufacturers. This case is fascinating because a naive implementation of CS simply would not be directly competitive with the whole slew of empirical methods devised over the past twenty years. In short, CS is not magic when it is dropped into a technology that has already been exploited. It has, however, brought something extremely important: CS has accelerated investigation of certain parts of the phase/parameter space (including previously untouched ones) and has given good reasons why other parts of that space are not worth investigating. Both the positive and the negative insights are extremely important when deciding whether a certain technology climbs the Technology Readiness Level (TRL) ladder.
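To make the mechanics concrete, here is a minimal sketch of the basic idea, in Python with NumPy: take a few random Fourier samples of a sparse signal and recover it with a simple iterative soft-thresholding (ISTA) solver. This is only an illustration of the principle; the sizes, regularization, and solver below are my own assumptions, not any scanner maker's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 256, 8, 64          # signal length, sparsity, number of Fourier samples

# A k-sparse signal and m randomly chosen rows of the unitary DFT matrix,
# mimicking undersampled Fourier (MRI-like) measurements.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
rows = rng.choice(n, m, replace=False)
F = np.fft.fft(np.eye(n), axis=0)[rows] / np.sqrt(n)
y = F @ x

# ISTA: a gradient step on the data fit, then soft-thresholding to promote sparsity.
lam, step = 0.01, 1.0         # illustrative choices, not tuned for any real scanner
xhat = np.zeros(n)
for _ in range(500):
    grad = (F.conj().T @ (F @ xhat - y)).real   # the signal is real; keep the real part
    z = xhat - step * grad
    xhat = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)

print("relative error:", np.linalg.norm(xhat - x) / np.linalg.norm(x))
```

A naive loop like this is exactly what would not be competitive out of the box against twenty years of tuned empirical methods, which is the point made above: the value of CS here is the framework, not a free lunch.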
The reason people seem to think the subject has peaked is that they feel that, besides MRI, no other instrumentation has provided similar success. This is indeed the case: most other hardware relying on CS sits at the very low end of the Technology Readiness Level chart, i.e. mostly at the prototype stage (the CS camera at Rice, for instance), since common sensors in those fields do not provide the right type of sampling. This nurturing stage, fueled by academic research, allows one to investigate a different phase space than what other, similar instrumentation can reach. In the end, some of these sensors will provide little or no additional information compared to existing technology. From that point of view, it may take a little while and some perseverance before we have other winners out of this list of hardware.
Even if none of the current hardware yields acceptable sensors (which I think is a stretch), I can still see how CS can be used as just an encoding layer (see Herschel, but also the seismic work at UBC and Georgia Tech), which provides a good framework for what used to be called group testing. I also think that sparsity, or some other feature enforced à la CS, will do wonders on calibration issues, thereby making these methods central to any type of sensor development.
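To see why a pure encoding layer is so appealing for flight hardware and field sensors, here is another minimal sketch (again in Python, with hypothetical sizes; this is not the Herschel or UBC pipeline): the instrument only applies a fixed random pooling matrix, and all of the sophistication is deferred to an offline decoder.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 512, 128               # readings to sense, pooled measurements (hypothetical sizes)

# The sensor side: a fixed +/-1 pooling matrix applied with plain multiply-adds.
# No model of the signal is needed on board.
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

def encode(x):
    """On-board encoding: compress n readings into m pooled measurements."""
    return A @ x

# A signal with a handful of active entries survives the compression; the heavy
# lifting (an L1 solver such as the ISTA loop above, with F replaced by A)
# happens offline, on the ground.
x = np.zeros(n)
x[[3, 77, 401]] = [1.0, -2.0, 0.5]
y = encode(x)
print(y.shape)                # (128,): ship 128 measurements home instead of 512 readings
```

The design choice is the same one that made the Herschel encoding viable: keep the on-board step dumb and cheap, and let the decoding evolve on the ground after launch.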
Let us not forget that while low-TRL instrumentation has appeared in the lab (see the list), CS could also enable new kinds of instrumentation that can only be construed as futuristic (check "These Technologies Do Not Exist" or the recent idea of using the USPS fleet as a sensor network).
Finally, while low maturity on the TRL scale is an important factor, there is probably another structural phenomenon at play: an explosion of different concepts enabled by CS across many branches of science and engineering has effectively diluted the core subject into domain-specific areas. This fragmentation has dispersed the critical mass of effort from one visible subject into smaller, less visible lines of inquiry. The core focus on reconstruction solvers is likely to stay, though, and should provide enough support to people designing new instrumentation or group testing procedures.
Other related entries on the subject can be found here:
- Never, Ever Give In
- "...I found this idea of CS sketchy,..."
- Islands of Knowledge
- The Dip
- This is not a hardware-only or solver-only problem ...
- What Island Is Next?