I don't know if I would qualify as a specialist, but I sense that the throughput of compressive sensing work is somehow decreasing. Let me explain: in the past two years, we have gotten better ways of characterizing encoding matrices (the null space property, the RIP, ...), some new encoding-matrix concepts seem to be a better fit to the physics of interest in some engineering fields, we have more domain-specific dictionaries, and the reconstruction solvers are advertised as being increasingly better. Yet in the past few months, the number of papers directly related to these issues has decreased. At the same time, we have also seen extensions of the field in different directions: some hardware concepts are maturing and new ones are appearing based on the properties mentioned above. We also see concepts and techniques developed in CS being applied to other domains such as matrix completion, as well as generalizations of CS in different directions (nonlinear, ...). The recent call by Muthu to define CS more precisely is a sign that the subject is indeed morphing into something wider. There is no reason to expect the contrary, as all engineering fields are drowning under linear underdetermined systems. In fact, I don't see a trend away from looking for a few variables of interest in large and complex models.
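For readers new to the subject, here is a minimal sketch of that core idea: recovering a sparse vector from an underdetermined linear system y = Ax with a Gaussian encoding matrix and a plain ISTA (iterative soft thresholding) solver. The dimensions, sparsity level, and penalty parameter below are illustrative choices of mine, not values from any particular paper.

```python
# Minimal compressive sensing sketch: recover a sparse x from y = A x,
# m << n, via l1 regularization solved with plain ISTA. All sizes and
# parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                      # ambient dim, measurements, sparsity

x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian encoding matrix
y = A @ x_true                                 # underdetermined measurements

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```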
Looking to the future
At a different level, if one removes the issue of reconstruction (which could be construed as a de facto proof that you guessed the right dimensionality of the system), the random projection technique opens up a whole new world of manifold signal processing and the rise of domain-specific sensors. Is the world, besides the military, ready for this type of market? Only time will tell; I am personally looking forward to seeing how the issue of calibration will be investigated. Eventually, this and other areas of interest will be born out of compressive sensing, but they will hardly be recognizable as such, and This Is A Good Thing (TM).
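To make the "processing without reconstruction" point a bit more concrete, here is a small sketch of my own showing the property that enables it: a random projection approximately preserves pairwise distances (Johnson-Lindenstrauss), so nearest-neighbor-style processing can happen directly in the compressed domain, without ever going back to the ambient space. All sizes are arbitrary choices.

```python
# Random projections approximately preserve pairwise distances, which is
# what lets one process signals in the compressed domain. Sizes are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, m, npts = 5000, 128, 50                # ambient dim, projected dim, points

X = rng.standard_normal((npts, n))        # points living in R^n
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection
Y = X @ Phi.T                             # compressed points in R^m

# Compare all pairwise distances before and after projection.
i, j = np.triu_indices(npts, k=1)
d_orig = np.linalg.norm(X[i] - X[j], axis=1)
d_proj = np.linalg.norm(Y[i] - Y[j], axis=1)
ratio = d_proj / d_orig
print("distance ratio: mean %.3f, min %.3f, max %.3f"
      % (ratio.mean(), ratio.min(), ratio.max()))
```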
The direction of this blog
Writing this blog takes a personal toll in the form of having to read Other People's Ideas (OPI), but the hope is that it removes situations where false dichotomies spread too far in the system and eventually yield unimportant but self-sustaining concepts. The other toll is that the average reader may feel overwhelmed. The poll I ran last week (see graph above) is hardly statistically significant, but it suggests that the blog is probably giving too much information about the new branches of CS without providing much context. From talking to some of you, there is also an element of weariness in traditional CS about the claims made for solvers and other encoding approaches. With regard to solvers, for instance, there is a sense that much parameter guessing goes into reproducing the results of the underlying publication. One way to avoid this situation is to make one's code available for people to use; another is to make sure that the same benchmark exercise, à la Lena, is available to authors. Yet this is rarely done, even though Sparco exists for that purpose. Closer to home, another way to address the sense of lost direction expressed in the poll is to provide more context by going back to those interview pieces in which I ask dumb questions of real experts in the field. While I recognize the limits of the exercise, any other insights on how to do this effectively are welcome.
And now to our regularly scheduled broadcast: we have two papers and a CfP for a special issue:
Number of Measurements in Sparse Signal Recovery by Paul Tune, Sibi Raj Bhaskaran, Stephen Hanly. The abstract reads:
We analyze the asymptotic performance of sparse signal recovery from noisy measurements. In particular, we generalize some of the existing results for the Gaussian case to subgaussian and other ensembles. An achievability result is presented for the linear sparsity regime. A converse on the number of required measurements in the sub-linear regime is also presented, which covers many of the widely used measurement ensembles. Our converse idea makes use of a correspondence between compressed sensing ideas and compound channels in information theory.
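To give a feel for the question the paper studies, here is a small empirical sketch (mine, not the authors' analysis): for a fixed sparsity k, sweep the number of Gaussian measurements m and record how often a simple orthogonal matching pursuit recovers the true signal. All parameters are illustrative.

```python
# Empirical look at the number of measurements needed for sparse recovery:
# sweep m for fixed sparsity k and count exact recoveries by OMP.
# Parameters are illustrative assumptions, not values from the paper.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A x by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # best new atom
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, k, trials = 256, 8, 50
for m in (16, 24, 32, 48, 64):
    successes = 0
    for _ in range(trials):
        x0 = np.zeros(n)
        x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        xh = omp(A, A @ x0, k)
        successes += np.linalg.norm(xh - x0) < 1e-6 * np.linalg.norm(x0)
    print(f"m = {m:3d}: recovered {successes}/{trials}")
```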
Wyner-Ziv Image Coding from Random Projections by Shoulie Xie, Susanto Rahardja, Zhengguo Li. The abstract reads:
In this paper, we present a Wyner-Ziv coder based on random projections for image compression with side information at the decoder. The proposed coder consists of random projections (RPs), nested scalar quantization (NSQ), and Slepian-Wolf coding (SWC). Most natural images are compressible or sparse in the sense that they are well approximated by a linear combination of a few coefficients taken from a known basis, e.g., an FFT or wavelet basis. Recent results show that it is, surprisingly, possible to reconstruct a compressible signal to within very high accuracy from a limited number of random projections by solving a simple convex optimization program. Nested quantization provides a practical scheme for lossy source coding with side information at the decoder to achieve further compression. SWC is lossless source coding with side information at the decoder. In this paper, ideal SWC is assumed, so rates are conditional entropies of the NSQ quantization indices. Recent theoretical analysis shows that for the quadratic Gaussian case and at high rate, NSQ with ideal SWC performs the same as conventional entropy-coded quantization with side information available at both the encoder and decoder. We note that the measurements of random projections for a large natural image can behave like Gaussian random variables, because most random measurement matrices behave like Gaussian ones when their sizes are large. Hence, by combining random projections with NSQ and SWC, the tradeoff between compression rate and distortion is improved. Simulation results support the proposed joint codec design and demonstrate the considerable performance of the proposed compression system.
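For readers unfamiliar with nested scalar quantization, here is a toy sketch of my own (not the authors' codec) of the idea: the encoder quantizes a sample but transmits only the quantizer index modulo N (the coset label), and the decoder resolves the ambiguity by picking the coset member closest to its side information. The step size, nesting factor, and correlation model below are all illustrative assumptions.

```python
# Toy nested scalar quantization with decoder side information:
# transmit only the fine quantizer index mod N; the decoder uses the
# side information y to pick the right member of the coset.
import numpy as np

rng = np.random.default_rng(3)
q, N = 0.25, 8                      # quantizer step, nesting factor

x = rng.standard_normal(10_000)     # source samples
y = x + 0.05 * rng.standard_normal(x.size)   # correlated side info at decoder

idx = np.round(x / q).astype(int)   # fine quantizer index (not transmitted)
coset = np.mod(idx, N)              # transmitted coset label: log2(N) bits/sample

# Decoder: among indices congruent to the coset label mod N, take the one
# whose reconstruction is closest to the side information y.
base = np.round((y / q - coset) / N)
idx_hat = base.astype(int) * N + coset
x_hat = idx_hat * q

print("index error rate:", np.mean(idx_hat != idx))
print("MSE:", np.mean((x_hat - x) ** 2))
```

With side information this well correlated, the decoder almost never picks the wrong coset member, so the distortion is essentially that of the fine quantizer while only the coset bits are sent.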
and
IEEE Transactions on Image Processing
Special Issue on
Distributed Camera Networks: Sensing, Processing, Communication and Computing
Description of Topic:
Distributed camera networks have applications in diverse areas, including urban surveillance, environmental monitoring, healthcare, and battlefield visualization. Although distributed camera networks have been increasing in number, effective methods for the dissemination, processing, and understanding of the video data they collect are lacking. There is a strong need to develop distributed algorithms for the compression and dissemination of video data, the detection and tracking of moving objects, motion capture, and higher-level tasks such as visualization and the recognition of objects and the activities they are involved in. Most existing algorithms for these tasks operate in a centralized manner: imagery or other information, such as location and identity, is transmitted to a server that performs the necessary processing. Such server-oriented architectures do not scale to the problems mentioned above. Another feature of distributed camera networks is that the use of distributed algorithms adds network bandwidth and power to the mix of constraints; those constraints are particularly tight for wireless networks. Algorithms may need to be redesigned to meet these requirements; simple mapping onto embedded platforms is often not sufficient.
List of Specific Topics Covered:
Manuscripts are solicited to address a wide range of topics in distributed camera networks, including but not limited to the following:
- Sensing
  - Collaborative (in-network) processing
  - Compressive sensing and sparse representation
  - Adaptive sensing (e.g., using different spatial/time resolutions, bit depths, etc.)
- Processing
  - Distributed camera calibration
  - Distributed video compression
  - Efficient video transmission
  - Detection and tracking of objects
  - Recognition of identity and events
  - Visualization
- Communication
  - Network architectures
  - Efficient protocols
  - Secure transmission
  - Camera scheduling
  - Cross-layer protocols
- Computing
  - Embedded systems
  - Low-power computing
  - Software protocols
- Privacy protection
Timeline for Submission, Review, and Publication:
- Manuscript submission: 30 August 2009
- Preliminary results: 15 November 2009
- Revised version: 15 January 2010
- Notification: 15 February 2010
- Final manuscripts due: 15 March 2010
- Anticipated publication: 1 June 2010
List of Guest Editors:
Prof. Rama Chellappa, University of Maryland
Prof. Wendi Heinzelman, University of Rochester
Prof. Janusz Konrad, Boston University
Prof. Wayne Wolf, Georgia Institute of Technology