Thursday, October 22, 2009

CS: NIPS Workshop on Manifolds, sparsity, and structured models, Active Learning, Random Coding, FRI


Ever since that blog entry featuring work on manifold signal processing and CS, I have been expecting some deeper integration of compressive sensing into machine learning, beyond publications alone. It looks like 2009 is the year this is happening, in the form of workshops within an ML conference. First, Mike Wakin is co-organizing a NIPS workshop on low-dimensional geometric models along with Richard Baraniuk, Piotr Indyk, Bruno Olshausen, Volkan Cevher, and Mark Davenport. The call for contributions for that workshop follows:


CALL FOR CONTRIBUTIONS
NIPS Workshop on Manifolds, sparsity, and structured models: When can low-dimensional geometry really help?

Whistler, BC, Canada, December 12, 2009

http://dsp.rice.edu/nips-2009


Important Dates:
----------------
  • Submission of extended abstracts: October 30, 2009 (later submissions may not be considered for review)
  • Notification of acceptance: November 5, 2009
  • Workshop date: December 12, 2009

Overview:
---------
Manifolds, sparsity, and other low-dimensional geometric models have long been studied and exploited in machine learning, signal processing and computer science. For instance, manifold models lie at the heart of a variety of nonlinear dimensionality reduction techniques. Similarly, sparsity has made an impact in the problems of compression, linear regression, subset selection, graphical model learning, and compressive sensing. Moreover, often motivated by evidence that various neural systems are performing sparse coding, sparse representations have been exploited as an efficient and robust method for encoding a variety of natural signals. In all of these cases the key idea is to exploit low-dimensional models to obtain compact representations of the data.

The goal of this workshop is to find commonalities and forge connections between these different fields, and to examine how we can exploit low-dimensional geometric models to help solve common problems in machine learning and beyond.

Submission instructions:
------------------------

We invite the submission of extended abstracts to be considered for a poster presentation at this workshop. Extended abstracts should be 1-2 pages, and the submission does not need to be blind. Extended abstracts should be sent to md@rice.edu in PDF or PS file format.

Accepted extended abstracts will be made available online at the workshop website.

Organizers:
-----------
* Richard Baraniuk, Volkan Cevher, Mark Davenport, Rice University.
* Piotr Indyk, MIT.
* Bruno Olshausen, UC Berkeley.
* Michael Wakin, Colorado School of Mines.

Second, Rui Castro is one of the organizers of a NIPS workshop on Adaptive Sensing, Active Learning and Experimental Design. From the NIPS workshop webpage:
  • Submission of extended abstracts: October 27, 2009
    (later submissions may not be considered for review)
  • Notification of acceptance: November 5, 2009
  • Workshop date: December 11, 2009

Also found on the interwebs:

Channel protection: Random coding meets sparse channels, by M. Salman Asif, William Mantzel and Justin Romberg. The abstract reads:
Multipath interference is a ubiquitous phenomenon in modern communication systems. The conventional way to compensate for this effect is to equalize the channel by estimating its impulse response using a set of training symbols. The primary drawback to this type of approach is that it can be unreliable if the channel is changing rapidly. In this paper, we show that randomly encoding the signal can protect it against channel uncertainty when the channel is sparse. Before transmission, the signal is mapped into a slightly longer codeword using a random matrix. From the received signal, we are able to simultaneously estimate the channel and recover the transmitted signal. We discuss two schemes for the recovery, both of which exploit the sparsity of the underlying channel. We show that if the channel impulse response is sufficiently sparse, the transmitted signal can be recovered reliably.
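The paper's schemes estimate the channel and decode the signal simultaneously; as a much simpler, hypothetical illustration of the core ingredient — recovering a sparse channel impulse response from its convolution with a known random codeword — here is an orthogonal matching pursuit sketch in NumPy. All names, lengths, and tap values below are my own toy choices, not from the paper:

```python
import numpy as np

# Hypothetical sketch (not the paper's algorithm): recover a sparse channel
# impulse response h from y = c * h (linear convolution), where the random
# codeword c is assumed known at the receiver. The sparsity of h is
# exploited via orthogonal matching pursuit (OMP).

rng = np.random.default_rng(0)
n, L, k = 256, 32, 3                   # codeword length, channel length, sparsity

c = rng.standard_normal(n)             # random codeword
h = np.zeros(L)
h[[2, 7, 11]] = [1.0, -0.5, 0.3]       # sparse multipath channel (toy values)
y = np.convolve(c, h)                  # received signal, length n + L - 1

# Linear convolution as a matrix: column j of C is c delayed by j samples.
C = np.zeros((n + L - 1, L))
for j in range(L):
    C[j:j + n, j] = c

# OMP: greedily pick the delay most correlated with the residual,
# then refit all selected taps jointly by least squares.
support, residual = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(C.T @ residual)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(C[:, support], y, rcond=None)
    residual = y - C[:, support] @ coef

h_est = np.zeros(L)
h_est[support] = coef                  # estimated sparse channel taps
```

Note that the paper's setting is harder than this sketch: there, the transmitted signal itself is unknown and must be recovered jointly with the channel, which is what the random encoding makes possible.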

and Sparse Sampling of Structured Information and its Application to Compression, by Pier Luigi Dragotti. The abstract reads:
It has been shown recently that it is possible to sample classes of non-bandlimited signals which we call signals with Finite Rate of Innovation (FRI). Perfect reconstruction is possible based on a set of suitable measurements and this provides a sharp result on the sampling and reconstruction of sparse continuous-time signals. In this paper, we first review the basic theory and results on sampling signals with finite rate of innovation. We then discuss variations of the above framework to handle noise and model mismatch. Finally, we present some results on compression of piecewise smooth signals based on the FRI framework.
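FRI reconstruction of a stream of Diracs rests on the annihilating filter (Prony's method). As a hedged sketch of that core step — with toy parameters of my own choosing, not from the paper — the locations of K Diracs can be read off from 2K + 1 Fourier-series coefficients:

```python
import numpy as np

# Hedged sketch of the annihilating-filter step behind FRI sampling:
# recover K Dirac locations on [0, 1) from 2K + 1 Fourier-series
# coefficients (toy parameters, not from the paper).

K = 2
t_true = np.array([0.2, 0.55])         # Dirac locations
a_true = np.array([1.0, -0.7])         # Dirac amplitudes

# Fourier coefficients X[m] = sum_k a_k exp(-2j*pi*m*t_k), m = -K..K
m = np.arange(-K, K + 1)
X = np.exp(-2j * np.pi * np.outer(m, t_true)) @ a_true

# The annihilating filter h (length K + 1) satisfies
# sum_l h[l] X[i - l] = 0 for i = 0..K; it is the null vector of this
# Toeplitz system (X is stored so that index p holds coefficient m = p - K).
T = np.array([[X[(i - l) + K] for l in range(K + 1)] for i in range(K + 1)])
_, _, Vh = np.linalg.svd(T)
h_filt = Vh[-1].conj()                 # null vector = filter coefficients

# The roots of the filter's z-transform are exp(-2j*pi*t_k):
# the Dirac locations fall out of their angles.
roots = np.roots(h_filt)
t_est = np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))

# Amplitudes follow by least squares on the recovered locations.
a_est = np.linalg.lstsq(
    np.exp(-2j * np.pi * np.outer(m, t_est)), X, rcond=None
)[0].real
```

With noiseless coefficients the recovery is exact up to floating-point error; the paper's treatment of noise and model mismatch replaces this direct null-vector step with more robust variants.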





Image Credit: NASA/JPL/Space Science Institute, W00060562.jpg was taken on October 19, 2009 and received on Earth October 20, 2009. The camera was pointing toward SATURN at approximately 2,174,289 kilometers away, and the image was taken using the CB2 and CL2 filters.
