Thursday, January 08, 2009

CS: A Theoretical Analysis of Joint Manifolds and the release of the Ann Arbor Fast Fourier Transform

If you recall, Richard Baraniuk gave a presentation on this subject entitled "Manifold models for signal acquisition, compression, and processing"; the slides are here and the attendant video is here. There is now an attendant preprint entitled A Theoretical Analysis of Joint Manifolds by Mark Davenport, Chinmay Hegde, Marco Duarte, and Richard Baraniuk. The abstract reads:
The emergence of low-cost sensor architectures for diverse modalities has made it possible to deploy sensor arrays that capture a single event from a large number of vantage points and using multiple modalities. In many scenarios, these sensors acquire very high-dimensional data such as audio signals, images, and video. To cope with such high-dimensional data, we typically rely on low-dimensional models. Manifold models provide a particularly powerful model that captures the structure of high-dimensional data when it is governed by a low-dimensional set of parameters. However, these models do not typically take into account dependencies among multiple sensors. We thus propose a new joint manifold framework for data ensembles that exploits such dependencies. We show that simple algorithms can exploit the joint manifold structure to improve their performance on standard signal processing applications. Additionally, recent results concerning dimensionality reduction for manifolds enable us to formulate a network-scalable data compression scheme that uses random projections of the sensed data. This scheme efficiently fuses the data from all sensors through the addition of such projections, regardless of the data modalities and dimensions.
as the authors note:

This method enables a novel scheme for compressive, multi-modal data fusion; in addition, the number of random projections required by this scheme is only logarithmic in the number of sensors J.
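The fusion-by-addition idea is easy to see in a toy sketch. Below is a minimal illustration (my own, not the authors' code, with made-up dimensions): each sensor applies its own random projection locally, the network sums the results, and that sum is exactly one random projection of the concatenated "joint" signal, since stacking the per-sensor matrices side by side gives a single measurement matrix for the joint data.

```python
import numpy as np

rng = np.random.default_rng(0)
J, N, M = 4, 256, 32  # sensors, samples per sensor, number of projections

# Toy per-sensor data and per-sensor random projection matrices.
signals = [rng.standard_normal(N) for _ in range(J)]
phis = [rng.standard_normal((M, N)) / np.sqrt(M) for _ in range(J)]

# Each sensor projects locally; the network fuses by simple addition.
fused = sum(phi @ x for phi, x in zip(phis, signals))

# The same vector is one random projection of the concatenated joint signal:
Phi = np.hstack(phis)              # M x (J*N) joint measurement matrix
joint = np.concatenate(signals)    # joint signal of length J*N
assert np.allclose(fused, Phi @ joint)
```

The point is that fusion costs only an in-network addition of M-dimensional vectors, independent of each sensor's modality or dimension.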

Before the Christmas break, I talked to Mark Iwen about whether he would make his FFT algorithm available. As you all know, the FFT is a cornerstone of most of the scientific revolution of the past 40 years. An algorithm that potentially improves on it by using a sparsity prior is groundbreaking. Mark said that some work needed to be done before a release and that he would try to make it available at the end of January. It looks like he made his Ann Arbor Fast Fourier Transform available for download on SourceForge at the beginning of this week! This is the code implemented in Empirical Evaluation of a Sub-Linear Time Sparse DFT Algorithm. Since it is on SourceForge, I am sure that Mark would not mind having contributors to this project. Nice additions would include compiled versions for several platforms and even a .dll for those of us using Matlab...
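To see what the sparsity prior buys, here is a small toy demo (mine, not AAFFT, and using a dense FFT): when a length-N signal has only k significant Fourier coefficients, the whole spectrum is summarized by those k frequency/coefficient pairs, which is exactly the structure a sub-linear sparse DFT like AAFFT exploits instead of computing all N outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 1024, 5  # signal length and Fourier sparsity (toy values)

# Build a signal that is exactly k-sparse in the Fourier domain.
freqs = rng.choice(N, size=k, replace=False)
coeffs = rng.standard_normal(k) + 1j * rng.standard_normal(k)
n = np.arange(N)
x = sum(c * np.exp(2j * np.pi * f * n / N) / N for f, c in zip(freqs, coeffs))

# A dense FFT computes all N coefficients in O(N log N); a sparse DFT
# targets only the k significant ones. Here we just verify the sparsity.
X = np.fft.fft(x)
found = set(np.argsort(np.abs(X))[-k:])
assert found == set(freqs)
```

For such signals the interesting output has size k, not N, which is why sub-linear running times are possible at all.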

Thank you Mark!
