
Friday, January 30, 2009

CS: Two jobs, Optically multiplexed imaging, Robust estimation of Gaussian mixtures by l_1 penalization, multivariate distribution for subimages, ASIFT

Pierre Vandergheynst has two open positions in his lab. Both involve sparsity and Compressive Sensing. They are also listed in the Compressive Sensing Jobs page.




Continuing with the concept of multiplexed imaging: yesterday it was through a coded aperture; today the set-up is a little different and aims at a specific task, tracking. The paper is entitled Optically multiplexed imaging with superposition space tracking by Shikhar Uttam, Nathan A. Goodman, Mark A. Neifeld, Changsoon Kim, Renu John, Jungsang Kim, and David Brady. The abstract reads:

We describe a novel method to track targets in a large field of view. This method simultaneously images multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target’s true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. Potential encoding schemes include spatial shift, rotation, and magnification. We discuss each of these encoding schemes, but the main emphasis of the paper and all examples are based on one-dimensional spatial shift encoding. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. Finally, we include simulation and experimental results demonstrating our novel tracking method.
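The shift-encoding idea is easy to play with numerically. Here is a minimal 1-D sketch of how a detection in superposition space maps back to a short list of candidate positions in real space; all sizes and shift codes below are made-up assumptions for illustration, not the authors' optical settings.

```python
import numpy as np

FOV = 400                 # full field of view, in pixels (toy value)
NSUB = 4                  # number of sub-fields of view
SUB = FOV // NSUB         # width of each sub-field
shifts = np.array([0, 7, 19, 38])   # hypothetical unique shift code per sub-field

def superimpose(scene):
    """Shift each sub-field by its code and sum everything onto one focal plane."""
    plane = np.zeros(SUB + shifts.max())
    for i in range(NSUB):
        plane[shifts[i]:shifts[i] + SUB] += scene[i * SUB:(i + 1) * SUB]
    return plane

def decode_candidates(peak):
    """A detection at `peak` in superposition space corresponds to at most one
    position per sub-field; the unique shift codes disambiguate over time."""
    cands = []
    for i in range(NSUB):
        pos = peak - shifts[i]
        if 0 <= pos < SUB:
            cands.append((i, i * SUB + pos))
    return cands

scene = np.zeros(FOV)
scene[250] = 1.0                       # a point target in sub-field 2
peak = int(np.argmax(superimpose(scene)))
print(decode_candidates(peak))         # true position 250 is among the candidates
```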
On a somewhat different note, though tracking can also be performed with this kind of modeling, here is Robust estimation of Gaussian mixtures by l_1 penalization: an experimental study by Stephane Chretien. The abstract reads:
Many experiments in medicine and ecology can be conveniently modeled by finite Gaussian mixtures but face the problem of dealing with small data sets. We propose a robust version of the estimator based on self-regression and sparsity promoting penalization in order to estimate the components of Gaussian mixtures in such contexts. A space alternating version of the penalized EM algorithm is obtained and we prove that its cluster points satisfy the Karush-Kuhn-Tucker conditions. Monte Carlo experiments are presented in order to compare the results obtained by our method and by standard maximum likelihood estimation. In particular, our estimator is seen to perform better than the maximum likelihood estimator.
Again, tracking is also of importance here with A multivariate distribution for subimages by Steve Maybank. The abstract reads:
A new method for obtaining multivariate distributions for sub-images of natural images is described. The information in each sub-image is summarized by a measurement vector in a measurement space. The dimension of the measurement space is reduced by applying a random projection to the truncated output of the discrete cosine transforms of the sub-images. The measurement space is then reparametrized, such that a Gaussian distribution is a good model for the measurement vectors in the reparametrized space. An Ornstein–Uhlenbeck process, associated with the Gaussian distribution, is used to model the differences between measurement vectors obtained from matching sub-images. The probability of a false alarm and the probability of accepting a correct match are calculated. The accuracy of the resulting statistical model for matching sub-images is tested using images from the MIDDLEBURY stereo database with promising results. In particular, if the probability of accepting a correct match is relatively large, then there is good agreement between the calculated and the experimental probabilities of obtaining a unique match that is also a correct match.

In a similar vein, if you have ever tried SIFT, you know that sometimes it does not work as well as expected. There is a new algorithm called ASIFT, developed by Jean Michel Morel and Guoshen Yu, that seems to be very efficient. You can try it directly on two of your own images at: http://mw.cmla.ens-cachan.fr/megawave/demo/asift/

Summary:
A fully affine invariant image comparison method, Affine SIFT (A-SIFT), is introduced. While SIFT is fully invariant with respect to only four parameters, the new method treats the two leftover parameters: the angles defining the camera axis orientation. Against any prognosis, simulating all views depending on these two parameters is feasible with no dramatic computational load. The method permits one to reliably identify features that have undergone large transition tilts, up to 36 and more, while state-of-the-art affine normalization methods hardly exceed transition tilts of 2 (SIFT), 2.5 (Harris-Affine and Hessian-Affine) and 10 (MSER).

References:
J.M. Morel and G.Yu, ASIFT: A New Framework for Fully Affine Invariant Image Comparison, to appear in SIAM Journal on Imaging Sciences, 2009.
G. Yu and J.M. Morel, A Fully Affine Invariant Image Comparison Method, accepted to IEEE ICASSP, Taipei, 2009.
J.M. Morel and G.Yu, On the consistency of the SIFT Method, Preprint, CMLA 2008-26, Sept 2008.
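For readers curious about the mechanics, here is a rough sketch of the affine view simulation at the heart of ASIFT: sample the camera tilt t and longitude angle phi, rotate, anti-alias, and compress one axis by 1/t. The sampling steps and blur constant below are assumptions loosely based on the paper, not the authors' code (use the demo link above for the real thing); standard SIFT matching on the simulated view pairs is omitted.

```python
import cv2
import numpy as np

def simulate_views(img, max_tilt=4.0):
    """Generate ASIFT-style affine-simulated views of a grayscale image."""
    views = [img]
    t = np.sqrt(2.0)
    while t <= max_tilt:
        for phi in np.arange(0.0, 180.0, 72.0 / t):   # longitude sampling (assumed step)
            h, w = img.shape[:2]
            R = cv2.getRotationMatrix2D((w / 2, h / 2), phi, 1.0)
            rot = cv2.warpAffine(img, R, (w, h))
            # anti-alias along the soon-to-be-compressed direction, then
            # shrink the x-axis by 1/t to simulate the camera tilt
            blur = cv2.GaussianBlur(rot, (0, 0), sigmaX=0.8 * np.sqrt(t * t - 1))
            views.append(cv2.resize(blur, (int(w / t), h)))
        t *= np.sqrt(2.0)
    return views
```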


Here are some comparisons between SIFT and ASIFT in videos; this is pretty stunning:






Compare that to SIFT only

Thursday, January 29, 2009

Nuit Blanche

If you are tired of launching a browser window and searching for "nuit blanche" on the Google every morning, or if you worry you might miss an entry of Nuit Blanche, you can get new entries directly in your mailbox by entering your email address in the following box and hitting the Subscribe button:

You will be asked for a confirmation in your mailbox.

You can also add this RSS Feed to your FeedReader's list:

http://nuit-blanche.blogspot.com/feeds/posts/default

CS: Testing the Nullspace Property using SDP, Compressive Coded Aperture Imaging, Informative Sensing, NB

Alexandre d'Aspremont and Laurent El Ghaoui recently updated their preprint entitled: Testing the Nullspace Property using Semidefinite Programming. The abstract reads:
Recent results in compressed sensing show that, under certain conditions, the sparsest solution to an underdetermined set of linear equations can be recovered by solving a linear program. These results rely on nullspace properties of the system matrix. So far, no tractable algorithm is known to test these conditions and most current results rely on asymptotic properties of sparse eigenvalues of random matrices. Given a matrix A, we use semidefinite relaxation techniques to test the nullspace property on A and show on some numerical examples that these relaxation bounds can prove perfect recovery of sparse solutions with relatively high cardinality.
A beautiful summary of the current state of our knowledge on this issue can be found in the paper:
.....Universal conditions for strong recovery based on sparse extremal eigenvalues were derived in Candes and Tao (2005) and Candes and Tao (2006) who also proved that certain (mostly random) matrix classes satisfied these conditions with an exponentially small probability of failure. More recently, Cohen et al. (2006) derived sparse recovery conditions based on properties of the nullspace of A. In particular, if we define:

$\alpha_k = \max_{\{x \,:\, Ax=0,\ \|x\|_1=1\}} \ \max_{\{y \,:\, \|y\|_\infty \le 1,\ \|y\|_0 \le k\}} y^T x,$


Cohen et al. (2006) show that $\alpha_k < 1/2$ guarantees strong recovery.
One key issue with the current sparse recovery conditions in Candes and Tao (2005) or Cohen et al. (2006) is that, except for explicit recovery thresholds available for certain types of random matrices, testing these conditions on generic matrices is potentially harder than solving the combinatorial ℓ0 problem in (1), as it implies either solving a combinatorial problem to compute α_k, or computing sparse eigenvalues. Semidefinite relaxation bounds on sparse eigenvalues were used in d’Aspremont et al. (2008) to test the conditions in Candes and Tao (2005) on arbitrary matrices. In recent independent results, Juditsky and Nemirovski (2008) provide an alternative proof of some of the results in Cohen et al. (2006), extend them to the noisy case and produce a linear programming relaxation bound on α_k with explicit performance bounds. In this paper, we derive a semidefinite relaxation bound on α_k, study its tightness and compare its numerical performance with that of the relaxation in Juditsky and Nemirovski (2008). Because it involves solving a semidefinite program, the complexity of the semidefinite relaxation bound derived here is significantly higher than that of the linear programming based relaxation in Juditsky and Nemirovski (2008) and no explicit performance bounds are available here on matrices satisfying sparse recovery conditions. However, we show on small scale examples that the semidefinite bounds on α_k are often, but not always, tighter than those produced by the linear programming relaxation in Juditsky and Nemirovski (2008)....
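To make α_k concrete, here is a toy numerical sketch. When the nullspace of A happens to be one-dimensional, the two maximizations collapse: the inner maximum is the sum of the k largest |x_i| of the ℓ1-normalized null vector. This only illustrates the quantity being bounded, not the SDP relaxation of the paper.

```python
import numpy as np
from scipy.linalg import null_space

def alpha_k(A, k):
    """alpha_k for a matrix with a one-dimensional nullspace: the max of
    ||x_T||_1 over supports |T| <= k and x in null(A), ||x||_1 = 1, is the
    sum of the k largest |x_i| of the normalized null vector. (For higher
    dimensional nullspaces this is only a lower bound from one vector.)"""
    N = null_space(A)
    assert N.shape[1] == 1, "sketch assumes a 1-D nullspace"
    x = N[:, 0] / np.abs(N[:, 0]).sum()          # normalize to ||x||_1 = 1
    return np.sort(np.abs(x))[::-1][:k].sum()

A = np.random.randn(7, 8)                        # 7x8 -> 1-D nullspace (a.s.)
print([round(alpha_k(A, k), 3) for k in (1, 2, 3)])  # alpha_k < 1/2 => k-sparse recovery
```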

This new document is a substantial revision of the previous draft and it now includes some discussion of the work of Anatoli Juditsky and Arkadi Nemirovski in On Verifiable Sufficient Conditions for Sparse Signal Recovery via l_1 Minimization. As you may recall, this type of work is extremely valuable as it has the potential of producing guidelines for hardware implementers on how to design their Point Spread Functions and so forth. For instance, one may recall that designing a coded aperture mask for the purpose of providing superresolution hinges on the ability to map a mask to a specific type of Toeplitz matrix [1]. But superresolution may be just the beginning of producing designs beyond what we have been able to do for the past 40 years (at least in coded aperture). For this to happen, hardware designers need to have the freedom to tinker with new designs and then be able to ask the question: can I do Compressive Sensing with this design? Currently only the implementation of Anatoli Juditsky and Arkadi Nemirovski is available; it is located at:


(oops, looks like the code is not available today, I'll report when it is) Alexandre d'Aspremont told me that an implementation of their code will eventually come out. [Update: The numerical part of this draft might change in an upcoming version 3. I'll report shortly on the new version of this paper if there are changes, and on the availability of the code of Anatoli Juditsky and Arkadi Nemirovski. This is a really challenging and important problem and I am glad some of the best minds are looking into it ].

In a somewhat related matter, here is the abstract of Compressive Coded Aperture Imaging by Roummel Marcia, Zachary Harmany, and Rebecca Willett:

Nonlinear image reconstruction based upon sparse representations of images has recently received widespread attention with the emerging framework of compressed sensing (CS). This theory indicates that, when feasible, judicious selection of the type of distortion induced by measurement systems may dramatically improve our ability to perform image reconstruction. However, applying compressed sensing theory to practical imaging systems poses a key challenge: physical constraints typically make it infeasible to actually measure many of the random projections described in the literature, and therefore, innovative and sophisticated imaging systems must be carefully designed to effectively exploit CS theory. In video settings, the performance of an imaging system is characterized by both pixel resolution and field of view. In this work, we propose compressive imaging techniques for improving the performance of video imaging systems in the presence of constraints on the focal plane array size. In particular, we describe a novel yet practical approach that combines coded aperture imaging to enhance pixel resolution with superimposing subframes of a scene onto a single focal plane array to increase field of view. Specifically, the proposed method superimposes coded observations and uses wavelet-based sparsity recovery algorithms to reconstruct the original subframes. We demonstrate the effectiveness of this approach by reconstructing with high resolution the constituent images of a video sequence.
Some of the work has already been covered in [1], but the hardware design, the wavelet-based reconstruction, and the attendant results are new and very interesting. I am sure I'll come back to these later.

Also, I mentioned their work last Friday; here is the preprint that introduces us to the concept of "bandwise random projections". I am loving this, as it certainly has major ramifications for the understanding of the primary visual cortex, especially when comparing current modeling efforts with the signal processing aspects. The preprint is Informative Sensing by Hyun Sung Chang, Yair Weiss and William T. Freeman. The abstract reads:

Compressed sensing is a recent set of mathematical results showing that sparse signals can be exactly reconstructed from a small number of linear measurements. Interestingly, for ideal sparse signals with no measurement noise, random measurements allow perfect reconstruction while measurements based on principal component analysis (PCA) or independent component analysis (ICA) do not. At the same time, for other signal and noise distributions, PCA and ICA can significantly outperform random projections in terms of enabling reconstruction from a small number of measurements. In this paper we ask: given the distribution of signals we wish to measure, what are the optimal set of linear projections for compressed sensing? We consider the problem of finding a small number of linear projections that are maximally informative about the signal. Formally, we use the InfoMax criterion and seek to maximize the mutual information between the signal, x, and the (possibly noisy) projection y=Wx. We show that in general the optimal projections are not the principal components of the data nor random projections, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the knowledge of distribution. We present analytic solutions for certain special cases including natural images. In particular, for natural images, the near-optimal projections are bandwise random, i.e., incoherent to the sparse bases at a particular frequency band but with more weights on the low-frequencies, which has a physical relation to the multi-resolution representation of images.
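As a toy illustration of the InfoMax criterion, here is a sketch for the special case of a Gaussian signal model, where the mutual information has the closed form I(x;y) = ½ log det(I + WΣWᵀ/σ²) and can be ascended directly. The covariance, noise level, step size, and unit-row-norm constraint are all my own assumptions for illustration, not the paper's setup.

```python
import numpy as np

def infomax_projections(Sigma, m, sigma2=0.1, steps=300, lr=0.05, seed=0):
    """Gradient ascent on I(x;y) = 0.5*logdet(I + W Sigma W^T / sigma2)
    for Gaussian x with covariance Sigma, keeping rows of W unit norm."""
    rng = np.random.default_rng(seed)
    n = Sigma.shape[0]
    W = rng.standard_normal((m, n))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(steps):
        M = np.eye(m) + W @ Sigma @ W.T / sigma2
        W += lr * np.linalg.solve(M, W @ Sigma) / sigma2   # gradient of I wrt W
        W /= np.linalg.norm(W, axis=1, keepdims=True)      # fixed sensing power
    return W

# a circulant ~1/f^2 covariance, loosely mimicking natural image statistics
n = 64
f = np.abs(np.fft.fftfreq(n)); f[0] = f[1]
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
Sigma = np.real(F.conj().T @ np.diag(1.0 / f**2) @ F)
W = infomax_projections(Sigma, m=8)
```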

An attendant presentation of some of the approach can be found in this entry. Let us thank The Google for bringing this one up.

On a different note, the Compressive Sensing Hardware page now has some videos included, if you know of any other, I'll be glad to include them.

Finally, Emmanuel Candes and Terry Tao wrote in the latest IEEE Information Theory Society Newsletter and made a reference to Nuit Blanche. Thanks Laurent for the heads-up.


Reference:
[1] Roummel Marcia and Rebecca Willett, Compressive Coded Aperture Superresolution Image Reconstruction - additional material can be found here while the slides are here

Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M, sol 151 of Phoenix.

Wednesday, January 28, 2009

CS: Compressive dual photography, Image reconstruction by deterministic CS, Nesterov algorithm

I found the following paper at the Rice repository:


Pradeep Sen and Soheil Darabi, Compressive dual photography. (Computer Graphics Forum, March 2009)

The accurate measurement of the light transport characteristics of a complex scene is an important goal in computer graphics and has applications in relighting and dual photography. However, since the light transport data sets are typically very large, much of the previous research has focused on adaptive algorithms that capture them efficiently. In this work, we propose a novel, non-adaptive algorithm that takes advantage of the compressibility of the light transport signal in a transform domain to capture it with less acquisitions than with standard approaches. To do this, we leverage recent work in the area of compressed sensing, where a signal is reconstructed from a few samples assuming that it is sparse in a transform domain. We demonstrate our approach by performing dual photography and relighting by using a much smaller number of acquisitions than would normally be needed. Because our algorithm is not adaptive, it is also simpler to implement than many of the current approaches.

It looks like an extension of the illumination work performed earlier at Rice and University of Arizona. I will add this project to the Compressive Sensing Hardware page. The following file is in postscript and was found thanks to the Google :-)



Image reconstruction by deterministic compressive sensing by Kangyu Ni, Somantika Datta, Svetlana Roudenko, Douglas Cochran. The abstract reads:
A recently proposed approach for compressive sensing with deterministic measurement matrices is applied to images that possess varying degrees of sparsity in their wavelet representations. The use of these deterministic measurement matrices is found to be approximately as effective as the use of Gaussian random matrices in terms of image reconstruction fidelity. The "fast reconstruction" algorithm enabled by this deterministic sampling scheme produces accurate results, but its speed is hampered when the degree of sparsity is not sufficiently high.

Finally, as Gabriel Peyre mentioned to me, it looks like the algorithm developed using the Nesterov scheme by Pierre Weiss is fast. So here is Pierre Weiss' thesis, entitled Fast algorithms for convex optimization. Applications to image reconstruction and change detection. The abstract reads:

This PhD contains contributions in numerical analysis and in computer vision. In the first part, we focus on the fast resolution, using first order methods, of convex optimization problems. Those problems appear naturally in many image processing tasks like image reconstruction, compressed sensing or texture+cartoon decompositions. They are generally non differentiable or ill-conditioned. We show that they can be solved very efficiently using fine properties of the functions to be minimized. We analyze in a systematic way their convergence rate using recent results due to Y. Nesterov. To our knowledge, the proposed methods correspond to the state of the art of the first order methods. In the second part, we focus on the problem of change detection between two remotely sensed images taken from the same location at two different times. One of the main difficulties in solving this problem is the differences in illumination conditions between the two shots. This leads us to study the illumination invariance of level lines. We completely characterize the 3D scenes which produce invariant level lines. We show that they correspond quite well to urban scenes. Then we propose a variational framework and a simple change detection algorithm which gives satisfying results both on synthetic OpenGL scenes and real Quickbird images.


Credit: NASA, Opportunity, navigation camera, sol 1776.

Tuesday, January 27, 2009

CS: OLS/ROLS/StOLS, Group sparsity, Poisson CS, CS of parameterized shapes, IT meets free discontinuity, An homotopy algorithm for the Lasso


After yesterday's entry, Angshul Majumdar also provided us with a modified version of the original OLS algorithm proposed in:

Their algorithm is implemented in SparseOLS.zip.

Angshul has also added two modified versions

These algorithms are also listed in the Reconstruction Section of the Big Picture. Angshul also tells me that Matlab Central now has a link to the Greedy Algorithms promoting Group Sparsity. Thanks Angshul!


Also found on the interwebs:

Limits of Deterministic Compressed Sensing Considering Arbitrary Orthonormal Basis for Sparsity by Arash Amini and Farokh Marvasti. The abstract reads:

It is previously shown that proper random linear samples of a finite discrete signal (vector) which has a sparse representation in an orthonormal basis make it possible (with probability 1) to recover the original signal. Moreover, the choice of the linear samples does not depend on the sparsity domain. In this paper, we will show that the replacement of random linear samples with deterministic functions of the signal (not necessarily linear) will not result in unique reconstruction of k-sparse signals except for k=1. We will show that there exist deterministic nonlinear sampling functions for unique reconstruction of 1- sparse signals while deterministic linear samples fail to do so.
Minimax risk for Poisson compressed sensing by Rebecca Willett and Maxim Raginsky. The abstract reads:
This paper describes performance bounds for compressed sensing in the presence of Poisson noise and shows that, for sparse or compressible signals, they are within a log factor of known lower bounds on the risk. The signal-independent and bounded noise models used in the literature to analyze the performance of compressed sensing do not accurately model the effects of Poisson noise. However, Poisson noise is an appropriate noise model for a variety of applications, including low-light imaging, in which sensing hardware is large or expensive and limiting the number of measurements collected is important. In this paper, we describe how a feasible sensing matrix can be constructed and prove a concentration-of-measure inequality for these matrices. We then show that minimizing an objective function consisting of a negative Poisson log likelihood term and a penalty term which could be used as a measure of signal sparsity results in near-minimax rates of error decay.
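As a sketch of the kind of objective the paper analyzes (a negative Poisson log-likelihood plus a sparsity penalty), here is a minimal numerical version. With a nonnegative signal and a nonnegative sensing matrix, the ℓ1 penalty reduces to a plain sum, so a box-constrained quasi-Newton solver applies. The matrix construction, penalty weight, and solver choice are my own assumptions, not the paper's feasible-matrix construction.

```python
import numpy as np
from scipy.optimize import minimize

def poisson_cs(A, y, tau=0.5, eps=1e-10):
    """Minimize sum(Ax - y*log(Ax)) + tau*||x||_1 over x >= 0."""
    def obj(x):
        lam = A @ x + eps
        return np.sum(lam - y * np.log(lam)) + tau * np.sum(x)
    def grad(x):
        lam = A @ x + eps
        return A.T @ (1.0 - y / lam) + tau
    x0 = np.full(A.shape[1], y.mean() / A.shape[1])
    res = minimize(obj, x0, jac=grad, method="L-BFGS-B",
                   bounds=[(0, None)] * A.shape[1])
    return res.x

rng = np.random.default_rng(1)
x_true = np.zeros(100); x_true[[5, 40, 77]] = [30, 50, 20]
A = rng.uniform(0, 1, (40, 100)) / 10            # nonnegative toy sensing matrix
y = rng.poisson(A @ x_true)                      # photon-count measurements
print(np.flatnonzero(poisson_cs(A, y) > 1))      # indices of the large recovered entries
```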


The Benefit of Group Sparsity by Junzhou Huang and Tong Zhang. The abstract reads:
This paper develops a theory for group Lasso using a concept called strong group sparsity. Our result shows that group Lasso is superior to standard Lasso for strongly group-sparse signals. This provides a convincing theoretical justification for using group sparse regularization when the underlying group structure is consistent with the data. Moreover, the theory predicts some limitations of the group Lasso formulation that are confirmed by simulation studies.
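To see where the group structure enters algorithmically, here is a hedged sketch of the group soft-thresholding proximal step and a plain ISTA loop around it. This illustrates the group-Lasso penalty in general, not the paper's specific analysis; the group partition and step size are assumptions.

```python
import numpy as np

def group_soft_threshold(z, groups, t):
    """Shrink each group's block toward zero by t in Euclidean norm,
    zeroing whole groups at once (the group-Lasso prox)."""
    x = np.zeros_like(z)
    for g in groups:
        nrm = np.linalg.norm(z[g])
        if nrm > t:
            x[g] = (1 - t / nrm) * z[g]
    return x

def ista_group_lasso(A, y, groups, lam, steps=200):
    """Plain ISTA with the group prox; step size 1/L with L = ||A||_2^2."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = group_soft_threshold(x - A.T @ (A @ x - y) / L, groups, lam / L)
    return x
```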

The Rice repository also featured the following preprints and papers over the weekend, which I have not covered before:

Compressive image fusion by Tao Wan, Nishan Canagarajah, and Alin Achim. The abstract reads:
Compressive sensing (CS) has received a lot of interest due to its compression capability and lack of complexity on the sensor side. In this paper, we present a study of three sampling patterns and investigate their performance on CS reconstruction. We then propose a new image fusion algorithm in the compressive domain by using an improved sampling pattern. There are few studies regarding the applicability of CS to image fusion. The main purpose of this work is to explore the properties of compressive measurements through different sampling patterns and their potential use in image fusion. The study demonstrates that CS-based image fusion has a number of perceived advantages in comparison with image fusion in the multiresolution (MR) domain. The simulations show that the proposed CS-based image fusion algorithm provides promising results.

Compressive sensing of parameterized shapes in images by Ali Gurbuz, James McClellan, Justin Romberg, and Waymond Scott, Jr. The abstract reads:
Compressive Sensing (CS) uses a relatively small number of non-traditional samples in the form of randomized projections to reconstruct sparse or compressible signals. The Hough transform is often used to find lines and other parameterized shapes in images. This paper shows how CS can be used to find parameterized shapes in images, by exploiting sparseness in the Hough transform domain. The utility of the CS-based method is demonstrated for finding lines and circles in noisy images, and then examples of processing GPR and seismic data for tunnel detection are presented.

Iterative thresholding meets free discontinuity problems by Massimo Fornasier and Rachel Ward. The abstract reads:

Free-discontinuity problems describe situations where the solution of interest is defined by a function and a lower dimensional set consisting of the discontinuities of the function. Hence, the derivative of the solution is assumed to be a ‘small’ function almost everywhere except on sets where it concentrates as a singular measure. This is the case, for instance, in crack detection from fracture mechanics or in certain digital image segmentation problems. If we discretize such situations for numerical purposes, the free-discontinuity problem in the discrete setting can be re-formulated as that of finding a derivative vector with small components at all but a few entries that exceed a certain threshold. This problem is similar to those encountered in the field of ‘sparse recovery’, where vectors with a small number of dominating components in absolute value are recovered from a few given linear measurements via the minimization of related energy functionals. Several iterative thresholding algorithms that intertwine gradient-type iterations with thresholding steps have been designed to recover sparse solutions in this setting. It is natural to wonder if and/or how such algorithms can be used towards solving discrete free-discontinuity problems. The current paper explores this connection, and, by establishing an iterative thresholding algorithm for discrete free-discontinuity problems, provides new insights on properties of minimizing solutions thereof.

An homotopy algorithm for the lasso with online observations by Pierre Garrigues and Laurent El Ghaoui. The abstract reads:

It has been shown that the problem of l1-penalized least-square regression commonly referred to as the Lasso or Basis Pursuit DeNoising leads to solutions that are sparse and therefore achieves model selection. We propose in this paper RecLasso, an algorithm to solve the Lasso with online (sequential) observations. We introduce an optimization problem that allows us to compute an homotopy from the current solution to the solution after observing a new data point. We compare our method to Lars and Coordinate Descent, and present an application to compressive sensing with sequential observations. Our approach can easily be extended to compute an homotopy from the current solution to the solution that corresponds to removing a data point, which leads to an efficient algorithm for leave-one-out cross-validation. We also propose an algorithm to automatically update the regularization parameter after observing a new data point.

The poster, and attendant python code are listed on Pierre Garrigues' page. This implementation is also listed in the Reconstruction Section of the Big Picture.

Finally, I missed this BAA with a postdoc proposal on page 48 (BAA # HM1582-09-BAA-0001), but it is interesting to read how one would go from a current hardware system to a CS one:

8.25. LIDAR Compressed Sensing and Denoising
Proposed Research Project:

LIDAR sensors generate significant amounts of data at the acquisition stage, thus making the transmittal, processing, storage, dissemination, and visualization difficult to manage. The goal of this research is to investigate the feasibility of employing compressed sensing techniques for LIDAR sensors at the data acquisition stage. That is, reduce the data on the sensor, thereby easing the demands for transmission and storage, while retaining the ability to reassemble the full point cloud data set with minimal error.

Technical Objectives:

This project seeks the following objectives:
  • Determine the feasibility of collecting LIDAR single point and/or point cloud data using compressed sensing techniques.
  • Determine which representation should be employed both for sampling and for reconstruction. Is it best to use a fixed set of vectors (Fourier, Wavelets, a combination of these, etc.) or can these be learned from the data?
  • Determine to what extent the feasibility, and the optimal method, depends on the application (e.g., in urban vs. rural scenes how do we find the smallest features within those environments?).
  • What are the computational demands of the compressed sensing technique, at the time of collection (on-board processing requirements) and for reconstruction?
  • How robust can a LIDAR compressed sensing technique be with respect to noise, spurious returns, scene clutter, scene content, and features of varying scale?
  • Would the proposed technique introduce compression artifacts that could make further processing difficult?
Goals:
The anticipated outcome of this research includes a technical document with mathematical details describing the above findings. Assuming that compressed sensing of LIDAR is feasible, we also expect recommendations for how this may best be accomplished.


Image Credit: NASA/JPL-Caltech, Looking Back at Victoria Crater, August 2008.

Monday, January 26, 2009

CS: BOMP/GOMP/stGOMP/ReGOMP, New version of Sparsify, New release of DCS, Compressive Coded Aperture, CMP

The previous entry elicited some interesting comments; it reads like a back-and-forth exchange in a review process. Thank you Kevin for responding!

In the meantime, Angshul Majumdar mentioned to me that within a few days the following algorithm implementations will be released through Matlab Central. While we are waiting for that submission process to go through, the algorithms are available on Nuit Blanche with Angshul's blessing:

...Here are some greedy algorithms for Group Sparsity, following the works of Eldar et al.:

[1] Y. C. Eldar and M. Mishali, "Robust Recovery of Signals From a Union of Subspaces", arXiv.org 0807.4581, submitted to IEEE Trans. Inform. Theory, July 2008.
[2] Y. C. Eldar and H. Bolcskei, "Block Sparsity: Coherence and Efficient Recovery," to appear in ICASSP 2009.

I have implemented their algorithm - Block Orthogonal Matching Pursuit (BOMP), and three of my own, namely:
  • GOMP - Group Orthogonal Matching Pursuit (similar to BOMP, except that it selects the group by the average correlation of each group instead of the highest correlation; see the sketch after this list)
  • StGOMP - Stagewise GOMP; combining ideas of BOMP with StOMP
  • ReGOMP - Regularized GOMP; combining ideas of ROMP and GOMP.
All these codes are accessible here. Thank you Angshul!
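While we wait for the Matlab Central release, here is a rough Python transcription of the GOMP selection rule as described above (group chosen by average correlation, then least squares on the selected columns). It is a sketch derived from the one-line description, not Angshul's actual code.

```python
import numpy as np

def gomp(A, y, groups, n_groups, tol=1e-6):
    """Group OMP sketch: pick the not-yet-selected group whose columns have
    the largest average correlation with the residual, then re-solve least
    squares on all selected columns. `groups` is a list of index arrays."""
    support, r = [], y.copy()
    for _ in range(n_groups):
        scores = [-np.inf if i in support else np.mean(np.abs(A[:, g].T @ r))
                  for i, g in enumerate(groups)]
        support.append(int(np.argmax(scores)))
        cols = np.concatenate([groups[i] for i in support])
        x_s, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        r = y - A[:, cols] @ x_s
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(A.shape[1])
    x[cols] = x_s
    return x
```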

Thomas Blumensath has released a new version of Sparsify (version 0.4). From the Sparsify page (the page still says version 0.3 but scroll down and you'll see the following):
21 January 2009: New release of the toolbox. All that has changed is that I have now included an automatic step-size determination method in the Iterative Hard Thresholding algorithm (hard_lo_Mterm).
This modification was announced in December 2008 at the Sparsity workshop in Cambridge and will be discussed in two upcoming papers [11] and [12], which I will make available soon.

[11] T. Blumensath and M. Davies; "Iterative Hard Thresholding for Compressed Sensing" to appear in Applied and Computational Harmonic Analysis
[12] T. Blumensath and M. Davies; "A modified Iterative Hard Thresholding algorithm with guaranteed performance and stability" in preparation (title may change)

Latest version:

sparsify_0_4.zip (1.1 MB)
sparsity 0.4 (1.4 MB)

Both of these codes and version numbers have been added/changed in the Reconstruction Section of the Big Picture.
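For reference, the core of iterative hard thresholding fits in a few lines. The adaptive step size below follows my reading of the normalized-IHT idea announced for this release (to be detailed in [11]); it is a sketch, not Thomas Blumensath's code, and the restriction of the step-size formula to the current best support is an assumption on my part.

```python
import numpy as np

def iht(A, y, k, steps=100):
    """Iterative Hard Thresholding: gradient step, then keep the k largest
    entries. mu is chosen as the locally optimal step for the gradient
    restricted to the current candidate support S."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (y - A @ x)
        S = np.argsort(np.abs(x if np.any(x) else g))[-k:]
        Ag = A[:, S] @ g[S]
        mu = (g[S] @ g[S]) / max(Ag @ Ag, 1e-12)
        x = x + mu * g
        small = np.argsort(np.abs(x))[:-k]        # hard threshold: zero all but k
        x[small] = 0.0
    return x
```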

The following important paper was first submitted in 2005, but a major new revision has come out. It is entitled Distributed Compressive Sensing by Dror Baron, Marco Duarte, Michael Wakin, Shriram Sarvotham, and Richard Baraniuk. The abstract reads:
Compressive sensing is a signal acquisition framework based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable recovery. In this paper we introduce a new theory for distributed compressive sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. Our theoretical contribution is to characterize the fundamental performance limits of DCS recovery for jointly sparse signal ensembles in the noiseless measurement setting; our result connects single-signal, joint, and distributed (multi-encoder) compressive sensing. To demonstrate the efficacy of our framework and to show that additional challenges such as computational tractability can be addressed, we study in detail three example models for jointly sparse signals. For these models, we develop practical algorithms for joint recovery of multiple signals from incoherent projections. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. DCS is immediately applicable to a range of problems in sensor arrays and networks.
Here is just an abstract about a new paper on another very interesting subject: Compressive coded aperture imaging by Roummel Marcia, Zachary Harmany, and Rebecca Willett. The abstract reads:
Nonlinear image reconstruction based upon sparse representations of images has recently received widespread attention with the emerging framework of compressed sensing (CS). This theory indicates that, when feasible, judicious selection of the type of distortion induced by measurement systems may dramatically improve our ability to perform image reconstruction. However, applying compressed sensing theory to practical imaging systems poses a key challenge: physical constraints typically make it infeasible to actually measure many of the random projections described in the literature, and therefore, innovative and sophisticated imaging systems must be carefully designed to effectively exploit CS theory. In video settings, the performance of an imaging system is characterized by both pixel resolution and field of view. In this work, we propose compressive imaging techniques for improving the performance of video imaging systems in the presence of constraints on the focal plane array size. In particular, we describe a novel yet practical approach that combines coded aperture imaging to enhance pixel resolution with superimposing subframes of a scene onto a single focal plane array to increase field of view. Specifically, the proposed method superimposes coded observations and uses wavelet-based sparsity recovery algorithms to reconstruct the original subframes. We demonstrate the effectiveness of this approach by reconstructing with high resolution the constituent images of a video sequence.
I am sure I'll feature it more prominently when it comes out.

Finally, there is another greedy algorithm described in Complementary Matching Pursuit Algorithms for Sparse Approximation by Gagan Rath and Christine Guillemot. The abstract reads:
Sparse coding in a redundant basis is of considerable interest in many areas of signal processing. The problem generally involves solving an under-determined system of equations under a sparsity constraint. Except for the exhaustive combinatorial approach, there is no known method to find the exact solution for general dictionaries. Among the various algorithms that find approximate solutions, pursuit algorithms are the most well-known. In this paper, we introduce the concept of a complementary matching pursuit (CMP). Unlike the classical matching pursuit (MP), which selects the atoms in the signal space, the CMP selects the atoms in the coefficient space. On a conceptual level, the MP searches for 'the solution vector among sparse vectors' whereas the CMP searches for 'the sparse vector among the solution vectors'. We assume that the observations can be expressed as pure linear sums of atoms without any additive noise. As a consequence of the complementary actions in the coefficient space, the CMP does not minimize the residual error at each iteration, however it may converge faster yielding sparser solution vectors than the MP. We show that when the dictionary is a tight frame, the CMP is equivalent to the MP. We also present the orthogonal extensions of the CMP and show that they perform the complementary actions to those of their classical matching pursuit counterparts.


Image Credit: NASA/JPL/Space Science Institute. Photo of Saturn from the Cassini spacecraft on Friday when this entry was written.

Friday, January 23, 2009

CS: Joint Reconstruction, Matrix completion, quasiconvex homotopic reg., Bit Precision Analysis, SRIP, RIP, Distortion-Rate Functions for Quantized CS

What if Compressive Sensing allowed us to go through airport security gates faster? Some of the folks at Northeastern University are looking into it. In the meantime, let's look at the batch you'll be reading this weekend, as this entry has many papers found thanks to Google, my webcrawler, or the Rice Compressive Sensing site.


Looks like we have an improvement over a previous contribution in Matrix Completion from a Few Entries by Raghunandan Keshavan, Sewoong Oh and Andrea Montanari. The abstract reads:
Let $M$ be a random $n\alpha \times n$ matrix of rank $r$, and assume that a uniformly random subset $E$ of its entries is observed. We describe an efficient algorithm that reconstructs $M$ from $|E| = O(rn)$ observed entries. More precisely, for any $\delta > 0$, there exists $C(\delta, \alpha) < \infty$ such that, if $|E| \ge C(\delta, \alpha)\, rn$, then $M$ can be reconstructed with root mean square error smaller than $\delta$. Further, if $|E| \ge C'(\alpha)\, rn \log n$, $M$ can be reconstructed exactly with high probability. This settles a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is $O(|E| r \log n)$, which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemerédi and Feige-Ofek on the spectrum of sparse random matrices.
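To give a flavor of the approach, here is a sketch of the spectral first stage common to algorithms of this type: rescale the zero-filled observed matrix by the inverse sampling rate and project onto rank r via a truncated SVD. The authors' full algorithm adds trimming of over-sampled rows/columns and a gradient descent on the Grassmann manifold, both omitted here; sizes are toy values.

```python
import numpy as np

def spectral_completion(M_obs, mask, r):
    """Rescaled zero-filled SVD projection onto rank r (first stage only)."""
    n1, n2 = M_obs.shape
    scale = n1 * n2 / mask.sum()                 # inverse of the sampling rate
    U, s, Vt = np.linalg.svd(scale * (M_obs * mask), full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 200))  # rank 2
mask = rng.random(M.shape) < 0.15                                  # observe 15%
M_hat = spectral_completion(M * mask, mask, r=2)
print(np.sqrt(np.mean((M_hat - M) ** 2)))        # RMSE of the spectral estimate
```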
The video pictured can be found here (46 MB).

Of related interest is On the Calculation of the l2 → l1 Induced Matrix Norm by Konstantinos Drakakis and Barak A. Pearlmutter and Nuclear norm minimization for the planted clique and biclique problems by Brendan Ames and Stephen Vavasis.

Here is a paper that complements well the joint manifolds we saw earlier from the Rice folks. While the joint manifold is very interesting, in the end image reconstruction is a requisite for some applications. This is done in Joint Reconstruction of Compressed Multi-view Images by Xu Chen and Pascal Frossard. The abstract reads:
This paper proposes a distributed representation algorithm for multi-view images that are jointly reconstructed at the decoder. Compressed versions of each image are first obtained independently with random projections. The multiple images are then jointly reconstructed by the decoder, under the assumption that the correlation between images can be represented by local geometric transformations. We build on the compressed sensing framework and formulate the joint reconstruction as a l2-l1 optimization problem. It tends to minimize the MSE distortion of the decoded images, under the constraint that these images have sparse and correlated representations over a structured dictionary of atoms. Simulation results with multi-view images demonstrate that our approach achieves better reconstruction results than independent decoding. Moreover, we show the advantage of structured dictionaries for capturing the geometrical correlation between multi-view images.

As an echo to yesterday's l_0 papers, here is Sparse reconstructions via quasiconvex homotopic regularization by Ali Bilgin and Kevin LaTourette (also check the attendant presentation). The abstract reads:

The use of homotopic approximations to the L0 `norm' has been shown to allow for accurate image reconstructions from sparse data at much higher levels of undersampling than previous techniques have allowed. In this work, I introduce the wavelet transform as a sparsity prior in this context and demonstrate that it can be used to accurately reconstruct images acquired in the MRI setting. Further, I present a proof that for the chosen class of quasiconvex homotopic functions we are guaranteed to converge to a global minimum given that a local minimum exists. Finally, I investigate the application of a continuation scheme in the regularization parameter and its effects on image reconstruction quality.




In a different area, Bit Precision Analysis for Compressed Sensing by Ehsan Ardestanizadeh, Mahdi Cheraghchi, Amin Shokrollahi. The abstract reads:

This paper studies the stability of some reconstruction algorithms for compressed sensing in terms of the bit precision. Considering the fact that practical digital systems deal with discretized signals, we motivate the importance of the total number of accurate bits needed from the measurement outcomes in addition to the number of measurements. It is shown that if one uses a $2k \times n$ Vandermonde matrix with roots on the unit circle as the measurement matrix, $O(\ell + k \log(n/k))$ bits of precision per measurement are sufficient to reconstruct a $k$-sparse signal $x \in \mathbb{R}^n$ with dynamic range (i.e., the absolute ratio between the largest and the smallest nonzero coefficients) at most $2^\ell$ within $\ell$ bits of precision, hence identifying its correct support. Finally, we obtain an upper bound on the total number of required bits when the measurement matrix satisfies a restricted isometry property, which is in particular the case for random Fourier and Gaussian matrices. For very sparse signals, the upper bound on the number of required bits for Vandermonde matrices is shown to be better than this general upper bound.

and The statistical restricted isometry property and the Wigner semicircle distribution of incoherent dictionaries by Shamgar Gurevich, Ronny Hadani. The abstract reads:
In this article we present a statistical version of the Candes-Tao restricted isometry property (SRIP for short) which holds in general for any incoherent dictionary which is a disjoint union of orthonormal bases. In addition, we show that, under appropriate normalization, the eigenvalues of the associated Gram matrix fluctuate around 1 according to the Wigner semicircle distribution. The result is then applied to various dictionaries that arise naturally in the setting of finite harmonic analysis, giving, in particular, a better understanding of a remark of Applebaum-Howard-Searle-Calderbank concerning RIP for the Heisenberg dictionary of chirp-like functions.
Also by the same authors: Incoherent dictionaries and the statistical restricted isometry property.
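A quick numerical peek at the statement is easy to set up. Below, the union of the identity and DCT bases stands in as an incoherent dictionary (my choice for illustration; the paper works with dictionaries such as the Heisenberg dictionary of chirps), and we look at how the Gram eigenvalues of random sub-dictionaries cluster around 1.

```python
import numpy as np
from scipy.fft import dct

n, k, trials = 256, 32, 300
D = np.hstack([np.eye(n), dct(np.eye(n), norm="ortho")])  # union of two ONBs
rng = np.random.default_rng(0)
eigs = []
for _ in range(trials):
    S = rng.choice(2 * n, size=k, replace=False)          # random k-column sub-dictionary
    G = D[:, S].T @ D[:, S]                               # its Gram matrix
    eigs.extend(np.linalg.eigvalsh(G))
print(np.mean(eigs), np.std(eigs))   # eigenvalues concentrate around 1
```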


Linear Transformations and Restricted Isometry Property by Leslie Ying, Yi Ming Zou. The abstract reads:

The Restricted Isometry Property (RIP) introduced by Candes and Tao is a fundamental property in compressed sensing theory. It says that if a sampling matrix satisfies the RIP of certain order proportional to the sparsity of the signal, then the original signal can be reconstructed even if the sampling matrix provides a sample vector which is much smaller in size than the original signal. This short note addresses the problem of how a linear transformation will affect the RIP. This problem arises from the consideration of extending the sensing matrix and the use of compressed sensing in different bases. As an application, the result is applied to the redundant dictionary setting in compressed sensing.

Next is Distortion-Rate Functions for Quantized Compressive Sensing by Wei Dai, Hoa Vinh Pham, and Olgica Milenkovic. The abstract reads:

We study the average distortion introduced by quantizing compressive sensing measurements. Both uniform quantization and non-uniform quantization are considered. The asymptotic distortion-rate functions are obtained when the measurement matrix belongs to certain random ensembles. A new modification of greedy reconstruction algorithm that accommodates quantization errors is proposed and its performance is evaluated through extensive computer simulations.


Related is the subspace pursuit algorithm of Wei Dai and Olgica Milenkovic. The abstract reads:

We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of LP optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean squared error of the reconstruction is upper bounded by constant multiples of the measurement and signal perturbation energies.
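The algorithm itself fits in a dozen lines. Here is a sketch of the iteration as commonly described (my transcription, not the authors' implementation): merge the current size-K support with the K columns most correlated with the residual, solve least squares, and keep the K largest coefficients.

```python
import numpy as np

def subspace_pursuit(A, y, K, iters=30):
    """Subspace pursuit sketch: support refinement via merge / LS / prune."""
    S = np.argsort(np.abs(A.T @ y))[-K:]                 # initial support
    for _ in range(iters):
        x_s, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ x_s
        T = np.union1d(S, np.argsort(np.abs(A.T @ r))[-K:])   # merge
        x_t, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        S_new = T[np.argsort(np.abs(x_t))[-K:]]               # prune to K
        if np.array_equal(np.sort(S_new), np.sort(S)):
            break
        S = S_new
    x = np.zeros(A.shape[1])
    x[S], *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    return x
```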

An interesting presentation by Yair Weiss, Hyun Sung Chang and William Freeman is entitled Computational Photography, Computational Neuroscience, and Random Projections. They give a different answer to the idea I mentioned a while back; this is great. In particular, they talk about their Uncertain Component Analysis as featured in their previous paper entitled Learning Compressed Sensing. I note their use of George Costanza in the slides (like I did earlier).

Sparse Representation for the Classification of Tumors Using Gene Expression Data by Xiyi Hang, and Fang-Xiang Wu. The abstract reads:
Personalized drug design requires the classification of cancer patients as accurately as possible. With advances in genome sequencing and microarray technology, a large amount of gene expression data has been and will continuously be produced from various cancerous patients. Such cancer-alerted gene expression data allows us to classify tumors at the genome-wide level. However, cancer-alerted gene expression datasets typically have many more genes (features) than samples (patients), which imposes a challenge for classification of tumors. In this paper, a new method is proposed for cancer diagnosis using gene expression data by casting the classification problem as finding sparse representations of test samples with respect to training samples. The sparse representation is computed by the l1-regularized least square method. To investigate its performance, the proposed method is applied to six tumor gene expression datasets and compared with various support vector machine (SVM) methods. The experimental results have shown that the performance of the proposed method is comparable with or better than those of SVMs. In addition, the proposed method is more efficient than SVMs as it has no need of model selection.
Of interest for the Compressive Sensing Hardware page, which I have revamped a little bit, is the following older paper: An architecture for the efficient implementation of compressive sampling reconstruction algorithms in reconfigurable hardware by Fernando E. Ortiz, Eric J. Kelmelis and Gonzalo R. Arce. The abstract reads:
According to the Shannon-Nyquist theory, the number of samples required to reconstruct a signal is proportional to its bandwidth. Recently, it has been shown that acceptable reconstructions are possible from a reduced number of random samples, a process known as compressive sampling. Taking advantage of this realization has radical impact on power consumption and communication bandwidth, crucial in applications based on small/mobile/unattended platforms such as UAVs and distributed sensor networks. Although the benefits of these compression techniques are self-evident, the reconstruction process requires the solution of nonlinear signal processing algorithms, which limit applicability in portable and real-time systems. In particular, (1) the power consumption associated with the difficult computations offsets the power savings afforded by compressive sampling, and (2) limited computational power prevents these algorithms to maintain pace with the data-capturing sensors, resulting in undesirable data loss. FPGA based computers offer low power consumption and high computational capacity, providing a solution to both problems simultaneously. In this paper, we present an architecture that implements the algorithms central to compressive sampling in an FPGA environment. We start by studying the computational profile of the convex optimization algorithms used in compressive sampling. Then we present the design of a pixel pipeline suitable for FPGA implementation, able to compute these algorithms.

On his blog, Paul Deignan wonders what wording should be used for compressive sensing. Send him your suggestions.

IPAM has a short introduction to the Bregman scheme used to do reconstruction in CS here. Yin Zhang will give a talk at Rice on Monday (it is on the CS calendar).

Finally, here is a homework assignment on CS by Subhabrata Bhattacharya at UCF.

Thursday, January 22, 2009

CS: l_0 work, l_p, SL0, AST, LoCOMP, CS and tomography, Do we need benchmarks ? and a job

This entry has an l_0 flavor to it. I just found Average performance of the approximation in a dictionary using an ℓ0 objective by Francois Malgouyres and Mila Nikolova. The abstract reads:

We consider the minimization of the number of non-zero coefficients (the ℓ0 “norm”) of the representation of a data set in a general dictionary under a fidelity constraint. This (nonconvex) optimization problem leads to the sparsest approximation. The average performance of the model consists in the probability (on the data) to obtain a K-sparse solution—involving at most K nonzero components—from data uniformly distributed on a domain. These probabilities are expressed in terms of the parameters of the model and the accuracy of the approximation. We comment on the obtained formulas and give a simulation.

Here are some accepted papers for ICASSP 09. Here is one: Thresholded Smoothed-L0 (SL0) Dictionary Learning for Sparse Representations by Hadi Zayyani, Massoud Babaie-Zadeh and Remi Gribonval. The abstract reads:

In this paper, we suggest using a modified version of the Smoothed-l0 (SL0) algorithm in the sparse representation step of iterative dictionary learning algorithms. In addition, we use a steepest descent for updating the non-unit column-norm dictionary instead of a unit column-norm dictionary. Moreover, to do the dictionary learning task more blindly, we estimate the average number of active atoms in the sparse representation of the training signals, while previous algorithms assumed that it is known in advance. Our simulation results show the advantages of our method over K-SVD in terms of complexity and performance.
I look forward to implementations of ATH-SL0 (Amplitude THresholded SL0) and KTH-SL0 (KTHresholded SL0). I am adding this paper to the list of sparse dictionaries in the big picture.
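For context, the sparse representation step the paper modifies is the SL0 iteration of Mohimani, Babaie-Zadeh and Jutten: approximate the ℓ0 norm by a sum of Gaussians, take a few gradient steps per smoothing level, project back onto the constraint set, and anneal. The parameter values below are my own assumptions, not the paper's.

```python
import numpy as np

def sl0(A, y, sigma_min=0.01, sigma_decrease=0.7, mu=2.0, inner=3):
    """Basic Smoothed-L0: maximize sum(exp(-x^2/2sigma^2)) over {x : Ax = y}
    while annealing sigma toward zero."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                          # minimum-l2 feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            delta = x * np.exp(-x**2 / (2 * sigma**2))
            x = x - mu * delta              # gradient step on the smooth ell0 proxy
            x = x - A_pinv @ (A @ x - y)    # project back onto Ax = y
        sigma *= sigma_decrease
    return x
```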

From the Rice Compressive Sensing repository, we have a new paper entitled Reconstruction in Compressive Sensing Using Affine Scaling Transformations with Variable-p Diversity Measure by Rufino J. Domínguez, Sergio D. Cabrera, J. Gerardo Rosiles and Javier Vega-Pineda. The abstract reads:

The Affine Scaling Transformation (AST) family of algorithms can solve the minimization of the $\ell_{p<1}$ p-norm-like diversity measure for an underdetermined linear inverse problem. The AST algorithms can therefore be used to solve the sparse signal recovery problem that arises in Compressive Sensing. In this paper, we continue to investigate the application of the iterative AST family of algorithms with a dynamical adjustment of the p parameter to improve convergence speed and signal recovery accuracy. In our previous work we experimentally determined that any p in [0, 1] can give the sparse solution when exact recovery is possible; however, the behavior of the algorithm is highly dependent on this parameter. In addition, the best approximation error, for those cases where exact recovery is not possible, is also highly dependent on p. Using various criteria, we propose and evaluate some strategies to vary the values of p as a function of the iteration in the AST algorithm. The goal in these strategies for a variable-p AST algorithm is to capture the benefits of the p=0 and the p=1 fixed-p approaches simultaneously.

This method treats the index p of the semi-norm l_p as a variable. This is very new and very astute. Here is a figure showing the distribution of the p values used during a computation; as one can see, many iterations occur in the region close to p = 0.



This graph is from a previous paper: Sergio D. Cabrera, Rufino Dominguez, J. Gerardo Rosiles, and Javier Vega-Pineda, Variable-p affine scaling transformation algorithms for improved compressive sensing.

On a different note, there is another ICASSP paper entitled: A Low Complexity Orthogonal Matching Pursuit for Sparse Signal Approximation with Shift-Invariant Dictionaries, by Boris Mailhé, Remi Gribonval, Frederic Bimbot, and Pierre Vandergheynst. The abstract reads:

We propose a variant of Orthogonal Matching Pursuit (OMP), called LoCOMP, for scalable sparse signal approximation. The algorithm is designed for shift-invariant signal dictionaries with localized atoms, such as time-frequency dictionaries, and achieves approximation performance comparable to OMP at a computational cost similar to Matching Pursuit. Numerical experiments with a large audio signal show that, compared to OMP and Gradient Pursuit, the proposed algorithm runs in over 500 times less time while leaving the approximation error almost unchanged.
Pierre Vandergheynst also has a presentation on compressed sensing from his perspective.

Laurent Duval mentioned to me the presentation by Thomas Capricelli entitled Compressed Sensing and Tomography. The author also recently completed his thesis entitled: Generalized convex projection algorithms and applications to medical imaging. The abstract of the thesis reads:

This thesis focuses on the use of generalized projection operators in convex optimization algorithms and on their applications to medical imaging. We describe various extensions of the notion of a projection onto a closed convex set in a Hilbert space. These include subgradient projectors, proximity operators, and Bregman projectors. We propose a new generalized projection algorithm for projecting onto a countable intersection of closed convex sets and prove its convergence. This contribution unifies several existing results on the convergence of projection methods in Hilbert spaces. We then study original applications of projection methods in medical imaging. We propose a new strategy to incorporate Poisson noise in continuous tomography, as well as a new approach to discrete tomography via convex programming and total variation. We also discuss the connections between total variation, compressive sensing, and tomographic reconstruction. Finally, we present what seem to be the first numerical results on the use of Bregman distances in projection algorithms. The software tools that have been developed to carry out this work have been designed so as to be made available to the scientific community.

The thesis is in French and chapter 5 discusses the compressed sensing aspect of tomography. I note the following at the end of the chapter and a similar statement at the end of the presentation:
The experiment from Candes, Romberg and Tao must be understood as a reconstruction from a sub-sampling of the Fourier domain, without noise and using a mask adapted to simple signals.

Eventually, this really raises the problem of having good benchmark cases against which people can judge their reconstruction algorithms and implementations. We, as a community, probably need to add more problem cases to SPARCO. I haven't talked to Michael Friedlander or Ewout van den Berg about this issue, but I am sure they are looking forward to good benchmarks to add to the problems section. Specifically, there is a unique opportunity for hardware developers to provide some of these benchmarks... wink, wink :-) Also, let me restate this again: SPARCO is solver agnostic.

While we are on the subject of tomography, here is another paper, on quantum tomography; I am not quite sure what it entails, but it sounds fascinating:


For an initially well designed but imperfect quantum information system, the process matrix is almost sparse in an appropriate basis. Existing theory and associated computational methods (ℓ1-norm minimization) for reconstructing sparse signals establish conditions under which the sparse signal can be perfectly reconstructed from a very limited number of measurements (resources). Although a direct extension to quantum process tomography of the ℓ1-norm minimization theory has not yet emerged, the numerical examples presented here, which apply ℓ1-norm minimization to quantum process tomography, show a significant reduction in resources to achieve a desired estimation accuracy over existing methods.



On a totally unrelated note, the University of Waterloo is seeking an assistant professor in the sub-area of sparse sampling / compressed sensing. It is listed in the Compressive Sensing Jobs section.

Wednesday, January 21, 2009

CS: KF-CS code, Weighted l1 with prior, Comment on CGD, Cognitive Radio, a Postdoc, TSW-CS, Impact and Trends and a Video.

Namrata Vaswani just released two papers on a Kalman filtered reconstruction solver. The first is Analyzing Least Squares and Kalman Filtered Compressed Sensing. The abstract reads:

In recent work, we studied the problem of causally reconstructing time sequences of spatially sparse signals, with unknown and slow time-varying sparsity patterns, from a limited number of linear “incoherent” measurements. We proposed a solution called Kalman Filtered Compressed Sensing (KF-CS). The key idea is to run a reduced order KF only for the current signal’s estimated nonzero coefficients’ set, while performing CS on the Kalman filtering error to estimate new additions, if any, to the set. KF may be replaced by Least Squares (LS) estimation and we call the resulting algorithm LS-CS. In this work, (a) we bound the error in performing CS on the LS error and (b) we obtain the conditions under which the KF-CS (or LS-CS) estimate converges to that of a genie-aided KF (or LS), i.e. the KF (or LS) which knows the true nonzero sets.

The second is Real-time Dynamic MR Image Reconstruction using Kalman Filtered Compressed Sensing by Chenlu Qiu, Wei Lu and Namrata Vaswani. The abstract reads:
In recent work, Kalman Filtered Compressed Sensing (KF-CS) was proposed to causally reconstruct time sequences of sparse signals, from a limited number of “incoherent” measurements. In this work, we develop the KF-CS idea for causal reconstruction of medical image sequences from MR data. This is the first real application of KF-CS and is considerably more difficult than simulation data for a number of reasons, for example, the measurement matrix for MR is not as “incoherent” and the images are only compressible (not sparse). Greatly improved reconstruction results (as compared to CS and its recent modifications) on reconstructing cardiac and brain image sequences from dynamic MR data are shown.
The code implementation can be found on the KF-CS page. I wonder how this solver is different from the more recent papers featured last week from IBM. Kudos to Namrata Vaswani for releasing the implementation.
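To give a flavor of the idea without restating the released code, here is a hedged numpy sketch of the simpler LS-CS variant: least squares restricted to the current estimated support, plus a thresholded detection step on the measurement residual standing in for the CS step that catches additions to the support. Matrix sizes, the threshold tau, and the toy support dynamics are all invented:

```python
import numpy as np

def ls_cs_step(A, y, support, tau):
    # LS on the current support...
    S = sorted(support)
    x = np.zeros(A.shape[1])
    if S:
        sol, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        x[S] = sol
    # ...then detect new support entries from the residual correlations
    # (a cheap stand-in for running CS on the filtering error)
    corr = np.abs(A.T @ (y - A @ x))
    new = set(np.flatnonzero(corr > tau)) - support
    if new:
        support = support | new
        S = sorted(support)
        sol, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[S] = sol
    return x, support

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
support_true, support_est = [10, 50], set()
for t in range(5):                      # slowly time-varying sparsity pattern
    if t == 3:
        support_true.append(120)        # a new nonzero coefficient appears
    x_true = np.zeros(200)
    x_true[support_true] = 2.0
    y = A @ x_true
    x_hat, support_est = ls_cs_step(A, y, support_est, tau=1.2)
    print(t, sorted(map(int, support_est)),
          round(float(np.linalg.norm(x_hat - x_true)), 3))
```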

Here is an interesting new preprint entitled Weighted $\ell_1$ Minimization for Sparse Recovery with Prior Information by M. Amin Khajehnejad, Weiyu Xu, Salman Avestimehr, Babak Hassibi. The abstract reads:
In this paper we study the compressed sensing problem of recovering a sparse signal from a system of underdetermined linear equations when we have prior information about the probability of each entry of the unknown signal being nonzero. In particular, we focus on a model where the entries of the unknown vector fall into two sets, each with a different probability of being nonzero. We propose a weighted $\ell_1$ minimization recovery algorithm and analyze its performance using a Grassman angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the size of the two sets, the probabilities of being non-zero) so that an iid random Gaussian measurement matrix along with weighted $\ell_1$ minimization recovers almost all such sparse signals with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We also provide simulations to demonstrate the advantages of the method over conventional $\ell_1$ optimization.
I note the following interesting bit that gives some context into how this new approach is different:

The analysis uses the high-dimensional geometry techniques first introduced by Donoho and Tanner [1], [3] (e.g., Grassman angles) to obtain sharp thresholds for compressed sensing. However, rather than use the neighborliness condition used in [1], [3], we find it more convenient to use the null space characterization of Xu and Hassibi [4], [13]. The resulting Grassmanian manifold approach is a general framework for incorporating additional factors into compressed sensing: in [4] it was used to incorporate measurement noise; here it is used to incorporate prior information and weighted ℓ1 optimization. Our analytic results allow us to compute the optimal weights for any p1, p2, n1, n2. We also provide simulation results to show the advantages of the weighted method over standard ℓ1 minimization.
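As a toy complement (this is not the authors' Grassman-angle analysis, and it solves a lasso-style relaxation rather than the constrained program), the following numpy sketch does weighted ℓ1 recovery by weighted iterative soft-thresholding, penalizing the set believed less likely to be nonzero more heavily. Set sizes, probabilities, weights, and step size are all made up:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 80
T1, T2 = np.arange(100), np.arange(100, 200)   # the two sets of entries
p1, p2 = 0.15, 0.01                            # nonzero probability in each set

x_true = np.zeros(n)
x_true[T1] = rng.standard_normal(100) * (rng.random(100) < p1)
x_true[T2] = rng.standard_normal(100) * (rng.random(100) < p2)

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

w = np.empty(n)
w[T1], w[T2] = 1.0, 3.0    # heavier penalty on the set unlikely to be nonzero

# Weighted ISTA for min 0.5*||Ax - y||^2 + lam * sum_i w_i |x_i|
x, step, lam = np.zeros(n), 0.1, 1e-2          # step < 1/||A||^2 for this A
for _ in range(2000):
    g = x + step * (A.T @ (y - A @ x))
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam * w, 0.0)

print("weighted l1 recovery error:", np.linalg.norm(x - x_true))
```

Setting w to all ones recovers standard ℓ1 (lasso) for comparison.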

On a different note, Dirk Lorenz, a reader of this blog, kindly emailed me the following:
...when I checked your entry on "A Coordinate Gradient Descent Method for l_1-regularized Convex Minimization" on Nuit Blanche, I started to check the paper in detail. Well, I do not agree on your remark that it seems that CGD is faster than any other method. At least from their tables, they report always a quite large objective value for CGD. I guess that other methods may be much faster when solving the problem just to that accuracy.
After some back-and-forth e-mail discussion, he and I agreed that the examples shown in the paper are anecdotal. The CGD solver seems to run much faster than some of the fastest reconstruction codes, such as GPSR_BB and FPC, for the examples featuring a compressed sensing setting (partial Fourier and Gaussian matrices, tables 1-4); in some cases it is 10 times faster. At the same time, the same solver seems to do worse than the others on the blurring example. As I said, it is anecdotal, but for somebody who sees things through the prism of CS, it sure is intriguing. I realize that mathematicians have stronger standards. Thanks Dirk for pointing this out!

On a related note, it's a hunch, but I would not be overly surprised if, once we do have CS imagers, they turn out to have an advantage over wavefront coding imaging systems, since the mixing in compressive sensing imagers would be closer to the Gaussian mixing mentioned above than to the specific blurring kernel instantiated by the cubic-PM phase mask used in wavefront coding. But then again, I am not a specialist like Ramesh.



Dirk Lorenz also released the slides of his presentation at the SIAM Conference on Imaging Science in San Diego entitled: Semismooth Newton and active set methods for sparse reconstruction.

In a different area, Martin Braun, Jens Elsner, and Friedrich Jondral just released another use of compressive sensing in cognitive radios: Signal Detection for Cognitive Radios with Smashed Filtering. The abstract reads:
Compressed Sensing and the related recently introduced Smashed Filter are novel signal processing methods, which allow for low-complexity parameter estimation by projecting the signal under analysis on a random subspace. In this paper the Smashed Filter of Davenport et al. is applied to a principal problem of digital communications: pilot-based time offset and frequency offset estimation. An application, motivated by current Cognitive Radio research, is wide-band detection of a narrowband signal, e.g. to synchronize terminals without prior channel or frequency allocation. Smashed Filter estimation and maximum likelihood-based, uncompressed estimation for a signal corrupted by additive white Gaussian noise (Matched Filter estimation) are compared. Smashed Filtering adds a degree of freedom to signal detection and estimation problems, which effectively allows to trade signal-to-noise ratio against processing bandwidth for arbitrary signals.
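Here is a hedged numpy sketch of the basic smashed-filter idea of Davenport et al., as I understand it, applied to time-offset estimation: correlate the compressed measurements against compressed versions of each shifted pilot template. Pilot length, compression ratio, and noise level below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, L = 512, 64, 64                            # signal length, projections, pilot length
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection ("smashing") matrix

pilot = rng.choice([-1.0, 1.0], size=L)          # known pilot sequence
true_offset = 203
s = np.zeros(n)
s[true_offset:true_offset + L] = pilot
y = Phi @ (s + 0.1 * rng.standard_normal(n))     # compressed, noisy measurements

# Smashed filter: matched filtering carried out directly in the compressed
# domain, against Phi applied to every candidate time shift of the pilot
scores = [np.dot(Phi @ np.pad(pilot, (tau, n - L - tau)), y)
          for tau in range(n - L + 1)]

print("true offset:", true_offset, "estimated:", int(np.argmax(scores)))
```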
Other reports from the Karlsruhe group can be found here. Cognitive radios have been added to the Compressive Sensing Hardware page even though I do not know of a specific hardware implementation yet. Let us hope it finds its way into the GNU Radio project as a module of some sort. If you are interested in cognitive radios, compressive sensing, and sensor networks, there is an announcement for a postdoc position in Computer Science & Engineering, Electrical Engineering, Mathematics & Statistics at the University of Vigo (Doctoral University) in Spain. More can be found on the Compressive Sensing Jobs pages.

The Bayesian Compressive Sensing page has been updated with a newer version of TSW-CS. It can be downloaded at tswcs.zip (last updated: Jan. 19, 2009). It would not hurt if there were a version number associated with each of these releases. Again, all these solvers are listed in the reconstruction section of the Big Picture.

As some of you know, I am interested in the real impact of this blog. One way to gauge it is through "clean" statistics, i.e. statistics showing how this site (and no other site) brings folks to read a specific paper or download an application. About two weeks ago, Mark Iwen made available the code for his Ann Arbor Fast Fourier Transform, i.e. "a Fast Fourier Transform (FFT) method for frequency sparse functions. Recovers the k most energetic frequencies hidden in a bandwidth-N function in k*polylog(N) time."

This is an important algorithm. The result of this "clean trial" yielded the following stats in a week when most people had not yet gotten back to school (it was a low-traffic period for the blog, too): about 50 hits the first day and a sustained 20 hits per day for five days. As usual, the tail roughly counts cumulatively as much as the first day. I think those numbers would be higher in a higher-traffic period.

The Ann Arbor Fast Fourier Transform is available for download on SourceForge. The code implements the algorithm described in Empirical Evaluation of a Sub-Linear Time Sparse DFT Algorithm.
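For anyone kicking the tires on the code, here is a naive numpy reference for the same task: compute the full O(N log N) FFT and keep the largest bins. AAFFT's whole point is to get the same answer in k*polylog(N) time without computing the full transform; the test tones below are made up:

```python
import numpy as np

N, k = 1024, 3
t = np.arange(N)
f = (2 * np.cos(2 * np.pi * 50 * t / N)      # three tones hidden in bandwidth N
     + np.cos(2 * np.pi * 120 * t / N)
     + 0.5 * np.sin(2 * np.pi * 300 * t / N))

F = np.fft.fft(f)
top = np.argsort(np.abs(F))[::-1][:2 * k]    # each real tone appears at +/- frequency
print(sorted({int(min(b, N - b)) for b in top}))   # -> [50, 120, 300]
```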

The number of people reading this blog through an RSS feed is in the realm of 240. This is a little less than the number of people coming directly to the site every day (255/day), and while these two sets overlap, their intersection is not large compared to either set, since most of the folks coming to the site do so after a Google search or through Wikipedia.

Another stat that caught my eye is the large increase in people coming to check the videos / online talks: a five-fold increase in eight months and a near two-fold increase in December, thanks to the link put on the Rice repository. I think it says a lot about the future of information sharing, even in the intellectual realm. Next time you organize a workshop or a conference, you may want to think about setting aside some money for video production!
Right now the latest videos are at the top, but I am not sure that is the best way of arranging them, as some of the earlier talks at IMA are still very important for newcomers to watch first. I should probably embed more of them in the big picture and the other static pages. Other statistics are provided by YouTube for every video. I have looked at a small sample of them, and it looks like when a video is featured on the site, one can expect an average of 160 people to watch it. To the students who went through the pain of producing a video on the subject: please, pretty please, make sure your information on YouTube links back to your website.



To get a sense of the traffic: the video site currently gets one fifth the traffic of the big picture and one tenth that of the blog, but it is also picking up speed.


While we're at it, I mentioned before the 28 MB video presenting the paper entitled Compressive Structured Light for Recovering Inhomogeneous Participating Media by Jinwei Gu, Shree Nayar, Eitan Grinspun, Peter Belhumeur, and Ravi Ramamoorthi. It is now on YouTube. Enjoy.