- Round-trip flight to Durham, NC: $996.57
- Two nights at the Washington Duke Inn and Golf Club: $274.23
- Attending the Compressive Sensing Workshop: $250
- Enjoying like-minded folks while reading Nuit Blanche at the coffee breaks: Priceless.
While a lot of you are in the same room, here is a set of questions that might deserve some attention. Some of you may want to talk amongst yourselves about these issues.
In a recent NA-Digest, Jack Dongarra mentioned the following:
From: Jack Dongarra
Date: Mon, 9 Feb 2009 14:05:55 -0500
Subject: Linear algebra software survey request
We are planning to update the survey of freely available software for the solution of linear algebra problems. The September 2006 version of the survey can be found at: http://www.netlib.org/utk/people/JackDongarra/la-sw.html. The aim is to put in “one place” the source code that is freely available for solving problems in numerical linear algebra, specifically dense, sparse direct and iterative systems and sparse iterative eigenvalue problems. Please send me updates and corrections. I will post a note on the na-digest when the new list is available.
Thanks,
Jack
It turns out that linear algebra problems are being solved in CS, but it seems to me that we have not collectively connected to the older and successful LAPACK community. Maybe we should, or maybe not. Jack responded to my request to include a list of the current CS solvers with "a bit too far afield". I agree, but as with all disruptive technologies, it is just a question of time before we overtake the entire field of linear algebra :) . On the other hand, should we also have a similar table such as this one on top of the mere listing given in the Big Picture? And if so, what should the columns be? (Your thoughts are welcome in the comment section; I will synthesize them later.)
In a different area, I was recently asked about a set of benchmarks against which a solver could be tested. I mentioned the set of problems listed in the SPARCO suite. Should we have more problems in this set? Could we get some of the hardware folks to provide different series of measurements for this benchmark set?
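To make the question more concrete, here is a minimal sketch of what scoring a solver on one synthetic benchmark problem might look like. This is purely illustrative code of mine, not a SPARCO problem: the sizes, the plain ISTA solver and the error metric are all my own arbitrary choices.

import numpy as np

def ista(A, y, lam=0.05, iters=500):
    # Plain iterative soft-thresholding for min_x 0.5*||A x - y||_2^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L             # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 100, 10                            # ambient dimension, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)      # Gaussian sensing matrix
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0                                        # noiseless measurements
xhat = ista(A, y)
print("relative l2 error:", np.linalg.norm(xhat - x0) / np.linalg.norm(x0))

A benchmark entry would then be a collection of such (A, y, x0) triples, plus agreed-upon metrics (relative error, support recovery, timing) reported for each solver.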
With regard to the stats of this blog, every entry is seen through Google Reader by 174 people and through FeedBurner by about 94 people. The hit count directly on the blog amounts to about 300 per day, while about 66 people receive every entry directly in their mailboxes. The Compressive Sensing Study Group on LinkedIn now has 105 members.
Following up on Sunday's entry, I talked to Arkadi Nemirovski, who has updated his site: there are now two versions of the algorithm implemented in On verifiable sufficient conditions for sparse signal recovery via L1 minimization by Anatoli Juditsky and Arkadi Nemirovski. They are available at this site:
It complements nicely the other paper entitled Testing the Nullspace Property using Semidefinite Programming by Alexandre d'Aspremont and Laurent El Ghaoui, where the NSProp implementation is now available at:
As I said earlier, these two implementations may be game changers in that they might replace the traditional reliance on the restricted isometry property (RIP). Both programs check a condition on a constant \alpha_k. Studying the value of \alpha_k for small matrices could allow one to investigate low-order models as well as specific dictionaries... These two programs are prominently listed in the measurement matrix section of the Big Picture.
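To get a feel for the quantity being certified, here is a tiny illustrative sketch of mine. It is neither of the two implementations above, and random sampling only yields a lower bound on \alpha_k, so it can show that a small matrix fails the condition but cannot prove that it holds:

import numpy as np

def alpha_k_lower_bound(A, k, trials=2000, seed=0):
    # Monte-Carlo lower bound on alpha_k = max over h in Ker(A) with ||h||_1 = 1
    # of the sum of the k largest entries of |h|. The condition alpha_k < 1/2
    # guarantees exact recovery of k-sparse signals by l1 minimization; a sampled
    # lower bound can only rule a matrix out, never certify it.
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(A)
    null_basis = Vt[A.shape[0]:]          # rows spanning Ker(A) (A assumed full row rank)
    best = 0.0
    for _ in range(trials):
        h = null_basis.T @ rng.standard_normal(null_basis.shape[0])
        h /= np.abs(h).sum()              # normalize to ||h||_1 = 1
        best = max(best, np.sort(np.abs(h))[-k:].sum())
    return best

A = np.random.default_rng(1).standard_normal((20, 40)) / np.sqrt(20)
print("sampled lower bound on alpha_2:", alpha_k_lower_bound(A, k=2))

The point of the Juditsky-Nemirovski and d'Aspremont-El Ghaoui codes is precisely to replace this kind of blind sampling with verifiable bounds on \alpha_k.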
A month ago, I mentioned the release of an implementation of the SpaRSA algorithm. It looks like it can even go faster now thanks to Sangkyun Lee and Stephen Wright who implemented it on the GPU and released the implementation on this site. From that page:
Sangkyun Lee and Stephen Wright, Implementing Algorithms for Signal and Image Reconstruction on Graphical Processing Units, submitted, 2008.
This site contains GPU codes for compressed sensing and image denoising/deblurring. All GPU codes compile into Matlab mex binaries, so users can process data and pass them to our GPU routines in Matlab. Our GPU codes require CUDA-enabled GPUs from NVIDIA.
- In compressed sensing, we implemented a simple version of the SpaRSA algorithm described by Wright, Nowak, and Figueiredo [1], using randomly chosen rows of the discrete cosine transform (DCT) matrix as the sensing matrix.
- In image denoising/deblurring, we implemented the PDHG algorithm of Zhu and Chen [2], which solves the Rudin-Osher-Fatemi formulation with total variation regularization.
I have added this page to the Compressive Sensing Hardware page. Let us also note that the SpaRSA Matlab implementation is now at version 2.0.
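For readers who have not looked at SpaRSA before, the heart of the method is an iterative shrinkage/thresholding step whose step length comes from a Barzilai-Borwein curvature estimate. The following is a stripped-down sketch of mine of that iteration, not the released Matlab or GPU code; the real solver adds continuation, acceptance tests and stopping criteria.

import numpy as np

def sparsa(A, y, tau=0.05, iters=200):
    # Bare-bones SpaRSA-style iteration for min_x 0.5*||A x - y||_2^2 + tau*||x||_1:
    # each step soft-thresholds a gradient step whose length comes from a
    # Barzilai-Borwein curvature estimate (no line search or continuation here).
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - y)                 # gradient of the smooth term
    alpha = 1.0                           # initial curvature estimate
    for _ in range(iters):
        z = x - g / alpha
        x_new = np.sign(z) * np.maximum(np.abs(z) - tau / alpha, 0.0)
        g_new = A.T @ (A @ x_new - y)
        s, r = x_new - x, g_new - g
        if s @ s > 0:
            alpha = max((s @ r) / (s @ s), 1e-12)   # Barzilai-Borwein update
        x, g = x_new, g_new
    return x

# Usage: xhat = sparsa(A, y) with A an m-by-n numpy array and y the measurement vector.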
Yesterday, I featured Dequantizing Compressed Sensing with Non-Gaussian Constraints by Laurent Jacques, David Hammond, and M. Jalal Fadili and followed up with one of those usual dumb questions to Laurent:
How does your solver compare to the one studied by Petros Boufounos and Richard Baraniuk in the 1-Bit Compressive Sensing paper?
The short answer was:
We haven't yet tested BPDQ on this particular extreme coding.
Laurent also provided a somewhat longer answer:
But let me explain what is similar and what is different with respect to Petros and Rich's method:
Let x be a signal (sparse or compressible) and let Phi be a Gaussian random sensing matrix.
In the 1-bit compressed sensing model of the paper of Petros Boufounos and Richard Baraniuk, only the sign of (Phi x) is known at the decoder. They then propose to reconstruct the signal by altering the common Basis Pursuit program in two aspects. First, they replace the fidelity term by a Quantization Consistency (QC) term, i.e. the sign of the re-measured signal candidate must reproduce the known measurement vector. Second, since the amplitude of the signal is lost in the sign operation, they propose to carry out the BP minimization on the sphere of unit-norm signals to avoid a vanishing solution. This leads however to a non-convex spherical constraint. In practice, they solve the program by first regularizing the problem with a smoothed version of the QC constraint, and second by running a projected gradient minimization to force the iterates to stay on the sphere.
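[In symbols, and in my own shorthand rather than Laurent's: with x the original signal as above, the 1-bit decoder solves min_u ||u||_1 subject to sign(\Phi u) = sign(\Phi x) and ||u||_2 = 1, the sign being taken componentwise; the non-convexity comes entirely from the unit-sphere constraint ||u||_2 = 1.]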
This method was the first attempt to really introduce the QC constraint into the Compressed Sensing formalism, while other methods relied on handling the quantization distortion on the measurements as common noise in the Basis Pursuit DeNoise (BPDN) program. In the end, the experimental results provided by this 1-bit CS decoder were interesting compared to the common BPDN treatment, with a gain of several dB.
Our research improves on this first approach on a couple of points. First, we wanted to bring a solution for any number of bits (from 1 to many), i.e. any quantization bin width for a uniform quantization model. Second, our decoder had to be closer to Quantization Consistency, i.e. the fact that re-quantizing the measurements of the reconstructed signal has to reproduce the known quantized measurement vector. Third, we desired convex constraints so that we could solve the program with the versatile techniques of proximal methods and monotone operator splitting (Douglas-Rachford splitting in our case). And finally, our decoder had to be controlled by the theory, i.e. we wanted to bound the approximation error caused by the quantization noise and by possible deviation from the exactly sparse signal model, i.e. the signal compressibility, as already exists for the BPDN decoder.
This is why we introduced these Basis Pursuit DeQuantizers (BPDQ) of moment p, for p \ge 2. In their foundations, they are adapted to signal reconstruction from a measurement vector corrupted by any noise of bounded \ell_p norm, since this norm replaces the \ell_2 norm used in the BPDN fidelity term. Let me insist on the fact that we are not touching the sparsity term, which is still expressed in the \ell_1 norm as before. The programs BPDQ_p are really introduced to answer one question: what is a good fidelity term adapted to a non-Gaussian noise, as is the case for uniform quantization noise? I want to avoid any confusion with other (important but unconnected) research where the \ell_1 sparsity term is replaced by the non-convex \ell_q norm (with q \le 1).
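[Again in my own shorthand: BPDQ_p solves min_u ||u||_1 subject to ||y - \Phi u||_p \le \epsilon, where y is the quantized measurement vector; p = 2 gives back the usual BPDN fidelity constraint, and larger p pushes the constraint toward the \ell_infinity/consistency one.]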
Interestingly, if the sensing matrix satisfies a generalized Restricted Isometry Property of moment p, i.e. a RIP variant bounding the \ell_p norm (p \ge 2) of the random projection of sparse signals (a property that is also satisfied by Gaussian random matrices with high probability), the BPDQ decoders have the desired stability behaviour, i.e. the "instance optimality".
In addition, for noise due to uniform quantization of the measurements, we observe an interesting effect. When we work in an oversampled situation, where the number m of measurements is much higher than required to have the RIP_p, the part of the BPDQ approximation error due to the quantization noise is divided by \sqrt{p+1}. Therefore, when allowed, increasing the moment p decreases the approximation error! This is for us really the indication that the quantization noise is handled more faithfully.
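[Schematically, and only as a restatement of the sentence above rather than a formula from the paper: in that oversampled regime the quantization part of the reconstruction error behaves like \alpha/\sqrt{p+1}, with \alpha the quantization bin width, on top of a compressibility term that does not involve the quantization.]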
To come back to the initial comparison with the 1-bit compressed sensing of Petros Boufounos and Richard Baraniuk, their proposed method is very close to our BPDQ when p = infinity. Indeed, 1-bit quantization also amounts to taking the bin width equal to the sup-norm of Phi x. The fidelity term for p = infinity then induces the previous sign constraint. Notice that we do not have to work on the unit signal sphere since we add the knowledge of alpha in the coding/decoding scenario. I'm therefore quite sure that in the extreme 1-bit regime our BPDQ for p = infinity will produce results comparable to those of the initial 1-bit CS decoder. However, in oversampled situations, i.e. when m is high, the moment p can be chosen large (making the \ell_p constraint very close to the \ell_infinity one), and our decoders provide, theoretically and experimentally, decreasing approximation error by working closer to Quantization Consistency.
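[For reference, in my words rather than Laurent's: for a uniform quantizer of bin width \alpha, Quantization Consistency is exactly the constraint ||y - \Phi u||_infinity \le \alpha/2, which is why BPDQ_p approaches the fully consistent decoder as p goes to infinity.]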
Thank you Laurent!
Credit photo: me. That day was cold!
2 comments:
You're not at the workshop, or are you Igor? Anyway, the workshop is great!
Jort
No I am not.
Igor.