Tuesday, December 08, 2009

CS: YALL1, fitness function Eureqa, Data Driven CS, Matlab, Python, NIPS 2009 papers, Bayesian statistics in AI and Social Sciences

Junfeng Yang, now at Nanjing University, let me know of a newly released paper on the algorithm underlying the YALL1 package: Alternating Direction Algorithms for L_1 Problems in Compressive Sensing by Junfeng Yang and Yin Zhang. The abstract reads:

In this paper, we propose and study the use of alternating direction algorithms for several L_1-norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem, the basis-pursuit denoising problems of both unconstrained and constrained forms, as well as others. We present and investigate two classes of algorithms derived from either the primal or the dual forms of the L_1-problems. The construction of the algorithms consists of two main steps: (1) to reformulate an L_1-problem into one having partially separable objective functions by adding new variables and constraints; and (2) to apply an exact or inexact alternating direction method to the resulting problem. The derived alternating direction algorithms can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at each and every iteration. Convergence properties of these algorithms are established or restated when they already exist. Extensive numerical results in comparison with several state-of-the-art algorithms are given to demonstrate that the proposed algorithms are efficient, stable and robust. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points. One point is that algorithm speed should always be evaluated relative to appropriate solution accuracy; another is that whenever erroneous measurements possibly exist, the L_1-norm fidelity should be the fidelity of choice in compressive sensing.
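To give a feel for the recipe in steps (1) and (2), here is a minimal Matlab sketch of an alternating direction iteration for the unconstrained basis-pursuit denoising problem min_x ||x||_1 + 1/(2*mu)*||Ax - b||_2^2. It is an illustration of the splitting idea only, not the YALL1 code: the penalty rho, the parameter mu and the fixed iteration count are my own assumptions.

% Alternating direction sketch for min_x ||x||_1 + 1/(2*mu)*||A*x - b||_2^2.
% Step (1): introduce z with the constraint x = z so the objective splits.
% Step (2): alternate between the two blocks, then update the scaled dual u.
function x = ad_l1_sketch(A, b, mu, rho, niter)
  n = size(A, 2);
  x = zeros(n,1); z = zeros(n,1); u = zeros(n,1);
  Atb = A'*b;
  R = chol(A'*A/mu + rho*eye(n));    % factor once, reuse at every iteration
  for k = 1:niter
    x = R \ (R' \ (Atb/mu + rho*(z - u)));   % smooth quadratic subproblem
    v = x + u;                               % z-step: soft thresholding,
    z = sign(v) .* max(abs(v) - 1/rho, 0);   % the prox operator of ||.||_1
    u = u + x - z;                           % dual ascent on x = z
  end
end

On small dense problems this already recovers sparse vectors reasonably well; the paper's algorithms are of course far more refined (dual formulations, inexact subproblem solves).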

YALL1 is available in Matlab. As most of you may already know, MATLAB R2009b has a potentially serious bug. You can find a description here, but since this is a serious problem involving the transpose sign, which is heavily used in compressive sensing solvers, let me copy the message here:

There seems to be a serious bug in the current Matlab release R2009b that occurs when solving a linear system of equations with the transpose. Here is a small example:

A=[ 0 2 ; 1 0 ];
b=[ 2 ; 0 ];
A'\b

ans =

0
1

whereas the correct solution is [ 0 ; 2 ]. The answer [ 0 ; 1 ] is the solution of A\b, so it appears that the transpose sign ' is ignored. However, the correct solution is returned when using

transpose(A)\b, A.'\b

instead of A'\b.

Interestingly, this bug does not occur for all matrices, for instance

A = [1 2 ; 1 1]
A'\b

returns the correct result [ -2 ; 4 ].

The bug seems to be new in release R2009b and applies to Linux and Windows 32- and 64-bit versions. A bug report has been filed.
The MathWorks folks responded here:

Following up to the message in the Nov 30, 2009 NA Digest:

Thanks to all the MATLAB Community members who have reported this bug and forwarded this information along.

We have posted this Bug (583443) in our Bug Report system:

http://www.mathworks.com/support/bugreports/

Look under MATLAB, in the Mathematics Category, Exists In R2009b.

MATLAB customers should feel free to Watch this Bug for any updates, or Subscribe to an RSS feed of a group of Bugs of interest.

This Bug Report details the circumstances required to trigger the bug.

The summary is that A must be the permutation of a full, square, non-scalar triangular matrix (but not exactly triangular) and must be used as A'\b in the solve.

The Bug Report does not yet reflect this, but the bug will be fixed in our next release of MATLAB.
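Until that release ships, a simple defensive habit (my suggestion, not MathWorks') is to form the transpose explicitly so the fused A'\b code path is never taken:

% Workaround sketch for bug 583443: avoid the fused A'\b solve path.
A = [ 0 2 ; 1 0 ];
b = [ 2 ; 0 ];
x_buggy = A'\b;            % may return [ 0 ; 1 ] on an unpatched R2009b
x_safe = transpose(A)\b;   % returns the correct [ 0 ; 2 ]
% for complex A, the equivalent of A'\b would be ctranspose(A)\b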

But in order to see their bug report, you have to create an account with them. I am not quite sure this sends the right message. I am not an open source fanatic, but this little story is a short reminder that little attempts at implementing algorithms in a free language have some potential to survive the test of time. Which brings us to Ben Moran, who implemented the solving of a Sudoku puzzle with an L1 minimization approach in Python (as mentioned here). Ben, pretty please, after that Python meeting, please release that code :-)
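While we wait for Ben's code, here is a hedged Matlab sketch of how such an approach can go; this is my own reconstruction of the general idea, not his implementation, and it needs linprog from the Optimization Toolbox. Encode each cell as nine 0/1 indicators, write the Sudoku rules and clues as one linear system, and ask for the minimum-L1 solution; recovery is not guaranteed for every puzzle, but it often works on the easier ones.

% Sudoku as L1 minimization (illustrative sketch).
% x is a 729-vector with x((i-1)*81+(j-1)*9+k) = 1 iff cell (i,j) holds digit k.
e = ones(1,9); I9 = speye(9);
Acell = kron(speye(81), e);      % exactly one digit per cell
Arow  = kron(I9, kron(e, I9));   % each digit once per row
Acol  = kron(e, speye(81));      % each digit once per column
Abox  = kron(speye(3), kron(ones(1,3), kron(speye(3), kron(ones(1,3), I9))));
A = [Acell; Arow; Acol; Abox];
% Clues: G is the 9x9 puzzle matrix, zeros marking blanks (assumed given).
[ci, cj] = find(G);
cols = (ci-1)*81 + (cj-1)*9 + G(sub2ind([9 9], ci, cj));
C = sparse((1:numel(ci))', cols, 1, numel(ci), 729);
Aeq = [A; C]; beq = ones(size(Aeq,1), 1);
% min ||x||_1 s.t. Aeq*x = beq, via the standard split x = u - v with u,v >= 0
uv = linprog(ones(1458,1), [], [], [Aeq, -Aeq], beq, zeros(1458,1), []);
x = uv(1:729) - uv(730:end);
[~, d] = max(reshape(x, 9, 81));   % strongest indicator in each cell
grid = reshape(d, 9, 9)';          % rows i, columns j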

Georgos Tzagkarakis kindly let me know that, in a previous entry, I confused him with his brother Christos Tzagkarakis, who also happens to publish in compressive sensing. This is fixed. As it turns out, Georgos has submitted the following to ICASSP '10:

Bayesian Compressed Sensing Imaging using a Gaussian Scale Mixture by Georgos Tzagkarakis and Panagiotis Tsakalides. The abstract reads:

The ease of image storage and transmission in modern applications would be unfeasible without compression, which converts high-resolution images into a relatively small set of significant transform coefficients. Due to the specific content of many real-world images they are highly sparse in an appropriate orthonormal basis. The inherent property of compressed sensing (CS) theory working simultaneously as a sensing and compression protocol, using a small subset of random incoherent projection coefficients, enables a potentially significant reduction in the sampling and computation costs of images favoring its use in real-time applications which do not require an excellent reconstruction performance. In this paper, we develop a Bayesian CS (BCS) approach for obtaining highly sparse representations of images based on a set of noisy CS measurements, where the prior belief that the vector of projection coefficients should be sparse is enforced by fitting directly its prior probability distribution by means of a Gaussian Scale Mixture (GSM). The experimental results show that our proposed method, when compared with norm-based constrained optimization algorithms, maintains the reconstruction performance, in terms of the reconstruction error and the PSNR, while achieving an increased sparsity using much less basis functions.
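For readers unfamiliar with the prior at the heart of this approach: a Gaussian Scale Mixture models a coefficient as a Gaussian whose variance is itself random, which yields exactly the heavy-tailed, mostly-near-zero behavior one wants from a sparsity prior. A tiny Matlab illustration of the GSM idea only (not of the authors' fitting procedure; exprnd and kurtosis need the Statistics Toolbox):

% GSM sample: c = sqrt(2*z).*g with g ~ N(0,1) and a random scale z.
% With z ~ Exp(1) this is exactly Laplace(0,1), a classic sparsity prior;
% heavier-tailed mixing densities give even sparser-looking coefficients.
N = 1e5;
g = randn(N, 1);
z = exprnd(1, N, 1);     % exponential mixing density
c = sqrt(2*z) .* g;      % marginally Laplace(0,1)
fprintf('kurtosis: %.2f (3 for a Gaussian, 6 for a Laplacian)\n', kurtosis(c));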

Thanks Georgos !

In the Sparsity in Everything series, a series of posts that highlights sparsity and power laws in odd places for the purpose of shamelessly linking them back to compressive sensing, I mentioned the work of Michael Schmidt and Hod Lipson entitled Distilling Free-Form Natural Laws from Experimental Data, which seemed to amount to solving some sort of l_0 problem (i.e., fitting the data/phenomenon at hand with a combinatorial search using genetic algorithms). Well, they just came out with a GUI that allows anyone to try that experiment for themselves. It's called Eureqa. In Equation search part 1, Andrew Gelman just wrote in a string of data he sent and he's waiting for an answer. He might be waiting for a long time... In the meantime, I am surprised to see that there is no response to the criticism made of the fitness function of the original article and now this program. The criticism was brought forth here:

On the Article "Distilling Free-Form Natural Laws From Experimental Data" by Christopher Hillar and Friedrich Sommer. The paper starts as:
A recent paper [1] introduced the fascinating idea that natural symbolic laws (such as Lagrangians and Hamiltonians) could be learned from experimental measurements in a physical system (such as a pendulum). The learning is based on a genetic algorithm that employs a fitness function involving the pair-wise partial derivatives of the time-series measurements of state-space data coming from the system. While the general approach is original and convincing, we feel that the particular algorithm described in the paper cannot accomplish these goals. We present theoretical and experimental arguments for this belief and also explain some mathematically rigorous ideas that can be used within the authors' framework to find symbolic laws. We first recapitulate the main steps in the methodology...
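To make the object of the criticism concrete, here is a toy Matlab rendering of the pairwise-partial-derivative idea as I understand it; the log-ratio score below is my own stand-in, not Schmidt and Lipson's exact formula. One numerically estimates d(omega)/d(theta) along the measured trajectory, derives the same quantity from a candidate invariant f(theta, omega) by implicit differentiation, and rewards agreement.

% Toy pairwise partial-derivative fitness on a synthetic oscillator.
t = linspace(0, 10, 2001)'; dt = t(2) - t(1);
theta = 0.5*cos(3*t);                  % "measured" angle
omega = gradient(theta, dt);           % "measured" angular velocity
observed = gradient(omega, dt) ./ gradient(theta, dt);  % d(omega)/d(theta) from data
% Candidate invariant f = omega^2/2 + (9/2)*theta^2 is conserved here;
% implicit differentiation of f = const gives d(omega)/d(theta) = -f_theta/f_omega.
predicted = -(9*theta) ./ omega;
ok = abs(omega) > 1e-2;                % skip turning points
fitness = -mean(abs(log(abs(observed(ok) ./ predicted(ok)))));
fprintf('fitness: %.3f (0 would be perfect agreement)\n', fitness);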


And while you are waiting for Eureqa to converge (to the wrong solution?), you might as well do two things:

1. Enjoy reading papers from NIPS 2009. They're out. Andrew McGregor points to some streaming papers. While reading some of them off-line, I was wondering aloud whether, in some exploratory instance, we should not go about the following process to do compressive sensing:
and you now have a compressive sensing scheme tailored to your area of interest.

2. Enjoy this discussion between two Bayesians: an AI person and a social science person, namely Eliezer Yudkowsky and Andrew Gelman, respectively.

Finally, don't expect Eureqa to help with your formula for happiness; it's just a fit, whereas the cure seems to be to just watch faces in the morning.


Credit photo: Sam Derbyshire, roots of polynomials as shown in John Baez's "The beauty of roots" http://math.ucr.edu/home/baez/roots/. Poster from NIPS 2009, Multilabel Prediction via Compressed Sensing.

2 comments:

1. Please note I did not say it was bunk, rather that it may rely on a faulty fitness function and that it does not look like something new. As one of the commenters on Andrew Gelman's blog points out, this sort of thing has been around for a while now; it is just better packaged. At my level, the criticism I would level stems from the need to find a combinatorial solution, when it sounds like a potentially more groundbreaking work would relax this problem to an L_1-like problem as in compressive sensing, but this is just me.

    Igor.
