Friday, February 12, 2010

CS: A short op-ed, some comments, an internship and a postdoc

Can bold work make it through the peer-review process? I am not sure. For instance, it is still a mystery to me why the Random Lens Imager paper has not been published. Another baffling example is the fate of Fast Sparse Representation based on Smoothed l_0 Norm by G. Hosein Mohimani, Massoud Babaie-Zadeh and Christian Jutten, which features the following abstract:
In this paper, a new algorithm for Sparse Component Analysis (SCA) or atomic decomposition on over-complete dictionaries is presented. The algorithm is essentially a method for obtaining sufficiently sparse solutions of underdetermined systems of linear equations. The solution obtained by the proposed algorithm is compared with the minimum l1-norm solution achieved by Linear Programming (LP). It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.
As advertised in the abstract, SL0 (the code is there) is indeed blazingly fast. In fact, in the cases I ran this week, SL0 was much faster than SPGL1 or GPSR and more accurate than these solvers and some L1-OMP codes. We all know that the peer-review process has its flaws, and this rejection by the IEEE Signal Processing Letters is one obvious instance of mismanagement by the editor(s). Whatever the reviewers said, given such an astounding claim, the editor should have asked them to verify the "two to three orders of magnitude faster" statement. Obviously this is not how it went: this rejection is a disgrace.
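For readers who want a feel for why SL0 is so fast: it replaces the combinatorial l_0 count with a smooth Gaussian surrogate and alternates small gradient steps with projections back onto the constraint set, while annealing the smoothing width. Here is a minimal Python sketch of that idea (my paraphrase of the paper's algorithm, not the authors' released code; the parameter values are illustrative):

```python
import numpy as np

def sl0(A, y, sigma_min=1e-4, sigma_decrease=0.5, mu0=2.0, inner_iters=3):
    """Minimal sketch of the Smoothed-L0 (SL0) idea.

    Approximates the l0 "norm" of s by n - sum_i exp(-s_i^2 / (2 sigma^2))
    and reduces it by gradient steps projected onto {s : A s = y},
    while gradually shrinking sigma.
    """
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ y                        # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = s * np.exp(-s**2 / (2.0 * sigma**2))
            s = s - mu0 * delta           # descent step on the surrogate
            s = s - A_pinv @ (A @ s - y)  # project back onto A s = y
        sigma *= sigma_decrease
    return s
```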

I have had several comments this past week on a number of entries, and since most of you read the blog through an RSS feed, you are probably missing out on them. So here they are:

Regarding the NA Digest question I recently asked:

Subject: Question: Is there a name for this trivial "algorithm" ?

The following may sound like a dumb question because the "algorithm" is so trivial, but I was wondering if there is a name for this algorithm:

You are trying to infer an m x n matrix A with unknown elements. The only method of learning about this matrix is through applying matrix-vector products Ax = y. That is, you construct a vector x, perform a multiplication Ax, and finally analyze the resulting vector y of observations. The algorithm trivially revolves around using vectors of the canonical basis, thereby providing direct access to the columns of A.

This algorithm can be directly implemented in n steps and therefore requires n matrix-vector products.

I am interested in a compressive sensing approach to this matrix discovery algorithm when one knows in advance that the matrix A is sparse and one wants to reduce the number of matrix-vector steps from n to O(k log(n)) where k represents the sparsity of matrix A. See the following link: http://is.gd/7TrsH for more information on how to approach this problem.

If you have any ideas on the name of this algorithm (the trivial one), please send suggestions to: igorcarron@gmail.com

Cheers,

Igor.
http://nuit-blanche.blogspot.com/search/label/CS
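For concreteness, here is a minimal sketch of the trivial algorithm described in the question, with the matrix hidden behind a matvec oracle (the function names are mine, for illustration):

```python
import numpy as np

def probe_matrix(matvec, m, n):
    """Recover an m x n black-box matrix column by column.

    matvec(x) returns A @ x; probing with the canonical basis
    vectors e_1, ..., e_n reads off the columns of A directly,
    at a cost of n matrix-vector products.
    """
    A = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        A[:, j] = matvec(e)   # A @ e_j is the j-th column of A
    return A

# Example: hide a matrix behind a matvec oracle and recover it.
hidden = np.random.randn(4, 6)
recovered = probe_matrix(lambda x: hidden @ x, 4, 6)
assert np.allclose(hidden, recovered)
```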
I received some very insightful responses from the wise readers of the list. Here are a few of them:

Trond Steihaug kindly responded first with:
The algorithm you describe using matrix-vector products and known sparsity was proposed by Curtis, Powell and Reid: A. R. Curtis, M. J. D. Powell and J. K. Reid, "On the Estimation of Sparse Jacobian Matrices," IMA Journal of Applied Mathematics, 1974, 13(1):117-119. I enclose the paper.

Symmetry or other known structural properties can also be utilized.
then Robert Schreiber remarked:
Igor,

As far as I know, there is no name for computing A as A * I; probably shouldn't be.

The interesting issue you raise is this -- when A is sparse, if its sparsity pattern (i.e., the graph of A) is known and its elements are unknown, can one choose fewer than n vectors {x_1, ..., x_m} with m < n?

In this case, it is well known that m = chi(G(A)) is possible, where chi(G) is the chromatic number of the graph G. Each x is a sum of standard basis vectors. You can figure out the rest of this... This technique is well known as a fast way to compute an approximate Jacobian with fewer than n function evaluations of a nonlinear function.
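Robert's coloring argument can be made concrete. Columns whose sparsity patterns never overlap in a row can share a single probe vector, so a proper coloring of the column-conflict graph gives one matrix-vector product per color. Below is a minimal sketch using a greedy coloring (illustrative only; greedy may use more than chi(G(A)) colors):

```python
import numpy as np

def coloring_probe(matvec, pattern):
    """Recover A from matvecs when its sparsity pattern is known.

    pattern is an m x n boolean array. Columns that never share a
    nonzero row are probed together: each group gets a single probe
    vector that is the sum of its standard basis vectors, as in
    Curtis-Powell-Reid.
    """
    m, n = pattern.shape
    groups = []                      # lists of column indices per color
    rows_used = []                   # rows already covered per color
    for j in range(n):
        rows_j = set(np.nonzero(pattern[:, j])[0])
        for g, used in zip(groups, rows_used):
            if not (rows_j & used):  # no row conflict: reuse this color
                g.append(j)
                used |= rows_j
                break
        else:
            groups.append([j])
            rows_used.append(set(rows_j))
    A = np.zeros((m, n))
    for g in groups:
        x = np.zeros(n)
        x[g] = 1.0
        y = matvec(x)                # columns in g have disjoint rows,
        for j in g:                  # so y can be unscrambled row-wise
            rows = np.nonzero(pattern[:, j])[0]
            A[rows, j] = y[rows]
    return A
```

For a tridiagonal pattern, for instance, this needs only three probes regardless of n, which is the classic Curtis-Powell-Reid observation.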

And finally, Andreas Stathopoulos noted:

Hi, I looked at your blog and the algorithm you are proposing. This looks like the classic Monte Carlo method used by many physicists to approximate various functions of A, e.g., diag(inv(A)), trace(inv(A)), trace(log(A)), det(A), etc. Similar methods have been used in data mining.

The principle is that for random vectors the covariance Exp(XX^T) = Identity. So for a large number of random vectors (normalized by their number), XX^T ≈ I. Then A(XX^T) = (AX)X^T ≈ A, so we need to compute only Y = AX and YX^T. Looking at your method, it looks like your underdetermined system does exactly that without any assumptions about the covariance; in the limit, those assumptions should turn out to be unnecessary. Moreover, the Monte Carlo methods use {-1,1} random entries in X.

I am unclear on how the number of nonzeros in A or the log(n) factor enters the runtime of the method. For the noise methods above, it is known that they converge as O(1/sqrt(sample size)). The constant depends on the variance of the function of A that is calculated.
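Andreas's estimator is easy to check numerically: for Rademacher (±1) probe vectors, E[xx^T] = I, so (1/N)(AX)X^T is an unbiased estimate of A whose error decays like O(1/sqrt(N)). A quick sketch (the names are mine):

```python
import numpy as np

def monte_carlo_estimate(matvec_block, n, num_probes):
    """Estimate A from Y = A X using random {-1, +1} probe columns.

    E[x x^T] = I for Rademacher vectors, so (1/N) Y X^T is an
    unbiased estimator of A with O(1/sqrt(N)) error.
    """
    X = np.random.choice([-1.0, 1.0], size=(n, num_probes))
    Y = matvec_block(X)                  # one m x N block of products
    return (Y @ X.T) / num_probes

hidden = np.random.randn(5, 8)
for N in (100, 10000):
    est = monte_carlo_estimate(lambda X: hidden @ X, 8, N)
    print(N, np.linalg.norm(est - hidden))  # error shrinks roughly as 1/sqrt(N)
```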
I should have been clearer in my statement "one knows in advance that the matrix A is sparse": the sparsity pattern itself is unknown a priori. I liked Robert's "there is no name for computing A as A * I; probably shouldn't be" :-) Anyway, the implementation of the algorithm is featured in MMA17. As it turns out, in the MMA17 blog entry, Dick Gordon, one of the authors of the ART algorithm, wrote:

Re: Monte Carlo Algebra
Dear Igor,
Maybe this is just a train of thought, but your Monday Morning Linear Black Box random algorithm reminds me of:

Gordon, R. (1970). On Monte Carlo algebra. J. Appl. Prob. 7, 373-387.
Abstract: A Monte Carlo method is proposed and demonstrated for obtaining an approximate algebraic solution to linear equations with algebraic coefficients arising from first order master equations at steady state. Exact solutions are hypothetically obtainable from the spanning trees of an associated graph, each tree contributing an algebraic term. The number of trees can be enormous. However, because of a high degeneracy, many trees yield the same algebraic term. Thus an approximate algebraic solution may be obtained by taking a Monte Carlo sampling of the trees, which yields an estimate of the frequency of each algebraic term. The accuracy of such solutions is discussed and algorithms are given for picking spanning trees of a graph with uniform probability. The argument is developed in terms of a lattice model for membrane transport, but should be generally applicable to problems in unimolecular kinetics and network analysis. The solution of partition functions and multivariable problems by analogous methods is discussed.

This may be the first, if not only, example of Monte Carlo applied to symbolic math, rather than numbers or arrays of numbers. Perhaps it could be regarded as a compressive sampling of trees?
Yours, -Dick Gordon
In the Compressive Sensing Linear Algebra 101 thread, Stephen Becker mentions the following:
If you change your prior and instead of assuming that A is sparse you now assume that it has rank r (presumably with rank r < size(A)), then you can find A (with high probability) using only 2*r linear measurements.

You need to be able to measure Y = A*X (with X having r columns), as well as Z = A'*W (with W having r columns). You only need to perform a single QR decomposition on W or Z (very cheap) and a matrix multiply. There's no need for a solver like GPSR. In practice, you can do this for a matrix of size m = n = 2^30, and if the rank is small (say, 10) it's very fast (less than 1 second for rank = 10).
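Stephen's scheme can be sketched as follows. I use the pseudo-inverse form of the two-sided reconstruction rather than the QR route he mentions, but for a matrix of exact rank r both recover A exactly with probability 1; the function names and probe distributions are my choices for illustration:

```python
import numpy as np

def lowrank_recover(matvec_block, rmatvec_block, m, n, r):
    """Recover a rank-r matrix A from 2r matrix-vector products.

    Y = A @ X captures the column space and Z = A.T @ W the row space;
    for A of exact rank r, A = Y @ pinv(W.T @ Y) @ Z.T holds whenever
    the r x r core W.T @ A @ X is invertible (true with probability 1
    for Gaussian probes).
    """
    X = np.random.randn(n, r)
    W = np.random.randn(m, r)
    Y = matvec_block(X)              # m x r: r products with A
    Z = rmatvec_block(W)             # n x r: r products with A.T
    return Y @ np.linalg.pinv(W.T @ Y) @ Z.T

# Example with a random rank-3 matrix.
U, V = np.random.randn(50, 3), np.random.randn(40, 3)
A = U @ V.T
A_hat = lowrank_recover(lambda X: A @ X, lambda W: A.T @ W, 50, 40, 3)
print(np.linalg.norm(A - A_hat))     # ~ machine precision
```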
I realize that the recent rank-minimization work provides some handy and powerful techniques, but sometimes the physics of the system being modeled is not amenable to this constraint.

Also, a commenter asked this question about the passive illumination set-up featured in the following post:

Hi all, I'm trying to implement a setup similar to the one you present here, but I'm having some problems replicating your results.

Regarding the photodetection setup, do you have any constraints there? Or is it a simple photodetector with an amplifier?

Any hints...? Help?
I sent an email to some of the authors of this set-up, and it looks like that is indeed what it is (a simple photodetector with an amplifier), but I am looking forward to an answer from the people who physically built the system.


Finally, Petros Boufounos sent in two positions at MERL; they are also listed on the compressive sensing jobs page:

Feb 11th, 2010. Postdoctoral Researcher, Compressive Sensing, MERL, Cambridge, MA

MERL is seeking qualified and high-caliber applicants for a postdoctoral researcher position in the Multimedia Group. The successful candidate will be expected to perform original research in the area of Compressive Sensing, with particular application in remote sensing and radar signal processing. The candidate may also elect to devote part of his time to independent research. This is a 1-year appointment with possibility for extension.

Qualifications:
• Applicants are expected to hold a Ph.D. degree in Electrical Engineering, Computer Science, or closely related field.
• Background in signal and/or image processing, especially in the area of sparse approximation and Compressive Sensing.
• Experience with processing remotely sensed data (SAR, radar, multispectral imaging) and/or array processing.
• Applicants must have a strong publication record demonstrating novel research achievements.
• Excellent programming skills (C/C++, Matlab, etc.) are required.
• Excellent presentation, written and inter-personal communication skills are expected.

Mitsubishi Electric Research Laboratories, Inc. is an Equal Opportunity Employer.
PRINCIPALS ONLY apply via email to petrosb@merl.com with "postdoctoral researcher" in the subject
No phone calls please.

Feb 11th, 2010. Internship: Compressive Sensing, MERL, Cambridge, MA

Development of new sensing technologies using principles from compressive sensing. The project involves development of hardware and/or simulation of physical systems. The ideal candidate should have significant signal processing experience, and should be comfortable with Matlab and C/C++. Experience in the areas of compressive sensing, sparse approximation, and radar signal processing is desirable but not necessary. Interested candidates apply to Petros Boufounos (petrosb@merl.com) with "summer internship" in the subject. No phone calls please.


Photo: 3D view of an Enceladan landscape. This 3D view of Enceladus is based upon a topographic model calculated using photoclinometry on a single Cassini image of the moon. It is looking along a newly named set of fractures named Bishangarh Fossae. Credit: Image data: NASA / JPL / SSI; 3D model: Bernhard Braun. Via the Planetary Society blog.
