
Monday, February 08, 2010

Monday Morning Algorithm 17: Compressive Sensing Linear Black Box Discovery

On Saturday, I wrote about an algorithm that is trivial to implement with naive linear algebra. It is so trivial I am not sure it even has a name. Anyway, I just implemented it and, much like the original l_1-magic tests, it looks like magic! I owe the wording Linear Black Box to Sergey Ten, but if anybody knows of a better name...

I was also corrected by Stephen, an anonymous reader, on how to introduce the problem, so let me use his significantly better introduction:

You are trying to infer an m x n matrix A with unknown elements. The only way to learn about this matrix is through matrix-vector products Ax = y. That is, you construct a vector x, perform the multiplication Ax, and finally analyze the resulting vector y of observations.

In reality, the elements of the matrix A consist of individual features of a physical system we are trying to learn about. Because this physical system A is linear, we know that the act of physically measuring the system is equivalent to the matrix-vector multiplication Ax = y.
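
In MATLAB terms, this measurement model can be sketched as follows; the function handle measure and the hidden matrix Ahidden are my notation for the physical apparatus, not part of the original description:

% A sketch of the measurement model: the physical system is a black
% box that, given a probe x, returns y = A*x. In a real experiment,
% 'measure' would be the apparatus itself; here a hidden sparse
% matrix simulates it.
m = 1000; n = 2000;
Ahidden = sprand(m, n, 1e-5);  % stand-in for the unknown physical system
measure = @(x) Ahidden*x;      % the only access we have to the system
x = randn(n, 1);               % a probe vector we construct
y = measure(x);                % the resulting m x 1 vector of observations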


The trivial algorithm uses a series of probe vectors x equal to the canonical basis vectors to read off the columns of A one at a time (sketched below). In our case though, we have the additional information, from the physics of the system, that A is sparse [Update: However, we do not know the sparsity pattern of A]. We also have the constraint that the number of analog or physical matrix-vector multiplications Ax must be kept small. In the context of plain linear algebra, there is really little you can do with this sparsity information about A, and you are therefore stuck with having to perform n measurements. In Saturday's entry, I sketched a solution where the number of measurements could be of the order of k*log(n), where k represents the sparsity of matrix A. Today's entry is a self-contained prototype of that implementation. It uses GPSR, but it could use any of the other algorithms listed in the reconstruction section of the big picture in compressive sensing.
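
First, for comparison, here is a sketch of the trivial approach, reusing the measure handle from the sketch above: probe with every canonical basis vector e_j and read off one column of A per physical measurement, for n = 2000 measurements in total.

% Sketch of the trivial approach: n canonical-basis probes, one per
% column of A. No amount of offline computing reduces this count.
Atrivial = zeros(m, n);
for j = 1:n
    e = zeros(n, 1);
    e(j) = 1;                      % canonical basis vector e_j
    Atrivial(:, j) = measure(e);   % one physical measurement per column
end

With that baseline in mind, here is the prototype: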

% This code implements a Linear Black Box problem
% when one knows that the unknown linear system is sparse.
% More information can be found here:
%
% http://nuit-blanche.blogspot.com/2010/02/cs-compressive-sensing-linear-algebra.html
%
% Written by Igor Carron
% This script/program is released under the Creative Commons License
% with Attribution Non-commercial Share Alike (by-nc-sa)
% http://creativecommons.org/licenses/by-nc-sa/3.0/
% Disclaimer: the short answer is that this script is for educational purposes only.
% More Disclaimer: http://igorcarron.googlepages.com/disclaimer
%


clear;
m = 1000;   % number of rows of A (size of each observation y)
n = 2000;   % number of columns of A (size of each probe x)
n1 = 50;    % initial number of random probes/measurements

% The unknown system: a 1000 x 2000 matrix with only 3 nonzero entries.
A = zeros(m,n);
A(2,1324) = 3;
A(254,802) = 25;
A(6,1) = 12;

% 1. Start with a matrix X of n1 random probe vectors (the size of X
% will be n x n1) and record the corresponding responses Y = A*X

X = randn(n,n1);
Y = A*X;

% 2. Produce a random vector of size n x 1 that will be added
% as one of the columns of X

v = randn(n,1);
X =[X v];

% 3. Increase n1 by 1

n1 = n1 + 1;

% 4. Physically compute the response
% y = A x (x is an n x 1 vector, y is an m x 1 vector)

y = A*v;

% 5. Add the newly found y to the columns of Y (an m x n1 matrix)

Y = [Y y];

Xt = X';
Yt = Y';

% 6. Solve for all vectors at_i (i goes from 1 to m)
% in yt_i = Xt at_i, where yt_i is an n1 x 1 vector,
% Xt is an n1 x n matrix, and at_i is an n x 1 vector.
% You may have noticed that Xt is underdetermined, i.e. it has too
% few rows compared to the number of columns, but there are
% many different solvers that can handle this system of equations.

At = zeros(n,m);   % preallocate the transposed estimate of A
for i = 1:m
    ytilde = Yt(1:n1,i);
    at_i = GPSR_BB(ytilde, Xt, 0.01*max(abs(Xt'*ytilde)), 'Continuation', 1);
    At(:,i) = at_i;
end
AA = At';
% Note: mesh resets the current axes, so caxis and title must be
% called after mesh, not before.
figure(1)
mesh(A); colormap(jet); colorbar
caxis([0 20]);
title('Initial matrix')
figure(2)
mesh(AA); colormap(jet); colorbar
title(strcat('Reconstructed from :', num2str(n1), ...
    ' measurements instead of :', num2str(n)))
figure(3)
mesh(abs(AA-A)); colormap(jet); colorbar
caxis([1e-10 10]);
title('Reconstruction error matrix')


% 7. If convergence is observed for At (i.e. for all
% the columns at_i) then stop; if not, go to step 2.

It is a prototype and it is simple, but I think it is a good start for playing around with and extending with additional features, such as the convergence loop of step 7; one possible sketch of that loop follows.
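
For instance, here is one way the step-7 loop might be wrapped around steps 2 through 6. This is only a sketch: the relative-change test and the tolerance tol are my choices, not something prescribed by the algorithm.

% Sketch of the step-7 loop: keep adding random probes until the
% recovered matrix stops changing. The tolerance is an arbitrary choice.
tol = 1e-3;
At_old = zeros(n, m);
converged = false;
while ~converged
    v = randn(n, 1);                 % step 2: new random probe
    X = [X v]; n1 = n1 + 1;          % step 3
    Y = [Y A*v];                     % step 4: a physical measurement
    Xt = X'; Yt = Y';
    At = zeros(n, m);
    for i = 1:m                      % step 6: one l1 problem per row
        ytilde = Yt(1:n1, i);
        At(:, i) = GPSR_BB(ytilde, Xt, ...
            0.01*max(abs(Xt'*ytilde)), 'Continuation', 1);
    end
    converged = norm(At - At_old, 'fro') < tol*norm(At, 'fro');  % step 7
    At_old = At;
end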

What is important to see from this simple example:
  • In the trivial setting, I would need n = 2,000 measurements of my physical system to get the full matrix A (one canonical-basis probe per column). I would probably have to put much thought into how to run this in parallel in the analog system if I wanted to do it fast.
  • In the Compressive Sensing setting, I need about 50 measurements for an error of a few percent on the coefficients of A but, and this is the beauty of it, I can use all the computational power available on a supercomputer or in the cloud to find A. More specifically, every row-recovery problem is self-contained: the problem is embarrassingly parallel (see the sketch after this list). This is something I could not do in the trivial setting, i.e. no parallel supercomputer could help me perform my physical experimentation faster! Also, as an aside, when was the last time you heard an experimentalist say she needed a cluster of exactly 1026 CPUs to get results from her experiment... yeah, never, I thought so.
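To make that parallelism concrete, here is a minimal sketch of the step-6 loop using MATLAB's parfor (this assumes the Parallel Computing Toolbox; without an open pool, parfor simply runs serially):

% Sketch: the m row-recovery problems are independent, so the step-6
% loop parallelizes directly with parfor.
At = zeros(n, m);
parfor i = 1:m
    ytilde = Yt(1:n1, i);
    At(:, i) = GPSR_BB(ytilde, Xt, ...
        0.01*max(abs(Xt'*ytilde)), 'Continuation', 1);
end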
The next step beyond this prototype would probably be to include a constraint reflecting the fact that the coefficients must be positive, and so forth; a naive sketch of that is below. You can download the code here. This algorithm and others are listed in the Monday Morning Algorithm series.
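
As a crude placeholder for such a positivity constraint, one could simply clamp the recovered coefficients; a proper treatment would use a solver with a built-in nonnegativity constraint instead of this post-processing.

% Naive sketch of a positivity constraint: clamp negative entries of
% the recovered coefficients after the l1 solve.
At = max(At, 0);
AA = At';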

Credit photo: NASA, Landsat view of New Orleans.

1 comment:

  1. Re: Monte Carlo Algebra
    Dear Igor,
    Maybe this is just a train of thought, but your Monday Morning Linear Black Box random algorithm reminds me of:

    Gordon, R. (1970). On Monte Carlo algebra. J. Appl. Prob. 7, 373-387.
    Abstract: A Monte Carlo method is proposed and demonstrated for obtaining an approximate algebraic solution to linear equations with algebraic coefficients arising from first order master equations at steady state. Exact solutions are hypothetically obtainable from the spanning trees of an associated graph, each tree contributing an algebraic term. The number of trees can be enormous. However, because of a high degeneracy, many trees yield the same algebraic term. Thus an approximate algebraic solution may be obtained by taking a Monte Carlo sampling of the trees, which yields an estimate of the frequency of each algebraic term. The accuracy of such solutions is discussed and algorithms are given for picking spanning trees of a graph with uniform probability. The argument is developed in terms of a lattice model for membrane transport, but should be generally applicable to problems in unimolecular kinetics and network analysis. The solution of partition functions and multivariable problems by analogous methods is discussed.

    This may be the first, if not only, example of Monte Carlo applied to symbolic math, rather than numbers or arrays of numbers. Perhaps it could be regarded as a compressive sampling of trees?
    Yours, -Dick Gordon gordonr@cc.umanitoba.ca
