Sunday, April 15, 2012

Wavelet Chaos and Compressive Sensing

I realize that the title of this entry has many buzzwords, but the following is an attempt at evaluating, through a rule of thumb, whether a compressive sensing approach to the Polynomial Chaos expansion used in Uncertainty Quantification is a good move... or not. Constructive criticism, as usual, is welcome.

To do that, I followed the generic idea of Polynomial Chaos decomposition and compressive sensing as highlighted in the work of Alireza Doostan [1,2] or that of Lionel Mathelin and Kyle Gallivan [3]. To get an example running, I used the PCE_LEGENDRE code, which implements a Polynomial Chaos Expansion using Legendre polynomials (I could have used Chebfun instead). In this case:


"...We wish to analyze a stochastic PDE of the form:
                                         -div A(X,Y) grad U(X,Y) = F(X) where
  • U(X,Y) is an unknown scalar function,
  • A(X,Y) is a given diffusion function,
  • F(X) is a given right hand side function,
  • X represents dependence on spatial variables,
  • Y represents dependence on stochastic variables..."



The usage is pretty simple:


Usage:

[ nxy, bxy, fxy ] = pce_legendre_linear_assemble ( n, p )

where the input is:
  • n, the number of factors in the probability space.
  • p, the maximum degree of the basis functions for the probability space.
and the output is:
  • nxy, the order of the matrix BXY. NXY = NX * NY.
  • bxy, the system matrix, stored in sparse format.
  • fxy, the right hand side.


So then, by providing a shape for A and F, one can modify pce_legendre_linear_assemble.m and run something like:

[nxy,bxy,fxy]=pce_legendre_linear_assemble(4,3);

Then, by evaluating the least squares solution for a specific fxy, I get a vector containing the Polynomial Chaos expansion coefficients of interest for the problem. I sort them and plot them in the figure shown below; a minimal sketch of this workflow follows. By playing a bit, one notes that the size of the matrix to be inverted indeed grows exponentially as a function of n and p.
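Here is a minimal MATLAB sketch of that workflow; the backslash least-squares solve and the plotting details are my assumptions, not part of PCE_LEGENDRE itself:

% Assemble the stochastic Galerkin system for n = 4 factors, degree p = 3.
[nxy, bxy, fxy] = pce_legendre_linear_assemble(4, 3);

% Least-squares solve for the Polynomial Chaos expansion coefficients.
x = bxy \ fxy;

% Sort the coefficient magnitudes and display their decay.
semilogy(sort(abs(x), 'descend'), 'o');
xlabel('coefficient index (sorted by magnitude)');
ylabel('|PCE coefficient|');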

[Figure: sorted magnitudes of the Polynomial Chaos expansion coefficients]

As a rule of thumb, it looks as though one would need about 2/3 of the coefficients to get a good representation. In the context of compressive sensing, this means that an acceptable approximation is only mildly sparse: about 1/3 of the coefficients are negligible. Since the matrix of this system of linear equations is square and becomes large, the idea of compressive sensing is to make it short and fat and still recover an acceptable solution. To keep the rule of thumb generic, we will not assume anything specific about the structure of this matrix. Hence, the initial system of equations

A x = y

with A an N x N matrix, will be transformed into

G A x = G y 

with G a random matrix of size m x N. The idea is now to solve the shorter and fatter system (GA) with the knowledge that x is sparse. Since the new matrix has no specific structure, the Donoho-Tanner (DT) phase transition applies. On top of the DT phase transition found here, I added the curve for k/N = 2/3 (at the endpoint m/N = 1, this is equivalent to k/m = 2/3).
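As a rough MATLAB sketch of this transformation (the Gaussian draw for G, the 0.9N row count, and the use of the CVX toolbox for the l_1 recovery are my assumptions for illustration):

% Compress the square system A x = y into a short, fat one.
N = size(A, 1);
m = round(0.9 * N);          % row count read off the DT graph below
G = randn(m, N) / sqrt(m);   % Gaussian sketching matrix (an assumption)
GA = G * A;
Gy = G * y;

% Recover the (assumed sparse) x by l_1 minimization, here with CVX.
cvx_begin
    variable xhat(N)
    minimize( norm(xhat, 1) )
    subject to
        GA * xhat == Gy
cvx_end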


[Figure: Donoho-Tanner phase transition with the k/N = 2/3 curve overlaid]

What do we read from this graph? For the level of sparsity being sought (only about 1/3 of the coefficients negligible, i.e., k/N = 2/3), an l_1 recovery effort would require a smaller matrix of size 0.9N x N instead of N x N! Indeed, with k/N = 2/3 and m/N = 0.9, the ratio k/m = (2/3)/0.9 ≈ 0.74 is where the overlaid curve crosses the DT boundary.

This is such a small improvement that it is probably not worth the effort of drastically changing the problem formulation. From that, one wonders when it would be a good move to attempt a compressive sensing approach. My guess is that, instead of a polynomial chaos expansion, one should really investigate a wavelet chaos expansion. The idea is that a wavelet chaos approximation is likely to be more efficient (i.e., sparser) when the stochastic functional is peaky.
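As a toy 1-D illustration of this intuition (the peaky test function, the Haar wavelet choice, and the reliance on the Wavelet Toolbox's wavedec are all my assumptions), one can compare how quickly the sorted coefficients decay in a Legendre basis versus a wavelet basis:

% A peaky function standing in for the stochastic functional.
t = linspace(-1, 1, 256)';
f = exp(-100 * (t - 0.3).^2);

% Legendre coefficients via the three-term recurrence and least squares.
K = 64;
P = ones(numel(t), K);
P(:, 2) = t;
for k = 2:K-1
    % k * P_k = (2k - 1) t P_{k-1} - (k - 1) P_{k-2}
    P(:, k+1) = ((2*k - 1) * t .* P(:, k) - (k - 1) * P(:, k-1)) / k;
end
cLeg = P \ f;

% Haar wavelet coefficients of the same samples (Wavelet Toolbox).
[cWav, ~] = wavedec(f, 6, 'haar');

% Compare the sorted coefficient magnitudes: the faster the decay,
% the sparser (and more compressive-sensing friendly) the representation.
semilogy(sort(abs(cLeg), 'descend'), 'o'); hold on;
semilogy(sort(abs(cWav), 'descend'), 'x');
legend('Legendre', 'Haar wavelets');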



References:



