Besides reading the ArXiv batch of the week, some of y'all might be interested in digging into this one further over the weekend. It started because I was intrigued by what Ori Shental stated earlier, so I went ahead and tried a small experiment. Given a Gaussian vector of size m x 1 and a Gaussian dictionary of size m x N, how sparse is this Gaussian vector as a function of the elements of the dictionary? Can I get a sparse decomposition of this Gaussian noise realization with respect to the elements of the dictionary?

In short, we are solving for x in y = A x, where y is the known Gaussian vector and A the dictionary. In order to investigate this, I wrote a small script that uses the SL0 solver; here it is for m = 50 and N varying from 1 to 600. This is an attempt at seeing something, much like what pirates do. Obviously there are issues with the script, such as my hard threshold of 0.01 for deciding that a coefficient is nonzero, but here it is:

clear

sigma_min = 0.00004;  % SL0 target sigma
sdf = 0.95;           % sigma decrease factor
m = 50;               % length of the Gaussian vector y
itr = 10;             % trials averaged per dictionary size

for N = 1:600
  sta(N) = 0;
  for ii = 1:itr
    y = randn(m,1);                  % Gaussian vector to decompose
    A = randn(m,N);                  % Gaussian dictionary
    x = SL0(A, y, sigma_min, sdf);   % smoothed-l0 sparse solution
    xsol = abs(x) > 0.01;            % coefficients above the 0.01 threshold
    spars = sum(xsol);               % number of "nonzero" coefficients
    sta(N) = sta(N) + spars;
  end
  sta(N) = sta(N)/itr;               % average sparsity over the trials
end

plot(sta)

and this is what we get running this script:

One can see rather quickly that a 50-element Gaussian vector can be decomposed, on average, with about 40 elements of a 100-element dictionary. That number goes down to 35 with a 300-element dictionary, or a sparsity fraction of 0.7! Intuitively, we would expect that fraction to be 1, i.e. the decomposition of that vector to be full.
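For those who want to poke at this without the SL0 package, here is a sketch of the same experiment in Python, swapping SL0's smoothed-l0 for plain basis pursuit (l1 minimization) solved as a linear program with scipy.optimize.linprog. The function name bp_sparsity and the reuse of the 0.01 threshold are my choices, not part of the script above; SL0 and basis pursuit will generally not return identical solutions, but the counting idea is the same.

```python
import numpy as np
from scipy.optimize import linprog

def bp_sparsity(m, N, thresh=0.01, seed=0):
    """Draw y ~ N(0, I_m) and A with iid N(0,1) entries, solve
    min ||x||_1 s.t. A x = y by LP, and count coefficients above thresh."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(m)
    A = rng.standard_normal((m, N))
    # Standard LP reformulation: x = u - v with u, v >= 0,
    # minimize sum(u + v) subject to A u - A v = y.
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs-ds")
    x = res.x[:N] - res.x[N:]
    return int(np.sum(np.abs(x) > thresh))

print(bp_sparsity(50, 300))
```

Since the dual simplex returns a vertex of the feasible polytope, the basis pursuit solution has at most m nonzeros here, which already caps the count at 50 for any N > 50; the interesting question, as in the plot above, is how far below that cap it lands.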

*What can be improved in this script?*