Before we get into using a pure compressive sensing solver, here is what was unexpected when drawing the Donoho-Tanner phase transition with a least squares solver. Let us take the example of epsilon = 2%. In other words, for every point in this graph, we are figuring out whether the least squares solver can recover the same solution for the initial and the perturbed linear systems. Yesterday, we could read about the (trivial) noiseless case: irrespective of its sparsity, no solution can be recovered when the system is underdetermined. On the other hand, the least squares solver can always recover all solutions when the system is overdetermined. As soon as some multiplicative noise is activated, the picture changes. For the case of 2% noise we get the following figure.
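To make the setup concrete, here is a minimal sketch of one trial of this experiment in Python/NumPy. The multiplicative noise model, the success tolerance, and the function name lsq_recovers are my assumptions for illustration, not the exact code behind these figures.

```python
import numpy as np

def lsq_recovers(m, N, k, eps=0.02, tol=0.05, seed=0):
    """One trial: build an m x N system with a k-sparse solution, perturb the
    matrix with multiplicative noise of level eps, and check whether plain
    least squares still returns (approximately) the same solution."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, N))               # coefficients of the initial system
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x[support] = rng.standard_normal(k)           # k-sparse solution
    b = A @ x                                     # exact right-hand side
    # each coefficient is only known up to eps (multiplicative noise)
    A_noisy = A * (1.0 + eps * rng.standard_normal((m, N)))
    x_hat, *_ = np.linalg.lstsq(A_noisy, b, rcond=None)
    # success criterion (assumed): small relative error between the two solutions
    return np.linalg.norm(x_hat - x) <= tol * np.linalg.norm(x)
```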
As it turns out, the transition shows that the sparsity level of the solution is now an important variable in recovering a solution to a linear system with 2% multiplicative noise. Let me explain the two tags, A and B, in the figure.
Tag B: a solution with sparsity 0.1 (= k/N) and an overdetermination of 14 (= m/N) can be found with a least squares solver.
Tag A: a solution with sparsity 0.6 (= k/N) and an overdetermination of 14 (= m/N) cannot be found with a least squares solver.
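In code, reusing the hypothetical lsq_recovers sketch above, these two tags become the following check (again, the tolerance and noise model are my assumptions, not the exact test behind the figure):

```python
N = 50
m = 14 * N                       # overdetermination m/N = 14, i.e. 700 equations
print(lsq_recovers(m, N, k=5))   # tag B: sparsity k/N = 0.1
print(lsq_recovers(m, N, k=30))  # tag A: sparsity k/N = 0.6
```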
In other words, given 50 unknowns (= N), a solution (tag B) with sparsity 5 can be recovered with a least squares solver from some 700 measurements (equations), while a solution (tag A) with sparsity 30 cannot be recovered with 700 measurements (equations). Actually, it looks like it can never be recovered with any number of measurements (equations). Let me reframe this another way:
Below a critical sparsity level, a sparse vector can be recovered with a simple least squares solver from an overdetermined linear system of equations, provided you know this system's coefficients up to some level of uncertainty.
To be sure, we are far from the underdetermined case (compressed sensing), but I am surprised that sparsity becomes an issue for the recovery of a solution with a least squares solver in the (multiplicative) noisy case. This result cannot be new.
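One could also sweep the sparsity to eyeball where this critical level sits. Here is a rough sketch, again built on the hypothetical lsq_recovers above, averaging over a few random draws since a single trial is not conclusive:

```python
N, m, trials = 50, 700, 20
for k in range(1, N + 1):
    successes = sum(lsq_recovers(m, N, k, seed=t) for t in range(trials))
    if successes < trials / 2:   # least squares fails on most draws
        print(f"empirical critical sparsity near k = {k} (k/N = {k/N:.2f})")
        break
```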
Hi,
your blog is interesting but sometimes fails to communicate to the non-CS expert. Maybe that is not in your interest.
In this case, I kind of follow the discussion, but the plots have no labels on their axes! (Always put labels on your axes!) What are you plotting?
JuanPi,
I mentioned what the axes were in the first part of this entry (part 1), but I agree with you that the axes ought to be labeled in all the entries. I have not done a summary yet of this whole undertaking, but basically I am reproducing the Donoho-Tanner transition for different solvers. As you can see, this is work in progress, but be reassured that in the summary version I'll explain all these things so that a proper context can be inferred from all these graphs.
If you are coming from today's entry, the big surprise here is that recovering vectors with a least squares solver requires the solution to be sparse when there is noise in the measurement matrix. When there is no noise, we all know that the least squares solver produces the best possible solution. The surprise is finding out that sparsity matters in a case where people expect no such dependency.
Hope it helped,
Cheers,
Igor.