Let me summarize what we have so far (Parts 1, 2, 3, and 4). We are interested in the following question: given y (the result of applying A to x), can one recover that same x by solving y = (A + \epsilon E) x with a specific solver? A and E are random Gaussian matrices, while \epsilon is chosen at will. We evaluate the possibility of recovering x as a function of parameters such as k/N, the sparsity level of x, and m/N, the ratio of measurements to the ambient dimension. The linear system is underdetermined when m/N is less than 1, exactly determined when m = N, and overdetermined when m is greater than N.
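The setup above can be sketched in a few lines of numpy. This is a minimal sketch of one trial, not the code behind the figures; the specific values of N, m, k and \epsilon, and the 1% success tolerance, are assumptions for illustration.

```python
import numpy as np

# Hypothetical parameters for one trial (values chosen for illustration).
N = 100      # ambient dimension
m = 50       # number of measurements, m/N = 0.5: underdetermined
k = 5        # sparsity of x, k/N = 0.05
eps = 0.005  # perturbation level on A (0.5%)

rng = np.random.default_rng(0)

# Random Gaussian measurement matrix A and perturbation E.
A = rng.standard_normal((m, N))
E = rng.standard_normal((m, N))

# k-sparse ground truth x and exact measurements y = A x.
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)
y = A @ x

# Recovery is attempted with the perturbed matrix A + eps*E.
A_pert = A + eps * E

# Least-squares recovery: with m < N, lstsq returns the minimum-norm
# solution, which is generally not the sparse x (hence the trivial
# least-squares graph in the underdetermined regime).
x_ls, *_ = np.linalg.lstsq(A_pert, y, rcond=None)

# Assumed success criterion: relative error below 1%.
success = np.linalg.norm(x_ls - x) / np.linalg.norm(x) < 1e-2
```

Sweeping k/N and m/N over a grid and averaging `success` over many trials is what produces the phase-transition diagrams discussed below.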
In the noiseless case, i.e. \epsilon = 0, we have the following graphs for an L_1 solver (leading to the well-known Donoho-Tanner phase transition for sparse recovery) and for the Least Squares solver (a trivial graph):
In the noisy case, \epsilon = 0.5%, here is the diagram for the Least Squares solver:
For the SL0 solver (solving the Basis Pursuit problem):
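For reference, the Basis Pursuit problem min ||x||_1 subject to Bx = y can be solved as a linear program via the standard positive/negative split x = u - v. This is a hedged sketch of that equivalent formulation, not the SL0 algorithm used in the experiments; the function name is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(B, y):
    """Solve min ||x||_1 subject to B x = y as a linear program.

    Uses the split x = u - v with u, v >= 0, so that
    ||x||_1 = sum(u) + sum(v). Not SL0 itself, just an
    equivalent way to obtain the Basis Pursuit solution.
    """
    m, N = B.shape
    c = np.ones(2 * N)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([B, -B])          # constraint: B(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

# Toy example: a 1-sparse vector observed through 4 of 8 Gaussian measurements.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 8))
x_true = np.zeros(8)
x_true[2] = 1.5
x_hat = basis_pursuit(B, B @ x_true)
```

The returned `x_hat` satisfies the measurement constraint and has minimal L1 norm among all such solutions, which is exactly the property the phase-transition diagrams probe.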
And for the cross-and-bouquet approach with the SL0 solver, we have:
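The cross-and-bouquet idea (after Wright and Ma) is to absorb the model mismatch, here the term \epsilon E x, into an explicit error vector e, and then minimize ||x||_1 + ||e||_1 over the augmented unknown [x; e]. A minimal sketch of the augmented system, with a hypothetical helper name; the augmented matrix is then handed to any L1 solver (SL0 in the experiments above):

```python
import numpy as np

def cross_and_bouquet_system(B, y):
    """Build the augmented system [B | I] [x; e] = y.

    The mismatch between the perturbed matrix and the true one is
    modeled as a sparse error e, so a single L1 solve over the
    augmented unknown [x; e] recovers both.
    """
    m, N = B.shape
    B_aug = np.hstack([B, np.eye(m)])   # unknown is [x; e], length N + m
    return B_aug, y

# Example: an m-by-N system becomes m-by-(N + m).
B_aug, y_aug = cross_and_bouquet_system(np.zeros((4, 10)), np.zeros(4))
```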
What does this tell us?
At this level of \epsilon (or imprecision on A) equal to 0.5%, the L1 solver provides a small advantage over Least Squares in the underdetermined range. This advantage is limited to sparsity levels below 10% and to a number of measurements (equations) less than half the size of the ambient space.
A smaller \epsilon implies a better return on investment for using an L1 solver (BP) in the underdetermined case.
A larger \epsilon implies that the Least Squares solver can find a solution in the overdetermined case only if that solution does not exceed a critical sparsity threshold.
What is the takeaway so far?
If you want to do compressive sensing and are using Basis Pursuit, you'd better know the coefficients of your measurement matrix to better than 0.5%.
If you want to simply sense a signal and are using Least Squares for recovery, a 2% uncertainty means that you need more than six times as many measurements as unknowns, and the sparsity level of the solution will still be limited.
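The takeaways above come from sweeping the (m/N, k/N) grid and measuring empirical success rates. A hypothetical re-implementation of one grid point of the Least Squares sweep; the function name, trial count, and 1% tolerance are assumptions:

```python
import numpy as np

def ls_success_rate(m, N, k, eps, trials=20, tol=1e-2, seed=0):
    """Empirical Least Squares recovery rate at one (m/N, k/N) grid point.

    For each trial: draw Gaussian A and E, a k-sparse x, form y = A x,
    solve the least-squares problem with the perturbed matrix A + eps*E,
    and count success when the relative error is below tol.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, N))
        E = rng.standard_normal((m, N))
        x = np.zeros(N)
        x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
        y = A @ x
        x_hat, *_ = np.linalg.lstsq(A + eps * E, y, rcond=None)
        if np.linalg.norm(x_hat - x) <= tol * np.linalg.norm(x):
            hits += 1
    return hits / trials

# Sanity check: heavily overdetermined with no perturbation is easy.
rate = ls_success_rate(m=200, N=20, k=2, eps=0.0)
```

Repeating this over a grid of m/N and k/N values (and swapping in an L1 solver in place of `lstsq`) reproduces the kind of phase diagrams shown above.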
Next, we'll revisit these cases with an implementation of the LASSO instead of an implementation of Basis Pursuit (with or without the cross-and-bouquet approach).