
Tuesday, November 22, 2011

Adaptive Compressive Sensing and a short Q&A with Mark Davenport.

Caught this on Mark Davenport's Twitter feed.

I responded that I would love to see a reconciliation between their 'negative' result and Figure 4 of Adaptive dynamic range matching for spectroscopic measurements by Joseph Kinast and Michael Gehm (featured earlier on Nuit Blanche here). Maybe it's an estimation issue as opposed to a reconstruction issue, or something else, but I am pretty sure I am not the only one asking.


Mark kindly responded, but let us first read the paper On the Fundamental Limits of Adaptive Sensing by Ery Arias-Castro, Emmanuel Candès and Mark Davenport. The abstract reads:
Suppose we can sequentially acquire arbitrary linear measurements of an n-dimensional vector x resulting in the linear model y = Ax + z, where z represents measurement noise. If the signal is known to be sparse, one would expect the following folk theorem to be true: choosing an adaptive strategy which cleverly selects the next row of A based on what has been previously observed should do far better than a nonadaptive strategy which sets the rows of A ahead of time, thus not trying to learn anything about the signal in between observations. This paper shows that the folk theorem is false. We prove that the advantages offered by clever adaptive strategies and sophisticated estimation procedures -- no matter how intractable -- over classical compressed acquisition/recovery schemes are, in general, minimal.
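For readers who want to poke at this, here is a minimal sketch of the measurement model in the abstract. The Gaussian rows, the noise level, and all parameter values are my own illustrative choices; the paper's results hold for arbitrary adaptive strategies.

```python
import numpy as np

# Minimal sketch of the model y = A x + z from the abstract.
# Gaussian rows, Gaussian noise, and all parameters are illustrative choices.
rng = np.random.default_rng(0)

n, k, m = 1000, 10, 100        # ambient dimension, sparsity, measurement budget
sigma = 0.1                    # noise standard deviation

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)              # k-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)     # nonadaptive: all rows fixed ahead of time
z = sigma * rng.normal(size=m)
y = A @ x + z                                # the measurements

# An adaptive strategy would instead choose row i+1 of A
# only after observing y[0], ..., y[i].
```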

I noted in their conclusion the following:
This "negative" result should not conceal the fact that adaptivity may help tremendously if the SNR is sufficiently large, as illustrated in Section 3. Hence, we regard the design and analysis of effective adaptive schemes as a subject of important future research. At the methodological level, it seems important to develop adaptive strategies and algorithms for support estimation that are as accurate and as robust as possible. Further, a transition towards practical applications would need to involve engineering hardware that can effectively implement this sort of feedback, an issue which poses all kinds of very concrete challenges.

Here is Mark's response to my query:
Igor,
To respond to your comment, reconciling these two results is not hard. To quote from our paper:
"...our result does not say that adaptive sensing never helps.In fact, there are many instances in which it will. For example, when some or most of the nonzero entries in x are sufficiently large, they may be detected sufficiently early so that one can ultimately get a far better MSE than what would be obtained via a nonadaptive scheme..."
This can be seen most clearly in Figure 2 of our paper. The gist is that there are clearly some regimes where adaptive algorithms aren't doing substantially better than nonadaptive algorithms (on the left side of the figure, corresponding to the case where the amplitudes of the nonzero entries are relatively weak). I think that this would be very surprising to most people, and it is the main focus of our paper. However, the plot also shows that as the signal gets larger (or as the SNR of your measurements grows), you enter a regime (on the right side of the figure) where adaptivity helps a lot.
Without having read all the details of the paper you sent me, my guess is that they are operating in a regime closer to the right side of our plot, where the SNR is big enough that adaptivity really helps. However, it is also important to note that their work focuses a lot on the issue of dynamic range and saturation effects, which we do not consider in our paper. It is possible that these effects would change our conclusions -- it would certainly require changes in our analysis.
Let me know if you have any other questions.
Mark
Thanks, Mark, for this important insight; hardware makers in general thank you for it.
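To make Mark's two regimes concrete, here is a toy simulation, entirely my own construction and not the experiment from either paper: a two-stage adaptive scheme spends half its budget on Gaussian measurements to locate the support, then re-measures the candidate coordinates directly with the remaining budget, while the nonadaptive baseline uses all its measurements up front. The matched-filter support estimate and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_signal(n, k, mu):
    """k-sparse signal with nonzero amplitude mu and random signs."""
    x = np.zeros(n)
    s = rng.choice(n, size=k, replace=False)
    x[s] = mu * rng.choice([-1.0, 1.0], size=k)
    return x

def nonadaptive_mse(x, k, m, sigma):
    """One-shot scheme: all m Gaussian rows fixed ahead of time."""
    n = x.size
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    y = A @ x + sigma * rng.normal(size=m)
    S = np.argsort(np.abs(A.T @ y))[-k:]       # matched-filter support estimate
    xhat = np.zeros(n)
    xhat[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
    return np.mean((xhat - x) ** 2)

def adaptive_mse(x, k, m, sigma):
    """Two-stage scheme: locate the support, then re-measure the candidates."""
    n = x.size
    m1 = m // 2
    A1 = rng.normal(size=(m1, n)) / np.sqrt(m1)
    y1 = A1 @ x + sigma * rng.normal(size=m1)
    S = np.argsort(np.abs(A1.T @ y1))[-k:]     # stage 1: support estimate
    reps = (m - m1) // k                       # stage 2: split the rest over candidates
    xhat = np.zeros(n)
    for j in S:
        xhat[j] = np.mean(x[j] + sigma * rng.normal(size=reps))
    return np.mean((xhat - x) ** 2)

n, k, m, sigma = 1000, 10, 200, 1.0
for mu in (1.0, 10.0):                         # weak vs strong nonzeros
    x = make_signal(n, k, mu)
    print(f"mu={mu:4.1f}  nonadaptive MSE={nonadaptive_mse(x, k, m, sigma):.4f}"
          f"  adaptive MSE={adaptive_mse(x, k, m, sigma):.4f}")
```

When the nonzeros are weak, stage 1 misses the support and the two schemes land in the same ballpark; once the nonzeros are large enough to be found, the adaptive scheme's averaging step pulls well ahead, mirroring the left and right sides of their Figure 2.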


2 comments:

  1. It's really quite simple: let the "error" of an algorithm be the squared ratio of noise in the output to noise in the input.

    * Non-adaptively, the error is at least (k log (n/k)) / m and this is achievable. (standard)

    * Adaptively, the error can be as low as (k log log (n/k))/m. (I believe Jarvis gets this for some range of parameters; our algorithm gets it everywhere.)

    * Adaptively, the error can't be lower than k/m. (this paper.)

    There's no contention in the results, only in whether an exponential improvement in the dependence on n "doesn't help" or is "fundamentally better."
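To put rough numbers on the three rates in the comment above (constants dropped; the parameter values are hypothetical):

```python
import numpy as np

# Hypothetical parameters; constants in the bounds are ignored.
n, k, m = 1_000_000, 100, 5_000
print(k * np.log(n / k) / m)           # nonadaptive:       k log(n/k) / m     ~ 0.184
print(k * np.log(np.log(n / k)) / m)   # adaptive (upper):  k log log(n/k) / m ~ 0.044
print(k / m)                           # adaptive (lower):  k / m              = 0.020
```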

  2. Eric,

    I agree there is no contention between the results you mention and the claims in our paper, but we're not just arguing semantics here.

    It's a little more than that -- if the SNR is just a little bit higher than the "worst-case" level, then the error can be as low as k^2/(nm). That's a vast improvement over k/m (assuming that k << n). So there is seemingly the potential for a vast improvement, way beyond just removing the log factor.

    None of the results you mention are aiming at this level of improvement, because it is indeed impossible to obtain in general (i.e., for all input signals).
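With the same hypothetical parameters as in the sketch above, the higher-SNR rate Mark quotes sits below the worst-case floor by a factor of n/k:

```python
n, k, m = 1_000_000, 100, 5_000   # same hypothetical parameters as above
print(k / m)                      # worst-case adaptive floor:  0.02
print(k**2 / (n * m))             # higher-SNR adaptive rate:   2e-06, a factor n/k = 10**4 smaller
```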
