Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For many cases of interest, the running time is just O(N log N), where N is the length of the signal.
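As a rough illustration of the iteration described in the abstract (my own sketch, not the authors' reference implementation), a minimal CoSaMP loop in Python/NumPy might look like the following. The dense matrix, the fixed iteration count, and the problem sizes are assumptions for the sake of the example; the paper itself only needs matrix-vector multiplies with the sampling matrix.

```python
import numpy as np

def cosamp(Phi, u, s, iters=10):
    """Minimal CoSaMP sketch: recover an s-sparse x from samples u = Phi @ x."""
    m, N = Phi.shape
    a = np.zeros(N)          # current approximation
    v = u.copy()             # residual of the samples
    for _ in range(iters):
        y = Phi.T @ v                          # signal proxy
        omega = np.argsort(np.abs(y))[-2*s:]   # 2s largest proxy entries
        T = np.union1d(omega, np.flatnonzero(a))  # merge with current support
        # Least-squares estimate on the merged support
        b, *_ = np.linalg.lstsq(Phi[:, T], u, rcond=None)
        keep = np.argsort(np.abs(b))[-s:]      # prune to the s largest
        a = np.zeros(N)
        a[T[keep]] = b[keep]
        v = u - Phi @ a                        # update the residual
    return a
```

On a small noiseless Gaussian instance this recovers the sparse signal exactly once the correct support is identified, since the pruned least-squares step then reproduces it.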
The authors also have hardware implementation in mind, as shown in this passage:
Even though partial Fourier matrices may require additional samples to achieve a small restricted isometry constant, they are more interesting for the following reasons [2]. There are technologies that acquire random Fourier measurements at unit cost per sample.
- The sampling matrix can be applied to a vector in time O(N log N).
- The sampling matrix requires only O(m log N) storage.
- Other types of sampling matrices, such as the random demodulator [33], enjoy similar qualities.

These traits are essential for the translation of compressive sampling from theory into practice.
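As a toy sketch of what those two bullet points buy you (my own illustration, not from the paper): a partial Fourier sampling matrix never needs to be formed explicitly. Storing the m chosen row indices suffices, and both the matrix and its adjoint are applied through an FFT in O(N log N) time. The sizes N = 1024 and m = 64 below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 1024, 64
rows = rng.choice(N, size=m, replace=False)   # which Fourier rows we sample

def sample(x):
    """Apply Phi (an m-row partial Fourier matrix) via the FFT: O(N log N)."""
    return np.fft.fft(x)[rows] / np.sqrt(N)

def sample_adjoint(y):
    """Apply Phi*, also via an FFT: embed y at the sampled rows, invert."""
    full = np.zeros(N, dtype=complex)
    full[rows] = y
    return np.fft.ifft(full) * np.sqrt(N)
```

The adjoint pair is exactly what an iterative recovery algorithm like CoSaMP needs, since the proxy step multiplies the residual by Phi*.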
I like this paper for many reasons, including that it brings an awareness of other current reconstruction methods. I should follow their classification in my big picture page. As it turns out, I had not made clear the difference between convex relaxation and combinatorial algorithms. The authors point out:
The literature describes a huge number of approaches to solving this problem. They fall into three rough categories:
- Greedy pursuits: These methods build up an approximation one step at a time by making locally optimal choices at each step. Examples include OMP [34], stagewise OMP (StOMP) [12], and regularized OMP (ROMP) [27, 26].
- Convex relaxation: These techniques solve a convex program whose minimizer is known to approximate the target signal. Many algorithms have been proposed to complete the optimization, including interior-point methods [2, 23], projected gradient methods [13], and iterative thresholding [10].
- Combinatorial algorithms: These methods acquire highly structured samples of the signal that support rapid reconstruction via group testing. This class includes Fourier sampling [18, 20], chaining pursuit [15], and HHS pursuit [16], as well as some algorithms of Cormode/Muthukrishnan [9] and Iwen [21].
At present, each type of algorithm has its native shortcomings. Many of the combinatorial algorithms are extremely fast (sublinear in the length of the target signal), but they require a large number of somewhat unusual samples that may not be easy to acquire. At the other extreme, convex relaxation algorithms succeed with a very small number of measurements, but they tend to be computationally burdensome. Greedy pursuits, in particular the ROMP algorithm, are intermediate in their running time and sampling efficiency.
The analysis is inspired by the work on ROMP [27, 26] and the work of Candes-Romberg-Tao [3] on convex relaxation methods.
An explanation of the relative performance shown in the following table can be found on page 21.
We probably need a living document tracking these performance figures.