Found on the interwebs:
A Class of Novel STAP Algorithms Using Sparse Recovery Technique by Hao Zhang, Gang Li, Huadong Meng. The abstract reads:
A class of novel STAP algorithms based on sparse recovery techniques is presented. The intrinsic sparsity of the distribution of clutter and target energy on the spatial-frequency plane is exploited from the viewpoint of compressed sensing. The original sample data and the distribution of target and clutter energy are connected by an ill-posed linear algebraic equation, and the popular $L_1$ optimization method can be used to search for its sparse solution. Several new filtering algorithms acting on this solution are designed to clean the clutter component off the spatial-frequency plane effectively, so that otherwise invisible targets buried in clutter can be detected. The method above is called CS-STAP in general. CS-STAP shows its advantage over conventional STAP techniques, such as SMI, in two ways. First, the resolution of CS-STAP in estimating the distribution of clutter and target energy is so high that the clutter energy can be annihilated almost completely by a carefully tuned filter; the output SCR of the CS-STAP algorithms far exceeds the requirement for detection. Second, CS-STAP requires a much smaller training sample support than the SMI method; even with only one snapshot (from the target range cell), CS-STAP can clearly reveal the existence of a target. The CS-STAP method thus displays great potential for use in heterogeneous situations. Experimental results on a dataset from the Mountaintop program provide evidence for our assertions about CS-STAP.
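The sparse-recovery step the abstract describes (searching for a sparse solution of an underdetermined linear system via L1 optimization) is just basis pursuit. Here is a minimal, self-contained sketch of that step alone, with a generic random matrix standing in for the STAP measurement operator; this is my own illustration, not the authors' CS-STAP code, and the dimensions and names are made up for the example:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 25, 50, 3          # measurements, ambient dimension, sparsity

# Sparse "scene": k nonzero entries (stand-ins for target/clutter energy)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # toy measurement matrix
b = A @ x_true

# Basis pursuit: min ||x||_1 s.t. Ax = b, via the standard LP split x = u - v
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With Gaussian measurements and k small relative to m, the L1 solution typically coincides with the sparsest one, which is the L0/L1 equivalence discussed in the excerpt below.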
Nice article, but my attention was drawn to a tidbit that we are bound to see again and again as engineering fields adopt compressive sensing:
Although many methods for constructing matrices satisfying the RIP (especially randomization methods) have been put forward [17], the lack of effective means for testing the RIP and estimating the RIP constant of a given deterministic matrix remains a hidden trouble for practitioners and users of compressed sensing [26], and we are no exception. It is very hard to verify theoretically the RIP of the matrix used in the sparse-recovery-based STAP operation, so the detailed conditions for L0/L1 equivalence will not be checked in our research. Fortunately, no severe consequences were generally encountered: simply using L1 optimization, we could find a solution of (13) with some sparsity (although it may not be the sparsest one). That is just what we want in most cases.
See my remark on the presentation by Hongkai Zhao on "A fast solver for radiative transport equation and applications in optical imaging". In a few words, we are bound to see more and more engineering fields shy away from random matrices, mostly because they are not physically feasible. It is therefore urgent that the community make mainstream the tools that check whether a given measurement matrix is acceptable for l1 recovery, such as those presented in On verifiable sufficient conditions for sparse signal recovery via L1 minimization by Anatoli Juditsky and Arkadi Nemirovski. An implementation is available at this site:
or in Testing the Nullspace Property using Semidefinite Programming by Alexandre d'Aspremont and Laurent El Ghaoui, where the NSProp implementation is now available at:
Both programs check a condition on a constant \alpha_k. Studying the value of \alpha_k on small matrices could allow one to investigate low-order models as well as specific dictionaries. These two programs are prominently listed in the measurement matrix section of the Big Picture.
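The \alpha_k tools above rely on LP/SDP relaxations. For readers who just want a quick first check, a far cruder but trivially computable proxy is the mutual coherence of the matrix, which yields the classical sufficient condition k < (1 + 1/mu)/2 for exact L1 recovery of any k-sparse vector. This sketch is not the Juditsky-Nemirovski or NSProp test, only the weaker coherence-based bound:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    G = A / np.linalg.norm(A, axis=0)     # unit-norm columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)           # ignore self-correlations
    return gram.max()

def coherence_sparsity_bound(A):
    """Sufficient condition (Donoho-Elad): L1 recovers every k-sparse
    vector measured by A whenever k < (1 + 1/mu)/2."""
    mu = mutual_coherence(A)
    return mu, (1.0 + 1.0 / mu) / 2.0

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 128))            # example deterministic input
mu, k_max = coherence_sparsity_bound(A)
print(f"mu = {mu:.3f}, guaranteed recovery for k < {k_max:.2f}")
```

The coherence bound is known to be very pessimistic compared with \alpha_k-style certificates, but it applies to any deterministic matrix and costs only one Gram-matrix computation.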
The problem of finding heavy hitters and approximating the frequencies of items is at the heart of many problems in data stream analysis. It has been observed that several proposed solutions to this problem can outperform their worst-case guarantees on real data. This leads to the question of whether some stronger bounds can be guaranteed. We answer this in the positive by showing that a class of “counter-based algorithms” (including the popular and very space-efficient FREQUENT and SPACESAVING algorithms) provide much stronger approximation guarantees than previously known. Specifically, we show that errors in the approximation of individual elements do not depend on the frequencies of the most frequent elements, but only on the frequency of the remaining “tail.” This shows that counter-based methods are the most space efficient (in fact, space-optimal) algorithms having this strong error bound. This tail guarantee allows these algorithms to solve the “sparse recovery” problem. Here, the goal is to recover a faithful representation of the vector of frequencies, f. We prove that using space O(k), the algorithms construct an approximation f̂ to the frequency vector f so that the L1 error ||f̂ − f||_1 is close to the best possible error min_{f′} ||f′ − f||_1, where f′ ranges over all vectors with at most k non-zero entries. This improves the previously best known space bound of about O(k log n) for streams without element deletions (where n is the size of the domain from which stream elements are drawn). Other consequences of the tail guarantees are results for skewed (Zipfian) data, and guarantees for accuracy of merging multiple summarized streams.
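To make the “counter-based” idea concrete, here is a minimal sketch of the SPACESAVING update rule the abstract refers to; this is my own illustration of the textbook algorithm, not the authors' code, and the class name is made up:

```python
class SpaceSaving:
    """SPACESAVING summary with at most k counters.

    Every tracked estimate overestimates the true count by at most the
    error recorded at insertion time (itself bounded by the stream tail).
    """

    def __init__(self, k):
        self.k = k
        self.counts = {}   # item -> (count, overestimation error)

    def update(self, item):
        if item in self.counts:
            c, e = self.counts[item]
            self.counts[item] = (c + 1, e)
        elif len(self.counts) < self.k:
            self.counts[item] = (1, 0)
        else:
            # Evict the minimum counter; the new item inherits its count,
            # recording that count as the maximum possible overestimate.
            victim = min(self.counts, key=lambda x: self.counts[x][0])
            c, _ = self.counts.pop(victim)
            self.counts[item] = (c + 1, c)

    def estimate(self, item):
        return self.counts.get(item, (0, 0))[0]
```

For example, feeding the stream a, a, a, b, c into a k = 2 summary keeps a exact (count 3) and, when c arrives, evicts b and credits c with its count plus one, so c is estimated at 2 with a recorded overestimate of 1.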
And The Big Picture site has recently been put on Citeulike:
http://www.citeulike.org/user/PaperCollector/article/4248192
woohoo, it now has a bibtex entry:
@article{citeulike:4248192,
citeulike-article-id = {4248192},
keywords = {compressivesensin},
posted-at = {2009-03-31 23:11:32},
priority = {2},
title = {Igor Carron - Compressive Sensing: The Big Picture},
url = {http://igorcarron.googlepages.com/cs}
}

Credit Photo: NASA, Soyuz Landing in Kazakhstan.