While reflecting on an entry by Andres Corrada-Emmanuel entitled Guaranteeing Accuracy and Precision, I was reminded of an old and dumb idea of mine that I would call the "DUMB comprEssive Sensing reconsTruction Algorithm", or the DUMBEST algorithm for short. The idea is that the minimum number of projections is only known asymptotically. In practice, let's say one has 150 such random projections and no satisfactory reconstruction: what happens when one acquires another projection (bringing the total to 151)? It may well be that reconstruction becomes perfect with only 150 measurements, provided one uses the new projection together with 149 others taken from the initial set of 150. The point of the DUMBEST algorithm is that every time a new projection is added to a set of measurements of size n, one would attempt a combinatorial number of reconstructions, namely (n choose n-1) = n of them, each using the new projection and leaving out one measurement from the initial set. One could even go further in the recursion. One could also perform these reconstructions with different solvers and use a voting mechanism to single out the actual reconstructed signal (a toy sketch of this leave-one-out step is given after the list below). I realize it really is a dumb algorithm, but in some cases even a single compressive sensing measurement can be extremely costly to obtain, and one would probably welcome any means of recovering the initial signal from the minimum number of measurements. Part of the thinking behind this algorithm comes from two hunches:
- Different families of reconstruction algorithms (LP, IT, Greedy, Reweighted, ...) converge toward the same solution, either exactly or to within an epsilon. However, different algorithms are shown, in theory and in practice, to converge with different minimum numbers of measurements. This is rather unsettling: how do different reconstruction solvers qualitatively handle the same set of CS measurements? If the same solution is found by two solvers from different families of algorithms, we have probably reached the actual solution.
- Let us say that two projecting vectors are close to being collinear. If a vector were decomposed as a sum of these two vectors, its coefficients would be very large, which may prove difficult for certain solvers (an issue similar to non-normality). Removing one of these two vectors and replacing it with a vector that is not too close to being collinear with the other one renders the problem better posed.
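Purely to make the combinatorial bookkeeping concrete, here is a toy sketch of that leave-one-out step. Everything here is my own scaffolding rather than an established algorithm: basis pursuit solved as a linear program with scipy stands in for "a solver", the "voting" is reduced to checking that two subsets return the same reconstruction, and the names (dumbest_step, basis_pursuit) are made up for this post.

```python
# A toy sketch of the leave-one-out ("DUMBEST") idea described above.
# Assumptions (mine, not from the post): basis pursuit via scipy's linprog
# plays the role of the solver; A is the n x N matrix of the n projections
# gathered so far; (a_new, y_new) is the newly acquired projection and its
# measurement; x is deemed "recovered" when two different subsets agree.
import itertools
import numpy as np
from scipy.optimize import linprog


def basis_pursuit(A, y):
    """min ||x||_1 s.t. Ax = y, via the standard LP split x = u - v, u, v >= 0."""
    n, N = A.shape
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    if not res.success:
        return None
    return res.x[:N] - res.x[N:]


def dumbest_step(A, y, a_new, y_new, tol=1e-5):
    """Try every subset that keeps the new projection and drops one old one."""
    n = A.shape[0]
    candidates = []
    for drop in range(n):
        keep = [i for i in range(n) if i != drop]
        A_try = np.vstack([A[keep], a_new])
        y_try = np.append(y[keep], y_new)
        # (second hunch above) one could also skip subsets in which a_new is
        # nearly collinear with one of the kept rows, since those tend to be
        # ill-conditioned; not done here to keep the sketch short.
        x_hat = basis_pursuit(A_try, y_try)
        if x_hat is not None:
            candidates.append(x_hat)
    # Crude "voting": declare success if two different subsets return
    # (numerically) the same reconstruction.
    for xa, xb in itertools.combinations(candidates, 2):
        if np.linalg.norm(xa - xb) < tol * (1 + np.linalg.norm(xa)):
            return xa
    return None


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, k, n = 60, 4, 25
    x_true = np.zeros(N)
    x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((n, N)) / np.sqrt(n)
    y = A @ x_true
    a_new = rng.standard_normal(N) / np.sqrt(n)
    x_rec = dumbest_step(A, y, a_new, a_new @ x_true)
    print("recovered" if x_rec is not None else "no consensus yet")
```

Using two genuinely different solver families (say, a greedy method next to the LP above) and voting across them would be closer to the spirit of the first hunch; the agreement test above is just the cheapest possible stand-in.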
In a different area, here is a very nice mind map by Hon-dani on Manifold Based Signal Recovery:
and another one here. I am expecting another one from another reader of this blog at some point in time (wink, wink). (This is one of those rare instances in which I cannot ask for permission to repost an item. I tried to ask for permission to repost this mind map, but the site is in Japanese and the comment section seems to be closed unless I register on the Japanese site. Since I do not read Japanese, I am stuck. If the author wants me to remove it, please e-mail me and I'll do so in a heartbeat.)
Also found on the internets: Counter Braids: Asymptotic Optimality of the Message Passing Decoding Algorithm by Yi Lu, Andrea Montanari and Balaji Prabhakar. The abstract reads:
A novel counter architecture, called Counter Braids, has recently been proposed for per-flow counting on high-speed links. Counter Braids has a layered structure and compresses the flow sizes as it counts. It has been shown that with a Maximum Likelihood (ML) decoding algorithm, the number of bits needed to store the size of a flow matches the entropy lower bound. As ML decoding is too complex to implement, an efficient message passing decoding algorithm has been proposed for practical purposes. The layers of Counter Braids are decoded sequentially, from the most significant to the least significant bits. In each layer, the message passing decoder solves a sparse signal recovery problem. In this paper we analyze the threshold dimensionality reduction rate (d-rate) of the message passing algorithm, and prove that it is correctly predicted by density evolution. Given a signal in R^n_+ with \epsilon n non-vanishing entries, we prove that one layer of Counter Braids can reduce its dimensionality by a factor 2.08 \epsilon log(1/\epsilon) + O(\epsilon). This essentially matches the rate for sparse signal recovery via L1 minimization, while keeping the overall complexity linear in n.
From the article:
Comparison with Compressed Sensing: Compressed sensing [5], [6] reduces below the Nyquist rate the number of samples needed to recover sparse signals. In other words, it reduces the dimensionality of a signal known to be sparse, using suitable non-adaptive projections. Counter Braids, on the other hand, compresses a signal with decreasing digit entropy: it reduces the actual number of bits needed to store the signal and achieves the Shannon entropy lower bound. Interestingly, decoding each layer of CB also solves a dimensionality reduction problem. By reducing the number of counters in each layer, and assigning an appropriate number of bits per counter, CB achieves an overall reduction in bits. In this way, CB performs compression via dimensionality reduction.
and the conclusion:
We analyzed the achievable dimensionality reduction rate for single-layer Counter Braids and found it to be very close to the Donoho and Tanner threshold for non-negative signals with the L1 minimization recovery algorithm. Since the complexity of the message passing algorithm is essentially linear, it is a more efficient solution to non-negative sparse signal recovery than L1 minimization.
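Since the conclusion compares the message passing decoder to non-negative L1 minimization, here is a small sketch of what a single layer of counters looks like as a sparse non-negative linear sketch. This is my own illustration with made-up parameters, and it decodes with the L1 baseline rather than the paper's message passing rules, which I will not attempt to reproduce here.

```python
# A minimal sketch (mine, not the authors' code) of a single layer of counters:
# each flow is hashed to a few counters, and each counter stores the sum of the
# sizes of the flows hashed to it, i.e. c = A @ f with A a sparse 0/1 matrix.
# Decoding below uses the non-negative L1 baseline the conclusion refers to,
# not the paper's linear-complexity message passing algorithm.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_flows, n_counters, hashes_per_flow = 200, 80, 3

# Build the sparse 0/1 hashing matrix.
A = np.zeros((n_counters, n_flows))
for flow in range(n_flows):
    for ctr in rng.choice(n_counters, hashes_per_flow, replace=False):
        A[ctr, flow] = 1.0

# Sparse, non-negative flow sizes (most flows send nothing).
f = np.zeros(n_flows)
active = rng.choice(n_flows, 10, replace=False)
f[active] = rng.integers(1, 100, size=active.size)
c = A @ f

# Non-negative L1 recovery: min sum(f_hat) s.t. A f_hat = c, f_hat >= 0.
res = linprog(np.ones(n_flows), A_eq=A, b_eq=c, bounds=(0, None), method="highs")
if res.success:
    print("max recovery error on this toy instance:", np.max(np.abs(res.x - f)))
```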
I also found two upcoming talks:
* MIT Laboratory for Information & Decision Systems, November 20, 2008, 4:15 p.m. - 5:15 p.m., Building 32, Room D677, Counter Braids: A novel counter architecture for network measurement by Yi Lu
Abstract:
Accurate flow-level measurement is highly desirable in networking products. As network speed increases, the direct approach of dynamically assigning per-flow counters requires a large amount of fast memory, which is prohibitively expensive. We observe that measurement with limited memory inherently benefits from source coding, and propose “Counter Braids”, an architecture that compresses as it counts. The counts are decompressed offline using a novel message passing algorithm, which also has implications for the area of compressed sensing.
* An introduction to compressive sensing, Seminar at University of California, Berkeley, November 26 at 11:10 a.m.-12 p.m., 380 Soda Hall. Speaker/Performer: Prof. Olga Holtz, UC Berkeley Math, Sponsor: Electrical Engineering and Computer Sciences (EECS)
Both talks have been added to the Compressive Sensing Calendar.