At the beginning of this video, entitled Gutenberg and the Monks, Seth Godin introduces the subject with an amusing little story:
So it's a few hundred years ago, and one of the most famous Germans of all time says:
"I have this really cool thing I have invented, it's called the printing press, and what we can do is print lots of copies and ship them all around the country; they can put them on display, and when they don't sell, they can ship them back, we'll shred them, and then we can print more copies."
And I would imagine a conversation, when he announced this, where the monks said:
"Well, that's all well and good, but will this impact our ability to sit in a dark quiet abbey and do calligraphy all day?"
To which he responded:
"Well it doesn't really help your ability to do calligraphy all day, and in fact it's a totally different way of going about doing what you do."
and in response, my guess is, most of the monks said:
"Well, we are really busy, let us know how it goes."
I cannot help thinking that compressive sensing could play a similar role in this story if, instead of being used for incremental improvements to current technologies, CS were judiciously applied to new hardware. That new hardware must absolutely bring a new dimension to the data gathering process and its eventual use, so much so that the current technology players would have to look at it and say in unison: "Well, we are really busy, let us know how it goes."
Giuseppe Paleologo asks the following burning question on Twitter:
@igorcarron do you know how much weaker is the nullspace property vs. restricted isometry? Are there studies on this? #compressedsensing
My recollection is that there aren't, but there does seem to be a question as to whether one property is stronger than the other (especially since there appear to be several different definitions of the Null Space Property in circulation). See this recent entry for a view on this. All in all, I think somebody ought to write a paper on this, as it is clearly an issue some people (including me) would like closure on. Then again, I am sure a specialist could clear this up for all of us by sending me a short e-mail. I'll make sure it gets wide publicity.
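For reference, and as far as I understand it, here is one common form of each property; definitions do vary slightly across papers, which is part of the confusion:

```latex
% Restricted Isometry Property of order k with constant \delta_k:
% A acts as a near isometry on all k-sparse vectors.
(1-\delta_k)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1+\delta_k)\,\|x\|_2^2
\qquad \text{for all } x \text{ with } \|x\|_0 \le k

% Null Space Property of order k: no vector in the kernel of A
% concentrates its \ell_1 mass on any k coordinates.
\|h_S\|_1 \;<\; \|h_{S^c}\|_1
\qquad \text{for all } h \in \ker(A)\setminus\{0\},\ |S| \le k
```

In these forms, the NSP is necessary and sufficient for $\ell_1$ minimization to recover every $k$-sparse signal, whereas the RIP (with a small enough $\delta_{2k}$) is a sufficient condition that implies the NSP but not conversely; in that sense the NSP is the weaker requirement. But I'd welcome a specialist's correction on the exact relationship.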
Checking whether a given matrix satisfies the RIP is hard, which is why some researchers are building deterministic sensing matrices with a similar property, as shown in Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property by Robert Calderbank, Stephen Howard, Sina Jafarpour. The abstract reads:
Compressed Sensing aims to capture attributes of $k$-sparse signals using very few measurements. In the standard Compressed Sensing paradigm, the $m \times n$ measurement matrix $A$ is required to act as a near isometry on the set of all $k$-sparse signals (Restricted Isometry Property or RIP). Although it is known that certain probabilistic processes generate $m \times n$ matrices that satisfy RIP with high probability, there is no practical algorithm for verifying whether a given sensing matrix $A$ has this property, crucial for the feasibility of the standard recovery algorithms. In contrast this paper provides simple criteria that guarantee that a deterministic sensing matrix satisfying these criteria acts as a near isometry on an overwhelming majority of $k$-sparse signals; in particular, most such signals have a unique representation in the measurement domain. Probability still plays a critical role, but it enters the signal model rather than the construction of the sensing matrix. We require the columns of the sensing matrix to form a group under pointwise multiplication. The construction allows recovery methods for which the expected performance is sub-linear in $n$, and only quadratic in $m$; the focus on expected performance is more typical of mainstream signal processing than the worst-case analysis that prevails in standard Compressed Sensing. Our framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes.
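To see concretely why verifying RIP is impractical, here is a small sketch of my own (not from the paper): computing the restricted isometry constant $\delta_k$ exactly requires looking at every one of the $\binom{n}{k}$ column subsets, which blows up combinatorially as $n$ and $k$ grow.

```python
import itertools
import numpy as np

def restricted_isometry_constant(A, k):
    """Brute-force computation of the RIP constant delta_k of A.

    Enumerates all C(n, k) column subsets, which is exactly why this
    is only feasible for toy-sized matrices.
    """
    n = A.shape[1]
    delta = 0.0
    for support in itertools.combinations(range(n), k):
        # The extreme singular values of the submatrix A_S give the
        # extreme values of ||A_S x||^2 / ||x||^2 over x supported on S.
        s = np.linalg.svd(A[:, list(support)], compute_uv=False)
        delta = max(delta, abs(s[0] ** 2 - 1), abs(s[-1] ** 2 - 1))
    return delta

# Toy example: a Gaussian matrix with the typical 1/sqrt(m) normalization.
rng = np.random.default_rng(0)
m, n, k = 8, 12, 2
A = rng.standard_normal((m, n)) / np.sqrt(m)
print(restricted_isometry_constant(A, k))
```

Already at n = 12, k = 2 this loops over 66 subsets; at realistic sizes (say n = 10^4, k = 50) the count is astronomically large, which is what motivates the paper's shift to a statistical isometry holding for most sparse signals.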
Also found on arXiv: Modified Basis Pursuit Denoising (MODIFIED-BPDN) for Noisy Compressive Sensing with Partially Known Support by Wei Lu, Namrata Vaswani. The abstract reads:
In this work, we study the problem of reconstructing a sparse signal from a limited number of linear 'incoherent' noisy measurements, when a part of its support is known. The known part of the support may be available from prior knowledge or from the previous time instant (in applications requiring recursive reconstruction of a time sequence of sparse signals, e.g. dynamic MRI). We study a modification of Basis Pursuit Denoising (BPDN) and bound its reconstruction error. A key feature of our work is that the bounds that we obtain are computable. Hence, we are able to use Monte Carlo to study their average behavior as the size of the unknown support increases. We also demonstrate that when the unknown support size is small, modified-BPDN bounds are much tighter than those for BPDN, and hold under much weaker sufficient conditions (require fewer measurements).
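To make the idea of a partially known support concrete, here is my own noiseless simplification (modified Basis Pursuit rather than the paper's modified-BPDN): minimize the $\ell_1$ norm of $x$ restricted to the coordinates outside the known support $T$, subject to $Ax = y$, written as a linear program. The function name and demo sizes are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def modified_basis_pursuit(A, y, known_support):
    """Noiseless sketch of the paper's idea:
        min ||x_{T^c}||_1   s.t.   A x = y,
    where T = known_support holds the indices assumed known a priori.
    (The paper's modified-BPDN handles noise via ||y - Ax||_2 <= eps.)
    """
    n = A.shape[1]
    # Split x = xp - xn with xp, xn >= 0; charge l1 cost only off T.
    c = np.ones(2 * n)
    c[list(known_support)] = 0.0
    c[[n + i for i in known_support]] = 0.0
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    if not res.success:
        raise RuntimeError(res.message)
    return res.x[:n] - res.x[n:]

# Tiny demo: recover a 2-sparse signal when one support index is known.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -0.8]
x_hat = modified_basis_pursuit(A, A @ x_true, known_support={2})
```

The intuition matches the paper's finding: since the known coordinates are "free" in the objective, the effective sparsity the $\ell_1$ penalty must handle shrinks, so fewer measurements suffice than for plain BP/BPDN.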