
Wednesday, June 30, 2010

CS: A patent, DIY computational photography, NETRA

Daniel Reetz just sent the following:
Igor,

I'm a CompPhoto enthusiast, and discovered your blog while looking for interviews with Ramesh Raskar. Anyway, I'm not sure if this is compressed sensing, exactly, but it's an interesting patent, and very fresh:

http://www.google.com/patents?id=QgnPAAAAEBAJ&zoom=4&pg=PA21#v=onepage&q&f=false

Regards,
Daniel Reetz


The abstract of the patent reads as follows:
An imaging system is presented for imaging objects within a field of view of the system. The imaging system comprises an imaging lens arrangement, a light detector unit at a certain distance from the imaging lens arrangement, and a control unit connectable to the output of the detection unit. The imaging lens arrangement comprises an imaging lens and an optical element located in the vicinity of the lens aperture, said optical element introducing aperture coding by an array of regions differently affecting a phase of light incident thereon which are randomly distributed within the lens aperture, thereby generating an axially-dependent randomized phase distribution in the Optical Transfer Function (OTF) of the imaging system resulting in an extended depth of focus of the imaging system. The control unit is configured to decode the sampled output of the detection unit by using the random aperture coding to thereby extract 3D information of the objects in the field of view of the light detector unit....
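For readers curious about what random aperture phase coding actually does, here is a minimal Fourier-optics sketch (my own toy illustration in Python, not the patented system or its decoder): a random binary phase mask placed in the pupil yields point-spread functions that change markedly with defocus, which is the kind of depth-dependent signature the abstract alludes to. Grid sizes, block sizes and defocus values are all made up for illustration.

```python
# Toy sketch: random 0/pi phase regions in the pupil -> depth-dependent PSFs.
# Not the patented system; just generic Fourier optics with illustrative numbers.
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
aperture = (R2 <= 1.0).astype(float)              # circular pupil

rng = np.random.default_rng(0)
blocks = rng.integers(0, 2, size=(N // 16, N // 16)) * np.pi
phase_mask = np.kron(blocks, np.ones((16, 16)))   # random 0/pi phase, 16x16-pixel regions

def psf(defocus_waves):
    """Incoherent PSF for a given defocus, expressed in waves at the pupil edge."""
    pupil = aperture * np.exp(1j * (phase_mask + 2 * np.pi * defocus_waves * R2))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    p = np.abs(field) ** 2
    return p / p.sum()

# PSFs at different depths decorrelate quickly: that depth dependence is what a
# decoder could exploit to extract 3D information.
p0, p1 = psf(0.0), psf(2.0)
corr = (p0 * p1).sum() / np.sqrt((p0 ** 2).sum() * (p1 ** 2).sum())
print("normalized correlation between the two PSFs:", corr)
```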

I responded to Daniel that I would urge the patent holder to make sure there is no prior art on this. The subject of patents is a natural one for a new technology such as compressive sensing, but the point at which we consider something to be obvious is going to be really difficult to ascertain, as the field is ripe for many new insights (without too much thinking!). Daniel's e-mail was sent a day after the U.S. Supreme Court decided Bilski. For some background, you may recall how a CT inversion algorithm played a role in algorithms being removed from patentability. In this earlier entry, I wrote:

The knowledge that ART seems to be employed in current CT scanners seems at odds with the review entitled: Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? by Xiaochuan Pan, Emil Sidky and Michael Vannier. Also of interest, Richard Gordon, a co-inventor of ART for CT scanners (with Gabor T. Herman, an author of the paper above), responded to that review in this entry: MIA'09 and Richard Gordon's ART CT algorithm.

And if you think some of these patenting issues are germane, you might want to take a look at the Bilski case currently in front of the U.S. Supreme Court. From the Wikipedia entry, there is a mention of CT algorithms:

....Some Federal Circuit decisions, however, had held some transformations of signals and data patent-eligible. For example, the Abele decision approved a dependent claim to a method transforming X-ray attenuation data produced in an X-Y field by an X-ray tomographic scanner to an image of body organs and bones — while at the same time the Abele court rejected a more generic and abstract independent claim to a process of graphically displaying variances from their average values of unspecified data obtained in an unspecified manner.
From the Abele decision, there was the following determination:

....In Johnson, supra, the interrelationship of the algorithm to the remaining limitations of a claim was held to be determinative of whether the claim defined statutory subject matter. Relying on the same reasoning, in In re Walter, 618 F.2d 758, 205 USPQ 397 (CCPA 1980), the second part of the two-step analysis was defined as follows:

If it appears that the mathematical algorithm is implemented in a specific manner to define structural relationships between the physical elements of the claim (in apparatus claims) or to refine or limit claim steps (in process claims), the claim being otherwise statutory, the claim passes muster under §101. If, however, the mathematical algorithm is merely presented and solved by the claimed invention, as was the case in Benson and Flook, and is not applied in any manner to physical elements or process steps, no amount of post-solution activity will render the claim statutory; nor is it saved by a preamble merely reciting the field of use of the mathematical algorithm....

Is compressive sensing going to change patents?

It looks like the Supreme Court provided a narrow ruling that still does not help a lone inventor protect her ideas unless she has access to a large amount of capital. A more erudite take on what this means can be found here.


And while we are talking about CT reconstruction, Emil Sidky told us shortly after the post listed above that

While it may be true that there are specific protocols that use something other than FBP in CT, these are but a drop in the bucket. No one can really argue with the fact that FBP is the workhorse algorithm of medical CT, and considering the number of people that get scanned, there is certainly room for substantial exposure reduction for the general population by engineering a general-purpose CT scanner with advanced image-reconstruction algorithms.

Knowing all that, I was looking forward to the webinar Dick Gordon invited me to watch, on the subject of reconstruction algorithms used in clinical protocols. Thanks to the presentation, I learned that the big three CT vendors are providing new offerings, all of which use iterative algorithms aimed at reducing dose. The real question is to figure out what is being iterated, since the algorithms are closely held secrets: it is not clear whether they are iterating on some form of FBP or on something else.
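Since the vendors' algorithms are closely held, the best one can do is recall what a generic iterative reconstruction loop looks like. Here is a minimal Kaczmarz/ART sketch on a toy linear system (my own illustration, not any vendor's product): each sweep re-projects the current image estimate onto the measurement constraints, which is the basic sense in which these methods "iterate", as opposed to the one-shot filtered back-projection.

```python
# Generic ART (Kaczmarz) iteration on a toy system A x = b.
# Not a vendor algorithm; matrix sizes and the relaxation factor are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_rays = 64, 200
x_true = rng.random(n_pixels)                    # toy "image"
A = rng.standard_normal((n_rays, n_pixels))      # toy projection matrix (rays x pixels)
b = A @ x_true                                   # simulated, noiseless measurements

x = np.zeros(n_pixels)
relax = 1.0                                      # classic ART relaxation
for sweep in range(20):                          # each sweep visits every ray once
    for i in range(n_rays):
        a_i = A[i]
        # project the estimate onto the hyperplane a_i . x = b[i]
        x += relax * (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    if sweep % 5 == 0:
        print(f"sweep {sweep:2d}, residual {np.linalg.norm(A @ x - b):.3e}")
```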

Last but not least, when I mentioned to Daniel that I liked his projects, he responded with:

You might also like some of the stuff I've done here:

http://www.futurepicture.org

(perhaps my translation of PP Sokolov's 1911 paper on integral imaging would be more generally interesting: http://www.futurepicture.org/?p=34 ). He made an integral camera with a copper plate and some film!
I need to come back to that paper later. By the way, did you all know that this whole photography thing was born because of Napoleon and the British blockade? ...muhhh... I thought so, but that's for a different blog entry. While we are on the subject of hacking materials to make them do something they were not intended to do, there is NETRA:

NETRA: Interactive Display for Estimating Refractive Errors and Focal Range by Vitor F. Pamplona, Ankit Mohan, Manuel M. Oliveira, Ramesh Raskar. The abstract reads:

We introduce an interactive, portable, and inexpensive solution for estimating refractive errors in the human eye. While expensive optical devices for automatic estimation of refractive correction exist, our goal is to greatly simplify the mechanism by putting the human subject in the loop. Our solution is based on a high-resolution programmable display and combines inexpensive optical elements, interactive GUI, and computational reconstruction. The key idea is to interface a lenticular view-dependent display with the human eye at close range - a few millimeters apart. Via this platform, we create a new range of interactivity that is extremely sensitive to parameters of the human eye, such as the refractive errors, focal range, focusing speed, lens opacity, etc. We propose several simple optical setups, verify their accuracy, precision, and validate them in a user study.
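To get some intuition for why such a display can be so sensitive to refractive error, here is a toy paraxial sketch (my own simplification, not NETRA's actual optical design, which puts the display a few millimeters from the eye behind a pinhole/lenslet mask): two rays entering the pupil at different heights land on the same retinal spot only if the two display points feeding them are offset by an amount that depends linearly on the eye's refractive error. All distances and powers below are illustrative.

```python
# Toy paraxial eye model: the display offset that aligns two rays entering the
# pupil at different heights varies linearly with the refractive error E.
# This is a simplification for intuition only, not NETRA's actual geometry.
import numpy as np

L = 0.022    # lens-to-retina distance (m), toy value
d = 0.50     # display distance in front of the eye (m), toy value
a = 0.003    # separation of the two pupil apertures (m)

def retina_hit(x_disp, y_pupil, E):
    """Height on the retina of a paraxial ray leaving display point x_disp and
    crossing the pupil at height y_pupil; E is the refractive error in diopters
    (eye power modeled as 1/L + E, so E > 0 means a myopic eye)."""
    u = (y_pupil - x_disp) / d           # ray slope from display to lens
    u2 = u - (1.0 / L + E) * y_pupil     # refraction at the thin lens
    return y_pupil + L * u2              # propagate to the retina

for E in (-2.0, 0.0, +2.0):              # diopters
    dx = a * (1.0 - E * d)               # offset predicted by the paraxial equations
    r1 = retina_hit(+dx / 2, +a / 2, E)
    r2 = retina_hit(-dx / 2, -a / 2, E)
    print(f"E = {E:+.1f} D -> alignment offset {dx * 1e3:+.2f} mm, "
          f"residual retinal mismatch {abs(r1 - r2):.1e} m")
```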

The website is here and the papers are here.
In a different direction, Brian Chesney asks a question on his blog (Compressive Sensing and Sparse in Time) and on Twitter, to which I commented the following:

Brian,
I don’t understand your predicament. You say

“However, if your signal is sparse in time, you probably still care exactly when your signal did the thing that is of interest to you, when it did whatever it was that you couldn’t afford to miss by sampling randomly.”

If your signal is sparse in time, then you sample that signal in the Fourier domain and pick a random set of frequencies of that signal. That’s it: you don’t randomly sample in the time domain, so I don’t see where the issues of latency and group delay come about.

Cheers,

Igor.
Did I miss something?
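To make the point of my comment concrete, here is a small sketch (mine, with illustrative sizes): a spike train that is sparse in time is recovered from a random subset of its Fourier coefficients, with a plain ISTA loop standing in for any l1/basis-pursuit solver.

```python
# A signal sparse in time, recovered from randomly chosen Fourier coefficients.
# The ISTA loop below is just a self-contained stand-in for an l1 solver;
# sizes, the regularization weight and the iteration count are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 256, 8, 64                      # signal length, number of spikes, measurements

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

F = np.fft.fft(np.eye(n)) / np.sqrt(n)    # unitary DFT matrix
rows = rng.choice(n, m, replace=False)    # random set of frequencies
A = F[rows]
y = A @ x_true                            # the only samples we keep

# ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1 (x real; a unit step is safe
# because A is a row subset of a unitary matrix, so ||A^H A|| <= 1)
lam, x = 1e-3, np.zeros(n)
for _ in range(2000):
    z = x + (A.conj().T @ (y - A @ x)).real                # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)      # soft threshold
print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```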


Bill Harris asked a question on the issue of Sampling rate of human-scaled time series. He then followed up on Andrew Gelman's blog, and this is what he had to say:
Igor and Hal, you've given me much to study (I've already printed off several articles and earmarked a few more). I knew of compressed sensing, but I didn't know much about it, and I didn't really know where to enter the literature on the economics side. Both tips help a lot. Thanks to both of you (and to Andrew for hosting the question).

(I also know that my statement of the Nyquist-Shannon sampling theorem is incomplete; what really counts is the bandwidth of the signal, not the highest frequency.)

Is it fair to think that most economic and social science time series analysis does not take advantage of such methodologies? If that be true, do you have any assessment of the magnitude or type of erroneous inferences that are or reasonably could be made from those analyses?

I recently looked at an undersampled (in the Nyquist sense) time series (unfortunately not something I can talk about yet) in which the data gave seriously erroneous insights. In that case, I fortunately also had integrated data available, and the integration played the role of an anti-aliasing filter.
Do any of you have an insight?
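For what it is worth, here is a toy numerical illustration of that last remark (made-up signal, not Bill's data): point-sampling below the Nyquist rate folds a fast component down onto a low frequency, while integrating over each sampling interval before sampling strongly attenuates it, which is presumably why the integration acted as an anti-aliasing filter.

```python
# Aliasing of an undersampled tone, and integration acting as an anti-aliasing
# filter. All frequencies and rates are made up for illustration.
import numpy as np

fs_fine, T = 1000.0, 10.0                        # fine simulation rate (Hz), duration (s)
t = np.arange(0, T, 1.0 / fs_fine)
signal = np.sin(2 * np.pi * 0.3 * t) + np.sin(2 * np.pi * 7.5 * t)   # slow + fast tone

fs = 2.0                       # sampling rate: fine for 0.3 Hz, far below Nyquist for 7.5 Hz
step = int(fs_fine / fs)
point_samples = signal[::step]                        # naive point sampling
integrated = signal.reshape(-1, step).mean(axis=1)    # average over each interval first

# At a 2 Hz rate, the 7.5 Hz tone aliases to |7.5 - 8| = 0.5 Hz.
for name, s in (("point-sampled", point_samples), ("integrated", integrated)):
    spec = np.abs(np.fft.rfft(s)) / len(s)
    freqs = np.fft.rfftfreq(len(s), d=1.0 / fs)
    i_slow = np.argmin(np.abs(freqs - 0.3))
    i_alias = np.argmin(np.abs(freqs - 0.5))
    print(f"{name:13s} amplitude at 0.3 Hz: {spec[i_slow]:.2f}, "
          f"at the 0.5 Hz alias: {spec[i_alias]:.2f}")
```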
