One of the most intriguing instances of a compressive imager is the MIT Random Lens Imager (a paper that has never been published). The most interesting part of that paper is how it puts in perspective the fact that compressed sensing enables better calibration of a strange imager. But what could be considered strange, you ask? According to Mark Neifeld, in a presentation made at the Duke-AFRL meeting a year ago entitled Adaptation for Task-Specific Compressive Imaging, any imager could be considered strange:
Which is why yesterday's paper (Using Cloud Shadows to Infer Scene Structure and Camera Calibration by Nathan Jacobs, Brian Bies, and Robert Pless) on calibration is important. What is interesting there is that, unlike most other calibration procedures where much attention goes into defining an exact target with known dimensions,
the authors take the view that enough observations should lead to a self-consistent and, by the same token, unique model that easily maps to the reality of the situation. The big question then is how to compare two world views, and here we get to an interesting, almost philosophical problem. Sure enough, if you want your random imager to perform a hyperspectral decomposition, you can check that easily; but if you want your imager to catch only those parts of the scene that your brain will remember ten days from now, that is a different problem and a different mapping.

On a different note, I wonder how this approach would work on the photographs taken by our flight on a 120,000-feet high-altitude balloon, as one can clearly see the clouds moving (the camera is also moving, albeit at a different speed). For those of you interested in an opportunity for a similar flight, you ought to get on the conference call with Greg Guzik on Friday, November 12, 2010 at 10:00 am (Central time).
Today, we have two papers from arXiv:
Quantization using Compressive Sensing by Rajiv Soundararajan, Sriram Vishwanath. The abstract reads:
The problem of compressing a real-valued sparse source using compressive sensing techniques is studied. The rate distortion optimality of a coding scheme in which compressively sensed signals are quantized and then reconstructed is established when the reconstruction is also required to be sparse. The result holds in general when the distortion constraint is on the expected $p$-norm of error between the source and the reconstruction. A new restricted isometry like property is introduced for this purpose and the existence of matrices that satisfy this property is shown.
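Purely as an illustration of the pipeline this abstract describes (compressively sense a sparse source, quantize the measurements, then reconstruct under a sparsity constraint), here is a minimal numerical sketch; the Gaussian sensing matrix, the uniform quantizer step, and the ISTA solver are my own generic choices, not the paper's coding scheme:

```python
# Sketch of a quantized compressive sensing pipeline (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                          # ambient dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse source

A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
y = A @ x                                     # compressive measurements

delta = 0.05                                  # uniform quantizer step (assumed)
y_q = delta * np.round(y / delta)             # quantized measurements

# ISTA: minimize 0.5*||A z - y_q||^2 + lam*||z||_1, so the reconstruction is sparse.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / (largest singular value)^2
z = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ z - y_q)
    z = z - step * grad
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative l2 error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```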
The second paper's abstract reads:

Suppose a given observation matrix can be decomposed as the sum of a low-rank matrix and a sparse matrix (outliers), and the goal is to recover these individual components from the observed sum. Such additive decompositions have applications in a variety of numerical problems including system identification, latent variable graphical modeling, and principal components analysis. We study conditions under which recovering such a decomposition is possible via a combination of $\ell_1$ norm and trace norm minimization. We are specifically interested in the question of how many outliers are allowed so that convex programming can still achieve accurate recovery, and we obtain stronger recovery guarantees than previous studies. Moreover, we do not assume that the spatial pattern of outliers is random, which stands in contrast to related analyses under such assumptions via matrix completion.
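For readers who want to play with the kind of decomposition this abstract studies, here is a minimal sketch of the generic $\ell_1$ plus trace-norm (nuclear-norm) split, solved with the standard alternating thresholding iterations of principal component pursuit; the penalty and step-size heuristics are the usual textbook ones, not values or guarantees from the paper:

```python
# Sketch of low-rank + sparse (outlier) decomposition via alternating thresholding.
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear (trace) norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def low_rank_plus_sparse(M, n_iter=200):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))           # common choice of l1 weight
    mu = 0.25 * m * n / np.abs(M).sum()      # common step-size heuristic
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)    # low-rank update
        S = soft(M - L + Y / mu, lam / mu)   # sparse (outlier) update
        Y = Y + mu * (M - L - S)             # dual update
        if np.linalg.norm(M - L - S) <= 1e-7 * np.linalg.norm(M):
            break
    return L, S

# Small demo: a rank-2 matrix corrupted by a few large outliers.
rng = np.random.default_rng(1)
L0 = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
S0 = np.zeros((50, 50))
S0.flat[rng.choice(50 * 50, 100, replace=False)] = 10 * rng.standard_normal(100)
L, S = low_rank_plus_sparse(L0 + S0)
print("low-rank recovery error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```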