
Monday, October 29, 2007

Compressed Sensing: Sparco helps you test all these spiffy reconstruction algorithms


With the growing number of reconstruction techniques, it was only a matter of time before somebody produced a toolbox unifying the testing of these sparse reconstruction algorithms. This is what Ewout van den Berg, Michael P. Friedlander, Gilles Hennenfent, Felix J. Herrmann, Rayan Saab, and Ozgur Yilmaz have done by releasing Sparco.

Sparco: A Testing Framework for Sparse Reconstruction

Sparco is a framework for testing and benchmarking algorithms for sparse reconstruction. It includes a large collection of sparse reconstruction problems drawn from the imaging, compressed sensing, and geophysics literature. Sparco is also a framework for implementing new test problems and can be used as a tool for reproducible research. Sparco is implemented entirely in Matlab, and is released as open-source software under the GNU Public License.
The technical report is here.
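To make concrete what such a framework packages, here is a minimal, generic sketch of a sparse-recovery test problem in Python/NumPy. It is not Sparco itself, whose actual Matlab interface is documented in the technical report; it only illustrates the kind of (A, y, x0) triple against which a reconstruction algorithm gets benchmarked.

```python
import numpy as np

def make_test_problem(n=512, m=128, k=20, seed=0):
    """Build a toy sparse-recovery benchmark: y = A x0 with x0 k-sparse.

    This mimics the kind of (A, y, x0) triple a framework like Sparco packages,
    but it is a generic illustration, not Sparco's own interface."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
    x0 = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x0[support] = rng.standard_normal(k)
    y = A @ x0
    return A, y, x0

if __name__ == "__main__":
    A, y, x0 = make_test_problem()
    # Any reconstruction algorithm can now be scored on (A, y) against x0,
    # e.g. via the relative error ||xhat - x0|| / ||x0||.
    print(A.shape, y.shape, np.count_nonzero(x0))
```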

Making sense of two findings

Two wows this past week:

First, Scott Parazynski and Daniel Tani found metal shavings in one of the joints of the rotating solar arrays of the International Space Station (ISS). Since the joint cannot be replaced, this is major news: it may critically impact the ISS in the future.

Second, the news that uranium isotope ratios have been shown not to be invariant. This is the first time since the discovery of the Oklo natural reactor that natural reactors have been shown to have occurred in many other places (though none at the scale of Oklo). This is significant because it shows that deep burial of current nuclear waste is not unlike what already occurs in nature. Furthermore, it may be another element in the theory that Earth is a giant nuclear reactor. With regard to that theory, I have always been amazed at the ORIGEN-S and SAS2 computations performed to compare against results found in nature. These codes are generally used in nuclear engineering to determine parameters of interest in nuclear reactor cores.

Credit: NASA, the Earth-Moon system as viewed from a probe on its way to Mars, 2003.

Saturday, October 27, 2007

Compressed Sensing: RandomFaces prove that faces lie in a low dimensional manifold

It was only a matter of time: the machine learning community is finding ways to use compressed sensing, following the work of Richard Baraniuk and Michael Wakin on random projections of smooth manifolds.
After Laplacianfaces and Eigenfaces, we now have Randomfaces: the projection of faces onto random bases. The work of Yi Ma and his collaborators ([1], [2], [3]) seems to show that one can get classification results similar to those obtained from the more complex techniques behind Laplacianfaces and Eigenfaces. Some people would say that this may parallel what is happening in the primary visual cortex.
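As a toy illustration of the "project onto random bases, then classify" idea, and not the actual pipeline of [1]-[3] (which classifies via sparse representation rather than nearest neighbor), here is a minimal sketch where synthetic high-dimensional "face" vectors are projected onto a handful of random directions and classified in the reduced space; all sizes and the nearest-neighbor rule are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for face images: 3 classes, each a cluster in high dimension.
d, n_per_class, n_classes = 4096, 20, 3
centers = rng.standard_normal((n_classes, d)) * 5
X = np.vstack([c + rng.standard_normal((n_per_class, d)) for c in centers])
labels = np.repeat(np.arange(n_classes), n_per_class)

# "Randomfaces": project onto m random vectors instead of PCA or Laplacian eigenvectors.
m = 32
R = rng.standard_normal((m, d)) / np.sqrt(m)
features = X @ R.T

# Leave-one-out nearest-neighbor classification in the random feature space.
correct = 0
for i in range(len(features)):
    dists = np.linalg.norm(features - features[i], axis=1)
    dists[i] = np.inf                      # exclude the sample itself
    correct += labels[np.argmin(dists)] == labels[i]
print(f"LOO accuracy with {m} random features: {correct / len(features):.2f}")
```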

[ Presentations on compressed sensing (acquisition comprimée) will be given at the Institut Henri Poincaré (Paris, France) in March 2008, as part of "Méthodes variationnelles et parcimonieuses en traitement des signaux et des images". Thanks, Laurent. ]

References:
[1] Robust Face Recognition via Sparse Representation, John Wright, Arvind Ganesh, Allen Yang and Yi Ma. Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence. [Updated January 2008]
[2] Feature Selection in Face Recognition: A Sparse Representation Perspective, Allen Yang, John Wright, Yi Ma and Shankar Sastry. Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence.
[3] Computation and Relaxation of Conditions for Equivalence between L1 and L0 Minimization, Yoav Sharon, John Wright and Yi Ma. Submitted to IEEE Transactions on Information Theory.

Compressed Sensing: Reweighted Lp through Least Squares (L2), Forget Europa, let's shoot for Titan


In a previous entry, I mentioned the impressive improvement in L1 reconstruction capability for signals that are not so sparse. We also knew that exact reconstruction of sparse signals via nonconvex Lp minimization (p less than 1) was possible.

Based on the work of Bhaskar Rao and Kenneth Kreutz-Delgado [2], Rick Chartrand and Wotao Yin have produced an astounding reconstruction technique in this article [1]:

ITERATIVELY REWEIGHTED ALGORITHMS FOR COMPRESSIVE SENSING

The theory of compressive sensing has shown that sparse signals can be reconstructed exactly from remarkably few measurements. In [1], it was shown empirically that using Lp minimization with p less than 1 can do so with fewer measurements than with p = 1. In this paper we consider the use of iteratively reweighted algorithms for computing local minima of the nonconvex problem. In particular, a particular regularization strategy is found to greatly improve the ability of a reweighted least-squares algorithm to recover sparse signals, with exact recovery being observed for signals that are much less sparse than required by an unregularized version (such as FOCUSS, [2]). Improvements are also observed for the reweighted-l1 approach of [3].


Why is this an important paper? Because:
  • there is no need to resort to SDP-type solvers; the scheme is computationally very simple and can be written in ten lines of Matlab (Update: this algorithm was featured in this Monday Morning Algorithm)
  • everybody in engineering knows least squares
  • some of us now have a stab at the very large problems.

The major feature of this paper is showing that $\epsilon$ should probably not be that small in the first place. Let us hope that this heuristic will be proven in the future. In order to understand the graph above, one needs to know the acronyms:

  • IRLS stands for Iteratively Reweighted Least Squares (the algorithm of this paper)
  • IRL1 stands for Iteratively Reweighted L1 (the case p=0 is the same as the reweighted L1 algorithm in [3])
  • K stands for the sparsity (number of nonzero entries) of the signal, M is the number of measurements, and N is the signal size.

Please note the comparable numbers between IRL1/p=0 (the linear programming algorithm in [3]) and IRLS/p=0 (the least-squares algorithm in [1]).
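For concreteness, here is a minimal sketch of a regularized IRLS iteration in the spirit of [1], written in Python/NumPy rather than the ten lines of Matlab mentioned above; the epsilon schedule and iteration counts are my own simplifications, not the exact recipe of the paper. Each inner step solves a weighted least-squares problem in closed form subject to Ax = b, with weights computed from the previous iterate.

```python
import numpy as np

def irls(A, b, p=0.0, eps0=1.0, n_outer=9, n_inner=20):
    """Regularized IRLS for min ||x||_p^p subject to A x = b (a sketch of the idea in [1]).

    Each inner step solves a weighted least-squares problem in closed form;
    eps is decreased gradually, which is the regularization that helps recovery."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # start from the minimum-energy solution
    eps = eps0
    for _ in range(n_outer):
        for _ in range(n_inner):
            q = (x**2 + eps) ** (1 - p / 2)         # q_i = 1 / w_i, the inverse weights
            AQ = A * q                              # A times diag(q)
            x = q * (A.T @ np.linalg.solve(AQ @ A.T, b))   # argmin sum_i x_i^2/q_i s.t. Ax = b
        eps /= 10.0                                 # simple epsilon schedule (my choice)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m, k = 256, 100, 30
    A = rng.standard_normal((m, n))
    x0 = np.zeros(n)
    x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    b = A @ x0
    xhat = irls(A, b, p=0.0)
    print("relative error:", np.linalg.norm(xhat - x0) / np.linalg.norm(x0))
```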

Reference:
[1] ITERATIVELY REWEIGHTED ALGORITHMS FOR COMPRESSIVE SENSING, Rick Chartrand, Wotao Yin, ICASSP 2008

[2] An affine scaling methodology for best basis selection, B. D. Rao and K. Kreutz-Delgado, IEEE Transactions on Signal Processing, vol. 47, no. 1, pp. 187-200, Jan. 1999. [Rao1999.pdf]

[3] Enhancing sparsity by reweighted L1 minimization, Emmanuel Candès, Mike Wakin and Stephen Boyd

Compressive Sensing Radiation Detector Part II

In the case where radiation does not suffer much scattering, the principle of Coded Aperture used for light can be directly applied to these radiation fields. Berthold Horn, Richard Lanza, Roberto Accorsi, Klaus Ziock, and Lorenzo Fabris at MIT have been following this principle to do imaging by performing Computational Imaging. As pointed out before, Coded Aperture is also what the DISP group does at Duke University (but with radiation in the visible range). The big difference resides in the mask: for radiation with smaller wavelengths (X-rays, ...), the radiation goes through the mask, and more generally you cannot guide the radiation to your detector, so things become a little more complex. Spatial resolution is extremely difficult to achieve because the detectors are expensive and need to be where the radiation goes, so one has to resort to banks of detectors such as the ones below.
In order to provide spatial resolution, one either has to use many detectors or has to move the detector or the source, as can be seen in this animation ("Synthetic Aperture" radiography). An introduction to instrumentation using coded apertures for astronomy can be found here. It is a great introduction to the subject, especially with regard to current inversion techniques.
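To see the principle at work, here is a deliberately simplified toy sketch of coded aperture imaging: the detector records the scene convolved with the mask pattern, and correlating with a balanced decoding pattern recovers an approximate image. The random binary mask and correlation decoder are illustrative stand-ins; real systems use carefully designed patterns (e.g., MURAs) and matched decoding.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
size = 65                                   # odd size keeps the correlation centered

# Toy scene: two point sources of radiation.
scene = np.zeros((size, size))
scene[20, 30] = 1.0
scene[40, 10] = 0.5

# Random open/closed mask (an illustrative stand-in for a designed pattern).
mask = (rng.random((size, size)) < 0.5).astype(float)

# Forward model: each open mask element casts a shifted copy of the scene onto the detector.
detector = fftconvolve(scene, mask, mode="same")

# Decoding: correlate the detector image with a balanced (+1/-1) version of the mask.
decoder = 2 * mask - 1
estimate = fftconvolve(detector, decoder[::-1, ::-1], mode="same")

print("strongest true source at:", (20, 30))
print("brightest reconstructed pixel:", np.unravel_index(np.argmax(estimate), estimate.shape))
```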

When one deals with radiation scattering (neutrons in nuclear reactor cores, or infrared/visible light in tissue), radiation sensors have a much more difficult kernel to invert (because of the diffusion/scattering of the rays in matter) and require methods like Bayesian Compressed Sensing (e.g., the in-situ detector of Lawrence Carin, Dehong Liu, and Ya Xue at Duke University). All in all, it looks like some of these Coded Aperture techniques could be improved at the software and hardware level using Compressed Sensing.

Wednesday, October 24, 2007

New satellite imagery of the California fires

ESA just released new satellite imagery of the California fires taken with the MERIS camera on ENVISAT. These shots were taken 3 hours ago. Imagery from two days ago can be found in a previous entry.

I made a close-up version centered on L.A.

The full imagery is shown below. The clouds at the bottom are clearly different in nature from the ones at the top, i.e., they are the result of the fires.

California's fires from Space using different satellites

[ Update: new imagery, October 24th can be found here]

NASA has been able to provide imagery with the MODIS Rapid Response System, which seems to use data from Aqua and Terra (which I believe are on orbits similar to Landsat's). I did a search on the USGS database and found no Landsat 5 imagery (that satellite has had some on-board problems since October 6th) and only one Landsat 7 image. The Landsat 7 image is not clean because the SLC is off, and it therefore looks pretty bad.

I have found a very nice image taken on October 22nd by the MERIS camera on-board the European ENVISAT. There should be a new one in about 4 hours.



I am a little bit disturbed that there isn't a single UI where I can give a location of interest and a timeline and find out whether the USGS, NASA, ESA or other civilian satellite operators have any data matching these criteria. It was a pain doing this type of search when looking for clues on Jim Gray's boat, and it continues to be a pain to this day.


Credit Image: ESA/Envisat

Trace: What is it good for ? - How Compressed Sensing relates to Dimensionality Reduction


In reference [1] by Kilian Weinberger and Lawrence Saul, one learns that one dimensionality reduction technique uses the maximization of the trace of the Gram matrix in order to unfold a manifold. From [1]:

..We also emphasize that in eq. (9), we are maximizing the trace, not minimizing it. While a standard relaxation to minimizing the rank [9] of a semidefinite matrix is to minimize its trace, the intuition here is just the opposite: we will obtain a low dimensional embedding by maximizing the trace of the Gram matrix...


At the same time, Benjamin Recht, Maryam Fazel, and Pablo Parrilo provide a somewhat different view in [3] (Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization, a paper that draws a parallel between Compressed Sensing using the L1 norm and the use of the nuclear norm for finding low-rank approximations of a matrix), where the trace of a positive semidefinite matrix (the Gram matrix) is minimized. The reason is simple [3]:

The nuclear norm is a convex function, can be optimized efficiently, and is the best convex approximation of the rank function over the unit ball of matrices with norm less than one. When the matrix variable is symmetric and positive semidefinite, this heuristic is equivalent to the trace heuristic often used by the control community.....
...
Low-dimensional Euclidean embedding problems
A problem that arises in a variety of fields is the determination of configurations of points in low-dimensional Euclidean spaces, subject to some given distance information. In Multi-Dimensional Scaling (MDS) such problems occur in extracting the underlying geometric structure of distance data.
So in one series of papers we have a minimization, and in the other a maximization, of the trace of a Gram matrix as the objective function used to obtain a reduced-dimension/low-rank approximation of that matrix. This seems odd.

When one goes back to one of the first papers of Kilian Weinberger and Lawrence Saul [1], one can read how the heuristic trace argument comes about:

The constraints for local isometry under this neighborhood relation are simply to preserve the lengths of the edges in this graph, as well as the angles between edges at the same node. In practice, it is easier to deal only with constraints on distances, as opposed to angles... Thus, we must further constrain the optimization to the cone of semidefinite matrices [21]. In sum, there are three types of constraints on the Gram matrix Kij, arising from local isometry, centering, and semidefiniteness. The first two involve linear equality constraints; the last one is not linear, but importantly it is convex. We will exploit this property in what follows. What function of the Gram matrix can we optimize to “unfold” a manifold, as in Fig. 1? As motivation, consider the ends of a piece of string, or the corners of a flag. Any slack in the string serves to decrease the (Euclidean) distance between its two ends; likewise, any furling of the flag serves to bring its corners closer together. More generally, we observe that any “fold” between two points on a manifold serves to decrease the Euclidean distance between the points. This suggests an optimization that we can perform to compute the outputs Yi that unfold a manifold sampled by inputs Xi. In particular, we propose to maximize the sum of pairwise squared distances between outputs... By maximizing eq. (6), we pull the outputs as far apart as possible, subject to the constraints in the previous section. Before expressing this objective function in terms of the Gram matrix Kij, let us verify that it is indeed bounded, meaning that we cannot pull the outputs infinitely far apart. Intuitively, the constraints to preserve local distances (and the assumption that the graph is connected) prevent such a divergence... Thus, the objective function cannot increase without bound if we enforce the constraints to preserve local distances.

We can express the objective function in eq. (6) directly in terms of the Gram matrix Kij of the outputs Yi. Expanding the terms on the right hand side, and enforcing the constraint that the outputs are centered on the origin, we obtain: T(Y) = Tr(K). (9)
Thus, we can interpret the objective function for the outputs in several ways: as a sum over pairwise distances in eq. (6), as a measure of variance in eq. (9), or as the trace of their Gram matrix in eq. (9).

In other words, using the max-trace argument one finds a lower-dimensional manifold by unfolding that manifold subject to distance constraints. In the same way, an argument based on the minimization of the trace could be depicted as squashing that manifold subject to similar distance or angle constraints. Either way, one obtains a reduced version of the manifold: in the MVU case (max trace), some respect is given to the geodesic distance on the embedding, whereas in the min case, respect is given to the Euclidean distance. Obviously, in dimension 3, one wonders how a Möbius strip would fare in either situation. This is my interpretation so far; if you have better insight, feel free to enlighten me on this issue. As suggested by a commenter, I just re-read Jon Dattorro's very nice book, and it looks like this description of the minimization business is similar to his (see footnote 4.12, page 241):

Trace (tr G = ⟨I, G⟩) minimization is a heuristic for rank minimization (§7.2.2.1). It may be interpreted as squashing G, which is bounded below by X^T X as in (512).
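As a quick numeric sanity check of the identity behind eq. (9), the following few lines (mine, not from [1]) verify that for outputs centered on the origin, the sum of pairwise squared distances equals 2N times the trace of their Gram matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 50, 3
Y = rng.standard_normal((N, d))
Y -= Y.mean(axis=0)                    # center the outputs on the origin

K = Y @ Y.T                            # Gram matrix K_ij = <y_i, y_j>
pairwise = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)

# For centered outputs: sum_ij ||y_i - y_j||^2 = 2 N Tr(K); both prints agree.
print(pairwise.sum(), 2 * N * np.trace(K))
```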



References:

[1] Unsupervised learning of image manifolds by semidefinite programming, K.Q. Weinberger and L. K. Saul (2004). (Outstanding student paper award). In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR-04), Washington D.C. pdf

[2] Graph Laplacian Regularization for Large-Scale Semidefinite Programming, K. Q. Weinberger, F. Sha, Q. Zhu and L. K. Saul (2007). In B. Schoelkopf, J. Platt, and T. Hofmann (eds.), Advances in Neural Information Processing Systems 19. MIT Press: Cambridge, MA. pdf

[3] Benjamin Recht, Maryam Fazel, and Pablo Parrilo, Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

[4] Jon Dattorro, Convex Optimization & Euclidean Distance Geometry, Chapter 4.

Tuesday, October 23, 2007

Compressed Sensing: Author Needs Help

Does any reader of this blog have the following paper on their hard drive?

David Donoho and Yaakov Tsaig, Fast solution of ell-1-norm minimization problems when the solution may be sparse. (Preprint, 2006)

It used to be at: http://www.stanford.edu/%7Etsaig/Papers/FastL1.pdf

The HTML version is available from Google. It was listed on the Compressed Sensing repository website as of January 6, 2007 and was accessible until recently. The Compressed Sensing repository at Rice did not have a local copy, and one of the authors is looking for your help in finding this preprint (his account was wiped out).

Thanks for your help.

Thursday, October 18, 2007

Compressed Sensing: Hardware Implementation, Part III, The explosive birth of coded aperture.

[Update Nov. '08: I have made a list of most Compressed Sensing hardware here]

Before we start on the subject of hardware implementation, there is a new and comprehensive tutorial on Compressed Sensing by Mike Wakin and Justin Romberg.

I initially started the listing of hardware implementations in compressed sensing through the presentation of the single pixel camera at Rice, the random lens imager at MIT, and some spectrometer work at Duke; then there was a hyperspectral imager at Yale and an analog imager at Georgia Tech. I have seldom mentioned the obvious hardware implementation of the whole MRI work spearheaded at Stanford, but that one doesn't require a hardware modification per se, rather a change in how the hardware is operated. There is also the whole body of work surrounding the design of new A/D hardware named A/I (analog to information) that will be useful for capturing very high frequency signals, either through a Nonuniform Sampler (NUS) or a Random Pre-Integrator (RPI).

More work has been performed on some of these concepts by looking at how one can use the sparsity of spectral measurements and spatial information. At MIT, I had mentioned the Random Lens Imager (Random Lens Imaging, Rob Fergus, Antonio Torralba, Bill Freeman). In that project, they mentioned the fact that machine learning techniques could provide depth information from single shots. Instead of doing that, they have now resorted to putting an additional filter inside the lens to eventually provide depth information:

Image and Depth from a Conventional Camera with a Coded Aperture by Anat Levin, Rob Fergus, Fredo Durand, and Bill Freeman
the abstract reads:
A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocussing, or re-rendering of the scene from an alternate viewpoint.
One can see the similarity with the work performed by Andrew Ng and Ashutosh Saxena on learning depth from monocular images, where a MAP estimation using Laplacian (sparse) priors is performed to infer depth. The point is that this technique could readily use the reconstruction techniques of compressed sensing in order to reconstruct the data.

Also using coded apertures, and mentioned in another post, is the work performed at Duke on spectrometers and hyperspectral imagers. The paper by Ashwin Wagadarikar, Renu John, Rebecca Willett, and David Brady, entitled "Single disperser design for compressive, single-snapshot spectral imaging," in Adaptive Coded Aperture Imaging and Non-imaging Sensors, D. Casasent and T. Clark, eds., Proc. SPIE 6714 (2007), is still not available. However, we know that the reconstruction of the spectral cube is performed using the Gradient Projection for Sparse Reconstruction (GPSR) method (please note there is a new version: GPSR 3.0). While that paper is not yet released, Scott (a commenter) pointed out that there is another paper out on a similar subject: Michael Gehm, Renu John, David Brady, Rebecca Willett, and Tim Schulz, "Single-shot compressive spectral imaging with a dual-disperser architecture," Opt. Express 15, 14013-14027 (2007)
This paper describes a single-shot spectral imaging approach based on the concept of compressive sensing. The primary features of the system design are two dispersive elements, arranged in opposition and surrounding a binary-valued aperture code. In contrast to thin-film approaches to spectral filtering, this structure results in easily-controllable, spatially-varying, spectral filter functions with narrow features. Measurement of the input scene through these filters is equivalent to projective measurement in the spectral domain, and hence can be treated with the compressive sensing frameworks recently developed by a number of groups. We present a reconstruction framework and demonstrate its application to experimental data.

Here the solution is reconstructed using an Expectation-Maximization algorithm (e.g., a regularized version of the Richardson-Lucy algorithm). Other related information can be found in references [1], [2], and [3].
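As an illustration of the kind of EM iteration being referred to, here is a plain (unregularized) Richardson-Lucy update for a generic nonnegative linear model; the paper uses a regularized variant, and the toy system below is entirely my own stand-in, not their forward model.

```python
import numpy as np

def richardson_lucy(A, y, n_iter=500, eps=1e-12):
    """Plain Richardson-Lucy / EM iteration for y ~ A x with A, x, y nonnegative.

    This is the unregularized textbook update only; the paper uses a
    regularized variant of this kind of EM scheme."""
    x = np.ones(A.shape[1])                # flat nonnegative starting guess
    norm = A.sum(axis=0) + eps             # A^T 1, the sensitivity term
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)          # data divided by current prediction
        x *= (A.T @ ratio) / norm          # multiplicative update keeps x >= 0
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    A = rng.random((80, 40))               # toy nonnegative system matrix
    x_true = rng.random(40)
    y = A @ x_true
    x_hat = richardson_lucy(A, y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```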

With Hyper-GeoCam, a random lens imager based on the MIT design, we used random elements that diffract different wavelengths to different spatial locations, thereby enabling a random spectral-spatial mixing akin to the Duke design. The unmixing of these components is therefore directly amenable to Compressed Sensing reconstruction techniques. The main difference between those carefully crafted coded aperture imagers (Duke) and the random lens imager resides in the complexity of the reconstruction technique. The reason the random lens imager is not "taking off" (ahah) has to do with the fact that, in the calibration process, one cannot perform a matrix reconstruction with 10^6 elements, even though the matrix is sparse. The current techniques just do not allow for it yet (they do not scale well beyond a few hundred constraints/variables). The crafted coded aperture imager provides an idea of where those terms are, hence facilitating the location of the non-zero components of the calibration matrix. Another aspect of the problem is that with random materials, reflection may be wavelength sensitive, and the whole calibration process is instrument specific. I am wondering aloud whether the technique of Graph Laplacian Regularization for Large-Scale Semidefinite Programming [6] might help with the calibration of the random lens imager.

References of interest:
[1] David Brady's lectures on computational imaging
[2] Coded aperture spectroscopy and spectral tomography
[3] Optical Imaging
[4] Ashwin Wagadarikar, Renu John, Rebecca Willett, and David Brady, "Single disperser design for coded aperture snapshot spectral imaging," Submitted to a feature issue on Computational Optical Sensing and Imaging (COSI), Applied Optics (2007)
[5]Ashwin Wagadarikar, Michael Gehm, and David Brady, "Performance comparison of aperture codes for multimodal, multiplex spectroscopy," Applied Optics 46, 4932-4942 (2007)
[6] Graph Laplacian Regularization for Large-Scale Semidefinite Programming, K. Q. Weinberger, F. Sha, Q. Zhu and L. K. Saul (2007). In B. Schoelkopf, J. Platt, and T. Hofmann (eds.), Advances in Neural Information Processing Systems 19. MIT Press: Cambridge, MA

Tuesday, October 16, 2007

Judging the Autism Charts Challenge

When writing this entry on the difficulty of displaying data when diagnosing autism at age 2, I did not expect such a large number of people to put their minds into making the graphs better. Andrew Gelman and Masanao Yajima made a challenge out of it. It got picked up by the folks at JunkCharts, EagerEyes.org, and Perceptual Edge, and many brain cycles went into this Challenge. Previously, I had already stated that I thought autism was a Grand Challenge, and this is in no small part because the diagnosis phase is not early enough, not accurate enough, and sometimes not useful enough.
  • Not early enough: this is the reason why the study [1] by Catherine Lord was performed. Instead of testing at the "normal" age of 3, testing at age 2 is preferred with regard to intervention.
  • Not accurate enough and not useful enough: there is currently very little difference in treatment between kids that have been diagnosed as autistic and those diagnosed with Pervasive Developmental Disorder - Not Otherwise Specified (PDD-NOS).
Eventually, one cannot even devise clinical trials for drug development until some of these tests are better refined. The figures entered as part of the Challenge can be seen in full scale here:










Please go to Andrew Gelman and Masanao Yajima's blog and answer the survey at the very end. Please note there are TWO questions.
  • Which plot did you like best?
  • Which plots did you find useful?
Please answer them both. Thank you to Antony Unwin, Stack Lee, Robert Kosara, Patrick Murphy, Andrew Gelman and Masanao Yajima for contributing to this effort.

[1] Autism From 2 to 9 Years of Age, Catherine Lord, Susan Risi, Pamela S. DiLavore, Cory Shulman, Audrey Thurm, Andrew Pickles. Arch Gen Psychiatry, Vol. 63, pp. 694-701, June 2006.

Liked this entry ? subscribe to the Nuit Blanche feed, there's more where that came from

Friday, October 12, 2007

Is your desk a reflection of your memorization process ?


It's funny: it looks like one of the ways I stay on top of several subjects that are not part of my day-to-day job resembles some of the techniques Jessica Logan describes as potentially interesting for memorization. She just gave a seminar at Rice on "Brain and Behavior: How We Create and Maintain Memory in Early Adulthood and Advanced Age" (the webcast is here).

Brain and Behavior: How We Create and Maintain Memory in Early Adulthood and Advanced Age” - No one likes it, but it happens to the best of us—that frustrating feeling of trying to remember something you know that you knew at one time, but which now inexplicably eludes you. From older adults lamenting another "senior moment" to students stumped on a final exam, memory failures are an unwelcome but woefully familiar experience for everyone. This talk will focus on research that strives to help us understand how some of these failures occur and how we can improve our chances of avoiding them in the future. Cognitive research that integrates both behavioral and neuroimaging (fMRI) techniques to explore memory formation and retrieval in healthy younger and older adults will be discussed.


In her fascinating presentation, she makes the point that studies seem to show that cramming on a subject is worse in the long run with regard to remembering it. She also presents results for two other techniques, called the equal-interval and spaced-interval techniques. In cramming, memorization happens by repeating the same subject over and over and then taking a test on it. Most college students know that this might work for a test, but it is not an optimal strategy for the long term. The two other techniques use some type of time interval between repetitions of the same subject in order to reinforce memorization. The equal-interval technique asks for an equal amount of time between recalls, whereas the spaced-interval technique demands that recalls occur at different (random) time intervals. Interestingly, young adults fare badly at recalling memories acquired through the spaced and equal intervals, whereas older adults do see a benefit from these methods (as opposed to cramming). Additional information can be found in this paper [1]. I find it fascinating because the way this blog is structured and how I set up my desk remind me of how I eventually always randomly come back to a subject, rediscover it, and remind myself very quickly of the important lessons from it. The market for products on memory is very large, yet none of these techniques seem to be on offer. I note her statements about spaced memorization:
  • Making practice more difficult initially can produce better performance on a final test
  • Challenging the learner in between practice attempts enhances the benefit of repetition
I think this is the main reason why people should read this blog on compressed sensing when trying to learn something else. The issue of sleep is also mentioned indirectly in her 24-hour study, and I wonder if it yields the same results as for sensorimotor control.

Reference: [1] Does Expanded Retrieval Produce Benefits Over Equal-Interval Spacing? David Balota, Janet Duchek, Susan Sergent-Marshall, and Henry Roediger III, Department of Psychology, Washington University.
Photo credit: ESA/NASA/ASI/JPL/SSI, Radar view of Titan's methane lakes.

Thursday, October 11, 2007

Producing Maps using Commercial Overflights: Part deux


In a previous entry, I mentioned the simple possibility of producing maps from commercial airliners using point-and-shoot cameras. The reason for that post was to show real examples of what can be obtained from different heights and how these images can be put together using state-of-the-art stitching programs. In the comments section, Paul asked the following questions:

the only questions I have are whether a) they'd be systematic enough to provide reliable coverage, and b) who'd be tasked with cleaning and analyzing the images (always a pain in the whatnot).
while Mikel asked:

Given that most commercial flights go along similar flight paths, would the coverage be potentially comprehensive enough to be useful?

What area magnitude could be covered by a single shot of the sensor at 35000 feet?


Let me first try to address coverage, which seems to be the underlying issue. First, let us remember that this solution is really for when you have to survey or map a region hours, if not minutes, after a disaster. We are talking about finding out whether specific large structures, roads, villages, or towns are in good shape or not. This applies to New Orleans and all its surrounding regions right after Katrina, or to California if the big one hits a large swath of land. In both cases, we know that satellites will provide imagery within the next three to six days. In some other countries where accurate maps do not exist, we may not have meaningful satellite coverage at all. In either of these extreme cases, we are talking about providing as much information as possible. The coverage of this technique might be small compared to Google Maps or Google Earth or to a specialized satellite campaign, but that issue is trumped by the fact that the information is timely. Obviously there is the issue of flight corridors, but I am willing to bet that a Houston-Miami flight could take another lawful corridor if somehow the pilot knew that imagery taken from the plane could help the search and rescue effort in New Orleans. Let us also note that private efforts could yield improved coverage, following the example of NOAA, which flew Cessnas over the Katrina-ravaged area and had the results incorporated into Google Maps. While NOAA is a government agency, a similar private effort could be undertaken. Please note the overlapping photos in these NOAA shots: stitching these images is easy to do with the software I mention later. The difference between this NOAA effort and one using commercial overflights with point-and-shoot cameras resides in three things:
  • no need to retrofit a plane with an outside camera
  • no need for GPS correction, pointing hardware, or an engineer's time: the stitching algorithm does the map
  • the ability to provide a much faster response time.


The other enabling feature of this idea can be traced back to recent advances in intelligent stitching algorithms born out of machine vision. Currently, these algorithms can automatically "merge" photos taken from nearly the same altitude, point of view, and optical focus by different cameras. The merging/stitching is done absolutely automatically: no need for a human expert in the loop when putting those images together. But don't take my word for it, try it yourself. The free version of Autopano produces panoramas with watermarks. So next time you take a plane, use your point-and-shoot camera and try it.

Why would we need several shots of the same place from different airplanes or from different people in the same plane? Mostly because of clouds. Clouds are a pain, and one can begin to remove them only with enough pictures to eventually cover the ground. Also, since window seats are on both sides, it pays to have people on either side point their cameras in different directions. In my experience, the best window in an airliner is the one next to the flight attendants at the very back of the plane. It also allows a view nearly straight down (nadir). I'll write a more technical note on what worked and what did not in a future entry.


With regard to actual coverage at 35,000 feet, one can begin to get an idea of the features of interest by using Google Earth and setting the viewpoint to a 35,000-foot altitude. Generally, the camera has a smaller view angle, so one really sees less than that from the plane. But I would venture that we can get a swath of at least 30 miles of useful data (depending on the optical zoom of the point-and-shoot camera).
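For a rough sense of scale, the nadir ground footprint of a single frame follows from the altitude and the camera's field of view; the back-of-the-envelope sketch below assumes a roughly 60 degree field of view, and oblique framing through the window of course covers ground well beyond this straight-down figure.

```python
import math

def nadir_footprint_km(altitude_ft, fov_deg):
    """Width of the ground strip seen straight down from a given altitude."""
    altitude_km = altitude_ft * 0.0003048
    return 2 * altitude_km * math.tan(math.radians(fov_deg) / 2)

# Cruise altitude and an assumed ~60 degree wide-angle field of view.
width_km = nadir_footprint_km(35_000, 60)
print(f"nadir footprint: {width_km:.1f} km ({width_km / 1.609:.1f} miles) across")
```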

To summarize, you get what you can with regard to coverage, but the more cameras in the air, the more likely you are to get data that accumulates to eventually produce some type of map. No need for experts to clean up the data: a laptop, stitching software, and a point-and-shoot camera will do.

In the end the idea is pretty simple: once a disaster area is known and commercial airlines are flying over it, willing passengers take photos during the whole trip from their window seats (with some simple instructions). Once landed, they either give their 8 GB SD cards to somebody, upload their data directly onto a laptop at the arrival airport, or upload them to the web. Then, using one of those intelligent stitcher programs that cost about $100, you can put all the images together even if they were taken by different cameras at different times (given that the altitude was about the same). One can then produce a relatively large map and put it on the web without the need to "connect" it to Google Maps or Google Earth.


Photo 1: This picture was taken at 35,000 feet over Iceland. Another plane using the same corridor?

Photo 2: Credit NOAA

Tuesday, October 09, 2007

Compressed Sensing: Reweighted L1 and a nice summary on Compressed Sampling


In a previous entry, I noted that Emmanuel Candès had presented a new iterative scheme at IMA that was sure to improve the current L1 reconstruction techniques used in Compressed Sensing. The scheme was called reweighted L1. It is now the subject of a technical report: Enhancing sparsity by reweighted L1 minimization by Emmanuel Candès, Mike Wakin and Stephen Boyd

It details the technique, but also features a discussion of reconstruction with overcomplete dictionaries, a comparison with the L2-based FOCUSS algorithm, and a brief comparison with Chartrand's work. I noted two items from this paper which I am sure will be developed further later (a small sketch of the reweighted iteration follows the list below):

  • The potential for analysis-based reconstruction to be superior to synthesis-based L1 recovery (the two were shown to be different in the case of overcomplete bases; see [1], [2])
  • The potential usefulness of nonconvex penalties in sparse signal recovery
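For reference, here is a minimal sketch of the reweighted L1 iteration itself, with each pass solved as a linear program; the epsilon value, iteration count, and problem sizes are illustrative choices of mine rather than the settings of the technical report.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, b, w):
    """Solve min sum_i w_i |x_i| s.t. A x = b as a linear program over [x; t]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])    # encodes |x_i| <= t_i
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")
    return res.x[:n]

def reweighted_l1(A, b, n_iter=3, eps=0.1):
    """A few reweighting passes: small entries of the current solution get large weights."""
    x = weighted_l1(A, b, np.ones(A.shape[1]))      # first pass is plain basis pursuit
    for _ in range(n_iter):
        x = weighted_l1(A, b, 1.0 / (np.abs(x) + eps))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, m, k = 128, 50, 15
    A = rng.standard_normal((m, n))
    x0 = np.zeros(n)
    x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    b = A @ x0
    xhat = reweighted_l1(A, b)
    print("relative error:", np.linalg.norm(xhat - x0) / np.linalg.norm(x0))
```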

It also looks like there is an upcoming issue of IEEE Signal Processing Magazine on Compressed Sensing, in which Candès and Wakin have written the following: People hearing without listening: an introduction to compressive sampling.

This is a well-written introduction to Compressed Sensing, with particular attention to explaining clearly the Restricted Isometry Property. There is also a good summary of the Analog-to-Information (A/I) work, with a mention of how reweighted L1 is being used in that context. Much of it had already been shown in the IMA videos; now it is on paper. The most fascinating part of the A/I attempts is the reason why Compressed Sensing is really needed: the ability to sample at a much higher rate than is currently feasible.
Whereas it may not be possible to digitize an analog signal at a very high rate, it may be quite possible to change its polarity at a high rate. The idea is then to multiply the signal by a pseudo-random sequence of plus and minus ones, integrate the product over time windows, and digitize the integral at the end of each time interval. This is a parallel architecture and one has several of these random multiplier-integrator pairs running in parallel using distinct or even nearly independent pseudo-random sign sequences.
When reading this, I could not help but think about how it could be used in optical interferometry.
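Here is a minimal sketch of that measurement operator (random sign flips followed by integrate-and-dump over time windows); the sizes and the single-branch setup are arbitrary illustrative choices, not an actual A/I design.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 1024                     # Nyquist-rate samples in the time window
window = 16                  # integration window length
m = n // window              # number of digitized values per window of n samples

# Pseudo-random +/-1 chipping sequence applied at the full rate.
signs = rng.choice([-1.0, 1.0], size=n)

# Integrate-and-dump: sum each length-`window` block of the sign-flipped signal.
H = np.kron(np.eye(m), np.ones((1, window)))   # m x n block-summation matrix
Phi = H * signs                                # overall measurement matrix: y = Phi @ x

# Example: measure a signal that is sparse in frequency.
t = np.arange(n)
x = np.cos(2 * np.pi * 37 * t / n) + 0.5 * np.sin(2 * np.pi * 111 * t / n)
y = Phi @ x
print(Phi.shape, y.shape)    # 64 measurements taken from 1024 Nyquist-rate samples
```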

References:
[1] M. Elad, P. Milanfar, and R. Rubinstein, "Analysis Versus Synthesis in Signal Priors", Inverse Problems. Vol. 23, no. 3, pages 947-968, June 2007.

[2] J.-L. Starck, M. Elad, and D.L. Donoho, "Redundant Multiscale Transforms and their Application for Morphological Component Analysis", the Journal of Advances in Imaging and Electron Physics, Vol. 132, pp. 287-348, 2004.

Friday, October 05, 2007

Compressed Sensing: Chaining Pursuit Code for dimension reduction in the L1 norm for sparse vectors


Make that seven codes useful for doing compressed sensing. Joel Tropp has made available a Matlab toolbox to perform Compressed Sensing using Chaining Pursuit [tar.gz]. The Chaining Pursuit algorithm is explained in the following paper, entitled Algorithmic linear dimension reduction in the ℓ1 norm for sparse vectors, by Anna Gilbert, Martin Strauss, Joel Tropp, and Roman Vershynin:


Using a number of different algorithms, we can recover approximately a sparse signal with limited noise, i.e., a vector of length d with at least d − m zeros or near-zeros, using little more than m log(d) nonadaptive linear measurements rather than the d measurements needed to recover an arbitrary signal of length d. We focus on two important properties of such algorithms.
• Uniformity. A single measurement matrix should work simultaneously for all signals.
• Computational Efficiency. The time to recover such an m-sparse signal should be close to the obvious lower bound, m log(d/m).
This paper develops a new method for recovering m-sparse signals that is simultaneously uniform and quick. We present a reconstruction algorithm whose run time, O(m log²(m) log²(d)), is sublinear in the length d of the signal. The reconstruction error is within a logarithmic factor (in m) of the optimal m-term approximation error in ℓ1. In particular, the algorithm recovers m-sparse signals perfectly and noisy signals are recovered with polylogarithmic distortion. Our algorithm makes O(m log²(d)) measurements, which is within a logarithmic factor of optimal. We also present a small space implementation of the algorithm.
These sketching techniques and the corresponding reconstruction algorithms provide an algorithmic dimension reduction in the ℓ1 norm. In particular, vectors of support m in dimension d can be linearly embedded into O(m log² d) dimensions with polylogarithmic distortion. We can reconstruct a vector from its low-dimensional sketch in time O(m log²(m) log²(d)). Furthermore, this reconstruction is stable and robust under small perturbations.

Joel also came out with bounds of interest for the small example I gave earlier on approximating sines with Diracs (Compressed Sensing: How to Wow your friends). The article is entitled "On the linear independence of spikes and sines" [.pdf | arXiv math.FA 0709.0517].

Photo Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute: New Horizons Spacecraft watching Io and its volcano and Europa in the same photo shoot.