
Wednesday, September 26, 2007

"but that can’t be — it’s still in Google Maps!": Making maps using commercial overflights

Paul Currion talks (in OpenStreetMap and the next disaster) about the need for Google Maps to be updated for disasters. As we know, commercial airlines fly over disaster areas all the time. As can be seen from this magnificent flight pattern over the U.S., much of the territory is covered by airlines.


And sometimes the need is not so much for very high resolution as for knowing that a bridge still exists. From Mikel Maron's presentation and notes:
My friend Jesse Robbins… headed down and helped lead the set up of a relief operation, not too far from where this bridge on US Route 90 had been completely destroyed. However, the Red Cross was giving evacuation directions to cross this bridge, so loads of cars would stop at the edge of this peninsula with confused drivers. Jesse phoned the Red Cross multiple times to complain the bridge wasn’t there anymore .. and they responded “but that can’t be — it’s still in Google Maps!”
Maybe one way to produce maps without too much detail is to have people take pictures from planes and upload all their pictures to a web site. Stitching software would do the rest. Since nobody needs to be an expert at stitching, a simple program like Autopano Pro lets people drag and drop images into a folder and voila.

Here is an example of overflying an area using a commercial plane at 3,000 feet with a 3x optical zoom (not 10,000 feet as written there; click on the link).

Here is an example of overflying an area with a jet at 30,000 feet at 500 mph with a 3x optical zoom (it is reduced to 10 percent of full scale, but it can be shared on the web at full scale through Zoomify).

Here is an example of overflying an area with a stratospheric balloon at 120,000 feet at 50 mph with a 12x optical zoom.
Clearly, a 3x point-and-shoot camera can tell you whether the bridge is still there.

Cognitive Dissonance


No, that would be "We're in front of you, Discovery." Thank you, NASA Watch. Let's enjoy pictures like this one before the shuttle is retired (in two years). That is just enough time for IMAX to do a movie on the last Hubble mission. We had a project that aimed at replacing the film-based technology with digital, but it eventually got canceled once it became clear that the Shuttle would not be around for much longer.

Tuesday, September 25, 2007

Don't mess with the Pyramids, people take it personally


When Michel Barsoum came to town to present his latest findings, I was not expecting it to have any relevance to items of my research interest. His presentation was about the process involved in the construction of the Pyramids.

Michel Barsoum and his colleagues have found some evidence that parts of the Great Pyramids of Giza were built using an early form of concrete, debunking an age-old belief that they were built using only cut limestone blocks.

The amount of resistance that goes with this theory is pretty impressive. For an idea of the fierceness of the debate, one can read the comments on this blog. As far as I can tell, his main interest in the theory lies in the discovery of low-cost concrete materials to be used in poor countries. The pyramid story, while interesting in its own right, is clearly setting the stage for advances in our understanding of low-cost construction materials.

During the presentation, Michel Barsoum mentioned that if his theory holds, i.e. the upper part of the pyramids is made of concrete, then it is very likely that the top part of the pyramids still holds millions of liters of water. That water would be the reason early electromagnetic measurements were negative.

It so happens that there are other methods that can be used to find out if there is water in rocks:

  • Neutron thermalization is one of them. In neutron transport, it is very well known that neutrons slow down very quickly when they come into contact with water; this is the mechanism at the heart of the Pressurized and Boiling Water Reactors (PWR/BWR) used in most nuclear power plants. So when we try to find water on the Moon or Mars, neutrons are generated and scattered through the rocks, and detecting how much they have been slowed down (this is called thermalization) is key to understanding the medium of interest. The idea is that neutrons decelerate to low (thermal) speeds very quickly when they scatter off hydrogen, because the two have about the same mass, and hydrogen is generally an indicator of water. The neutrons can either come from galactic cosmic rays (very high energy) or be produced by man-made generators at 14 MeV (as in the oil well logging business). In either case the slowing down is so rapid that you don't have much of a neutron population in the intermediate (epithermal) range. The technique was used during the Clementine mission to find water at the south pole of the Moon. A short numerical sketch of the slowing-down argument follows this list.
  • Another possibility for detecting water is with infrared. One can take a look at data from satellites that have IR sensors, such as the Hyperion hyperspectral imager on board EO-1, whose spectral bands include wavelengths in the IR range. One shot of the pyramids can be seen next to this paragraph (but if one wants a better one, one can task the satellite for $750 by going through the USGS interface).
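To make the slowing-down argument concrete, here is a minimal numerical sketch in Matlab using the standard mean logarithmic energy decrement per collision; the source energy and the set of nuclei are chosen purely for illustration:

% Average number of elastic collisions needed to bring a neutron from a
% source energy down to thermal energy (0.025 eV), using the mean
% logarithmic energy decrement xi. Illustrates why hydrogen (i.e. water)
% thermalizes neutrons so much faster than heavier nuclei.
E0  = 14e6;                  % source energy in eV (e.g. a 14 MeV generator)
Eth = 0.025;                 % thermal energy in eV
A   = [1 12 16 28];          % mass numbers: hydrogen, carbon, oxygen, silicon
names = {'H', 'C', 'O', 'Si'};
for k = 1:length(A)
    a = A(k);
    if a == 1
        xi = 1;              % limiting value of the formula for A = 1
    else
        xi = 1 + (a-1)^2/(2*a) * log((a-1)/(a+1));
    end
    n = log(E0/Eth)/xi;      % average number of collisions to thermalize
    fprintf('%-2s : xi = %5.3f, about %3.0f collisions\n', names{k}, xi, n);
end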
When asked, at the end of his presentation, why he felt there was so much resistance to a new explanation (an explanation that is very persuasive, as it is supported by photos taken at the Pyramids and by other artifacts at the Louvre in France), his answer was sort of funny. Most kids, all over the world, are taught about the pyramids and how they were built with enormous blocks of stone. People take it personally when you tell them that this childhood explanation has holes and that a simpler, more elegant explanation does a better job.

On the difficulty of Autism diagnosis: Can we plot this better?

[ New Update: see here on the Challenge, results and survey]

[Update: I have made some updates to the text of this entry to reflect my better understanding of the graphs; the graphs themselves are unchanged]

When I met Catherine Lord a year ago, I was struck by this graph in her presentation (excerpted from "Autism From 2 to 9 Years of Age" [1]):

How can one understand it? Each circle denotes a set of kids that have gone through one test designed to figure out whether or not they were affected by Autism (PL-ADOS indicates the Pre-Linguistic Autism Diagnostic Observation Schedule; ADI-R, the Autism Diagnostic Interview-Revised; and Clinician indicates that a clinician made an assessment based on an interview with the kid). Intersections between circles point to populations of kids that have gone through several tests [Update: and tested positive on these tests]. The intersection of all three circles indicates kids that have gone through the three tests (Clinician, ADI-R, PL-ADOS) [and tested positive on all three. A kid could conceivably test positive on the three tests and not appear in either graph A or B]. The number in each circle indicates the number of kids that, at age 2, were deemed Autistic (left-hand circles) or in the Autism Spectrum (right-hand circles) by the test associated with that circle (Clinician, ADI-R, PL-ADOS). The number in parentheses indicates the percentage of those kids that received the same diagnosis at age 9. Let's take an example: in the left-hand group of three circles, there is a label reading 16 (56%). This can be translated into: [Update: there were 16 kids that were diagnosed with Autism at age 2 AND] were also diagnosed with autism using the ADI-R test, and 56% of them kept that diagnosis at age 9. [Update: One more thing: the categories Autism [A] and ASD [B] denote kids that were deemed Autistic or with ASD through a Best Estimate method at age 2; this method uses several means to come to that conclusion. The percentages in parentheses denote whether these kids are still Autistic or with ASD seven years later, using a Best Estimate method at age 9]. Some of these numbers are simply stunning because they show our current inability to reliably determine what constitutes autism at age 2. This matters all the more because an earlier diagnosis is really needed to have a chance of changing the outcome of this condition. Catherine and her co-workers eventually comment in the paper:

Diagnosis of autism in 2-year-olds was quite stable up through 9 years of age, with the majority of change associated with increasing certainty of classifications moving from ASD/PDD-NOS to autism. Only 1 of 84 children with best-estimate diagnoses of autism at age 2 years received a nonspectrum diagnosis at age 9 years, and more than half of children initially diagnosed with PDD-NOS later met autism criteria. Nevertheless, more than 10% of children with diagnoses of PDD-NOS at age 2 years received nonspectrum best-estimate diagnoses (ie, not autism or ASD) by age 9 years, and nearly 30% continued to receive diagnoses of PDD-NOS,indicating mild symptoms at age 9 years. A significant minority of children with milder difficulties within ASD at age 2 years showed only mild deficits in the clinical ASD range at age 9 years. Classifications changed substantially more often from ages 2 to 5 years than from ages 5 to 9 years. The bulk of change in diagnosis occurring in early years is consistent with another recent study. At age 2 years, diagnostic groups were more similar in functioning and IQ than the diagnostic groups identified at age 9 years, when the autistic group showed very poor adaptive functioning and the PDD-NOS group, much less abnormal verbal and nonverbal IQ. Among this specialized group of clinicians, clinical judgment of autism at age 2 years was a better predictor of later diagnosis than either standardized interview or observation. Contemporaneous agreement between clinical judgment and best-estimate judgment for 2-year olds was equal to that found between experienced raters in the DSM-IV field trials for older children and adults. Though the clinical diagnoses at age 2 years were made without knowledge of the ADI-R and ADOS algorithm scores, each clinician had administered either the PL-ADOS or the ADI-R and had the opportunity to discuss his or her impressions with the experienced clinician who had administered the other instrument. Thus, the information available to them was very different from the information obtained during a typical single office visit to a clinical psychologist or developmental pediatrician. The use of standardized measures seems likely to have improved the stability of diagnosis both directly through straightforward use of algorithms for autism and ASD and also indirectly through structuring clinical judgment. Of cases in which the classifications yielded by both instruments were not supported by the clinicians at age 2 years, 40% were children with severe mental retardation (and not autism) or children with very difficult behavior (and not autism), while the remainder were mild cases of autism characterized as uncertain. On the other hand, clinical judgments were consistently underinclusive at age 2 years, both for narrow diagnoses of autism and for broader classifications of ASD at age 9 years. Thus, scores from standardized instruments also made real contributions beyond their influence on informing and structuring clinical judgment. Overall, while standardized research instruments at age 2 years did not fully capture the insight in the form of certainty ratings made by experienced, well-trained clinicians, this insight was not by itself sufficient.

My main problem with this graph is that it does not make sense right away. Even though I have been thinking about it for a while, I still cannot come up with a better way of displaying this data. This assessment over time is unique in the annals of Autism studies and hence a major milestone; I wish it were better designed to convey some of its underlying statistics, or lack thereof. Examples shown by Andrew Gelman on his blog or by Edward Tufte may be a good starting point. In particular, I am wondering how one could adapt the graphic on cancer survival rates redesigned by Tufte to this study. Initially, the Lancet study on cancer survival rates showed this graph:


Tufte redesigned it into a stunningly comprehensible table:


Can we do a better plot of the Autism study?
[ Update 1: Andrew has posted the beginning of an answer here ]



[1] Autism From 2 to 9 Years of Age, Catherine Lord, Susan Risi, Pamela S. DiLavore, Cory Shulman, Audrey Thurm, Andrew Pickles. Arch Gen Psychiatry, Vol. 63, pp. 694-701, June 2006.

Monday, September 24, 2007

Guiding Intent


No MRI for you: It looks as though one of the primary reasons for not using MRI to detect behavior (and make money off of it) is not that the science of dimensionality reduction from brain activity is barely understood. More likely, it is that some type of regulation will forbid the use of MRI altogether. This is a stunning development, as there is no scientific ground on which these regulations stand. I am only half joking on this topic, as I cannot understand how an entity like the EU can pass a law that in effect will kill people and prevent them from doing Compressed Sensing.

This news, and the fact that not everybody has access to a full-scale fMRI system, is all the more reason to consider reverting to more passive means of detecting intention. Previously, I mentioned the issue of eye tracking to detect autism (a business case: Part I, Part II, Part III). The issue of detecting autism early is all the more important for families that already have a case of autism: they want to know very early whether the same condition affects the new siblings. The idea is that, through very early detection and appropriate therapy, the brain may train itself to work better very early on, resulting in a tremendous difference in the final diagnosis. As it turns out, these posts were also hinting that a gaze-following deficiency was a likely culprit for language deficiencies. But there seems to be an additional reason why eye tracking could attract an even larger crowd (not just people affected by autism): scientific inference making, or guiding intention.

Michael J. Spivey and Elizabeth Grant [1] did a study in 2003 suggesting a relationship between eye movements and problem solving, by showing that certain patterns of eye movement emerged as participants got closer to solving the problem. More recently, Laura E. Thomas and Alejandro Lleras tried to evaluate this further in this paper [2]:
In a recent study, Grant and Spivey (2003) proposed that eye movement trajectories can implicitly impact cognition. In an "insight" problem-solving task, participants whose gaze moved in trajectories reflecting the spatial constraints of the problem's solution were more likely to solve the problem. The authors proposed that perceptual manipulations to the problem diagram that influence eye movement trajectories during inspection would indirectly impact the likelihood of successful problem solving by way of this implicit eye-movement-to-cognition link. However, when testing this claim, Grant and Spivey failed to record eye movements and simply assumed that their perceptual manipulations successfully produced eye movement trajectories compatible with the problem's solution. Our goal was to directly test their claim by asking participants to perform an insight problem-solving task under free-viewing conditions while occasionally guiding their eye movements (via an unrelated tracking task) in either a pattern suggesting the problem's solution (related group) or in patterns that were unrelated to the solution (unrelated group). Eye movements were recorded throughout the experiment. Although participants reported that they were not aware of any relationship between the tracking task and the insight problem, the rate of successful problem solving was higher in the related than in the unrelated group, in spite of there being no scanning differences between groups during the free-viewing intervals. This experiment provides strong support for Grant and Spivey's claim that in spatial tasks, cognition can be "guided" by the patterns in which we move our eyes around the scene.
In the paper, they eventually claim:

We believe that eye movement trajectories can serve as implicit “thought” guides in spatial reasoning tasks...Although additional studies are necessary to determine how powerful this link between eye movements and cognition is, it is now clear that not only do eye movements reflect what we are thinking, they can also influence how we think.

This is fascinating, and I wonder whether the use of serious games for therapy or cognition improvement might be a good place to start.



[1] Eye Movements and Problem Solving: Guiding Attention Guides Thought, Elizabeth R. Grant and Michael J. Spivey
[2] Moving eyes and moving thought: The spatial compatibility between eye movements and cognition, Laura E. Thomas, Alejandro Lleras (or here)
[3] Image: Evil gives, Gapingvoid.com, Hugh Macleod

Friday, September 21, 2007

Compressed Sensing: How to wow your friends.


Here is a short example showing the power of Compressed Sensing. I modified a small program written by Emmanuel Candès for his short course at IMA ("The role of probability in compressive sampling", Talks (A/V) (ram)).
Before you continue, you need to have Matlab and CVX (which uses SeDuMi as its underlying solver) installed on your system. Let us imagine you have a function f that is unknown to you, but which you know is sparsely composed of sine functions. Let us imagine, for the sake of argument, that:

f(x)=2*sin(x)+sin(2x)+21*sin(300x)

The usual way of finding this decomposition is to compute the scalar product of the function f with sines and cosines covering as large a frequency range as possible (in order to capture the high frequency content of the function). This is done iteratively because one doesn't know in advance that a frequency of 300 is part of the solution, and in effect one is solving a least squares problem.
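As a point of comparison, here is a minimal Matlab sketch of that classical least-squares approach; it uses the same sine dictionary as the compressed sensing code below, and a dense sampling of f chosen purely for illustration:

% Classical approach: sample f densely and solve an over-determined
% least-squares problem for the sine coefficients. Note that this needs
% on the order of n samples of f to pin down n coefficients.
n  = 512;                                    % number of candidate frequencies
xx = (1:1000)';                              % dense sampling locations (illustrative)
b  = 2*sin(xx) + sin(2*xx) + 21*sin(300*xx); % samples of f at these points
C  = sin(xx*(1:n));                          % C(j,i) = sin(i*xx(j))
x_ls = C \ b;                                % least-squares estimate of the coefficients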

With Compressed Sensing, we know that if we measure this function in a basis that is incoherent with sines and cosines, we are likely to find the decomposition using L1 minimization. We know that Diracs and sines are incoherent (please note we are not using random projections), so we evaluate the scalar product of f with several Dirac functions centered at, say, 18 different locations. In effect, we query the value of f(x) at the following points x, chosen by hand more or less at random: 1, 4, 6, 12, 54, 69, 75, 80, 89, 132, 133, 152, 178, 230, 300, 340, 356, 400.
We then use these values to solve for the coefficients of the sine series expansion of f(x) by performing an L1 minimization with CVX/SeDuMi:

clear
% size of signal
n = 512;
x0 = zeros(n,1);
% function is f(x)=2*sin(x)+sin(2x)+21*sin(300x)
x0(1)=2;
x0(2)=1;
x0(300)=21;
% evaluating f at all sample points xx
% f(1), f(4), f(6).....f(340) f(356) f(400)
xx=[1 4 6 12 54 69 75 80 89 132 133 152 178 230 300 340 356 400];
% C is the measurement matrix: row j is the sine dictionary evaluated at xx(j)
C = zeros(length(xx),n);
for i=1:n
C(:,i)=sin(i*xx)';
end
b = C*x0;
% b is the result of evaluating f at all sample points xx
% f(1), f(4), f(6).....f(340) f(356) f(400)
% C is the measurement matrix
% let us solve for x and see if it is close to x0
% solve l1-minimization using SeDuMi
cvx_begin
variable x(n);
minimize(norm(x,1));
subject to
C*x == b;
cvx_end
figure(1)
plot(abs(x-x0),'o')
title('Error between the solution found by L1 and the actual solution')
figure(2)
plot(x,'*')
hold on
plot(x0,'o')
title('Solution found for x using L1 minimization')

With only 18 function evaluations we have a near-exact reconstruction. You may want to try the L2 norm (i.e., least squares) by replacing:
minimize(norm(x,1));

by
minimize(norm(x,2));
and then change the number of function evaluations needed to reach the same result. With 200 function evaluations, the error between the reconstructed solution and the actual solution is still several orders of magnitude larger than with the L1 technique and 18 function evaluations.
Pretty neat, huh?




Tuesday, September 18, 2007

Imaging from the sky: When You Become The Map

There is a new call from the HASP folks for new payloads to be flown next year on a NASA high altitude balloon. The deadline is December 18, 2007, and it is directed toward undergraduate projects.
From the HASP website:

September 17, 2007: HASP CALL FOR PAYLOADS 2007-2008 RELEASED: The HASP Call for Payloads 2007-2008 (CFP) has been released and application materials are now available on the HASP website “Participant Info” page. Student groups interested in applying for a seat on the September 2008 flight of HASP should download these materials and prepare an application. New for this year is an increase in the allowed weight of the student payloads. Small class payloads can now mass up to 3 kilograms and large class payloads can weigh as heavy as 20 kilograms. Applications are due December 18, 2007 and selections will be announced by mid-January 2008.


The photos below and to the side are a 10 percent scale composite of several photos taken at 30,000 feet, with a 3x optical zoom, at 500 mph. The speed makes it very unlikely to get any good detail without some type of processing. And so, for the time being, imaging the ground with some precision using a point-and-shoot camera seems feasible only for payloads on balloons.
Compared to satellite imagery, one of the interesting capabilities is the possibility of removing the effect of clouds. In satellite imagery, cameras use pushbroom technology, where the imager is a line of pixels (not a square die). One consequence is the inability to photograph the same object twice in a single sweep. Using off-the-shelf cameras on much slower balloons allows one to obtain multiple images of the same object at different angles. This is important when one wants to evaluate whether an apparent object is just noise.

Chris Anderson of The Long Tail book mentioned a different approach by Pict'Earth: taking images from the sky using UAVs and patching them into Google Earth. This is interesting but, as I have mentioned before, when you take enough images you don't need Google Earth, and you don't need the headache of re-projecting these images onto some map (even though it looks easier with Yahoo Map Mixer for small images), because you are the map. No need for IMUs or GPS instrumentation. This is clearly an instance of advances in stitching algorithms removing hardware requirements on the sensors. As for the current results Chris is getting from PTGui, I am pretty sure the Autopano folks will enable orthographic projection soon in order to cater to that market. With balloons, the view is from very far away, so the stitching algorithm has no problem putting images together. In the case of UAVs, you need the orthographic projection.

Eventually, two other issues become tremendously important (especially in the context of Search and Rescue). Cameras and memory are getting cheaper, and one is faced with gigabytes of data to store, map and share. Our experience is that sharing becomes challenging when you go over 2 GB of data, mostly because of file format size limits (2 GB). Zoomify is interesting, but they need to figure out a way to deal with larger images. And while Autopano allows images taken at different times to be overlaid on each other (a very nice feature), the viewer might be interested in that time information; right now I know of no tool that allows one to switch back and forth between different times for the same map.

References:

1. Comparing Satellite Imagery and GeoCam Data
2. A 150-km panoramic image of New Mexico

Friday, September 14, 2007

Search and Rescue: New Directions.


After the heartbreaking results of GeoCam and Hyper-GeoCam during HASP 2007, we are going to investigate the same type of technique using less unusual ways of putting things in the air. Some of our initial findings can be seen here. We cannot afford to wait a year for results like these (small planes,...). In particular, the inability to get systems into working shape and actually taking data with a rapid turnaround is just a big invitation to Murphy's law.
I have already tried to gather similar data from commercial airliners, but with a 3x optical zoom point-and-shoot camera. I am going to improve on that. Since we are talking about an altitude of 10,000 feet and a speed of about 700 km/h, the parameters for map/panorama making are different. The image in this entry was taken over Canada and assembled from about 10 photos. In this example no attention was given to the details of the scene.

It looks like there is some interest from other people in this area (I had no idea); I am going to investigate that as well, along with some of the contacts I made during the search for the Tenacious. I'll report on this later. One of the most surprising findings of the current search for Steve Fossett is the discovery of the locations of at least six other crashes. I knew that crashes occurred and that planes stayed missing for years, but I personally had no idea of the large number of missing planes:

The search has spread across an area of 17,000 square miles, twice the size of New Jersey. Crews will continue combing sections of that vast landscape, but on Sunday they began focusing on the territory within 50 miles of the ranch. Most crashes occur within that radius during takeoffs or landings, Nevada Civil Air Patrol Maj. Cynthia Ryan said.

``We've got close to 100 percent covered, at least in some cursory fashion,'' Ryan told reporters Sunday. ``We have to eliminate a lot of territory.''

The discovery of at least six previously unknown wrecks in such a short time has been a stark demonstration of the odds against finding Fossett's single-engine Bellanca Citabria Super Decathlon.

The Florida-based Air Force Rescue Coordination Center, which is helping coordinate the search, maintains a registry of known plane wreck sites.

The registry has 129 entries for Nevada. But over the last 50 years, aviation officials estimate, more than 150 small planes have disappeared in Nevada, a state with more than 300 mountain ranges carved with steep ravines, covered with sagebrush and pinon pine trees and with peaks rising to 11,000 feet.


What is also currently very clear in my mind is that the turnaround between instrument data gathering and analysis is taking too long. The Mechanical Turk initiative is a noteworthy one; however, it does not address our current inability to intelligently process the wall of data coming from these hyperspectral imagers (which seem to almost never recover any useful data for the searches). I am thinking of using some compressed sensing techniques to be able to do that on board the planes. Getting access to these data used to be difficult; it looks like the European Space Agency understands that more people need access to them to find interesting things in niche markets. They make their data available here.

Since the number of posts on the subject has risen over the course of this year, I am summarizing all these entries in a more coherent way here. You can also reach that link by clicking on the right sidebar. On that page, there are several subjects that do not yet have an entry, but I eventually want to address them, along with the improvements needed for the technology to be optimal in terms of operations.

Wednesday, September 12, 2007

Compressed Sensing: Oil, Curvelets, Missing Data and a Condition on strong CS


There is a new batch of Compressed Sensing articles showing up on the Rice Compressed Sensing site. I am told by Mark Davenport that the site is going to change at some point and will allow for an RSS feed. This is good.

Three subjects caught my attention in this batch.

First, as usual: the oil people have always been pushing the envelope in devising and using the newest applied mathematics to get things done, so I was not overly surprised to see how curvelets are being used to figure out geological layers from seismic measurements. In Non-parametric seismic data recovery with curvelet frames, Felix J. Herrmann and Gilles Hennenfent describe a curvelet-based recovery of seismic signals by a sparsity-promoting inversion technique. This is interesting, as it is the first time I have seen an additional matrix added to the inverse problem in order to remove data that the physics shows would necessarily lead to an ill-posed problem.

With regard to the missing data problem mentioned earlier, Yin Zhang at Rice asks and begins to answer the question "When is missing data recoverable?", showing how what you are given in the missing data problem can be construed as random projections of the real data, with the hope that using these measurements you can do a good job of inverting the problem at hand.

Yin is also a co-author of the Fixed-Point Continuation (FPC) Matlab code, an algorithm for large-scale image and data processing applications of l1-minimization:

General l1-regularized minimization problems of the form

(1) min ||x||1 + μ f(x),
where f is a convex, but not necessarily strictly convex, function, can be solved with a globally-convergent fixed-point iteration scheme.
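To give an idea of what this looks like, here is a minimal Matlab sketch of the basic fixed-point (soft-thresholding) iteration behind this class of solvers, for the particular choice f(x) = 0.5*||Ax - b||^2. This is only the bare iteration with arbitrary problem sizes, not the actual FPC code, which adds continuation in μ and other refinements:

% Bare fixed-point shrinkage iteration for min ||x||_1 + mu*f(x)
% with f(x) = 0.5*||A*x - b||^2 (a sketch, not the FPC code itself).
m = 64; n = 256; k = 8;
A  = randn(m, n) / sqrt(m);                       % random measurement matrix
x0 = zeros(n, 1);
p  = randperm(n);
x0(p(1:k)) = randn(k, 1);                         % a k-sparse signal to recover
b  = A * x0;
mu  = 100;                                        % weight on the data-fit term
tau = 1 / (mu * norm(A)^2);                       % step size, below 2/(mu*||A||^2)
shrink = @(y, t) sign(y) .* max(abs(y) - t, 0);   % soft-thresholding operator
x = zeros(n, 1);
for it = 1:2000
    x = shrink(x - tau * mu * (A' * (A*x - b)), tau);
end
fprintf('relative error = %g\n', norm(x - x0)/norm(x0));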
Last but not least, in A remark on compressed sensing, Boris S. Kashin and Vladimir N. Temlyakov give (in their equation 1.4) a condition I had never seen before, relating the L1 and L2 norms and the sparsity number, that permits what they call weak or strong Compressed Sensing. What we have generally heard before seemed to be only about the sparsity number, so I think this is new.

Tuesday, September 11, 2007

This is how science is done, trial and error in the mud.




As noted by the HASP folks, one of our payloads fell into the mud after a 37 km descent under parachute. We just got the cameras this morning and found out it was GeoCam; Hyper-GeoCam is fine. Now we need to open the boxes and find out whether there is anything in either of them. This is science in motion, where Murphy's law always strikes. The cameras look fine; let's see if they actually took pictures.
[Update: GeoCam is OUT.]

Tex-MEMS: it keeps on going. Tex-MEMS IX at Texas Tech


Tex-MEMS IX will take place in Lubbock at Texas Tech on September 17th. You can still register. It is a very relaxed type of meeting. The nice thing about it is that most of the time you get to see the work of people you would not otherwise be talking to, on campus or in areas unrelated to your research. It's a great eye opener and, unlike other professional meetings, there is very little competition between the speakers, which makes it a pretty unique place to actually share your thoughts and imagine other ways of doing things. The list of talks is here. Thank you, Tim, for making it happen.

When Ali Beskok and I started this series of meetings, we had no idea that it would continue for more than 8 years. Wow.

Saturday, September 08, 2007

Have you ever flown 37 km up under a pale moonlight?

Here is a magnificent video taken by the CosmoCam, a webcam on top of the HASP platform, during the last flight, which hosted our experiments (GeoCam and Hyper-GeoCam). When you are at 120,000 feet above the ground and it is noon, the moon shines on you.

Friday, September 07, 2007

Adding Search and Rescue Capabilities (part III): Using a High Altitude Balloon to Search for Non-Evading Targets


In the search for non-evading targets like the Cessna N2700Q, there are many solutions. I'd like to highlight one potential capability: a high altitude balloon with a high-end digital but low-cost camera (with 12x or more optical zoom; we used a Canon S3 IS). The idea is that you want to look at a large swath of land and have enough resolution in near real time. This is what we did with GeoCam, but on a NASA balloon alongside many other experiments (HASP). However, a homemade balloon is not really hard to build (most of these homemade projects aim at getting an edge-of-space picture, so there are no good pictures of the ground, especially at maximum optical zoom). The flight of a homemade balloon lasts about 2-3 hours and reaches about the same altitude as a NASA balloon; because of the shorter time aloft, the distance covered is smaller, as can be seen in their map. The big issue is to make sure there is a "robotic" platform akin to what people use for panoramas or kite aerial photography. CMU and Google have designed a system called Gigapan, but I am not sure it can be used directly on a small balloon with its power restrictions. Maybe a system like this one might be interesting to investigate.
On GeoCam, we basically used a microcontroller that sent a signal to what we called a finger to push the button on the camera. When the camera powered up, we also had to have a system to set the optical zoom to its maximum. Results can be seen here. Once the photos are shot, one of the issues is making sure the data becomes available to a large public. This is not trivial when these panoramas weigh in at about 2-4 GB, so it is reasonable to cut them into smaller panoramas, which can be as small as this one or as large as this one. Another issue is clouds, as can be seen here. In our case, the timing between photo shots was about 23 seconds with a 4 GB SD card. For our second flight, we went for 80-second increments with an 8 GB SD card; the longer increment was designed to take advantage of the 20 hours of flight (a quick sketch of the arithmetic follows). One interesting option, if the flight is to be short, would be to reduce the time increment in order to let the camera swing and take photos sideways; this also requires another RC motor and a mechanism to do the swinging. When the balloon lands, it is just a matter of putting the card into a computer running Autopano, which automatically stitches these images together into panoramas. Using software like Zoomify to put these panoramas on the web is essential to reduce the bandwidth between the server and the people helping the search. On a side note, both Zoomify and Autopano are supported by very cool teams of people.
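For what it is worth, the 80-second figure comes out of simple storage arithmetic; here is the back-of-the-envelope version, in Matlab, using only the numbers quoted above:

% Back-of-the-envelope check of the 80 s interval: with an 8 GB card and
% an expected 20 hour flight, how many shots fit and how much card space
% does each one get?
flight_hours = 20;
interval_s   = 80;
card_MB      = 8 * 1024;                           % 8 GB card
n_images     = flight_hours * 3600 / interval_s;   % shots taken over the flight
MB_per_image = card_MB / n_images;                 % storage budget per shot
fprintf('%d shots, about %.1f MB of card space per image\n', n_images, MB_per_image);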
We eventually did an assessment of the resolution of the results, and it turns out that we get about 1 meter resolution. Please note that panoramas such as this one used images at 50 percent lower resolution (due to an earlier version of Autopano that has since been fixed). Also, in order to get an idea of the size of an airplane, we picked out two jets during the GeoCam flight. While the low resolution image in this entry is small, the larger image makes it a non-trivial picture: the human eye is clearly able to make out that this is a plane. This jet is also most probably larger than a Cessna. Some assembly pictures of GeoCam can be found here.

Thursday, September 06, 2007

Compressed Sensing: Random Thought on a Low Dimensional Embedding

One of these days I am going to have to talk about the Experimental Probabilistic Hypersurface (EPH) that Bernard Beauzamy and Olga Zeydina have been developing. One of the interesting features of this construction is its ability to answer one of engineers' most pressing questions: given a set of experimental points that depend on a series of parameters, how do you figure out whether you know your phase space? In other words, how does the knowledge acquired over time bring about expertise in a certain area of engineering? One example I have witnessed is Computational Fluid Dynamics (CFD): when you have a code that takes a non-trivial amount of time to converge and that depends on many different parameters (friction factors for some turbulence model, fluid properties,...), how do you figure out whether you really understand your problem after having run three, ten, or one hundred cases?

The EPH approach is about building a multidimensional probability distribution based on the Maximum Entropy principle. In other words, the EPH reproduces what you know about a problem, but no more. It can also be used as some sort of storage device. Recently, some constructive comments were made in order to strengthen its applicability [1]. This is very interesting, as I think it should be taught to any third-year engineering student. I have been involved with the Robust Mathematical Modeling program set up by Bernard, but mostly as a spectator so far. Initially, one of the issues I wanted to deal with was whether there were similar techniques in the published literature. I have not found many, at least none that were as expressive. The post by Masanao Yajima on Overview of missing data led Aleks Jakulin to kindly provide the beginning of an answer.

The other issue that should be raised is that of distance in the parameter space. Right now one uses L2, but there is no good rationale for this, especially when you deal with, say, an already dimensionless Reynolds number and a Froude number: even if you normalize them, how can one be comfortable with the metric distance used? A good solution to this issue is to import expert knowledge through a redefinition of the norm. An example of that can be found in the booming machine learning field (Improving Embeddings by Flexible Exploitation of Side Information). As it turns out, some of these methods use convex optimization in order to produce a good distance function. In the paper Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization by Benjamin Recht, Maryam Fazel and Pablo A. Parrilo, the building of a distance matrix is one of the few examples showing the parallel between Compressed Sensing and the nuclear norm minimization heuristic. In that example, some elements of the distance matrix are known while others are not, and nuclear norm minimization allows one to find the lowest-rank approximation to the distance matrix given the known entries (a small sketch follows the excerpt below). The use of the Restricted Isometry Property from [2] allows one to make statements about the result of the nuclear norm minimization. More specifically, from [2]:

Low-dimensional Euclidean embedding problems
A problem that arises in a variety of fields is the determination of configurations of points in low-dimensional Euclidean spaces, subject to some given distance information. In Multi-Dimensional Scaling (MDS), such problems occur in extracting the underlying geometric structure of distance data. In psychometrics, the information about inter-point distances is usually gathered through a set of experiments where subjects are asked to make quantitative (in metric MDS) or qualitative (in non-metric MDS) comparisons of objects. In computational chemistry, they come up in inferring the three-dimensional structure of a molecule (molecular conformation) from information about interatomic distances ....However, in many cases, only a random sampling collection of the distances are available. The problem of finding a valid EDM consistent with the known inter-point distances and with the smallest embedding dimension can be expressed as the rank optimization problem...

In other words, would an RIP-like condition impose a choice on the parameters chosen to represent a certain expert's knowledge in the EPH framework?
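For the curious, here is a minimal CVX sketch of the nuclear norm heuristic described above, applied to completing a squared-distance matrix from a subset of its entries; the point cloud and the sampling pattern are made up purely for illustration:

% Sketch of the nuclear norm heuristic: recover a squared-distance matrix
% from a random subset of its entries by looking for an (approximately)
% low-rank matrix consistent with them. Points and sampling are illustrative.
n = 20; d = 2;
P = randn(n, d);                                  % n points in d dimensions
G = P * P';                                       % Gram matrix
D = diag(G)*ones(1,n) + ones(n,1)*diag(G)' - 2*G; % squared distances (low rank)
mask = triu(rand(n) < 0.5, 1);                    % observe about half the pairs
mask = mask | mask';
idx = find(mask);                                 % linear indices of known entries
cvx_begin
variable X(n,n) symmetric
minimize( norm_nuc(X) )
subject to
X(idx) == D(idx);
diag(X) == 0;
cvx_end
unknown = ~mask & ~logical(eye(n));               % entries we did not observe
err = norm(X(unknown) - D(unknown)) / norm(D(unknown));
fprintf('relative error on the unknown entries = %g\n', err);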


References:
[1] Comments made by S. Destercke, IRSN, following Olga Zeydina's presentation at IRSN/Cadarache, July 2007, and answers by SCM (August 2007).
[2] Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization, Benjamin Recht, Maryam Fazel, Pablo A. Parrilo

Tuesday, September 04, 2007

Do you feel lucky ahead of time?


When I last mentioned the issue of super-resolution, I was under the impression that turbulence-aided micro-lensing could not be used for astronomy because the atmospheric layer was too thick. It looks as though, in astronomy, one can wait longer and obtain similar results, as explained on the Lucky Imaging website. But while the CCD technology is indeed impressive, much of the post-processing is essential to the construction of full images. One needs to figure out automatically where the turbulence helped you and where it didn't:

There are several newsgroups that are interested in Lucky Imaging with video sources. They include:

http://groups.yahoo.com/group/videoastro/ , http://www.astronomy-chat.com/astronomy/ and http://www.qcuiag-web.co.uk/

QCUIAG is a very friendly group and visitors wanting to learn more about the techniques are pretty much guaranteed answers to their questions. Coupled with image post processing algorithms, these techniques are producing images of remarkable quality. In the UK Damian Peach is probably the most experienced in using these techniques. Some examples of his work can be seen here: http://www.damianpeach.com/
Programmes such as Astrovideo, which was originally designed to support the video stacking process developed by Steve Wainwright, the founder of QCUIAG, have frames selection algorithms, see: http://www.coaa.co.uk/astrovideo.htm

A working automated system was developed in the program K3CCDTools by QCUIAG member Peter Katreniak in 2001, see: http://qcuiag-archive.technoir.org/2001/msg03113.html

You can see the home page for K3CCDTools here: www.pk3.org/Astro/k3ccdtools.htm
And so one wonders whether there would be a way to first acquire the part of the image of interest and then let it grow into the full image as turbulence keeps helping you. The application would not be astronomy, where the stacking of images essentially reduces the noise-to-signal ratio, but imaging on Earth. When observing the sky, we mostly have point-like features. If one were to decompose one of these images using wavelets, it is likely that the clearest parts of the image would have the highest frequency content (besides noise). And so one way to accomplish this task would be to look for the parts of images with the sparsest low frequency components. Eventually, when one deals with high resolution astronomy images, one is also bound to deal with curvelets, and so the reasoning I just mentioned would need to be revised.
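Here is a minimal Matlab sketch of that frame-selection idea; 'frames' is assumed to be an h-by-w-by-N array of grayscale video frames already loaded in memory, and a simple Laplacian filter stands in for the wavelet detail coefficients mentioned above:

% Lucky-imaging frame selection sketch: rank frames by their high-frequency
% energy and stack only the sharpest ones. 'frames' is assumed to be an
% h-by-w-by-N array of grayscale frames loaded beforehand.
N = size(frames, 3);
lap = [0 1 0; 1 -4 1; 0 1 0];                  % discrete Laplacian kernel
score = zeros(N, 1);
for k = 1:N
    hf = conv2(double(frames(:,:,k)), lap, 'valid');
    score(k) = mean(hf(:).^2);                 % high-frequency energy of frame k
end
[srt, order] = sort(score, 'descend');
keep = order(1:ceil(0.1*N));                   % keep the "luckiest" 10 percent
stacked = mean(double(frames(:,:,keep)), 3);   % naive stack, no registration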


References: [1] Damian Peach's breathtaking lunar photographs.
[2] Jean-Luc Starck page.
[3] Palomar observatory lucky image press release.

Sunday, September 02, 2007

Ready to launch



The title says it all. HyperGeoCam will be lifting off in a short while over New Mexico for about 10 to 20 hours, taking about 1,000 snapshots if all goes well.

[ Update: it went up at 7:30 am Mountain time, 8:30 Central time, 15:30 UTC; you can see LIVE what one of the webcams sees here]

[ Update 2: HASP has landed. 6.55 miles from the California border after 19 hours of flight. woohoo. We won't know if we have data until the hardware is returned to us.]

Saturday, September 01, 2007

Compressed Sensing: Reweighted L1 meets Europa

In his last talk at the Summer School on Compressive Sampling and Frontiers in Signal Processing, Emmanuel Candès talked about "Applications, experiments and open problems". It was very inspiring and intriguing. While people like Rick Chartrand are looking at Lp minimization with p less than 1, Emmanuel proposes using a reweighted L1 minimization in order to converge very rapidly to the L0 solution. He then goes on to show a stunning result with overcomplete bases. Wow. Maybe that will help deblur the current images from the Rice camera.
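Here is a minimal CVX sketch of the reweighted L1 idea: repeatedly solve a weighted L1 problem and update the weights from the previous solution. The problem sizes, the value of epsilon and the number of reweighting passes below are arbitrary choices for illustration, not the settings from Emmanuel's talk:

% Reweighted L1 sketch: each pass solves a weighted L1 problem, then
% re-weights so that small entries are penalized more on the next pass.
m = 50; n = 200; k = 12;
A  = randn(m, n);
x0 = zeros(n, 1);
p  = randperm(n);
x0(p(1:k)) = randn(k, 1);                % a k-sparse signal to recover
b  = A * x0;
w  = ones(n, 1);                         % first pass is plain L1 minimization
epsilon = 0.1;                           % keeps the weights finite
for it = 1:4
cvx_begin
variable x(n);
minimize( norm(w .* x, 1) );
subject to
A*x == b;
cvx_end
w = 1 ./ (abs(x) + epsilon);             % update the weights
end
fprintf('relative error after reweighting = %g\n', norm(x - x0)/norm(x0));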

With regard to universal compression, Emmanuel makes the good point that, when doing exploration, NASA spends a good amount of energy compressing images on board spacecraft. The idea is that the bandwidth is extremely limited and you want to send as much interesting data as possible.
Historically, one of the underlying reasons the Russians developed RORSATs was that their digital processing (or number crunching) capabilities were not as good as those in the U.S. So, in order to provide adequate computational capabilities on board these satellite systems, and because the systems required a low orbit, they had to carry nuclear reactors. About 20 to 30 of them were actually launched (only one nuclear reactor was ever sent into space by the U.S.), all of them now contributing to space debris issues; some of them fell back to Earth. Power is also the reason Prometheus, a nuclear reactor concept, was envisioned in order to explore Jupiter and its moons; it was well studied until it got canceled. With the advent of compressed sensing, maybe we should think again about doing these missions with hardware that does not require that much power. I previously highlighted a much more lucrative application where bandwidth was also a real issue.

HASP will launch today.


Our fingers are crossed. The good folks at LSU (Greg Guzik and Michael Stewart) just told us the flight might be today. The CosmoCam (a webcam aboard the HASP high altitude balloon) can be viewed here. HyperGeoCam is on board: it is a random lens imager that uses Compressed Sensing to recover full images. HyperGeoCam will be shooting photographs looking down from 120,000 feet (36 km). We expect about 1,000 images from this experiment. Last year we flew a simple camera (GeoCam), automatically stitched together the images obtained from it, and produced these amazing panoramas. HyperGeoCam uses the concepts of random lens imaging, with random light-decomposing elements, in order to produce spectrally differentiable images.