Thursday, October 11, 2007

Producing Maps using Commercial Overflights: Part deux


In a previous entry, I mentioned the possibility of producing maps from commercial airliners using point and shoot cameras. The point of that post was to show real examples of images obtained from different heights and how these images can be put together using state of the art stitching programs. In the comment section, Paul asked these questions:

the only questions I have are whether a) they'd be systematic enough to provide reliable coverage, and b) who'd be tasked with cleaning and analyzing the images (always a pain in the whatnot).
while Mikel asked:

Given that most commercial flights go along similar flight paths, would the coverage be potentially comprehensive enough to be useful?

What area magnitude could be covered by a single shot of the sensor at 35000 feet?


Let me first try to address the issue of coverage, which seems to be the underlying concern. First, let us remember that this solution is really for when you have to survey or map a region hours, if not minutes, after a disaster. We are talking about finding out whether specific large structures, roads, villages, and towns are in good shape or not. This could apply to New Orleans and all its surrounding regions right after Katrina, or to California if the big one hits a large swath of land. In both cases, we know that satellites will provide imagery within the next three to six days. In other countries where accurate maps do not exist, we may not have meaningful satellite coverage at all. In either of these extreme cases, we are talking about providing as much information as possible. The coverage of this technique might be small compared to Google Maps or Google Earth or to a specialized satellite campaign, but that issue is trumped by the fact that the information is timely.

Obviously there is the issue of flight corridors, but I am willing to bet that a Houston-Miami flight could take another lawful corridor if the pilot knew that imagery made from the plane could help the search and rescue effort in New Orleans. Let us also note that private efforts could yield improved coverage, following the example of NOAA, which flew Cessnas over the Katrina-ravaged area and incorporated the results into Google Maps. While NOAA is a government agency, a similar private effort could be undertaken. Please note the overlapping photos in these NOAA shots: stitching such images is easy to do with the software I mention later. The difference between this NOAA effort and one using commercial overflights with point and shoot cameras comes down to three things:
  • no need to retrofit a plane with an outside camera
  • no need for GPS correction, pointing hardware, or engineering time; the stitching algorithm builds the map
  • it can provide a much faster response time.


The other enabling feature of this idea can be traced back to recent advances in intelligent stitching algorithms born out of machine vision. Currently, these algorithms can automatically "merge" photos taken from nearly the same altitude, point of view, and optical focus by different cameras. The merging/stitching is done entirely automatically: no need for a human expert in the loop when putting those images together. You should not take my word for it; try it yourself. The free version of Autopano produces panoramas with watermarks. So next time you take a plane, use your point and shoot camera and give it a try.
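For readers who would rather script this than use a packaged product like Autopano, here is a minimal sketch of the same fully automatic merging using OpenCV's Stitcher class; the folder name and output file are made up for illustration, and the images are assumed to be roughly from the same altitude, as discussed above.

```python
# Minimal sketch: fully automatic stitching of window shots with OpenCV.
# "window_shots/" and "mosaic.jpg" are hypothetical names for illustration.
import glob
import cv2

# Load every JPEG taken out of the window, in no particular order.
images = [cv2.imread(path) for path in sorted(glob.glob("window_shots/*.jpg"))]

# SCANS mode assumes a roughly flat scene seen from above, which fits
# high-altitude ground imagery better than the default rotating-camera model.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("mosaic.jpg", mosaic)   # the merged "map" of the overflown area
else:
    print("Stitching failed with status", status)
```

No human intervention is required beyond pointing the script at the folder; the feature matching and blending are handled internally, which is exactly the "no expert in the loop" property claimed above.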

Why would we need several shots of the same place from different airplanes, or from different people in the same plane? Mostly because of the clouds. Clouds are a pain, and one can begin to remove them only with enough pictures that, together, eventually cover the ground. Also, since each window seat only looks out one side of the plane, it pays to have people on either side point their cameras in different directions. In my experience, the best window in an airliner is the one next to the flight attendants at the very end of the plane; it also allows a view nearly straight down (nadir). I'll write a more technical note on what worked and what did not in a future entry.
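To see why enough overlapping pictures make clouds removable, here is a rough sketch of the simplest approach, a per-pixel median composite. It assumes the repeated shots of the same patch of ground have already been aligned to a common frame (that alignment is an assumption, and the hard part in practice), and the file names are hypothetical.

```python
# Rough sketch: suppress transient clouds by taking a per-pixel median
# over several aligned shots of the same patch of ground.
import glob
import cv2
import numpy as np

# Aligned shots of the same area, taken on different passes or by different people.
stack = np.stack([cv2.imread(p).astype(np.float32)
                  for p in sorted(glob.glob("aligned_patch/*.jpg"))])

# A cloud blocks a given pixel only in some of the shots; as long as the ground
# is visible in the majority of them, the median keeps the ground and drops the cloud.
composite = np.median(stack, axis=0).astype(np.uint8)
cv2.imwrite("patch_cloud_free.jpg", composite)
```

This is why having many cameras in the air matters: the more independent shots of the same spot, the better the odds that most of them see the ground rather than a cloud.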


With regard to actual coverage at 35,000 feet, one can begin to get an idea of the features of interest by using Google Earth and setting the viewing altitude to 35,000 feet. The camera generally has a smaller view angle, so one really sees less than that from the plane. But I would venture that we can get a swath of at least 30 miles of useful data (depending on the optical zoom of the point and shoot camera).
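As a back-of-envelope check on what a single shot covers, the footprint of a straight-down picture follows from the altitude and the camera's field of view. The 60 degree value below is an assumed, typical wide-end figure for a point and shoot; oblique shots toward the horizon cover considerably more ground, which is how a sequence of pictures along the flight can add up to a swath of tens of miles.

```python
# Back-of-envelope footprint of a single straight-down (nadir) shot.
import math

altitude_ft = 35_000
altitude_km = altitude_ft * 0.0003048           # about 10.7 km
fov_deg = 60                                     # assumed horizontal field of view

# Ground width covered by one nadir shot: 2 * altitude * tan(FOV / 2).
width_km = 2 * altitude_km * math.tan(math.radians(fov_deg / 2))
print(f"one nadir shot covers roughly {width_km:.1f} km "
      f"({width_km * 0.621:.1f} miles) across")  # about 12 km, i.e. 7-8 miles
```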

To summarize, you get what you can with regard to coverage, but the more cameras in the air, the more likely it is that the data accumulates to eventually produce some type of map. No need for experts to clean up the data: a laptop, stitching software, and a point and shoot camera will do.

In the end, the idea is pretty simple: once a disaster area is known and commercial airlines are flying over it, willing passengers take photos from their window seats during the whole trip over (with some simple instructions). Once landed, they either hand their 8 GB SD cards to somebody, upload the data directly to a laptop at the arrival gate, or upload it to the web. Then, using one of these intelligent stitching programs that cost about $100, one can put all the images together even if they were taken by different cameras at different times (provided the altitude was about the same). One can then produce a relatively large map of the land and put it on the web without needing to "connect" it to Google Maps or Google Earth.


Photo 1: This picture was taken at 35,000 feet over Iceland. Another plane is using the same corridor?

Photo 2: Credit NOAA
