During breaks, we would throw M&Ms at each other for fun. In retrospect, I can only imagine what would have happened if one of those M&Ms had flown right into the eyeball of one of these two: the mission would have been delayed and then... we will never know. So yes, I feel connected to gravitational lenses in a unique way that involves M&Ms. More seriously, in two days, barring any additional funding, the Space Shuttle will fly for the last time. The last servicing mission for the Hubble took place a year ago. The Hubble will never be serviced again and will eventually crash into the ocean.
I mentioned a K-T extinction level event yesterday when in fact a smaller asteroid can produce a much larger plume than the one from this Icelandic volcano named by somebody who slept on his keyboard, or maybe it's just Iceland's way of telling us more directly: "Forgive Our Debt or We Ground Your Planes." To get a sense of the underlying reason why traffic over Europe is stalled, take a look at the inside of the turbines of an F-18 Hornet flown on purpose through that cloud by the Finnish Air Force (thanks Mikko Hypponen). Raising the temperature of sand inside a jet engine combustion chamber is unlikely to produce a pretty picture.
As one can see, not only is the trend not really decreasing for the km-size asteroids, but a single survey seems to be finding most of the new objects: the Catalina survey effort.
Which leads us to the AMD competition and my failed bid to get a 48-core machine.
Back in January 2006, I was talking to David McKay, who seemed to know people who wanted to use fast star-tracking algorithms to analyze a large set of photos taken by the Royal Astronomical Observatory over the past fifty years. They wanted to find traces of close encounters we may have had in the past fifty years that have still not been identified (because the object has not come back in front of a current survey). The issue was that they had _LOTS_ of photos, and even if they found the time to digitize them, they were still looking for a way to figure out where in the sky each photo was pointing, so as to infer whether every little bright spot on the photo was an ordinary star or something a little more threatening. I know people who work on star tracking algorithms and he knew the folks who had the photos, but the discussion never went anywhere. Instead of evaluating the Donoho-Tanner phase transition, a subject that induces much less fear indeed, here is what I should have proposed:
48 Cores to Save Earth from K-T level Extinction Events.
The purpose of this project is to use a large number of old sky photographs, as well as photos taken by amateur astronomers all around the world, to enhance our ability to detect large asteroids (km size or more) potentially threatening life on Earth. These large asteroids represent a clear and present danger to humanity, as witnessed by the Cretaceous–Tertiary extinction event that occurred 65 million years ago. The most recent survey capability of the current Near Earth Object Program has found a large number of these asteroids over the past six years. However, many objects photographed over the past fifty years have not been identified because of the lack of adequate processing power (i.e., before the 1990s that meant manpower). These photographed but still unidentified objects have probably not crossed our path again. Similarly, many objects are currently being photographed by members of the amateur astronomy community, but deciding whether an object is worth paying attention to relies on each person's knowledge. We propose to adopt an unstructured, distributed approach to this problem. It is unstructured in that we do not need attitude determination information, just the photos. It is distributed because we will rely on diverse sources of sky photographs taken at different latitudes. We propose to use the 48-core machine to centralize a pipeline dedicated to:
receiving digitized photos (old or not) over the web
processing these photos to determine attitude using common star tracker algorithms (this answers the question of where in the sky the photo is pointing)
processing these photos to then flag "things that should not be there" by comparing them against a +18 magnitude star catalog (a minimal sketch of this step is given below)
providing the results to the person making the request and to specialists for further identification.
Additional processing could also be performed. The server would be hosted at a University.
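To make the catalog comparison step more concrete, here is a minimal sketch, assuming each incoming photo has already been plate-solved so that every detected bright spot comes with (RA, Dec) coordinates. The function names, the 10-arcsecond tolerance, and the toy catalog are made up for illustration and are not part of any existing pipeline.

```python
import numpy as np

def radec_to_unit(ra_deg, dec_deg):
    """Convert (RA, Dec) in degrees to unit vectors on the celestial sphere."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def flag_unknown_sources(detections, catalog, tol_arcsec=10.0):
    """Return indices of detected spots with no catalog star within tol_arcsec.

    detections, catalog: arrays of shape (N, 2) and (M, 2) holding (RA, Dec)
    in degrees; the detections are assumed to come from a plate-solved photo.
    """
    det_u = radec_to_unit(detections[:, 0], detections[:, 1])   # (N, 3)
    cat_u = radec_to_unit(catalog[:, 0], catalog[:, 1])         # (M, 3)
    # Angular separation to the nearest catalog star for every detection.
    cosines = det_u @ cat_u.T                                   # (N, M)
    nearest = np.arccos(np.clip(cosines.max(axis=1), -1.0, 1.0))
    return np.where(nearest > np.radians(tol_arcsec / 3600.0))[0]

if __name__ == "__main__":
    # Toy example: two detections, the first matches a catalog star, the second does not.
    catalog = np.array([[10.684, 41.269], [83.822, -5.391]])
    detections = np.array([[10.684, 41.269], [150.000, 20.000]])
    print(flag_unknown_sources(detections, catalog))  # -> [1]
```

Anything flagged this way would then be handed to a specialist, per the last item of the list above; a real system would of course also have to deal with proper motion, photometric depth, and known asteroids.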
I'll be in a place where The Google and the Interwebs have trouble reaching people. So this may look like a long post, but I have not written any more posts to be published later this week.
A unified view of sparse signal processing is presented in tutorial form by bringing together various fields. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common benefits of significant reduction in sampling rate and processing manipulations are revealed. The key applications of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of reconstruction algorithms, linkages are made with random sampling, compressed sensing and rate of innovation. The redundancy introduced by channel coding in finite/real Galois fields is then related to sampling with similar reconstruction algorithms. The methods of Prony, Pisarenko, and MUSIC are next discussed for sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter and Error Locator Polynomials in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method. Such spectral estimation methods are then related to multi-source location and DOA estimation in array processing. The notions of sparse array beamforming and sparse sensor networks are also introduced. Sparsity in unobservable source signals is also shown to facilitate source separation in SCA; the algorithms developed in this area are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate OFDM channels.
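Since the abstract ties Prony's method to annihilating filters, here is a bare-bones numpy sketch of the classical Prony idea under idealized, noise-free assumptions: a signal made of a few complex exponentials is annihilated by a short linear-prediction filter whose polynomial roots give back the modes. Signal length, model order, and frequencies below are arbitrary illustrations, not anything taken from the paper.

```python
import numpy as np

def prony_poles(x, K):
    """Classical Prony: recover K exponential modes z_k from samples
    x[n] = sum_k c_k z_k**n, assuming noise-free data and len(x) >= 2*K."""
    N = len(x)
    # Linear prediction: x[n] = -(a1 x[n-1] + ... + aK x[n-K]) for n = K..N-1.
    A = np.column_stack([x[K - m:N - m] for m in range(1, K + 1)])
    b = -x[K:N]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # The prediction-error (annihilating) polynomial z^K + a1 z^(K-1) + ... + aK
    # has the modes z_k as its roots.
    return np.roots(np.concatenate(([1.0], a)))

if __name__ == "__main__":
    n = np.arange(32)
    true_modes = np.exp(2j * np.pi * np.array([0.10, 0.23]))
    x = 1.0 * true_modes[0]**n + 0.5 * true_modes[1]**n
    est = prony_poles(x, K=2)
    print(np.sort(np.angle(est) / (2 * np.pi)))  # ~ [0.10, 0.23]
```

With noise, one would use more prediction equations than unknowns and the Pisarenko/MUSIC refinements the abstract mentions, but the annihilation idea stays the same.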
Supporting the recent Sparsify Version 0.4, here are two papers:
Several computationally efficient algorithms have been shown to offer near optimal recovery of sparse signals from a small number of linear measurements. However, whilst many of the methods have similar guarantees whenever the measurements satisfy the so-called restricted isometry property, empirical performance of the methods can vary significantly in a regime in which this condition is not satisfied. We here modify the Iterative Hard Thresholding algorithm by including an automatic step-size calculation. This makes the method independent of an arbitrary scaling of the measurement system and leads to a method that shows state of the art empirical performance. What is more, theoretical guarantees derived for the unmodified algorithm carry over to the new method with only minor changes.
When sampling signals below the Nyquist rate, efficient and accurate reconstruction is nevertheless possible, whenever the sampling system is well behaved and the signal is well approximated by a sparse vector. This statement has been formalised in the recently developed theory of compressed sensing, which developed conditions on the sampling system and proved the performance of several efficient algorithms for signal reconstruction under these conditions. In this paper, we prove that a very simple and efficient algorithm, known as Iterative Hard Thresholding, has near optimal performance guarantees rivalling those derived for other state of the art approaches.
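For readers who want to see what the algorithm in these two abstracts looks like, here is a minimal numpy sketch of plain Iterative Hard Thresholding on a toy Gaussian measurement problem. The fixed step size 1/||A||² below is a conservative choice that sidesteps the scaling issue; the first paper's normalized variant instead recomputes the step at every iteration, which is not reproduced here. Dimensions and sparsity level are made up for illustration.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s entries of largest magnitude, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(y, A, s, n_iter=500, step=None):
    """Plain Iterative Hard Thresholding: x <- H_s(x + step * A^T (y - A x)).
    A conservative default step of 1/||A||_2^2 keeps the gradient step stable."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * A.T @ (y - A @ x), s)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, s = 256, 100, 8              # signal length, number of measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    y = A @ x_true
    x_hat = iht(y, A, s)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The whole point of the normalized version is that one does not have to guess `step` by hand: an adaptive choice recovers the good empirical behavior without knowing the scaling of A in advance.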
Laurent Duval just let me know of a new Call for Papers:
Call for papers: Special issue on Advances in Multirate Filter Bank Structures and Multiscale Representations.

Scope: A century after the first outbreak of wavelets in Alfred Haar's thesis in 1909, and twenty years after the advent of Multiresolution Analysis, filter banks and wavelet transforms lie at the heart of many digital signal processing and communication systems. During the last thirty years, they have been the focus of tremendous theoretical advances and practical applications in a growing digital world. They are for instance present, as local linear expansions, at the core of many existing or forthcoming audio, image or video compression algorithms.

Beyond standards, many exciting developments have emerged in filter banks and wavelets from the confrontation between scientists from different fields (including signal and image processing, computer science, harmonic analysis, approximation theory, statistics, bioengineering, physics). At their confluence, multiscale representations of data, associated with their efficient processing in a multirate manner, have unveiled tools or refreshed methods impacting the whole data management process, from acquisition to interpretation, through communications, recovery and visualization. Multirate structures naturally shelter key concepts such as the duality between redundancy and sparsity, as well as means for extracting low dimensional structures from higher ones. In image processing in particular, various extensions of wavelets provide smart linear tools for building insightful geometrical representations of natural images.

The purpose of this special issue is to report on recent progress and emerging trends in the domain of multirate filter banks and multiscale representations of signals and images. Answers to the challenge of handling an increasing demand for information extraction and processing from large data sets will be explored.

Topics (not exclusive)
I already knew of Stephen Boyd's class on YouTube, such as this one:
What I did not realize was that somebody had gone through the pain of transcribing these videos. Wow, just wow. For instance, the text of the video above can be found here. The rest of Stephen Boyd's excellent class can be found here and here, with the attendant text.
The papers presented at SODA are now available. Among the ones I am surely going to be reading (after some advice from Frank Nielsen that I should know about coresets):
In a different area, Shenzhou VII passed by the International Space Station; here is a simulation by the folks at AGI. In another area, I look forward to seeing how the recent collision between an Iridium satellite and a Russian satellite will disturb the low Earth orbit environment with space debris. To see how bad this collision is, you need to check out the calculations on the Bad Astronomy blog. Have you ever been rear-ended by something going 17,000 mph? Let us recall that we are really close to a prompt critical situation in this area, and this is worrisome as most of our daily lives depend on satellites these days.
You have until tomorrow to go here and leave your name so that it leaves with the Lunar Reconnaissance Orbiter. The spacecraft will orbit the Moon but will not land on it. If history is any indication, at end of life it will be de-orbited and crashed into the Moon, as we did with Lunar Prospector nine years ago. For those concerned with ETs getting their hands on the list, the list of names will probably not survive.
I just put together a more static page on the Compressed Sensing Framework where I try to paint the big picture of the subject as I understand it. The field is getting bigger, and newcomers need a way to see the current lines of thought without first being drowned in details. Let me know whether I am doing a good job. It is at:
Thanks to a NOTAM, Cosmos4u and NASA Watch report that NROL-21 will probably be shot down during the lunar eclipse, so that the missile has an overwhelming probability of not mistaking the Moon for something else. Phil Plait has more on the subject. My bet? There is no way we will see a clean kill at Mach 17, but that's just me.
Credit & Copyright: Noel Munford (Palmerston North Astronomical Society, New Zealand)
So it looks like the decision has been made: given that the tanks of NROL-21 are likely to survive reentry, as Columbia's tanks did, shooting it down might be a good solution. Good call; hydrazine is nasty, and so is 300 pounds of it at terminal velocity...
Here is a different kind of debris. Aviation Week reports that an NRO spacecraft is descending uncontrollably into the atmosphere "at a rate of about 2,310 feet per day, according to Canadian analyst Ted Molczan." The reporter seems to be saying that the radar and recovery data from Columbia's fateful reentry are helping assess the potential for accidents from the debris of that satellite. I am a little bit skeptical on several counts. When people tried to evaluate when a small disco ball would reenter the atmosphere, few guessed the day and time it eventually happened. Also, when I last looked into it, most of the modeling based on specific small-scale experiments did not seem to have real predictive capability. It seems that this conclusion was also reached by some of the SCM folks, who eventually did a new kind of statistical study on this very problem for the French Space Agency. Finally, the two photos below show before and after photographs of our camera that was on board the Columbia Space Shuttle. It is made of aluminum, and even though the melting point of aluminum is about 660 C, well below the temperature experienced outside the craft on reentry, the base plate was nearly intact on impact (our camera was mounted outside). The rest melted away except for the optical component made of Invar.
After some additional trajectory analysis, there is now a 1-in-25 chance of asteroid 2007 WD5 hitting Mars on January 30th. If it hits, it will be a 30-megaton impact in a region north of where Opportunity stands. The last time we had an occurrence like this one was when Comet Shoemaker-Levy 9 hit Jupiter in 1994. Many different telescopes watched that impact, but this time will be the first time in history that one of our robots sees an impact from the point of view of the planet being hit. Wow. So much could be learned about Mars' atmosphere and even its geology from such an event that it is mind-boggling.
On a different note, this news is really possible because of one man: Gene Shoemaker. Gene was one of the discoverers of Comet Shoemaker-Levy 9, whose impact got people to realize the devastation Earth could sustain if it were hit by a similar object. His discovery led to the creation of the Near-Earth Object program at NASA. Gene actually trained to be the first scientist on the Moon and lobbied hard within NASA so that an Apollo astronaut would have the right qualifications to pick up the right rocks. He was disqualified for health reasons, but his lobbying paid off and Harrison Schmitt (a geologist) got to take that slot on Apollo 17. Gene died in a car accident in 1997. On the one hand, you don't get to walk on the Moon; on the other hand, your legacy extends far beyond the human exploration of space: it makes space relevant to our history and maybe even our survival. Not a bad deal after all.
Closer to some of the interests expressed on this blog is Space Situational Awareness. Bob Plemmons has used this method to perform spectral unmixing of data sensed by the Maui hyperspectral sensors. Much of his work is mentioned in [1].
By looking up from the ground, and knowing the reflectance as a function of wavelength of particular materials such as aluminum, black paint, ..., one should be able to identify specific elements orbiting the Earth. This matters because the population of Earth-orbiting objects is constantly changing and needs to be appraised often. Space debris can have a significant impact on other orbiting spacecraft and satellites.
Since Bob mentions the Maui shots of the Columbia spacecraft, I wonder if those shots were taken with the same hyperspectral sensor?
Because if they were, we can see that most of the Shuttle is white...
whereas our camera that was on top of the Spacehab module was very shiny (it was behind the white box on the roof).
It would be of interest to see if the method can do a good job at that spatial resolution. I can provide more information if needed. From there we could infer whether there were pieces missing from the RCC panels, as shown by the Southwest Research Institute foam impact experiment.
We formulate the problem of hyperspectral image unmixing as a nonconvex optimization problem, similar to nonnegative matrix factorization. We present a heuristic for approximately solving this problem using an alternating projected subgradient approach. Finally, we present the result of applying this method on the 1990 AVIRIS image of Cuprite, Nevada and show that our results are in agreement with similar studies on the same data.
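This is not the authors' code, but a rough sketch of the general recipe the abstract describes: alternate projected gradient steps on a nonnegative factorization Y ≈ WH, where the columns of W stand for endmember spectra and H for per-pixel abundances. The Lipschitz-based step sizes and the synthetic data are my own choices for illustration.

```python
import numpy as np

def unmix(Y, r, n_iter=500, seed=0):
    """Toy alternating projected gradient for Y ~ W @ H with W, H >= 0.
    Y is a (bands x pixels) data matrix, r the assumed number of endmembers."""
    rng = np.random.default_rng(seed)
    bands, pixels = Y.shape
    W = rng.random((bands, r))        # candidate endmember spectra (columns of W)
    H = rng.random((r, pixels))       # candidate per-pixel abundances
    for _ in range(n_iter):
        # Gradient step on W with a Lipschitz-based step size, then project onto W >= 0.
        grad_W = (W @ H - Y) @ H.T
        W = np.maximum(W - grad_W / (np.linalg.norm(H @ H.T, 2) + 1e-12), 0.0)
        # Same for H.
        grad_H = W.T @ (W @ H - Y)
        H = np.maximum(H - grad_H / (np.linalg.norm(W.T @ W, 2) + 1e-12), 0.0)
    return W, H

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W_true = rng.random((50, 3))      # 50 spectral bands, 3 materials
    H_true = rng.random((3, 400))     # 400 pixels
    Y = W_true @ H_true               # noise-free synthetic "image"
    W, H = unmix(Y, r=3)
    print("relative residual:", np.linalg.norm(Y - W @ H) / np.linalg.norm(Y))
```

For space surveillance, the columns of W would ideally line up with laboratory reflectance spectra of aluminum, solar cells, paints, and so on, which is what makes the material identification from the ground plausible.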
With regard to universal compression, Emmanuel makes the good point that, when doing exploration, NASA spends a good amount of energy on image compression on board spacecraft. The idea is that the bandwidth is extremely limited and you want to send back as much interesting data as possible. Historically, one of the underlying reasons the Russians developed RORSATs was that their digital processing (number-crunching) capabilities were not as good as those in the U.S. So, to provide adequate computational capability on board these satellites, and because the systems required a low orbit, they had to carry nuclear reactors. About 20 to 30 of them were actually launched (only one nuclear reactor was ever sent into space by the U.S.), and all of them now contribute to the space debris issue. Some of them fell back to Earth. Power is also the reason why Prometheus, a space nuclear reactor concept, was envisioned for exploring Jupiter and its moons; it was well studied until it got canceled. With the advent of compressed sensing, maybe we should think about doing these missions with hardware that does not require that much power. I previously highlighted a much more lucrative application where bandwidth was also a real issue.
Damaris has a small entry on her current work looking at damaged Shuttle tiles. We had a similar project at STC where we would look at the belly of the Shuttle using an HD camera at 60 frames per second and eventually provide 3D photogrammetry of the belly of the Orbiter from several hundred meters down.
The IPAM/UCLA Graduate Summer School on Probabilistic Models of Cognition: The Mathematics of Mind has finished. The presentations and webcasts can be found here.
With regard to the Chinese test: I have not seen this articulated on any of the websites I have seen on the subject, but while LEO is indeed a very sensitive location, at least for the ISS, has anybody quantified how much of this debris flux would eventually migrate to GEO? My point is that collisions between debris of similar sizes are likely to produce fragments with large angular deviations (as in neutron transport) and might therefore produce a flux toward GEO. GEO would be a problem for everybody, not just the ISS or the low Earth orbit countries.
If one looks at this 1999 UN report, one can see that there are in fact few models that take into account both LEO and GEO, and I therefore expect that very few of these codes have any coupling between their LEO and GEO models.
The LEGEND model developed at JSC seems to address this issue. LEGEND was produced...
..To continue to improve our understanding of the orbital debris environment, the NASA Orbital Debris Program Office initiated an effort in 2001 to develop a new model to describe the near-Earth debris environment. LEGEND, a LEO-to-GEO Environment Debris model, is a full-scale three-dimensional debris evolutionary model. It covers the near-Earth space between 200 and 50,000 km altitude, including low Earth orbit (LEO), medium Earth orbit (MEO), and geosynchronous orbit (GEO) regions. ... The main function of the LEGEND historical component is to reproduce the historical debris environment (1957 to present) to validate the techniques used for the future projection component of the model. The model utilizes a recently updated historical satellite launch database (DBS database), two efficient state-of-the-art propagators (PROP3D and GEOPROP), and the NASA Standard Breakup Model. .... A key element in the LEGEND future projection component is a three-dimensional collision probability evaluation model. It provides a fast and accurate way to estimate future on-orbit collisions from LEO to GEO. Since no assumptions regarding the right ascensions of the ascending node and arguments of perigee of objects involved are required, this model captures the collision characteristics in real three-dimensional physical space. It is a critical component of a true three-dimensional debris evolutionary model.
The typical projection period in LEGEND is 100 years. Due to uncertainties involved in the process (e.g., future launch traffic, solar activity, explosions, collisions), conclusions are usually drawn based on averaged results from 30 Monte Carlo simulations.
J.-C. Liou, one of the researchers involved in the development of LEGEND, explained a year ago that
The current debris population in the LEO region has reached the point where the environment is unstable and collisions will become the most dominant debris-generating mechanism in the future...Even without new launches, collisions will continue to occur in the LEO environment over the next 200 years, primarily driven by the high collision activities in the region between 900- and 1000-km altitudes, and will force the debris population to increase.
Downscatter from MEO to LEO seems to be well taken into account, but I still wonder about upscattering from MEO to GEO or from LEO to GEO.
[I put more justification for what I said in this entry here.]
This NYT piece on orbiting debris becoming a threat is interesting, but in my opinion it does not point to the real issue. There is a good likelihood that some of the debris created by collisions between elements in low Earth orbit (LEO) will produce junk in geostationary orbit (GEO). This orbit is very important to the whole communication infrastructure.
When one is confronted with images that are very difficult to analyze because the background is very bright or very dark, as is usually the case for images taken in space by, say, star trackers, one should look into using the AVIS FITS viewer. By playing with the lower and upper thresholds of the histogram toggle, one can pretty much isolate interesting components that sit a little bit above the background.
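For what it's worth, here is a small numpy rendition of what moving the viewer's lower and upper histogram thresholds amounts to; the percentile values and the synthetic frame are arbitrary, and in practice the array would come from a FITS file (e.g., via astropy.io.fits), which is one common way to read such images.

```python
import numpy as np

def window_levels(img, lo_pct=95.0, hi_pct=99.9):
    """Clip an image between two histogram percentiles and rescale to [0, 1].
    This mimics moving the lower/upper threshold sliders in a FITS viewer:
    everything below `lo` goes to black, everything above `hi` saturates,
    so faint features sitting just above the background become visible."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy frame: noisy background plus one faint blob a few counts above it.
    img = rng.normal(100.0, 3.0, size=(128, 128))
    img[60:64, 60:64] += 8.0
    stretched = window_levels(img, lo_pct=97.0, hi_pct=99.99)
    print("pixels pulled out of the background:", int((stretched > 0).sum()))
```

Tightening the lower percentile is exactly the "remove the brighter area around it" step described below: the faint round object only shows up once the window is squeezed around the background level.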
After receiving the first images from Starnav 1, we figured out that most of the structure surrounding the camera was shining too much light into it. After loading the images in the AVIS viewer and playing with the filter thresholds, we could find other unknown things sitting in front of the camera (items B and C).
Item B was very difficult to find because it was really only a few pixels above a certain background level, and one had to remove the brighter areas around it; only then could one see its round shape.
In a previous entry on prompt critical space debris, I was really raising the possibility of igniting a chain reaction in low Earth orbit through the continuing addition of space debris in LEO.
At some point, a colleague of mine and I looked into devising a transport equation similar to the one we use in neutron transport theory in order to carry out what we in the nuclear engineering world call criticality studies. Being critical is one thing, but when a system is "prompt critical", delayed neutrons cannot slow down the chain reaction, yielding a very hazardous situation.
The analogy does not go much further for the moment, except that we are talking about a potential substantial increase in the number of particles in orbit, well beyond the sum of the debris from just the target of that test. What is really interesting is that there is a loss factor: drag from the Earth's atmosphere can remove some of these debris over time. Yet we don't really know how to fit experiments to observations; some are attempting to do just that within the Robust Mathematical Modeling framework.
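For what it's worth, the loose analogy can be written as a back-of-the-envelope balance equation for the number $N(t)$ of trackable fragments in a given altitude shell; the coefficients are purely symbolic and not fitted to anything:

$$\frac{dN}{dt} \;=\; \Lambda(t) \;+\; \beta\,N^{2} \;-\; \frac{N}{\tau_{\mathrm{drag}}},$$

where $\Lambda(t)$ lumps together launches, explosions and ASAT tests, $\beta N^{2}$ is the fragment production from collisions among the existing population, and $N/\tau_{\mathrm{drag}}$ is the removal by atmospheric drag. "Criticality" then corresponds to the regime where the quadratic production term outruns drag removal even with $\Lambda = 0$, which is essentially what the LEGEND result quoted above says about the 900-1000 km band.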