Thursday, April 09, 2009

Sparsity in Everything: Natural Laws from Experimental Data

There are many examples of sparsity in real life, such as Solutions to Everyday Decision Making, Noisy Protein-Protein Interaction, and Non-Junk DNA, and here is a new one. We have already seen sparsity exploited in the numerous Bayesian approaches to compressive sensing, but there are other routes to sparse approximation that rely on combinatorial search, such as the one in the latest paper by Michael Schmidt and Hod Lipson entitled Distilling Free-Form Natural Laws from Experimental Data. The abstract reads:

For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. We propose a principle for the identification of nontriviality. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the “alphabet” used to describe those systems.
To fit the phenomena at hand, a combinatorial search is performed using genetic algorithms. As one can see in the graph above, the trade-off between parsimony and accuracy tends to favor simpler, and therefore more nearly unique, models. I note that the learning is performed on videos of the phenomena being studied. I also note that the examples do not seem to feature much occlusion (see the earlier entry on the non-differentiability of images with occluding elements), so the algorithm is really learning on a smooth manifold, as in [1], [2]. Here is a video presentation of the work and their site:


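To make the idea of a genetic search over candidate laws concrete, here is a minimal sketch of symbolic regression by evolutionary search: random expression trees are scored by a fitness that combines accuracy with a parsimony penalty, and the best trees survive and mutate. Everything below (the operator set, population sizes, penalty weight, and the toy hidden law) is an illustrative assumption, not the actual algorithm of Schmidt and Lipson.

```python
# Illustrative sketch of symbolic regression by genetic search; the design
# choices here are assumptions for demonstration, not the paper's method.
import random

random.seed(0)

OPS = [('+', lambda a, b: a + b),
       ('-', lambda a, b: a - b),
       ('*', lambda a, b: a * b)]
OP_FN = dict(OPS)

def random_tree(depth=3):
    """Build a random expression tree over the variable x and small constants."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.6 else random.choice([1.0, 2.0, 3.0])
    name, _ = random.choice(OPS)
    return (name, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    name, left, right = tree
    return OP_FN[name](evaluate(left, x), evaluate(right, x))

def size(tree):
    return 1 if not isinstance(tree, tuple) else 1 + size(tree[1]) + size(tree[2])

def fitness(tree, data, parsimony=0.01):
    # Accuracy (squared error) plus a parsimony penalty: simpler models win ties.
    err = sum((evaluate(tree, x) - y) ** 2 for x, y in data)
    if not err < 1e12:  # guard against overflow/nan from wild trees
        err = 1e12
    return err + parsimony * size(tree)

def mutate(tree):
    # Replace a random subtree with a freshly generated one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    name, left, right = tree
    if random.random() < 0.5:
        return (name, mutate(left), right)
    return (name, left, mutate(right))

# Hidden "law" to rediscover: y = x**2 + x.
data = [(x, x * x + x) for x in [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]]

pop = [random_tree() for _ in range(100)]
for gen in range(60):
    pop.sort(key=lambda t: fitness(t, data))
    survivors = pop[:20]                # elitism: keep the fittest trees
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

best = min(pop, key=lambda t: fitness(t, data))
print('best tree:', best)
print('error:', sum((evaluate(best, x) - y) ** 2 for x, y in data))
```

The parsimony term in the fitness is what produces the accuracy-versus-simplicity trade-off discussed above: among trees that fit equally well, the smaller one scores better.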
This experiment raises the question of whether a compressive sensing technique could avoid the combinatorial search altogether. Anyway, if you have other examples, send them to me by e-mail, in the comments section, or on Twitter.
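One way such a compressive sensing formulation might look: build a dictionary of candidate terms (monomials here), and recover the few terms that explain the data by sparse regression rather than by searching over expressions. The sketch below uses sequential thresholded least squares; the dictionary, threshold, and hidden law are all illustrative assumptions, not a method from the paper.

```python
# Hedged sketch: pose law discovery as sparse recovery over a dictionary of
# candidate terms, avoiding a combinatorial search. All choices below are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Samples of a hidden law y = 2*x1 - 0.5*x2**2, sparse in the dictionary.
x1 = rng.uniform(-2, 2, 200)
x2 = rng.uniform(-2, 2, 200)
y = 2.0 * x1 - 0.5 * x2 ** 2

# Dictionary of candidate terms.
names = ['1', 'x1', 'x2', 'x1^2', 'x1*x2', 'x2^2']
Theta = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

# Sequential thresholded least squares: solve, zero small coefficients, refit.
coef = np.linalg.lstsq(Theta, y, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    big = ~small
    coef[big] = np.linalg.lstsq(Theta[:, big], y, rcond=None)[0]

for n, c in zip(names, coef):
    if c != 0.0:
        print(n, round(c, 3))
```

On this noiseless toy problem the procedure recovers exactly the two active terms; whether something similar scales to motion-tracking data is precisely the open question.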

[1] Compressed learning: Universal sparse dimensionality reduction and learning in the measurement domain, by Robert Calderbank, Sina Jafarpour, and Robert Schapire.
[2] Manifold models for signal acquisition, compression, and processing, by Richard Baraniuk, joint work with Mark Davenport, Marco Duarte, Chinmay Hegde, and Michael Wakin. Slides (pdf), Video (flv).
