Tuesday, December 18, 2012

Around the blogs in 80 summer hours (NIPS and more)


Suresh summarized his thoughts about NIPS in a series of posts. In his NIPS ruminations I, he mentioned this about the Earth Mover's Distance:

Kernel distances: Ever since I discovered the kernel distance (as in, found it, not invented it) I've been fascinated by how it behaves more or less like the earth mover distance, but is so much easier to compute. Scott Aaronson (at his NIPS invited talk) made this joke about how nature loves ℓ2. The kernel distance is "essentially" the ℓ2 variant of EMD (which makes so many things easier). There's been a series of papers by Sriperumbudur et al. on this topic, and in a series of works they have shown that (a) the kernel distance captures the notion of "distance covariance" that has become popular in statistics as a way of testing independence of distributions, (b) as an estimator of distance between distributions, the kernel distance has more efficient estimators than (say) the EMD because its estimator can be computed in closed form instead of needing an algorithm that solves a transportation problem, and (c) the kernel that optimizes the efficiency of the two-sample estimator can also be determined (the NIPS paper).
We've seen EMD recently: in Learning Manifolds in the Wild, Chinmay Hegde, Aswin C. Sankaranarayanan, and Richard Baraniuk made the case that one could learn manifolds through the use of the Earth Mover's Distance on top of keypoint descriptors. Does that mean that a combination of the faster FREAKs and the kernel distance might provide a speedier way of learning manifolds from images and videos? [Update: the answer seems to be Yes] A minimal sketch of the closed-form estimator follows below.
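To make the closed-form remark concrete, here is a minimal sketch (my illustration, not code from any of the papers above) of the plug-in estimator of the squared kernel distance/MMD between two samples. The Gaussian kernel and its bandwidth are arbitrary assumptions, and this is the biased V-statistic version:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def kernel_distance_sq(X, Y, sigma=1.0):
    """Plug-in estimator of the squared kernel distance (MMD^2):
    mean k(x, x') + mean k(y, y') - 2 mean k(x, y).
    Closed form -- no transportation problem to solve, unlike EMD."""
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())

# Two samples from slightly different distributions
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))
print(kernel_distance_sq(X, Y))  # noticeably > 0 when the distributions differ
```

Note that the whole computation is a few kernel matrix averages, which is exactly why the estimator is so much cheaper than solving a transportation problem.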

As an aside, one of the commenters pointed to a manuscript on the Earth Mover's Distance by Cédric Villani, where you get to learn (page 33) about the historical origin of the concept with Monge. It is quite fascinating that it did indeed start with the problem of moving soil from one place to another.
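For intuition about what is being "moved", the one-dimensional case of EMD (the 1-Wasserstein distance) has a closed form and ships with SciPy. A quick illustrative check (mine, not from the manuscript):

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Earth Mover's Distance between two 1-D empirical distributions:
# the minimal "work" needed to move the mass of one onto the other.
rng = np.random.default_rng(1)
u = rng.normal(0.0, 1.0, 1000)   # a pile of soil here...
v = rng.normal(2.0, 1.0, 1000)   # ...moved about 2 units over
print(wasserstein_distance(u, v))  # close to 2.0
```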

On the PSD matrices remark, I am reminded of the work by Frédéric Barbaresco on Applications Radar de la Géométrie de l'information associée aux matrices de covariances : traitements spatio-temporels (radar applications of the information geometry of covariance matrices: space-time processing). It is in French but understandable; there is also a video of him on Applications of Information Geometry to Radar Signal Processing (Interactions between symmetric cone and Information Geometries: Bruhat-Tits and Siegel Spaces, Models for High Resolution Autoregressive Doppler Imagery). As another aside, I wonder if one could multiplex the various modalities of radar.
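For readers who have not met the information geometry of covariance matrices, the central object is the affine-invariant Riemannian distance between SPD matrices. Here is a minimal sketch (my illustration, not Barbaresco's code), computed via the generalized eigenvalues of the pencil (B, A):

```python
import numpy as np
from scipy.linalg import eigh

def spd_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F
            = sqrt(sum_i log(lambda_i)^2),
    where lambda_i are the generalized eigenvalues of B v = lambda A v."""
    lam = eigh(B, A, eigvals_only=True)
    return np.sqrt((np.log(lam) ** 2).sum())

# Two covariance matrices (think: two radar Doppler cells)
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
print(spd_distance(A, B))
```

The appeal of this metric is its invariance: d(A, B) = d(MAM^T, MBM^T) for any invertible M, which is what makes it natural for comparing covariance matrices.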

Sergey talks about Algebraic topology – now in compressed sensing
Rich talks about NuMax – A Convex Approach for Learning Near-Isometric Linear Embeddings, which we featured here earlier.
Larry has New Names For Statistical Methods
Bob points out that Even humans can make extreme octave errors
Danny talks about collaborative filtering.
Mathblogging features a new blog in Mathematical Instruments: Haggis the Sheep
John mentions The Lindy effect
Laurent features the ERBlet transform (on WITS: Where is the starlet)
Emmanuel talks about a Paper: Inverting and Visualizing Features for Object Detection
Andrew is announcing Sublinear.Info! That page is now part of the highly technical reference page.
OpenPicus has a 20% discount megasale until December 31st.
