Thursday, May 31, 2012

FREAK implementation on GitHub (1-bit Quantized Difference of Gaussians Descriptor)



A while back, Pierre let us know about FREAK, Fast Retina Keypoint, a competitor to SIFT. I mentioned that "As soon as the code is there, I'll make a longer mention of it". Today is the day: the paper is FREAK: Fast Retina Keypoint by Alexandre Alahi, Raphael Ortiz, and Pierre Vandergheynst. The abstract reads:

A large number of vision applications rely on matching keypoints across images. The last decade featured an arms-race towards faster and more robust keypoints and association algorithms: Scale Invariant Feature Transform (SIFT), Speed-up Robust Feature (SURF), and more recently Binary Robust Invariant Scalable Keypoints (BRISK) to name a few. These days, the deployment of vision algorithms on smart phones and embedded devices with low memory and computation complexity has even upped the ante: the goal is to make descriptors faster to compute, more compact while remaining robust to scale, rotation and noise. To best address the current requirements, we propose a novel keypoint descriptor inspired by the human visual system and more precisely the retina, coined Fast Retina Keypoint (FREAK). A cascade of binary strings is computed by efficiently comparing image intensities over a retinal sampling pattern. Our experiments show that FREAKs are in general faster to compute with lower memory load and also more robust than SIFT, SURF or BRISK. They are thus competitive alternatives to existing keypoints in particular for embedded applications.
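The gist, as the title of this post suggests, is a descriptor whose bits are 1-bit quantized differences of Gaussians: each bit records which of two Gaussian-smoothed receptive fields in a retina-like sampling pattern is brighter, and descriptors are then compared with the Hamming distance. Here is a toy sketch of that idea in Python/NumPy; the pair list and smoothing scales are placeholders for illustration, not the learned pairs from the paper:

# Toy sketch (not the paper's code): each descriptor bit is the sign of the
# difference between two Gaussian-smoothed samples around the keypoint, i.e. a
# 1-bit quantized difference of Gaussians. The pair list is a placeholder;
# FREAK learns its pairs from data over a retina-like sampling pattern.
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_binary_descriptor(image, keypoint, pairs):
    """pairs: list of ((dy1, dx1, sigma1), (dy2, dx2, sigma2)) sample definitions."""
    y, x = keypoint
    sigmas = {s for pair in pairs for (_, _, s) in pair}
    smoothed = {s: gaussian_filter(image.astype(float), s) for s in sigmas}
    bits = []
    for (dy1, dx1, s1), (dy2, dx2, s2) in pairs:
        a = smoothed[s1][y + dy1, x + dx1]   # coarse/fine receptive fields
        b = smoothed[s2][y + dy2, x + dx2]
        bits.append(1 if a > b else 0)       # keep only the sign: one bit per pair
    return np.array(bits, dtype=np.uint8)

def hamming(d1, d2):
    """Binary descriptors are matched with the Hamming distance (XOR + popcount)."""
    return int(np.count_nonzero(d1 != d2))

# Example: two placeholder pairs, one coarse (sigma=4) and one fine (sigma=1).
# pairs = [((-6, 0, 4), (6, 0, 4)), ((0, -2, 1), (0, 2, 1))]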
From the conclusions:


We have presented a retina-inspired keypoint descriptor to enhance the performance of current image descriptors. It outperforms recent state-of-the-art keypoint descriptors while remaining simple (faster with lower memory load). We do not claim any biological significance but find it remarkable that the used learning stage to identify the most relevant Difference of Gaussians could match one possible understanding of the resource optimization of the human visual system. In fact, as a future work, we want to investigate more on the selection of such relevant pairs for high level applications such as object recognition.



The attendant implementation is on GitHub.
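For those who just want to try it, FREAK also ships with OpenCV as a descriptor extractor (it computes descriptors for keypoints found by a separate detector). A minimal usage sketch in Python, assuming an opencv-contrib build that exposes cv2.xfeatures2d; the image file names are placeholders:

# Minimal usage sketch, assuming opencv-contrib-python with the xfeatures2d module.
import cv2

img1 = cv2.imread("scene1.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("scene2.png", cv2.IMREAD_GRAYSCALE)

detector = cv2.BRISK_create()               # FREAK describes, it does not detect
freak = cv2.xfeatures2d.FREAK_create()

kp1 = detector.detect(img1, None)
kp2 = detector.detect(img2, None)
kp1, des1 = freak.compute(img1, kp1)        # binary descriptors
kp2, des2 = freak.compute(img2, kp2)

# Binary descriptors call for Hamming-distance matching.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)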


Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
