Wednesday, November 07, 2012

From Bits to Images: Inversion of Local Binary Descriptors - implementation -

This is a follow-up to this previous entry: Do Androids Recall Dreams of Electric Sheeps? Just off the press, wow (for some context and a word from one of the co-authors, Emmanuel d'Angelo, see below).

Local Binary Descriptors are becoming more and more popular for image matching tasks, especially when going mobile. While they are extensively studied in this context, their ability to carry enough information in order to infer the original image is seldom addressed.
In this work, we leverage an inverse problem approach to show that it is possible to directly reconstruct the image content from Local Binary Descriptors. This process relies on very broad assumptions besides the knowledge of the pattern of the descriptor at hand. This generalizes previous results that required either a prior learning database or non-binarized features.
Furthermore, our reconstruction scheme reveals differences in how different Local Binary Descriptors capture and encode image information. Hence, the potential applications of our work are numerous, ranging from the privacy issues raised by eavesdropping on image keypoints streamed by mobile devices to the design of better descriptors through the visualization and analysis of their geometric content.
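To give a feel for what is being inverted here: a Local Binary Descriptor of the BRIEF family encodes a patch as the signs of intensity differences between fixed pixel pairs. The sketch below is purely illustrative — the pixel pairs are made up and do not correspond to the sampling pattern of any actual descriptor studied in the paper.

```python
import numpy as np

def binary_descriptor(patch, pairs):
    """Toy BRIEF-style local binary descriptor (illustrative only).

    Each bit is the sign of an intensity difference between a fixed
    pair of pixel locations inside the patch. Reconstructing the patch
    from these bits is therefore a 1-bit inverse problem.
    """
    bits = np.empty(len(pairs), dtype=np.uint8)
    for i, ((r1, c1), (r2, c2)) in enumerate(pairs):
        bits[i] = 1 if patch[r1, c1] < patch[r2, c2] else 0
    return bits

# A tiny synthetic patch with a simple intensity ramp.
patch = np.arange(16, dtype=float).reshape(4, 4)
# Two hypothetical pixel pairs (not a real descriptor's pattern).
pairs = [((0, 0), (3, 3)), ((3, 1), (0, 2))]
bits = binary_descriptor(patch, pairs)  # → array([1, 0], dtype=uint8)
```

The key point is that each bit throws away everything about the patch except the ordering of two pixel values, which is why it was long unclear how much image content such descriptors actually retain.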
The attendant implementation is here.

- Update - It looks like this blog entry went up faster than Emmanuel's email reached me; here it is:

Dear Igor,
I'm pretty sure you remember our preliminary work on image reconstruction from local quantized descriptors featured on your blog:
I told you on Twitter at the time that I would release the code only once the job was done, i.e., when we had an algorithm that works for 1-bit quantized descriptors (our previous algorithm, featured in that blog post, handled floating-point feature vectors only). Well, the time has come :-)
We have successfully adapted the Binary Iterative Hard Thresholding of Jacques et al. to invert binarized image local descriptors and reconstruct images. The implications of this seem huge: from smart cameras streaming descriptors instead of images to privacy breaches in cloud-based pattern recognition.
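For readers unfamiliar with it, Binary Iterative Hard Thresholding (BIHT) of Jacques et al. recovers a sparse signal from 1-bit measurements y = sign(Φx) by alternating a gradient step on sign consistency with hard thresholding. Here is a minimal numpy sketch on a synthetic 1-bit compressive sensing problem — the problem sizes, step size, and sparsity model are illustrative assumptions, not the paper's actual setup for descriptor inversion:

```python
import numpy as np

def biht(y, Phi, K, n_iters=100, tau=1.0):
    """Binary Iterative Hard Thresholding (BIHT), minimal sketch.

    Recovers a K-sparse, unit-norm x from 1-bit measurements
    y = sign(Phi @ x).
    """
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(n_iters):
        # Gradient step: push x toward sign-consistency with y.
        x = x + (tau / m) * (Phi.T @ (y - np.sign(Phi @ x)))
        # Hard threshold: zero out all but the K largest-magnitude entries.
        x[np.argsort(np.abs(x))[:-K]] = 0.0
    # 1-bit measurements lose all amplitude information,
    # so the estimate is only defined up to scale: project onto the sphere.
    return x / np.linalg.norm(x)

# Synthetic test: a random K-sparse unit-norm signal.
rng = np.random.default_rng(0)
n, m, K = 128, 512, 5
x_true = np.zeros(n)
x_true[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
x_true /= np.linalg.norm(x_true)
Phi = rng.standard_normal((m, n))
y = np.sign(Phi @ x_true)

x_hat = biht(y, Phi, K)
```

In the paper's setting, Φ is not a dense Gaussian matrix but the comparison operator induced by the descriptor's sampling pattern, and the sparsity model lives in an image-appropriate basis; the iteration above only conveys the general mechanism.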
The pre-print is here:
And as promised the code is on github, so everybody is welcome to clone / fork / play with it:
Right now, I'm about to leave for the ICPR'12 conference in Tsukuba (Japan), where I will present the preliminary version of this work (the one with the real, non-quantized descriptors). So, if anyone interested is also attending the conference, I'm sure we'll have time to discuss it!
Have a nice day!

Thanks Emmanuel !

Join our Reddit Experiment, Join the CompressiveSensing subreddit and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
