Pages

Friday, May 25, 2012

Inpainting Algorithm on GitHub (TV-L2 denoising and inpainting)




In light of the recent entry showing the results of an inpainting algorithm within an Analysis Operator Learning approach, Emmanuel d'Angelo let me know that he has made his TV-L2 denoising and inpainting code available on GitHub. The photo above shows another reconstruction of Lena from 90% missing pixels. The two approaches differ in that the analysis operator is learned in the first paper, whereas in Emmanuel's code the analysis operator is fixed in advance (finite differences, i.e. TV). The first approach may yield better results in the inpainting stage, but it requires a learning phase.

Emmanuel is still writing up his thesis, so he is pretty busy at the moment; hence only the inpainting component of the more general algorithm is available. The full algorithm will include a fast version that can keep up with video streams. According to him:

"The TV-L2 denoising is exactly what is described as Alg. 1 (no acceleration) in the Chambolle-Pock primal-dual paper [1]. The inpainting procedure is not described as such in the paper, but can be considered a straightforward deduction from it."
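To make that "straightforward deduction" concrete, here is a minimal NumPy sketch (my own illustration, not Emmanuel's C++ code) of Chambolle-Pock Algorithm 1 applied to min_x (w/2)‖x − y‖² + ‖x‖_TV, with a per-pixel weight w that is very large on observed pixels and zero on missing ones; the step sizes satisfy τσ‖∇‖² ≤ 1 with ‖∇‖² ≤ 8 for this discrete gradient:

```python
import numpy as np

def grad(u):
    # Forward differences with zero (Neumann) boundary on the last row/column.
    px = np.zeros_like(u)
    py = np.zeros_like(u)
    px[:, :-1] = u[:, 1:] - u[:, :-1]
    py[:-1, :] = u[1:, :] - u[:-1, :]
    return px, py

def div(px, py):
    # Negative adjoint of grad, so that <grad u, p> = -<u, div(p)>.
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_inpaint(y, mask, n_iter=500, tau=0.25, sigma=0.5, big=1e6):
    """Chambolle-Pock Alg. 1 for min_x (w/2)||x - y||^2 + ||x||_TV,
    with per-pixel weight w = big where mask is nonzero and 0 elsewhere."""
    w = big * mask
    x = y * mask            # start from the observed pixels, zeros in the holes
    x_bar = x.copy()
    px = np.zeros_like(x)
    py = np.zeros_like(x)
    for _ in range(n_iter):
        # Dual ascent, then projection onto the unit ball (isotropic TV).
        gx, gy = grad(x_bar)
        px += sigma * gx
        py += sigma * gy
        nrm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px /= nrm
        py /= nrm
        # Primal descent: closed-form prox of the weighted quadratic data term.
        x_new = (x + tau * div(px, py) + tau * w * y) / (1.0 + tau * w)
        # Over-relaxation with theta = 1 (no acceleration, as in Alg. 1).
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```

On observed pixels the prox effectively pins x to y; in the holes the data term vanishes and the update reduces to pure TV minimization driven by the dual variable.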
On his blog he describes the algorithm in more detail; it is in French, however, so let me translate his remarks:


  • The functional used here is nothing remarkable: it is the famous ROF model, argmin_x (λ/2) ∥Ax − y∥²₂ + ∥x∥_TV, where A is a random mask and ∥·∥_TV is the total variation.
  • The implementation is slightly modified to simulate an infinite data-attachment weight λ where the mask is nonzero (pinning the observed pixels to their values), and 0 otherwise (leaving the missing pixels to pure TV minimization).
  • The algorithm used is the primal-dual algorithm described in this paper [1].
  • For the Open Lab days at EPFL, I wrote a demo that used the video stream from a webcam on the Mac, which made quite an impression (and not just among the general public). My boss uses it regularly when he wants to impress visitors. The algorithm was FISTA this time (Ref. [2]), which converges quickly.
  • The demo was initially written in the days of Mac OS X 10.5 Leopard with OpenCV version 1.0. Changes to QuickTime and OpenCV have made that demo almost obsolete, so I invite you instead to look at the version of the code on GitHub.
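Since FISTA comes up in the demo above, a brief sketch of it in its canonical ℓ1-regularized least-squares form from Ref. [2] may help (the demo applied it to TV instead; the problem sizes and names below are my own illustrative choices, not Emmanuel's code):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """FISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1 (Beck & Teboulle)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    z = x.copy()                         # extrapolated point
    t = 1.0
    for _ in range(n_iter):
        # Gradient step at the extrapolated point, then shrinkage.
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
        # Momentum update that yields the O(1/k^2) rate.
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

The only difference from plain ISTA is the extrapolated point z; dropping the momentum update recovers the slower O(1/k) method.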


In a follow-up e-mail, he added the following about the current state of the code on GitHub:

The code is C++ only, and targeted to work with OpenCV 2.4 and later. My optical flow code will be both OpenCV/C++ and GPGPU/OpenCL (it will be a bug-free version of the one I presented at ICIP). I also want to add multithreading with Grand Central Dispatch (aka libdispatch), which is an Apple equivalent of Intel TBB. GCD is open source and available for Mac OS, iOS, and FreeBSD.

If I have the time to implement a "good" wavelet transform, I will submit it to the main OpenCV module.
Let us stay tuned for the optical flow algorithm on the GitHub repository, which should run at 30 fps. In the meantime, Emmanuel's publications are listed here.


[1] Antonin Chambolle and Thomas Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, Journal of Mathematical Imaging and Vision, Vol. 40, No. 1, pp. 120-145, 2011.
[2] Amir Beck and Marc Teboulle, A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems, SIAM Journal on Imaging Sciences, Vol. 2, No. 1, pp. 183-202, 2009. DOI:10.1137/080716542



Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

2 comments:

  1. Hi everyone,

    "The first approach may yield better results in the inpainting stage but requires a learning phase."
    This sentence suggests that an operator has to be learned for every individual inpainting task. However, the operator is learned offline and universal, a replacement for the TV, so to speak.
    It would be interesting to see how a combination of both approaches would perform.

    Martin

    ReplyDelete
  2. Good point. I did not mean to imply that; if it came across as such, I'll add a mention in the entry.

    I agree, once the learning stage is over we have very comparable methods for inpainting.

    Deep down, though, I am far more interested to see how the learned analysis operator will change with different cameras. I realize that this is not what the paper looked into, but I am curious to see whether a specific camera and certain kinds of pictures will yield a specific analysis operator. And then, how we compare those operators is another story.

    Thanks Martin,

    Igor.

    ReplyDelete