Yin Zhang is at it again: he has just released two reconstruction codes with attendant papers. First, a new code:
A MATLAB code for image reconstruction from partial Fourier data that solves models with total-variation and l_1 regularization and an l_2-norm fidelity term to fit the available incomplete Fourier data. Co-developed with Junfeng Yang and Wotao Yin. RecPF solves the following model:

min TV(u) + λ ||Ψ^T u||_1 + (μ/2) ||Fp u - fp||_2^2

where
-- u is the signal/image to be reconstructed
-- TV(u) is the total variation regularization term
-- Ψ is a sparsifying basis
-- Fp is a partial Fourier matrix
-- fp is a vector of partial Fourier coefficients
-- λ and μ are positive weighting parameters
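For concreteness, here is a minimal MATLAB sketch of evaluating that objective for a candidate image u. It is my own illustration, not code from the RecPF package; the argument names are assumptions (Psi is a function handle computing the sparsifying-transform coefficients, mask is a logical array marking the sampled Fourier locations, lambda and mu are the weights), and anisotropic TV is used:

function val = recpf_objective(u, fp, mask, Psi, lambda, mu)
% Sketch only: TV(u) + lambda*||Psi^T u||_1 + (mu/2)*||Fp u - fp||_2^2
dx = diff(u, 1, 2);                        % horizontal forward differences
dy = diff(u, 1, 1);                        % vertical forward differences
tv = sum(abs(dx(:))) + sum(abs(dy(:)));    % anisotropic TV(u)
c  = Psi(u);                               % coefficients in the sparsifying basis
l1 = lambda * sum(abs(c(:)));              % l_1 regularization term
Fu = fft2(u) / sqrt(numel(u));             % orthonormal 2-D DFT
fid = (mu/2) * norm(Fu(mask) - fp)^2;      % l_2 fidelity on the sampled entries
val = tv + l1 + fid;
end

With Ψ taken as the identity, one would call, e.g., recpf_objective(u, fp, mask, @(x) x, 0.1, 100).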
Second, a new version of FTVd (version 3.0):
A MATLAB code for image deblurring and denoising that solves models with total-variation regularization and an l_2- or l_1-norm fidelity term. Co-developed with Junfeng Yang, Yilun Wang, and Wotao Yin. To recall:
This is a MATLAB package for recovering images, gray scale or color, from blurry and noisy observations, based on solving one of the following two problems:

min TV(u) + (p/2) ||h*u - f||_2^2   or   min TV(u) + p ||h*u - f||_1

where f is an input blurry and noisy image, u is the output image, h is a blurring kernel, and p > 0 is a regularization parameter. The noise can be either Gaussian or impulsive (e.g., salt-and-pepper).
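In the same spirit, here is a sketch (again mine, not code from the FTVd package) of evaluating the two objectives, with 'same'-size convolution standing in for the blurring operator and the same anisotropic TV as above:

function val = ftvd_objective(u, f, h, p, fidelity)
% Sketch only: TV(u) + (p/2)*||h*u - f||_2^2  or  TV(u) + p*||h*u - f||_1
% fidelity is 'l2' (Gaussian noise) or 'l1' (impulsive noise)
dx = diff(u, 1, 2);                        % horizontal forward differences
dy = diff(u, 1, 1);                        % vertical forward differences
tv = sum(abs(dx(:))) + sum(abs(dy(:)));    % anisotropic TV(u)
r  = conv2(u, h, 'same') - f;              % residual h*u - f
if strcmp(fidelity, 'l2')
    val = tv + (p/2) * sum(r(:).^2);       % squared l_2 fidelity
else
    val = tv + p * sum(abs(r(:)));         % l_1 fidelity
end
end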
Both codes are now listed in the reconstruction section of the big picture.
On top of making the codes available, the authors have also created a course on Connexions entitled A Class of Fast Algorithms for Total Variation Image Restoration.
FYI, Connexions is a place to view and share educational material made of small knowledge chunks called modules that can be organized as courses, books, reports, etc. Anyone may view or contribute. Richard Baraniuk makes a case for it in this TED talk.
2 comments:
Nice post,
1) I have a question about this post: I think that the TV norm is equivalent to a Markov random field (MRF) prior with a quadratic potential. Do you think that is true?
2) I agree that for the l_1 norm we need to resort to a method that can handle the non-differentiability of the norm (subgradients, etc.), but why is that the case for the TV norm? One can write the square of the TV norm of a vector u as u^T Q u, in which Q is a PSD matrix that can be formulated. I mean, if an optimization problem does not have an l_1 norm, adding a TV norm should act like any other quadratic term, right? Please correct me if I am wrong.
Thanks,
Kayhan
Kayhan,
If I remember correctly what was said in presentations by Yves Meyer, Gabriel Peyré, Stan Osher, and Tony Chan (all different presentations),
Total Variation (TV)-based image restoration is a regularization that works well for images, i.e., data with large textured regions and discontinuities mostly along lines. That's it! It was discovered before the l_1-sparsity connection, and now we see regularization efforts that try to impose both constraints, as they both push for "higher image quality" and sparsity.
Anybody who would have a less empirical argument is welcome to chime in.
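As for your second point, here is a quick check of my own (a sketch, nothing from those presentations): if TV(u)^2 really were u^T Q u for a fixed matrix Q, it would have to satisfy the parallelogram law q(u+v) + q(u-v) = 2 q(u) + 2 q(v), and it does not, because TV is a sum of absolute values of differences rather than a quadratic. The kinks of the absolute value at zero are what force subgradient-type machinery.

% Parallelogram-law check: any fixed quadratic form q(u) = u'*Q*u satisfies
% q(u+v) + q(u-v) == 2*q(u) + 2*q(v) for all u, v. TV(u)^2 does not:
tv = @(x) sum(abs(diff(x)));    % 1-D (anisotropic) total variation
u = [1; 0; 0];
v = [0; 0; 1];
lhs = tv(u+v)^2 + tv(u-v)^2     % = 8
rhs = 2*tv(u)^2 + 2*tv(v)^2     % = 4, so TV^2 is not u'*Q*u for any fixed Q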
Igor.