Friday, January 30, 2009

CS: Two jobs, Optically multiplexed imaging, Robust estimation of Gaussian mixtures by l_1 penalization, multivariate distribution for subimages, ASIFT

Pierre Vandergheynst has two open positions in his lab. Both involve sparsity and Compressive Sensing. They are also listed on the Compressive Sensing Jobs page.




Continuing on the concept of multiplexed imaging: yesterday it was done through a coded aperture, today the set-up is a little different and aims at a specific task, namely tracking. The paper is entitled: Optically multiplexed imaging with superposition space tracking by Shikhar Uttam, Nathan A. Goodman, Mark A. Neifeld, Changsoon Kim, Renu John, Jungsang Kim, and David Brady. The abstract reads:

We describe a novel method to track targets in a large field of view. This method simultaneously images multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target’s true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. Potential encoding schemes include spatial shift, rotation, and magnification. We discuss each of these encoding schemes, but the main emphasis of the paper and all examples are based on one-dimensional spatial shift encoding. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. Finally, we include simulation and experimental results demonstrating our novel tracking method.
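
To make the shift-encoding idea concrete, here is a minimal one-dimensional sketch of locating a point target directly in superposition space: each sub-field of view is shifted by a unique code before being summed onto a common focal plane, and the peak position is decoded back through the code. The number of sub-fields, the shift values and the noise level are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

n_sub = 4                            # number of sub-fields of view (assumed)
sub_len = 100                        # samples per sub-field (assumed)
shifts = np.array([0, 7, 19, 38])    # unique shift code per sub-field (assumed)

# Place a single point target somewhere in the large field of view.
true_field = rng.integers(n_sub)
true_pos = rng.integers(sub_len)
scene = np.zeros((n_sub, sub_len))
scene[true_field, true_pos] = 1.0

# Superposition space: every sub-field is shifted by its code and summed
# onto one common focal plane; no conventional wide-field image is formed.
focal_len = sub_len + int(shifts.max())
superposition = np.zeros(focal_len)
for k in range(n_sub):
    superposition[shifts[k]:shifts[k] + sub_len] += scene[k]
superposition += 0.05 * rng.standard_normal(focal_len)   # detector noise

# Decoding: a peak at position p maps back to the candidate true positions
# p - shifts[k]; only codes that land inside a sub-field are feasible.  With
# a single frame several candidates may remain; the paper resolves the
# ambiguity over time by tracking.
p = int(np.argmax(superposition))
candidates = [(k, p - shifts[k]) for k in range(n_sub)
              if 0 <= p - shifts[k] < sub_len]
print("true:", (int(true_field), int(true_pos)), "candidates:", candidates)
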
On a somewhat different note, since tracking can also be performed with this kind of modeling, here is Robust estimation of Gaussian mixtures by l_1 penalization: an experimental study by Stephane Chretien. The abstract reads:
Many experiments in medicine and ecology can be conveniently modeled by finite Gaussian mixtures but face the problem of dealing with small data sets. We propose a robust version of the estimator based on self-regression and sparsity-promoting penalization in order to estimate the components of Gaussian mixtures in such contexts. A space alternating version of the penalized EM algorithm is obtained and we prove that its cluster points satisfy the Karush-Kuhn-Tucker conditions. Monte Carlo experiments are presented in order to compare the results obtained by our method and by standard maximum likelihood estimation. In particular, our estimator is seen to perform much better than the maximum likelihood estimator.
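
If you want to reproduce the maximum likelihood baseline the abstract compares against, here is a minimal sketch that fits a two-component Gaussian mixture to a small synthetic data set. scikit-learn is an assumed tool here; the paper's l_1-penalized, space-alternating EM is not part of that library.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Small synthetic data set: two 1-D Gaussian components with few samples,
# mimicking the small-sample regime discussed in the abstract.
x = np.concatenate([rng.normal(-2.0, 0.5, 15),
                    rng.normal(+2.0, 1.0, 15)]).reshape(-1, 1)

# Standard (unpenalized) EM fit, i.e. plain maximum likelihood estimation.
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      n_init=5, random_state=0).fit(x)
print("weights:", gmm.weights_)
print("means:  ", gmm.means_.ravel())
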
Tracking is also of importance in A multivariate distribution for subimages by Steve Maybank. The abstract reads:
A new method for obtaining multivariate distributions for sub-images of natural images is described. The information in each sub-image is summarized by a measurement vector in a measurement space. The dimension of the measurement space is reduced by applying a random projection to the truncated output of the discrete cosine transforms of the sub-images. The measurement space is then reparametrized, such that a Gaussian distribution is a good model for the measurement vectors in the reparametrized space. An Ornstein–Uhlenbeck process, associated with the Gaussian distribution, is used to model the differences between measurement vectors obtained from matching sub-images. The probability of a false alarm and the probability of accepting a correct match are calculated. The accuracy of the resulting statistical model for matching sub-images is tested using images from the MIDDLEBURY stereo database with promising results. In particular, if the probability of accepting a correct match is relatively large, then there is good agreement between the calculated and the experimental probabilities of obtaining a unique match that is also a correct match.
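
The measurement pipeline described in the abstract, a truncated DCT of each sub-image followed by a random projection, is easy to prototype. The block size, truncation and projected dimension below are illustrative choices, not the paper's.

import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(2)

def measurement_vector(subimage, proj, keep=8):
    """Summarize a square sub-image by a low-dimensional measurement vector."""
    coeffs = dctn(subimage, norm="ortho")      # 2-D discrete cosine transform
    truncated = coeffs[:keep, :keep].ravel()   # keep the low-frequency block
    return proj @ truncated                    # Gaussian random projection

# One fixed random projection from 8x8 = 64 DCT coefficients down to 10.
keep, out_dim = 8, 10
proj = rng.standard_normal((out_dim, keep * keep)) / np.sqrt(out_dim)

# A random 16x16 sub-image and a slightly perturbed "matching" copy: the
# distance between their measurement vectors stays small.
patch = rng.random((16, 16))
v1 = measurement_vector(patch, proj, keep)
v2 = measurement_vector(patch + 0.01 * rng.standard_normal((16, 16)), proj, keep)
print("distance between matching sub-images:", np.linalg.norm(v1 - v2))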

In a similar vein, if you have ever tried SIFT, you know that it sometimes does not work as well as expected. There is a new algorithm called ASIFT, developed by Jean-Michel Morel and Guoshen Yu, that seems to be very efficient. You can try it directly on two of your own images at: http://mw.cmla.ens-cachan.fr/megawave/demo/asift/

Summary:
A fully affine invariant image comparison method, Affine SIFT (ASIFT), is introduced. While SIFT is fully invariant with respect to only four parameters, the new method treats the two remaining parameters: the angles defining the camera axis orientation. Contrary to what one might expect, simulating all views depending on these two parameters is feasible with no dramatic computational load. The method makes it possible to reliably identify features that have undergone large transition tilts, up to 36 and more, while state-of-the-art affine normalization methods hardly exceed transition tilts of 2 (SIFT), 2.5 (Harris-Affine and Hessian-Affine) and 10 (MSER).

References:
J.M. Morel and G. Yu, ASIFT: A New Framework for Fully Affine Invariant Image Comparison, to appear in SIAM Journal on Imaging Sciences, 2009.
G. Yu and J.M. Morel, A Fully Affine Invariant Image Comparison Method, accepted to IEEE ICASSP, Taipei, 2009.
J.M. Morel and G. Yu, On the Consistency of the SIFT Method, Preprint CMLA 2008-26, September 2008.
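
If you just want a feel for the view-simulation idea without running the demo, below is a rough ASIFT-like sketch built on OpenCV's SIFT detector: the image is warped for a small grid of rotations and tilts, SIFT is run on each simulated view, and the keypoints are pooled. The sampling grid, the anti-aliasing blur and the image path in the usage line are simplified or assumed; this does not reproduce the published algorithm exactly.

import cv2
import numpy as np

def simulated_views(img, tilts=(1, 2, 4), n_phi=4):
    """Yield affinely warped views of img for sampled rotations and tilts."""
    h, w = img.shape[:2]
    for t in tilts:
        for phi in np.linspace(0, 180, n_phi, endpoint=False):
            # Rotate by phi, then compress one axis by the tilt factor t.
            A = cv2.getRotationMatrix2D((w / 2, h / 2), float(phi), 1.0)
            A[0, :] /= t                       # directional 1/t subsampling
            warped = cv2.warpAffine(img, A, (w, h))
            if t > 1:                          # mild anti-aliasing blur
                warped = cv2.GaussianBlur(warped, (0, 0), 0.8 * np.sqrt(t * t - 1))
            yield warped

def asift_like_features(img):
    """Pool SIFT keypoints and descriptors over all simulated views."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = [], []
    for view in simulated_views(img):
        kps, des = sift.detectAndCompute(view, None)
        if des is not None:
            # Note: ASIFT proper also maps keypoint coordinates back to the
            # original image through the inverse affine transform; omitted here.
            keypoints += list(kps)
            descriptors.append(des)
    return keypoints, np.vstack(descriptors)

# Usage (image path is a placeholder):
# kps, des = asift_like_features(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))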


Here are some comparisons between SIFT and ASIFT in videos; they are pretty stunning:






Compare that to SIFT only:
