
Sunday, September 04, 2011

It's stunning and quite amazingly rich and no ... it's not your father's signal processing

Whoever has ever calibrated a camera has at some point had to use the lifesaving toolbox of Jean-Yves Bouguet. Whoever reads the new Q&A site on signal processing will certainly find it a lifesaver too, but, yawn, all this is quite plainly boooooring. The reason I started the Matrix Factorization page (and the attendant series of Matrix Factorization entries) is that we are beginning to see powerful new tools that can perform both new and old signal processing tasks. The connection to compressive sensing is indirect: on the one hand, many of these techniques were born out of the successful solvers devised for the reconstruction of signals in compressive sensing; more importantly, tools that calibrate 'normal' sensors by removing noise that is anything but Gaussian are a much welcome addition to the whole field of compressive sensor design, and yes, that includes MRI as well. The following results are interesting on their own and are going to change the way we do signal processing and calibration. You don't believe me? Take a look and read the following jaw-dropping papers:
First up is TILT: Transform Invariant Low-rank Textures by Zhengdong Zhang, Arvind Ganesh, Xiao Liang, and Yi Ma. The abstract reads:
In this paper, we show how to efficiently and effectively extract a class of "low-rank textures" in a 3D scene from 2D images despite significant corruptions and warping. The low-rank textures capture geometrically meaningful structures in an image, which encompass conventional local features such as edges and corners as well as all kinds of regular, symmetric patterns ubiquitous in urban environments and man-made objects. Our approach to finding these low-rank textures leverages the recent breakthroughs in convex optimization that enable robust recovery of a high-dimensional low-rank matrix despite gross sparse errors. In the case of planar regions with significant affine or projective deformation, our method can accurately recover both the intrinsic low-rank texture and the precise domain transformation, and hence the 3D geometry and appearance of the planar regions. Extensive experimental results demonstrate that this new technique works effectively for many regular and near-regular patterns or objects that are approximately low-rank, such as symmetrical patterns, building facades, printed texts, and human faces.
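The "recent breakthroughs in convex optimization" the abstract leans on are the relaxations behind Robust PCA, i.e. principal component pursuit: split an observed matrix D into a low-rank part A and a sparse error E by minimizing ||A||_* + lam ||E||_1 subject to D = A + E. Below is a compact sketch of the standard inexact augmented Lagrangian iteration; the parameter choices follow common practice in the RPCA literature and are assumptions on my part, not TILT's actual inner solver.

```python
import numpy as np

def shrink(X, tau):
    """Soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def principal_component_pursuit(D, n_iters=200):
    """Split D into low-rank A plus sparse E via an inexact augmented
    Lagrangian iteration on min ||A||_* + lam ||E||_1  s.t.  D = A + E."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))           # standard weight from the RPCA papers
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    Y = np.zeros_like(D)                     # dual variable for D = A + E
    E = np.zeros_like(D)
    for _ in range(n_iters):
        A = svd_shrink(D - E + Y / mu, 1.0 / mu)   # low-rank update
        E = shrink(D - A + Y / mu, lam / mu)       # sparse-error update
        Y = Y + mu * (D - A - E)                   # dual ascent on the constraint
    return A, E
```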

One of the most compelling parts of the story is hidden in Remark 4:
Remark 4 (TILT vs. Transformed PCA.) One might argue that the low-rank objective can be directly enforced, as in Transformed Component Analysis (TCA) proposed by Frey and Jojic (1999), which uses an EM algorithm to compute principal components, subject to domain transformations drawn from a known group. The TCA deals with Gaussian noise and essentially minimizes the 2-norm of the error term E. So the reader might wonder if such a "transformed principal component analysis" approach could apply to our image rectification problem here. Let us ignore gross corruption or occlusion for the time being. We could attempt to recover a rank-r texture by solving the following optimization problem:
\min_{I_0, \tau} \| I \circ \tau - I_0 \|_F^2 \quad \text{s.t.} \quad \mathrm{rank}(I_0) \le r \qquad (4)
One can solve (4) by minimizing against the low-rank component I_0 and the deformation \tau iteratively: with \tau fixed, estimate the rank-r component I_0 via PCA, and with I_0 fixed, solve the deformation \tau in a greedy fashion to minimize the least-squares objective.
Figure 3 shows some representative results of using such a "Transformed PCA" approach. However, even for simple patterns like the checkerboard, it works only with a correct initial guess of the rank r = 2 beforehand. If we assume a wrong rank, say r = 1 or 3, solving (4) would not converge to a correct solution, even with a small initial deformation. For complex textures like the building facade shown in Figure 3, whose rank is impossible to guess in advance, we have to try all possibilities. Moreover, (4) can only handle small Gaussian noise. For images taken in the real world, partial occlusion and other types of corruption are often present. The naive transformed PCA does not work robustly for such images. As we will see in the rest of this paper, the TILT algorithm that we propose next can automatically find the minimal matrix rank in an efficient manner and handle very large deformations and non-Gaussian errors of large magnitude.
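To make the alternation in Remark 4 concrete, here is a minimal sketch of the naive Transformed PCA baseline, assuming a grayscale image stored as a numpy array and an affine \tau; every function name and parameter choice is illustrative, not taken from the TILT code. As the remark explains, this baseline needs the rank r guessed correctly and breaks under gross corruption, which is exactly what TILT fixes.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def warp(image, p):
    """Resample `image` under the affine coordinate map parameterized by
    p = (a11, a12, a21, a22, t1, t2)."""
    A = p[:4].reshape(2, 2)
    return affine_transform(image, A, offset=p[4:6], order=1, mode="nearest")

def rank_r_projection(M, r):
    """Best rank-r approximation of M in Frobenius norm (truncated SVD)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def transformed_pca(image, r, n_iters=10):
    p = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])   # start from the identity map
    for _ in range(n_iters):
        I0 = rank_r_projection(warp(image, p), r)  # tau fixed: PCA step
        # I0 fixed: least-squares update of tau on || I o tau - I0 ||_F^2
        objective = lambda q: np.sum((warp(image, q) - I0) ** 2)
        p = minimize(objective, p, method="Nelder-Mead").x
    return rank_r_projection(warp(image, p), r), p
```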



In the recent CVPR conference, some of the authors took the concept further with Camera Calibration with Lens Distortion from Low-rank Textures by Zhengdong Zhang, Yasuyuki Matsushita, and Yi Ma. The abstract reads:
We present a simple, accurate, and flexible method to calibrate intrinsic parameters of a camera together with (possibly significant) lens distortion. This new method can work under a wide range of practical scenarios: using multiple images of a known pattern, multiple images of an unknown pattern, single or multiple image(s) of multiple patterns, etc. Moreover, this new method does not rely on extracting any low-level features such as corners or edges. It can tolerate considerably large lens distortion, noise, error, illumination and viewpoint change, and still obtain accurate estimation of the camera parameters. The new method leverages the recent breakthroughs in powerful high-dimensional convex optimization tools, especially those for matrix rank minimization and sparse signal recovery. We will show how the camera calibration problem can be formulated as an important extension to principal component pursuit, and solved by similar techniques. We characterize to exactly what extent the parameters can be recovered in case of ambiguity. We verify the efficacy and accuracy of the proposed algorithm with extensive experiments on real images.
Wow, no need for specific checkerboards! But it doesn't stop there. The first of two related radiometric calibration papers is Radiometric Calibration by Transform Invariant Low-rank Structure; the abstract reads:

We present a robust radiometric calibration method that capitalizes on the transform invariant low-rank structure of sensor irradiances recorded from a static scene with different exposure times. We formulate the radiometric calibration problem as a rank minimization problem. Unlike previous approaches, our method naturally avoids over-fitting problem; therefore, it is robust against biased distribution of the input data, which is common in practice. When the exposure times are completely unknown, the proposed method can robustly estimate the response function up to an exponential ambiguity. The method is evaluated using both simulation and real-world datasets and shows a superior performance than previous approaches.

Radiometric Calibration by Rank Minimization by Joon-Young Lee, Yasuyuki Matsushita, Boxin Shi, In-So Kweon, and Katsushi Ikeuchi. The abstract reads:
We present a robust radiometric calibration framework that capitalizes on the transform invariant low-rank structure in the various types of observations, such as sensor irradiances recorded from a static scene with different exposure times, or linear structure of irradiance color mixtures around edges. We show that various radiometric calibration problems can be treated in a principled framework that uses a rank minimization approach. Unlike previous approaches, our method can avoid the over-fitting problem; therefore, it is robust against noise and biased distributions of the input data, which are common in practice. The proposed approach is evaluated using both simulation and real-world datasets and shows superior performance to previous approaches.
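The rank argument is easy to see: for a static scene, pixel i observed at exposure t_j records f(E_i t_j), so applying the inverse response g = f^{-1} to the observation matrix M gives g(M)_{ij} = E_i t_j, a rank-1 matrix. Here is a minimal sketch of that idea, assuming M holds intensities in [0, 1] with one row per pixel and one column per exposure; the polynomial parameterization and the trailing-singular-value objective are my illustrative choices, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def inverse_response(M, c):
    """g(x) = x + sum_k c_k (x^k - x) for k = 2..deg, so g(0) = 0 and g(1) = 1."""
    g = M.copy()
    for k, ck in enumerate(c, start=2):
        g = g + ck * (M ** k - M)
    return g

def rank1_deficit(c, M):
    """Fraction of spectral energy outside the top singular value of g(M);
    it vanishes exactly when g(M) is rank 1."""
    s = np.linalg.svd(inverse_response(M, c), compute_uv=False)
    return np.sum(s[1:] ** 2) / np.sum(s ** 2)

def calibrate(M, degree=4):
    """M: (pixels x exposures) intensities in [0, 1]; returns the polynomial
    coefficients of an estimated inverse response function."""
    c0 = np.zeros(degree - 1)
    return minimize(rank1_deficit, c0, args=(M,), method="Nelder-Mead").x
```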


The attendant presentation slides for this paper are here. Next, a paper on joint denoising and demosaicing of low-light images; its abstract reads:

We address the effects of noise in low-light images. Color images are usually captured by a sensor with a color filter array (CFA) requiring a demosaicing process to generate a full color image. The captured images typically have low signal-to-noise ratio, and the demosaicing step further corrupts the image, which we show to be the leading cause of visually objectionable random noise patterns (splotches). To avoid this problem, we propose a combined framework of denoising and demosaicing, where we use information about the image inferred in the denoising step to perform demosaicing. Our experiments show that such a framework results in sharper low-light images that are devoid of splotches and other noise artifacts.
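A toy sketch of why the ordering matters, not the paper's algorithm: interpolating a noisy Bayer mosaic spreads single-pixel noise into correlated color splotches, whereas filtering each CFA phase of the raw mosaic first hands the demosaicer cleaner samples. All names below are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bayer_mosaic(rgb):
    """Sample an RGGB Bayer pattern from a full-color (h, w, 3) image."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return mosaic

def denoise_mosaic(mosaic, sigma=1.0):
    """Smooth each CFA phase separately *before* demosaicing, so the
    interpolation step receives cleaned samples instead of spreading
    per-pixel noise into correlated color splotches."""
    out = np.empty_like(mosaic)
    for di in (0, 1):
        for dj in (0, 1):
            out[di::2, dj::2] = gaussian_filter(mosaic[di::2, dj::2], sigma)
    return out
```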

The project site is here. And finally,

High-resolution Hyperspectral Imaging via Matrix Factorization by Rei Kawakami, John Wright, Yu-Wing Tai, Yasuyuki Matsushita, Moshe Ben-Ezra, and Katsushi Ikeuchi. The abstract reads:
Hyperspectral imaging is a promising tool for applications in geosensing, cultural heritage and beyond. However, compared to current RGB cameras, existing hyperspectral cameras are severely limited in spatial resolution. In this paper, we introduce a simple new technique for reconstructing a very high-resolution hyperspectral image from two readily obtained measurements: A lower-resolution hyperspectral image and a high-resolution RGB image. Our approach is divided into two stages: We first apply an unmixing algorithm to the hyperspectral input, to estimate a basis representing reflectance spectra. We then use this representation in conjunction with the RGB input to produce the desired result. Our approach to unmixing is motivated by the spatial sparsity of the hyperspectral input, and casts the unmixing problem as the search for a factorization of the input into a basis and a set of maximally sparse coefficients. Experiments show that this simple approach performs reasonably well on both simulations and real data examples.
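A hedged sketch of the two-stage pipeline, assuming the low-resolution hyperspectral cube is flattened to a (pixels x bands) matrix and that the RGB camera's spectral response `srf` is known; I stand in sklearn's dictionary learning for the paper's own sparse unmixing step, so treat every name and parameter here as an assumption rather than the authors' method.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import DictionaryLearning

def learn_spectral_basis(H_lowres, n_atoms=8):
    """Stage 1. H_lowres: (pixels x bands) low-res hyperspectral data.
    Learn a small dictionary of reflectance spectra with sparse codes."""
    dl = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                            fit_algorithm="lars")
    dl.fit(H_lowres)
    return dl.components_                      # (n_atoms x bands)

def upsample_spectra(rgb_hires, basis, srf):
    """Stage 2. rgb_hires: (h, w, 3); srf: assumed (3 x bands) spectral
    response. Solve a tiny nonnegative least-squares problem per pixel."""
    B_rgb = srf @ basis.T                      # project spectra into RGB: (3 x n_atoms)
    h, w, _ = rgb_hires.shape
    out = np.zeros((h, w, basis.shape[1]))
    for i in range(h):
        for j in range(w):
            coeffs, _ = nnls(B_rgb, rgb_hires[i, j])
            out[i, j] = coeffs @ basis         # high-res spectrum at this pixel
    return out
```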

The following e-mail and paper came out of left field over the weekend:

Hi Igor,
.... I noticed that you have a special post about matrix factorization recently. I thought that you may want to check this paper that we have published recently:

http://www.ncbi.nlm.nih.gov/pubmed/21791408 pdf file: https://www.rad.upenn.edu/sbia/Nematollah.Batmanghelich/Kayhan.Batmanghelich/Publications_files/batmanghelich_final.pdf
In some sense it has similarities with another work [1] that is published at the same time; I guess you have recently featured its software :
http://nuit-blanche.blogspot.com/2011/07/sparse-modeling-software-now-open.html
We are going to release its software soon. Now that I have learned about SPAMS, I would love to see these two pieces of softwares combined with each other at some point.
Regards,
Kayhan

[1] http://arxiv.org/abs/1009.5358

Thanks Kayhan, one more reason for the Matrix Factorization Jungle page to exist. Here is the paper, which uses a different kind of structured norm.


Generative-Discriminative Basis Learning for Medical Imaging by Nematollah K. Batmanghelich, Ben Taskar, Christos Davatzikos. The abstract reads:
This paper presents a novel dimensionality reduction method for classification in medical imaging. The goal is to transform very high-dimensional input (typically, millions of voxels) to a low-dimensional representation (a small number of constructed features) that preserves discriminative signal and is clinically interpretable. We formulate the task as a constrained optimization problem that combines generative and discriminative objectives and show how to extend it to the semi-supervised learning (SSL) setting. We propose a novel large-scale algorithm to solve the resulting optimization problem. In the fully supervised case, we demonstrate accuracy rates that are better than or comparable to state-of-the-art algorithms on several datasets while producing a representation of the group difference that is consistent with prior clinical reports. Effectiveness of the proposed algorithm for SSL is evaluated with both benchmark and medical imaging datasets. In the benchmark datasets, the results are better than or comparable to the state-of-the-art methods for SSL. For evaluation of the SSL setting in medical datasets, we use images of subjects with Mild Cognitive Impairment (MCI), which is believed to be a precursor to Alzheimer's disease (AD), as unlabeled data. AD subjects and Normal Control (NC) subjects are used as labeled data, and we try to predict conversion from MCI to AD on follow-up. The semi-supervised extension of this method not only improves the generalization accuracy for the labeled data (AD/NC) slightly but is also able to predict which subjects are likely to convert to AD.
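To fix ideas, here is what "combining generative and discriminative objectives" can look like in miniature: a reconstruction term on a basis-times-codes factorization plus a logistic loss on a linear classifier over the codes. This is an illustration of the combined objective only, not the authors' constraints or their large-scale algorithm.

```python
import numpy as np

def combined_objective(X, y, B, C, w, lam=1.0):
    """X: (voxels x subjects), B: (voxels x features), C: (features x subjects),
    y: labels in {-1, +1}, w: linear classifier weights on the codes."""
    generative = np.linalg.norm(X - B @ C, "fro") ** 2   # reconstruct the data
    margins = y * (w @ C)                                # classify the codes
    discriminative = np.sum(np.log1p(np.exp(-margins)))  # logistic loss
    return generative + lam * discriminative
```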




Liked this entry? Subscribe to the Nuit Blanche feed, there's more where that came from.
