Aleks Jakulin alerts us (in Netflix Prize scoring function isn't Bayesian) to the fact that, with examples like Napoleon Dynamite, the grading system used by Netflix is throwing all the nice recommending systems off. This doesn't stop Shiqian Ma, Donald Goldfarb and Lifeng Chen from trying to improve the current approaches to nuclear norm minimization, the convex relaxation of rank minimization, in the same spirit as minimizing the l_1 norm instead of the l_0 norm in traditional compressed sensing. Their paper is entitled Fixed point and Bregman iterative methods for matrix rank minimization. The abstract reads:
The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The linearly constrained nuclear norm minimization is a convex relaxation of this problem. Although it can be cast as a semidefinite programming problem, the nuclear norm minimization problem is expensive to solve when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm that can solve very large matrix rank minimization problems. Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3.
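To see the l_1/l_0 analogy in code, here is a minimal sketch (not the authors' implementation) of a fixed point iteration for a toy matrix completion problem: a gradient step on the data-fit term followed by soft-thresholding of the singular values, which is the matrix analogue of scalar shrinkage in l_1 minimization. The step size tau, weight mu, and problem sizes are illustrative assumptions.

```python
import numpy as np

def svd_shrink(X, threshold):
    """Soft-threshold the singular values of X (matrix analogue of the
    scalar shrinkage used in l_1 minimization)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - threshold, 0.0)
    return (U * s) @ Vt

def fixed_point_completion(M, mask, mu=0.1, tau=1.0, iters=500):
    """Recover a low-rank matrix from the entries where mask is True by
    alternating a gradient step with singular value shrinkage."""
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = np.where(mask, X - M, 0.0)  # gradient of 0.5*||P(X) - P(M)||^2
        X = svd_shrink(X - tau * grad, tau * mu)
    return X

# Toy example: a rank-2 matrix with roughly half the entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(M.shape) < 0.5
X = fixed_point_completion(M, mask)
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```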
Lee Potter, Phil Schniter, and Justin Ziniel just released A Fast Posterior Update for Sparse Underdetermined Linear Models. The abstract reads:
A Bayesian approach is adopted for linear regression, and a fast algorithm is given for updating posterior probabilities. Emphasis is given to the underdetermined and sparse case, i.e., fewer observations than regression coefficients and the belief that only a few regression coefficients are non-zero. The fast update allows for a low-complexity method of reporting a set of models with high posterior probability and their exact posterior odds. As a byproduct, this Bayesian model averaged approach yields the minimum mean squared error estimate of unknown coefficients. Algorithm complexity is linear in the number of unknown coefficients, the number of observations and the number of nonzero coefficients. For the case in which hyperparameters are unknown, a maximum likelihood estimate is found by a generalized expectation maximization algorithm.
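The point of the paper is a fast recursive posterior update; to make the underlying model-averaging idea concrete, here is a brute-force sketch on a tiny problem that enumerates all sparse supports, computes exact posterior probabilities, and forms the MMSE estimate. The activity probability p, the variances, and the sparsity cap are illustrative assumptions, not the authors' settings.

```python
from itertools import combinations
import numpy as np

def posterior_mmse(A, y, p=0.2, var_x=1.0, var_n=0.01, max_k=3):
    """Exact Bayesian model averaging over supports of size <= max_k:
    y = A x + noise, each coefficient active w.p. p, active ones N(0, var_x)."""
    m, n = A.shape
    log_posts, means = [], []
    supports = [S for k in range(max_k + 1) for S in combinations(range(n), k)]
    for S in supports:
        AS = A[:, S]
        C = var_n * np.eye(m) + var_x * AS @ AS.T       # marginal covariance of y
        _, logdet = np.linalg.slogdet(C)
        log_lik = -0.5 * (logdet + y @ np.linalg.solve(C, y))
        log_prior = len(S) * np.log(p) + (n - len(S)) * np.log(1 - p)
        log_posts.append(log_lik + log_prior)
        x = np.zeros(n)
        if S:
            x[list(S)] = var_x * AS.T @ np.linalg.solve(C, y)  # E[x | y, S]
        means.append(x)
    w = np.exp(np.asarray(log_posts) - max(log_posts))
    w /= w.sum()                                        # posterior model weights
    return sum(wi * xi for wi, xi in zip(w, means))     # MMSE estimate

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 12)) / np.sqrt(8)           # underdetermined: 8 < 12
x_true = np.zeros(12); x_true[[2, 7]] = [1.5, -1.0]
y = A @ x_true + 0.05 * rng.standard_normal(8)
print(np.round(posterior_mmse(A, y), 2))
```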
We also have three talks on CS in the coming week:
Terry Tao will give a lecture in Norway entitled "Compressed sensing" in Auditorium EL2, Electrical Engineering Building, NTNU Gløshaugen, at 11:15–12:00 on Tuesday, December 9th.
Stanley Osher (UCLA) will give the Applied Mathematics Colloquium at Caltech on Monday, December 1, 2008, at 4:00 PM in 101 Guggenheim Lab (Lees-Kubota Lecture Hall). Title: New, Fast and Effective Algorithms for Imaging, Compressed Sensing and Related Problems, with Applications.
and finally,
Leo Grady, Senior Research Scientist at Siemens Corporate Research, will talk at MIT on Wednesday 12/3 at 4pm in Room 3189 (McGovern Institute Seminar Room), MIT Building 46 (Brain and Cognitive Sciences).
Abstract:
An ideal image segmentation algorithm could be applied to the segmentation of objects (e.g., tissue classes) without any adjustment for image acquisition device or application. However, a general-purpose, multiway segmentation of objects in an image/volume remains a challenging problem. In this talk, I will describe a recently developed approach to this problem that has been successful in several Siemens products. This segmentation approach inputs a few labeled points from a user (e.g., from mouse clicks) and produces a segmentation by computing the probabilities that a random walker leaving unlabeled pixels/voxels will first strike the labeled set. These probabilities may be computed analytically and deterministically by noting the exact mathematical equivalence with a combinatorial Laplace equation. The solution of the combinatorial Laplace equation admits interpretation of the algorithm as a steady-state electrical circuit simulation or as a minimization of the Dirichlet energy. Going beyond the segmentation problem, we may employ this energy minimization interpretation to minimize the p-norm of the spatial gradient of any function over neighboring pixels. By setting p=0 (i.e., maximizing sparsity), I will show how we can also apply this energy minimization algorithm to image reconstruction for compressed sensing. Although compressed sensing and image segmentation appear to be very different problems, the conclusion of this work is that a common energy minimization approach may be used to produce very good results for both problems.
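For the curious, here is a minimal sketch of the random walker idea on a 1-D signal: the probability that a walker leaving each unseeded pixel first reaches each seed is obtained by solving the combinatorial Laplace (Dirichlet) problem with the seed labels as boundary values. The Gaussian edge-weight function and beta parameter are the usual choices from the literature, used here as illustrative assumptions rather than the exact product implementation.

```python
import numpy as np

def random_walker_1d(intensity, seeds, beta=50.0):
    """seeds: dict {pixel_index: label}. Returns a label per pixel."""
    n = len(intensity)
    # Gaussian edge weights: strong between neighbors of similar intensity.
    w = np.exp(-beta * np.diff(intensity) ** 2)
    L = np.zeros((n, n))                      # combinatorial Laplacian of the path graph
    for i, wi in enumerate(w):
        L[i, i] += wi
        L[i + 1, i + 1] += wi
        L[i, i + 1] -= wi
        L[i + 1, i] -= wi
    labels = sorted(set(seeds.values()))
    seeded = sorted(seeds)
    free = [i for i in range(n) if i not in seeds]
    probs = np.zeros((n, len(labels)))
    for j, lab in enumerate(labels):
        b = np.array([1.0 if seeds[s] == lab else 0.0 for s in seeded])
        # Dirichlet problem: L_free x = -L_boundary b, boundary values fixed at seeds.
        x = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, seeded)] @ b)
        probs[free, j] = x
        probs[seeded, j] = b
    return np.array(labels)[np.argmax(probs, axis=1)]

# Toy signal: two flat regions separated by a jump, one seed in each region.
rng = np.random.default_rng(2)
signal = np.concatenate([np.zeros(10), np.ones(10)]) + 0.01 * rng.standard_normal(20)
print(random_walker_1d(signal, seeds={0: 1, 19: 2}))
```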
I like the fact that there is a mention of an actual implementation of an l_1 minimization in working products.
[ Update: Leo just corrected me: the inclusion in hardware is not the CS part. Furthermore, p=0 is a giveaway that he is more interested in solving an l_0 problem than an l_1 one. I screwed up on this one! ]
These events have been added to the calendar.
Credit: NASA/JPL/Space Science Institute, Prometheus Brings Change to the F Ring