
Tuesday, May 14, 2013

A day at the Big Data: Theoretical and Practical Challenges Workshop

Today, I was at the Big Data: Theoretical and Practical Challenges workshop at IHP and liked it very much. In particular, I was pleasantly surprised to see the issue of phase transitions take center stage in several of the arguments presented. Michael Jordan talked about a new kind of (simple but seemingly powerful) K-means algorithm. Alexandre D'Aspremont presented new work on approximation bounds for sparse principal component analysis. Alfred Hero described phase transition issues in his talk on information gathered from correlation mining. Finally, Martin Wainwright provided a theoretical analysis of why certain gradient descent algorithms work well for particular nonconvex loss and regularization functions. I particularly liked one specific figure from [1], the paper on which the presentation was partially based.


Why? Because it reminded me of the discussion that Donoho, Johnstone, Hoch and Stern had back in 1992 about the different regularizers they were using to recover a sparse spectrum [2]. In it, they compared the L1 norm, least squares and the maximum entropy functional. In the end, L1 was found superior based on an empirical error minimum. As we have seen recently in How to find real-world applications for compressive sensing, it is only when we learned which ensembles worked well with L1 that compressive sensing was born. I wonder whether, like maximum entropy, SCAD or MCP could yield different ensembles of interest to sensing hardware makers. I also look forward to using Alfred Hero's work for calibration purposes, as he mentioned in his presentation. I am sure the slides will be made available later.
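In the meantime, here is a minimal sketch of what the kind of scheme analyzed in [1] looks like in practice: composite (proximal) gradient descent on a least-squares loss with the nonconvex MCP penalty, plus a toy comparison of the L1 (soft) and MCP (firm) thresholding operators. This is my own illustration, not the authors' code; the function names, the step-size choice 1/L, the default gamma = 3 and the toy numbers are all assumptions made for the sake of the example.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal map of the L1 penalty: every surviving coefficient is shrunk by lam."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def mcp_threshold(z, lam, gamma=3.0, step=1.0):
    """Proximal map of step * MCP(lam, gamma), i.e. firm thresholding.
    Requires gamma > step; coefficients larger than gamma*lam pass through unbiased."""
    a = np.abs(z)
    return np.where(a > gamma * lam, z,
           np.where(a > step * lam,
                    np.sign(z) * (a - step * lam) / (1.0 - step / gamma),
                    0.0))

def composite_gradient_descent(X, y, lam, gamma=3.0, n_iter=500):
    """Composite (proximal) gradient descent for
        min_beta  (1/(2n)) * ||y - X beta||^2 + MCP(beta; lam, gamma),
    using step size 1/L, where L is the Lipschitz constant of the smooth part.
    Assumes gamma > 1/L so each scalar proximal subproblem stays convex."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n    # gradient of the least-squares loss
        beta = mcp_threshold(beta - grad / L, lam, gamma, step=1.0 / L)
    return beta

# Toy illustration: L1 biases large coefficients by lam, MCP leaves them untouched.
z = np.array([0.5, 1.5, 10.0])
print(soft_threshold(z, 1.0))       # [0.   0.5  9. ]
print(mcp_threshold(z, 1.0, 3.0))   # [0.   0.75 10. ]
```

The last two lines show, on three scalars, why these penalties could behave so differently on an ensemble: soft thresholding shrinks every nonzero coefficient by lam, while the MCP operator kills the small one and returns the large one exactly.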

[1] Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima by Po-Ling Loh and Martin J. Wainwright
We establish theoretical results concerning all local optima of various regularized M-estimators, where both loss and penalty functions are allowed to be nonconvex. Our results show that as long as the loss function satisfies restricted strong convexity and the penalty function satisfies suitable regularity conditions, any local optimum of the composite objective function lies within statistical precision of the true parameter vector. Our theory covers a broad class of nonconvex objective functions, including corrected versions of the Lasso for errors-in-variables linear models; regression in generalized linear models using nonconvex regularizers such as SCAD and MCP; and graph and inverse covariance matrix estimation. On the optimization side, we show that a simple adaptation of composite gradient descent may be used to compute a global optimum up to the statistical precision epsilon in log(1/epsilon) iterations, which is the fastest possible rate of any first-order method. We provide a variety of simulations to illustrate the sharpness of our theoretical predictions.
[2] Maximum Entropy and the Nearly Black Object by Donoho, Johnstone, Hoch and Stern



Image Credit: NASA/Carla Cioffi
Expedition 35 Landing

The Soyuz TMA-07M spacecraft is seen as it lands with Expedition 35 Commander Chris Hadfield of the Canadian Space Agency (CSA), NASA Flight Engineer Tom Marshburn and Russian Flight Engineer Roman Romanenko of the Russian Federal Space Agency (Roscosmos) in a remote area near the town of Zhezkazgan, Kazakhstan, on Tuesday, May 14, 2013. Hadfield, Marshburn and Romanenko returned from five months onboard the International Space Station where they served as members of the Expedition 34 and 35 crews.
