If you think that I did not cover your stuff this month and want to be counted in the new Nuit Blanche in Review for October, please let me know.
This fall at Rice there is a new course, CAAM 654: Sparse Optimization, taught by Wotao Yin and Ming Yan. I note their judicious use of Nuit Blanche as one of the links provided to students :-) It might also have been interesting to add both the Big Picture in Compressive Sensing and the Matrix Factorization Jungle Page.
There are some interesting discussions and questions on the LinkedIn Compressive Sensing group right now. The group now boasts more than 1830 members, while the LinkedIn Matrix Factorization group has more than 444 members.
Many widely used models in unsupervised learning can be viewed as matrix decompositions, where the input matrix is expressed as sums and products of matrices drawn from a few simple priors. We present a unifying framework for matrix decompositions in terms of a context-free grammar which generates a wide variety of structures through the compositional application of a few simple rules. We use our grammar to generically and efficiently infer latent components and estimate predictive likelihood for nearly 1000 structures using a small toolbox of reusable algorithms. Using best-first search over our grammar, we can automatically choose the decomposition structure from raw data by evaluating only a tiny fraction of all models. This gives a recipe for selecting model structure in unsupervised learning situations. The proposed method almost always finds the right structure for synthetic data and backs off gracefully to simpler models under heavy noise. It learns plausible structures for datasets as diverse as image patches, motion capture, 20 Questions, and U.S. Senate votes, all using exactly the same code.
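To make the abstract's core idea concrete, here is a minimal Python sketch of how a few compositional rewrite rules can generate a large space of candidate decomposition structures. The symbol `G` and the three rules below are hypothetical stand-ins for illustration only, not the paper's actual grammar; the paper additionally scores each candidate on data and uses best-first search to expand only the most promising ones.

```python
# Hypothetical rewrite rules: each replaces one component symbol 'G'
# with a composite expression (illustrative, not the paper's grammar).
RULES = ["GG + G", "exp(G)", "cluster(G)"]

def expand(structure):
    """Yield every structure reachable by rewriting one 'G' with one rule."""
    for i, ch in enumerate(structure):
        if ch == "G":
            for rule in RULES:
                yield structure[:i] + "(" + rule + ")" + structure[i + 1:]

def enumerate_structures(max_depth):
    """Breadth-first enumeration of all structures up to a rewrite depth."""
    frontier = ["G"]
    seen = set(frontier)
    for _ in range(max_depth):
        next_frontier = []
        for s in frontier:
            for child in expand(s):
                if child not in seen:
                    seen.add(child)
                    next_frontier.append(child)
        frontier = next_frontier
    return sorted(seen)

structures = enumerate_structures(2)
print(len(structures))  # already 19 distinct candidates at depth 2
```

Even with three toy rules the space grows quickly with depth, which is why the abstract emphasizes that best-first search lets them choose a structure while evaluating only a tiny fraction of all models.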
From: Haim Avron <firstname.lastname@example.org>
Date: Fri, 19 Oct 2012 11:32:59 -0400
Subject: Postdoc Position, Randomized Numerical Linear Algebra, IBM

The High Performance Computing for Analytics group within the Business Analytics and Mathematical Sciences Department at IBM's T.J. Watson Research Center is seeking a Post Doctoral Researcher to work on investigating and implementing randomized numerical linear algebra kernels for distributed computing platforms, with applications to machine learning problems. The candidate is expected to contribute to the development of new ideas and implementations, publish in top-tier journals, and file patent disclosures when appropriate.

We are especially interested in candidates who have experience and strong interest in large-scale distributed data analysis with an emphasis on linear algebra techniques. The successful candidate will work with an interdisciplinary team of researchers. The candidate must have strong programming capabilities and excellent verbal and written skills. Preference may be given to candidates with extensive knowledge of C/C++ along with MPI and multi-threaded programming. Knowledge of Python is a plus. PhD candidates in Computer Science or Mathematics are preferred.

For more information on the requirements, and to apply, see: https://jobs3.netmedia1.com/cp/job_summary.jsp?job_id=RES-0526905

IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Image Credit: NASA/JPL-Caltech
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.