
Friday, March 28, 2008

Call for Papers: Sparse Optimization and Variable Selection

Here is a workshop related to many of the themes mentioned on Nuit Blanche: an ICML/UAI/COLT 2008 workshop on Sparse Optimization and Variable Selection. There are 38 days before the submission deadline on May 5. From the site:


Call for Papers

Please submit an extended abstract (1 to 3 pages in two-column ICML format) to the workshop email address sparse.ws@gmail.com. The abstract should include author names, affiliations, and contact information. Papers will be reviewed by at least 3 members of the program committee.


Overview

Variable selection is an important issue in many applications of machine learning and statistics where the main objective is to discover predictive patterns in data that enhance our understanding of underlying physical, biological, and other natural processes, beyond just building accurate 'black-box' predictors. Common examples include biomarker selection in biological applications [1], finding brain areas predictive of 'brain states' from fMRI data [2], and identifying network bottlenecks that best explain end-to-end performance [3,4], to name a few.

Recent years have witnessed a flurry of research on algorithms and theory for variable selection and estimation under sparsity constraints. Various types of convex relaxation, particularly L1-regularization, have proven very effective: examples include the LASSO [5], boosted LASSO [6], the Elastic Net [1], L1-regularized GLMs [7], sparse classifiers such as the sparse (1-norm) SVM [8,9], and sparse dimensionality reduction methods (e.g., sparse component analysis [10], and particularly sparse PCA [11,12] and sparse NMF [13,14]). Applications of these methods are wide-ranging, including computational biology, neuroscience, graphical model selection [15], and the rapidly growing area of compressed sensing [16-19]. Theoretical work has established conditions under which various relaxation methods can recover an underlying sparse signal, derived bounds on sample complexity, and investigated trade-offs among design-matrix properties that guarantee good performance.
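As a minimal sketch of the kind of L1-regularized estimation the call refers to (not part of the workshop text; the library choice, synthetic data, and regularization strength below are all my own assumptions), here is how the LASSO selects a handful of truly predictive variables out of many candidates:

```python
# Hedged illustration only: L1-regularized regression recovering a sparse
# coefficient vector from noisy linear measurements. All parameters are
# arbitrary choices for the sake of the example.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_features, n_nonzero = 50, 200, 5

# Sparse ground truth: only a few variables are actually predictive.
beta = np.zeros(n_features)
beta[rng.choice(n_features, n_nonzero, replace=False)] = rng.normal(0, 5, n_nonzero)

X = rng.normal(size=(n_samples, n_features))
y = X @ beta + 0.1 * rng.normal(size=n_samples)

# The L1 penalty drives most estimated coefficients exactly to zero,
# performing variable selection and estimation simultaneously.
lasso = Lasso(alpha=0.1).fit(X, y)
print("true nonzeros:     ", np.flatnonzero(beta))
print("selected variables:", np.flatnonzero(lasso.coef_))
```

With far fewer samples than features, the L1 penalty is what makes the problem well-posed; an unregularized least-squares fit would not yield a sparse, interpretable set of variables.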

We would like to invite researchers working on the methodology, theory, and applications of sparse models and selection methods to share their experiences and insights into both the basic properties of the methods and the properties of the application domains that make particular methods more (or less) suitable. We hope to further explore connections between variable selection and related areas such as dimensionality reduction, optimization, and compressed sensing.

Suggested Topics

We welcome submissions on various aspects of sparsity in machine learning, from theoretical results to novel algorithms and interesting applications. Questions of interest include, but are not limited to:

* Does variable selection provide a meaningful interpretation of interest to domain experts?
* Which method (e.g., which combination of regularizers) is best suited for a particular application, and why?
* How robust is the method with respect to various types of noise in the data?
* What theoretical guarantees does the method provide on reconstruction ability, consistency, and sample complexity?

Comparison of different variable selection and dimensionality reduction methods with respect to their accuracy, robustness, and interpretability is encouraged.


I need to ask whether recordings of the tutorials will be made, as IMA did for the course on Compressed Sensing last year.
