Tuesday, March 11, 2014

Path Thresholding: Asymptotically Tuning-Free High-Dimensional Sparse Regression - implementation -


In this paper, we address the challenging problem of selecting tuning parameters for high-dimensional sparse regression. We propose a simple and computationally efficient method, called path thresholding (PaTh), that transforms any tuning parameter-dependent sparse regression algorithm into an asymptotically tuning-free sparse regression algorithm. More specifically, we prove that, as the problem size becomes large (in the number of variables and in the number of observations), PaTh performs accurate sparse regression, under appropriate conditions, without specifying a tuning parameter. In finite-dimensional settings, we demonstrate that PaTh can alleviate the computational burden of model selection algorithms by significantly reducing the search space of tuning parameters.
Divyanshu tells me the implementation is here at http://dsp.rice.edu/software/path

From the page:

Path Thresholding (PaTh)
Selecting tuning parameters in sparse regression algorithms is known to be a difficult task, since the optimal tuning parameter depends either on the unknown sparsity level of the sparse vector or on the unknown noise variance. In practice, one resorts to either cross-validation (CV) or information-criterion-based methods. Both are known to be suboptimal in the high-dimensional setting, where the number of variables is much greater than the number of observations. Although stability selection is suitable for high-dimensional problems, it can require significant computation.
PaTh has two appealing properties:
(1) It can be used with ANY sparse regression algorithm.
(2) It transforms the model-dependent tuning parameter into a parameter that is a constant.
When given a sufficient number of observations, while still in the high-dimensional regime, the performance of PaTh is independent of any tuning parameter. In other settings, PaTh can significantly reduce the number of possible sparse solutions. The code is written so that PaTh can be used with any implementation of a sparse regression algorithm; in particular, we demonstrate how to use it with the Lasso, OMP, and SWAP.
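To make the plug-in interface concrete, here is a minimal Python sketch of a PaTh-style wrapper around a generic solver. The solver signature, the constant c, and the log(p)/n stopping rule below are illustrative assumptions on my part, not the authors' exact algorithm; see the Rice page above for the real code.

import numpy as np

def path_thresholding(solver, X, y, c=2.0, k_max=None):
    # Sketch of the PaTh idea under assumed interfaces: solver(X, y, k)
    # is ANY sparse regression routine that returns the support (column
    # indices) of a k-sparse estimate, e.g. OMP or the Lasso.
    n, p = X.shape
    if k_max is None:
        k_max = min(n // 2, p)
    prev_res = float(y @ y)            # residual energy, empty support
    selected = np.array([], dtype=int)
    for k in range(1, k_max + 1):
        support = np.asarray(solver(X, y, k))
        # Least-squares refit on the selected support.
        beta, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        res = float(np.sum((y - X[:, support] @ beta) ** 2))
        # Stop once an extra variable no longer reduces the residual by
        # more than a log(p)/n fraction; the constant c stands in for the
        # model-dependent tuning parameter (assumed rule, for intuition).
        if prev_res - res <= c * np.log(p) / n * prev_res:
            break
        selected, prev_res = support, res
    return selected

Any routine with that interface plugs in; for example, scikit-learn's OrthogonalMatchingPursuit gives solver = lambda X, y, k: np.flatnonzero(OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(X, y).coef_).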

Of related interest: High-Dimensional Screening Using Multiple Grouping of Variables by Divyanshu Vats

Screening is the problem of finding a superset of the set of non-zero entries in an unknown p-dimensional vector \beta* given n noisy observations. Naturally, we want this superset to be as small as possible. We propose a novel framework for screening, which we refer to as Multiple Grouping (MuG), that groups variables, performs variable selection over the groups, and repeats this process multiple times to estimate a sequence of sets, each of which contains the non-zero entries in \beta*. Screening is done by taking an intersection of all these estimated sets. The MuG framework can be used in conjunction with any group-based variable selection algorithm. In the high-dimensional setting, where p >> n, we show that when MuG is used with the group Lasso estimator, screening can be consistently performed without using any tuning parameter. Our numerical simulations clearly show the merits of using the MuG framework in practice.
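The group-and-intersect loop is simple enough to sketch. Below is an illustrative Python version, assuming a black-box group_selector (e.g. a group-Lasso fit that returns the indices of the groups with non-zero coefficients); the random equal-size partition and all names here are mine, not the paper's API.

import numpy as np

def mug_screen(group_selector, X, y, n_repeats=10, group_size=5, seed=0):
    # Sketch of MuG screening: repeatedly regroup the variables, select
    # groups, and intersect the surviving variable sets across repeats.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    screened = set(range(p))
    for _ in range(n_repeats):
        # Random partition of the p variables into equal-size groups
        # (an assumed grouping scheme, for illustration).
        perm = rng.permutation(p)
        groups = [perm[i:i + group_size] for i in range(0, p, group_size)]
        # group_selector(X, y, groups) returns the indices of groups
        # judged non-zero, e.g. by a group-Lasso fit.
        kept = set()
        for g in group_selector(X, y, groups):
            kept.update(groups[g].tolist())
        screened &= kept            # intersect across regroupings
    return sorted(screened)

Intersecting over many random regroupings is what screens out variables that survive only by luck in a single grouping.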
