Using random projection in a random forest approach. Interesting!
From the paper, I note the connection to Robust PCA:
Moreover, we demonstrate that a variant of RerF is both approximately affine invariant and robust to outliers, two properties that are relatively easy to obtain independently, though difficult to obtain jointly and inexpensively (see recent work on robust PCA following Candes et al., 2009 [9]).
and further:
Instead we use random projections: we generate matrices A that are distributed in a rotation invariant fashion, but maintain the space and time complexity of RFs, by employing very sparse random projections [10]. Rather than sampling d non-zero elements of A, enforcing that each column gets a single non-zero number (without replacement), which is always 1, we relax these constraints and select d non-zero numbers from f
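The excerpt above can be made concrete with a small sketch. This is not the authors' implementation; the function name, the `density` parameter, and the choice of sampling non-zero entries uniformly from {-1, +1} are assumptions made for illustration, following the very sparse random projections idea the excerpt references:

```python
import numpy as np

def sparse_random_projection(p, d, density=0.1, rng=None):
    """Sketch of a very sparse random projection matrix.

    Builds a p x d matrix A in which a small fraction of entries
    are non-zero, each non-zero entry sampled as +1 or -1.
    (Hypothetical helper; parameter names are assumptions.)

    p       -- number of input features
    d       -- number of projected candidate split directions
    density -- fraction of entries of A that are non-zero
    """
    rng = np.random.default_rng(rng)
    A = np.zeros((p, d))
    nnz = max(1, int(round(density * p * d)))
    # pick nnz distinct positions in A and fill them with +/-1
    idx = rng.choice(p * d, size=nnz, replace=False)
    A.flat[idx] = rng.choice([-1.0, 1.0], size=nnz)
    return A

# Usage: project the data, then search for the best threshold split
# over the projected columns instead of the raw features.
X = np.random.default_rng(0).normal(size=(100, 20))
A = sparse_random_projection(p=20, d=5, rng=0)
X_proj = X @ A  # each column is a sparse +/-1 combination of features
```

Because each column of A mixes only a handful of features with unit weights, the projection costs little more than the axis-aligned feature sampling of a standard random forest, which is the space/time claim the excerpt makes.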
Here is the paper: Randomer Forests by Tyler M. Tomita, Mauro Maggioni, Joshua T. Vogelstein
Random forests (RF) is a popular general purpose classifier that has been shown to outperform many other classifiers on a variety of datasets. The widespread use of random forests can be attributed to several factors, some of which include its excellent empirical performance, scale and unit invariance, robustness to outliers, time and space complexity, and interpretability. While RF has many desirable qualities, one drawback is its sensitivity to rotations and other operations that "mix" variables. In this work, we establish a generalized forest building scheme, linear threshold forests. Random forests and many other currently existing decision forest algorithms can be viewed as special cases of this scheme. With this scheme in mind, we propose a few special cases which we call randomer forests (RerFs). RerFs are linear threshold forests that exhibit all of the nice properties of RF, in addition to approximate affine invariance. In simulated datasets designed for RF to do well, we demonstrate that RerF outperforms RF. We also demonstrate that one particular variant of RerF is approximately affine invariant. Lastly, in an evaluation on 121 benchmark datasets, we observe that RerF outperforms RF. We therefore putatively propose that RerF be considered a replacement for RF as the general purpose classifier of choice. Open source code is available at this http URL