In the Matrix Factorization Jungle page, there is a subsection for subspace clustering that reads:
Subspace Clustering: A = AX with unknown X, solve for sparse/other conditions on X
Until recently, the conditions imposed on X in subspace clustering algorithms focused on X having a zero main diagonal and sparse entries otherwise. In today's paper, the authors seek a low rank version of X instead and call that matrix/operator a linear autoencoder.
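For readers who want to see the zero-diagonal, sparse-X recipe in action, here is a minimal sketch in the spirit of SSC (Sparse Subspace Clustering, listed in the references below); the solver choice (scikit-learn's Lasso), the penalty weight alpha and the toy data are illustrative assumptions on my part, not the exact setup of the papers cited here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_self_expression(A, alpha=0.01):
    """Column-by-column sparse regression: a_i ~= A x_i with x_ii = 0.

    A has shape (d, n): n data points in d dimensions. Returns X of shape
    (n, n) with a zero main diagonal, so that A @ X approximates A.
    """
    d, n = A.shape
    X = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)      # leave the point itself out
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        lasso.fit(A[:, others], A[:, i])         # sparse coefficients on the remaining columns
        X[others, i] = lasso.coef_
    return X

# Toy example: points drawn from two 1-D subspaces (lines) in R^3.
rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal((3, 1)), rng.standard_normal((3, 1))
A = np.hstack([u1 @ rng.standard_normal((1, 5)), u2 @ rng.standard_normal((1, 5))])
X = sparse_self_expression(A)
print(np.round(np.linalg.norm(A - A @ X), 3))   # reconstruction error
print((np.abs(X) > 1e-3).astype(int))           # nonzeros should mostly stay within each subspace
```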
Let us note that, with a low rank X, the product AX now provides a regularized, low rank approximation of A, much like a truncated SVD of A would. Following this line of thought, subspace clustering could also be construed as another instance of a Linear Autoencoder. Can this approach help in the better design of noisy autoencoders, as we saw recently in [1]? The question bears asking. Without further ado, here is: Stable Autoencoding: A Flexible Framework for Regularized Low-Rank Matrix Estimation by Julie Josse and Stefan Wager
We develop a framework for low-rank matrix estimation that allows us to transform noise models into regularization schemes via a simple parametric bootstrap. Effectively, our procedure seeks an autoencoding basis for the observed matrix that is robust with respect to the specified noise model. In the simplest case, with an isotropic noise model, our procedure is equivalent to a classical singular value shrinkage estimator. For non-isotropic noise models, however, our method does not reduce to singular value shrinkage, and instead yields new estimators that perform well in experiments. Moreover, by iterating our stable autoencoding scheme, we can automatically generate low-rank estimates without specifying the target rank as a tuning parameter.
(figure from [2])
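As a back-of-the-envelope illustration of the isotropic case mentioned in the abstract, here is a minimal numpy sketch (mine, not the authors' code) in which the noise-robust autoencoding objective is assumed to reduce to the ridge-like closed form X = (A^T A + λI)^(-1) A^T A; one can then check that AX keeps the singular vectors of A and shrinks each singular value d by the factor d² / (d² + λ).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
lam = 5.0   # stands in for the isotropic noise level (illustrative value)

# Ridge-like autoencoding estimate: X = (A^T A + lam * I)^(-1) A^T A
p = A.shape[1]
X = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ A)

# A @ X keeps the singular vectors of A and shrinks each singular value d
# to d * d^2 / (d^2 + lam), i.e. classical singular value shrinkage.
d = np.linalg.svd(A, compute_uv=False)
d_shrunk = d**3 / (d**2 + lam)
print(np.allclose(np.linalg.svd(A @ X, compute_uv=False), d_shrunk))   # True
```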
Let us note that in the subspace clustering approach, if diag(Z) = 0 (Z being the coefficient matrix written X above), then Tr(Z) = 0, which really means that this decomposition yields a Z that generates a volume preserving transformation (see the Lie algebra of trace-free matrices). One then wonders whether, as in fluid mechanics, we should be aiming for a mixed decomposition: a volume preserving transformation (really quantifying the deformation) on the one hand, and one that quantifies (low) volume change (a low rank matrix, as in the paper featured today) for the autoencoders on the other.
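The volume preservation remark rests on the identity det(exp(Z)) = exp(Tr(Z)): a zero-trace Z generates, through the matrix exponential, a unit determinant (volume preserving) transformation. A quick numerical sanity check (purely illustrative, not taken from either paper):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
Z = rng.standard_normal((6, 6))
np.fill_diagonal(Z, 0.0)          # zero main diagonal, hence Tr(Z) = 0

# det(exp(Z)) = exp(Tr(Z)) = 1: the transformation generated by Z preserves volume.
print(np.trace(Z))                # 0.0
print(np.linalg.det(expm(Z)))     # approximately 1.0
```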
Let us also note that while the aim of these subspace clustering algorithms is to obtain a zero diagonal, the regularizer may yield a solution that is close enough to that constraint while also being low rank (see LRR for instance). Let us hope that this approach can shed some light on how to devise nonlinear autoencoders!
- [1] Provable Bounds for Learning Some Deep Representations by Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma
- Autoencoders, Unsupervised Learning, and Deep Architectures by Pierre Baldi
- LSR : Robust and Efficient Subspace Segmentation via Least Squares Regression by Canyi Lu, Hai Min, Zhong-Qiu Zhao, Lin Zhu, De-Shuang Huang, and Shuicheng Yan
- LRRSC : Subspace Clustering by Exploiting a Low-Rank Representation with a Symmetric Constraint by Jie Chen, Zhang Yi
- SSC : Sparse Subspace Clustering: Algorithm, Theory, and Applications by Ehsan Elhamifar, Rene Vidal.
- SMCE : Sparse Manifold Clustering and Embedding by Ehsan Elhamifar, Rene Vidal
- Locally Linear Embedding (LLE)