Hi Igor
I would like to point you to our recent paper on arXiv: Revisiting Skip-Gram Negative Sampling Model With Regularization (https://arxiv.org/pdf/1804.00306.pdf), which essentially deals with a specific low-rank matrix factorization model.
The abstract is as follows:
We revisit skip-gram negative sampling (SGNS), a popular neural-network-based approach to learning distributed word representations. We first point out the ambiguity issue undermining the SGNS model, in the sense that the word vectors can be entirely distorted without changing the objective value. To resolve this issue, we rectify the SGNS model with quadratic regularization. A theoretical justification, which provides a novel insight into quadratic regularization, is presented. Preliminary experiments are also conducted on Google's analogical reasoning task to support the modified SGNS model.
Your opinion will be much appreciated!
Thanks, Matt Mu
Thanks Matt for the heads-up!
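For readers curious about the ambiguity issue mentioned in the abstract, here is a small numerical sketch of my own (not code from the paper): the SGNS objective sees the word and context vectors only through their inner products, so an arbitrary invertible linear distortion of the vectors leaves the objective unchanged, while a quadratic (Frobenius-norm) penalty does not. The co-occurrence counts, negative-sampling weights, and regularization weight below are made up for illustration.

```python
# Toy sketch (not from the paper): the SGNS objective depends on W and C only
# through W @ C.T, so W -> W @ S, C -> C @ inv(S).T (S invertible) distorts the
# vectors arbitrarily without changing the objective. A quadratic penalty on the
# vectors breaks that invariance. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)
V, d = 50, 10                                        # toy vocabulary size, embedding dim
W = rng.normal(size=(V, d))                          # word vectors
C = rng.normal(size=(V, d))                          # context vectors
pos = rng.integers(1, 5, size=(V, V)).astype(float)  # fake co-occurrence counts #(w,c)
neg = np.full((V, V), 2.0)                           # fake negative-sampling weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_objective(W, C):
    X = W @ C.T                                      # objective only sees inner products
    return np.sum(pos * np.log(sigmoid(X)) + neg * np.log(sigmoid(-X)))

def regularized_objective(W, C, lam=0.1):
    return sgns_objective(W, C) - lam * (np.sum(W**2) + np.sum(C**2))

S = rng.normal(size=(d, d)) * 5.0                    # arbitrary invertible distortion
W2, C2 = W @ S, C @ np.linalg.inv(S).T               # distorted vectors, same products

print(sgns_objective(W, C), sgns_objective(W2, C2))            # identical up to rounding
print(regularized_objective(W, C), regularized_objective(W2, C2))  # now they differ
```

The last two print lines show the point: the unregularized objective cannot tell the two sets of vectors apart, whereas the quadratically regularized one can.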
1 comment:
Thanks, Nuit!