Tuesday, April 17, 2018

Revisiting Skip-Gram Negative Sampling Model With Regularization



Matt just sent me the following:

Hi Igor  
I would like to point you to our recent paper on the arXiv: Revisiting Skip-Gram Negative Sampling Model With Regularization (https://arxiv.org/pdf/1804.00306.pdf), which essentially deals with one specific low-rank matrix factorization model.  
The abstract is as follows:
We revisit skip-gram negative sampling (SGNS), a popular neural-network based approach to learning distributed word representation. We first point out the ambiguity issue undermining the SGNS model, in the sense that the word vectors can be entirely distorted without changing the objective value. To resolve this issue, we rectify the SGNS model with quadratic regularization. A theoretical justification, which provides a novel insight into quadratic regularization, is presented. Preliminary experiments are also conducted on Google’s analogical reasoning task to support the modified SGNS model.  
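The ambiguity the abstract refers to can be illustrated with a toy sketch (mine, not from the paper): the SGNS objective depends on the word and context matrices only through their inner products, so any invertible linear transform applied to the word vectors (with its inverse transpose applied to the context vectors) distorts the embeddings while leaving the objective untouched. A quadratic (Frobenius-norm) penalty is not invariant under such a transform, which is what breaks the degeneracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings: 5 words, 3 dimensions; word matrix W, context matrix C.
W = rng.standard_normal((5, 3))
C = rng.standard_normal((5, 3))

def sgns_like_objective(W, C):
    # Simplified SGNS-style objective: it depends on W and C only
    # through the score matrix of inner products W @ C.T.
    scores = W @ C.T
    return np.sum(np.log(1.0 / (1.0 + np.exp(-scores))))

# Any invertible A distorts the vectors while leaving every inner
# product -- and hence the objective -- unchanged, since
# (W A)(C A^{-T})^T = W A A^{-1} C^T = W C^T.
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # well-conditioned, invertible
W2, C2 = W @ A, C @ np.linalg.inv(A).T

same_objective = np.isclose(sgns_like_objective(W, C),
                            sgns_like_objective(W2, C2))

# A quadratic penalty ||W||_F^2 + ||C||_F^2 is NOT invariant under
# the transform, so adding it to the objective resolves the ambiguity.
def quad_penalty(W, C, lam=0.1):
    return lam * (np.sum(W ** 2) + np.sum(C ** 2))

same_penalty = np.isclose(quad_penalty(W, C), quad_penalty(W2, C2))

print(same_objective)  # True: objective unchanged by the distortion
print(same_penalty)    # False: the regularizer detects it
```

This is only a schematic of the invariance argument; the paper's actual objective sums over observed word-context pairs with negative samples, but the same inner-product structure drives the ambiguity.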
Your opinion will be much appreciated! 
Thanks, Matt Mu
