Improving Word Representations Via Global Context And Multiple Word Prototypes

Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are built with only local context and one representation per word. This is problematic because words are often polysemous and global context can also provide useful information for learning word meanings. We present a new neural network architecture which 1) learns word embeddings that better capture the semantics of words by incorporating both local and global document context, and 2) accounts for homonymy and polysemy by learning multiple embeddings per word. We introduce a new dataset with human judgments on pairs of words in sentential context, and evaluate our model on it, showing that our model outperforms competitive baselines and other neural language models.
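
To make the second contribution concrete, the sketch below illustrates the multi-prototype idea from the abstract: each occurrence of a word is represented by the average embedding of its surrounding context, those context vectors are clustered, and occurrences are relabeled by cluster so that a separate embedding can be learned per cluster. This is a hedged illustration, not the released code: the function names, the plain k-means from scikit-learn, and the uniform (rather than weighted) context average are simplifying assumptions and may differ in detail from the paper's clustering procedure.

import numpy as np
from sklearn.cluster import KMeans

def context_vector(tokens, position, embeddings, window=5):
    # Average embedding of the words within `window` positions of the target occurrence.
    lo, hi = max(0, position - window), min(len(tokens), position + window + 1)
    ctx = [embeddings[w] for i, w in enumerate(tokens[lo:hi], start=lo)
           if i != position and w in embeddings]
    dim = len(next(iter(embeddings.values())))
    return np.mean(ctx, axis=0) if ctx else np.zeros(dim)

def relabel_word_senses(corpus, word, embeddings, n_prototypes=3, window=5):
    # Collect one context vector per occurrence of `word`, cluster them,
    # and return (sentence_index, token_position, cluster_id) triples.
    occurrences, vectors = [], []
    for s_idx, tokens in enumerate(corpus):
        for pos, tok in enumerate(tokens):
            if tok == word:
                occurrences.append((s_idx, pos))
                vectors.append(context_vector(tokens, pos, embeddings, window))
    if len(vectors) < n_prototypes:
        # Too few occurrences to split into senses; keep a single prototype.
        return [(s, p, 0) for s, p in occurrences]
    labels = KMeans(n_clusters=n_prototypes, n_init=10).fit_predict(np.vstack(vectors))
    return [(s, p, int(c)) for (s, p), c in zip(occurrences, labels)]

Each occurrence can then be rewritten as, say, "bank_0", "bank_1", ... and the embedding model retrained, yielding one vector per prototype.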

Paper Download

Word Vectors and Code

Dataset: Stanford's Contextual Word Similarities (SCWS)

Training Corpus

Bibtex

Comments

RichardSocher, 24 July 2014, 09:27

@Anon, a new paper by Pennington, Socher, and Manning will have all these word vectors with really great performance :)

Anon, 15 April 2014, 23:57

3. Is it also possible for you to release the vectors for a larger vocabulary size of 1M to 2M?

Anon, 15 April 2014, 23:55

1. May I know how long your code takes to learn these vectors? 2. Is it possible for you to release the vectors in sizes 100, 300, 600, and 1000?