# tweet_sentiment_extraction

## Usage Instructions

Download the GloVe embeddings from http://nlp.stanford.edu/data/wordvecs/glove.6B.zip and extract them into the current directory:

```
wget http://nlp.stanford.edu/data/wordvecs/glove.6B.zip -O glove6B.zip
unzip glove6B.zip -d glove6B/
```

Note that the Wikipedia 2014 + Gigaword 5 embeddings (822 MB) were used, but the Twitter corpus (1.42 GB) might give better performance.
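For reference, a minimal sketch of loading the extracted vectors into a dictionary of NumPy arrays. The 100-dimensional file `glove6B/glove.6B.100d.txt` is used here only as an illustration; the dimension actually consumed by `main.py` may differ.

```python
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a whitespace-separated text file into a dict."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word = parts[0]
            vector = np.asarray(parts[1:], dtype="float32")
            embeddings[word] = vector
    return embeddings

# Example: 100-dimensional vectors extracted into glove6B/
glove = load_glove("glove6B/glove.6B.100d.txt")
print(len(glove), glove["sentiment"].shape)  # ~400k words, (100,)
```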

Download the data from https://www.kaggle.com/c/tweet-sentiment-extraction/data and place it in a folder called `data`.
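As a quick sanity check, the competition CSVs can be inspected with pandas. The file names below follow the Kaggle competition's layout, and the column list in the comment is taken from the competition's data description; verify both against what `main.py` actually expects.

```python
import pandas as pd

# Assumes the Kaggle files were unzipped into ./data/
train = pd.read_csv("data/train.csv")
test = pd.read_csv("data/test.csv")

# Each training row pairs a tweet with a sentiment label and the text span
# that supports that sentiment (the extraction target).
print(train.columns.tolist())  # e.g. ['textID', 'text', 'selected_text', 'sentiment']
print(train["sentiment"].value_counts())
```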

Run `main.py`.