I tried extracting some topics with Latent Dirichlet Allocation (LDA). This tutorial walks through an end-to-end natural language processing pipeline: starting from the raw paper data, through preparation and modeling, to visualization.
We will cover the following points:
Topic modeling with LDA
Visualizing topic models with pyLDAvis
Visualizing LDA results with t-SNE and bokeh
In [1]:
%pylab inline
from scipy import sparse as sp
Populating the interactive namespace from numpy and matplotlib
In [2]:
docs = array(p_df['PaperText'])
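The DataFrame `p_df` is not defined in the cells shown; it is presumably loaded earlier from the NIPS papers dataset. A minimal stand-in sketch (the tiny in-memory DataFrame and its contents are made up; only the `PaperText` column name comes from the cell above):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the papers DataFrame loaded earlier in the
# notebook; the real p_df would come from the NIPS papers dataset.
p_df = pd.DataFrame({'PaperText': [
    'Deep learning methods for image classification.',
    'Variational inference in probabilistic graphical models.',
]})
docs = np.array(p_df['PaperText'])
print(docs.shape)  # one entry per paper
```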
Pre-process and vectorize the documents
In [3]:
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.tokenize import RegexpTokenizer
def docs_preprocessor(docs):
    tokenizer = RegexpTokenizer(r'\w+')
    for idx in range(len(docs)):
        docs[idx] = docs[idx].lower()  # Convert to lowercase.
        docs[idx] = tokenizer.tokenize(docs[idx])  # Split into words.
    # Remove numbers, but not words that contain numbers.
    docs = [[token for token in doc if not token.isdigit()] for doc in docs]
    # Remove words that are 3 characters or fewer.
    docs = [[token for token in doc if len(token) > 3] for doc in docs]
    # Lemmatize all words in documents.
    lemmatizer = WordNetLemmatizer()
    docs = [[lemmatizer.lemmatize(token) for token in doc] for doc in docs]
    return docs
In [4]:
docs = docs_preprocessor(docs)
Compute bigrams/trigrams:
Since the topics are very similar, phrases rather than single words can help distinguish between them.
In [5]:
from gensim.models import Phrases
# Add bigrams and trigrams to docs (only ones that appear 10 times or more).
bigram = Phrases(docs, min_count=10)
trigram = Phrases(bigram[docs])

for idx in range(len(docs)):
    for token in bigram[docs[idx]]:
        if '_' in token:
            # Token is a bigram, add to document.
            docs[idx].append(token)
    for token in trigram[docs[idx]]:
        if '_' in token:
            # Token is a trigram, add to document.
            docs[idx].append(token)
/opt/conda/lib/python3.6/site-packages/gensim/models/phrases.py:316: UserWarning: For a faster implementation, use the gensim.models.phrases.Phraser class
warnings.warn("For a faster implementation, use the gensim.models.phrases.Phraser class")