Notes on LDA

So happy! The next step is to adapt this to handle Chinese text and then build an LDA model on it.
There is still a long way to go!
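As a rough preview of that goal, here is a minimal, untested sketch of how only the tokenization step would change for Chinese. It assumes the jieba package (not used anywhere in this post); everything downstream would be the same gensim pipeline shown below.

import jieba  # assumption: a Chinese tokenizer installed separately (pip install jieba)

def tokenize_chinese(text, stopwords):
    # English can be split on spaces, but Chinese needs a real tokenizer first.
    # jieba.lcut returns a list of tokens; drop stopwords and one-character noise.
    return [w for w in jieba.lcut(text) if w not in stopwords and len(w) > 1]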

Email topics

This code comes from a real expert; it was not easy to get running, so this little newbie is documenting the journey.
Link: link.

Much respect to the original author! ❤❤❤
(A timid word from a newbie: if this infringes on anything, contact me and I will take it down.)

The code is shown below.

import numpy as np
import pandas as pd
import re
df = pd.read_csv("HillaryEmails.csv")
# Drop rows with NaN values in the original email data.
df = df[['Id','ExtractedBodyText']].dropna()

def clean_email_text(text):
    text = text.replace('\n', " ")  # newlines carry no meaning here; replace with spaces
    text = re.sub(r"-", " ", text)  # split hyphenated words into two words
    text = re.sub(r"\d+/\d+/\d+", "", text)  # remove dates
    text = re.sub(r"[0-2]?[0-9]:[0-6][0-9]", "", text)  # remove times
    text = re.sub(r"[\w]+@[\.\w]+", "", text)  # remove email addresses
    text = re.sub(r"https?://[A-Za-z0-9./%&=?\-_]+", "", text)  # remove URLs
    pure_text = ''
    # In case other special characters (including digits) remain, filter them out,
    # keeping only letters and spaces
    for letter in text:
        if letter.isalpha() or letter == ' ':
            pure_text += letter
    # Drop the one-letter tokens left stranded by the filtering;
    # only meaningful words remain
    text = ' '.join(word for word in pure_text.split() if len(word) > 1)
    return text

docs = df['ExtractedBodyText']
docs = docs.apply(lambda s: clean_email_text(s))
docs.head(1).values
doclist = docs.values
from gensim import corpora, models, similarities
import gensim
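# A hand-pasted English stopword list (it appears to mirror NLTK's english stopwords)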
stoplist = ['very', 'ourselves', 'am', 'doesn', 'through', 'me', 'against', 'up', 'just', 'her', 'ours', 
            'couldn', 'because', 'is', 'isn', 'it', 'only', 'in', 'such', 'too', 'mustn', 'under', 'their', 
            'if', 'to', 'my', 'himself', 'after', 'why', 'while', 'can', 'each', 'itself', 'his', 'all', 'once', 
            'herself', 'more', 'our', 'they', 'hasn', 'on', 'ma', 'them', 'its', 'where', 'did', 'll', 'you', 
            'didn', 'nor', 'as', 'now', 'before', 'those', 'yours', 'from', 'who', 'was', 'm', 'been', 'will', 
            'into', 'same', 'how', 'some', 'of', 'out', 'with', 's', 'being', 't', 'mightn', 'she', 'again', 'be', 
            'by', 'shan', 'have', 'yourselves', 'needn', 'and', 'are', 'o', 'these', 'further', 'most', 'yourself', 
            'having', 'aren', 'here', 'he', 'were', 'but', 'this', 'myself', 'own', 'we', 'so', 'i', 'does', 'both', 
            'when', 'between', 'd', 'had', 'the', 'y', 'has', 'down', 'off', 'than', 'haven', 'whom', 'wouldn', 
            'should', 've', 'over', 'themselves', 'few', 'then', 'hadn', 'what', 'until', 'won', 'no', 'about', 
            'any', 'that', 'for', 'shouldn', 'don', 'do', 'there', 'doing', 'an', 'or', 'ain', 'hers', 'wasn', 
            'weren', 'above', 'a', 'at', 'your', 'theirs', 'below', 'other', 'not', 're', 'him', 'during', 'which']

texts = [[word for word in doc.lower().split() if word not in stoplist] for doc in doclist]
texts[0]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
corpus[0]
[(0, 3),
 (1, 2),
 (2, 1),
 (3, 2),
 (4, 1),
 (5, 2),
 (6, 2),
 (7, 2),
 (8, 1),
 (9, 1),
 (10, 1),
 (11, 3),
 (12, 1)]
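Each pair in that output is (word_id, word_count). To make the bag-of-words readable, the ids can be mapped back to tokens; this one-liner is my own addition, using gensim Dictionary's id-to-token indexing:

# Translate (word_id, count) pairs back to (token, count) pairs
print([(dictionary[wid], count) for wid, count in corpus[0]])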

Train an LDA model with 20 topics, then print topic 1's top five keywords and their weights.

lda = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=20)

lda.print_topic(1, topn=5)
'0.040*"pls" + 0.023*"print" + 0.014*"add" + 0.013*"taliban" + 0.011*"call"'

Print all 20 topics:

for i in lda.print_topics(num_topics=20, num_words=5):
    print(i)
(0, '0.018*"pm" + 0.012*"call" + 0.011*"fw" + 0.010*"tomorrow" + 0.010*"talk"')
(1, '0.089*"pm" + 0.044*"office" + 0.037*"secretarys" + 0.027*"room" + 0.024*"meeting"')
(2, '0.023*"israeli" + 0.017*"palestinian" + 0.016*"israel" + 0.007*"jerusalem" + 0.007*"lauren"')
(3, '0.038*"pm" + 0.035*"fyi" + 0.018*"office" + 0.014*"meeting" + 0.014*"state"')
(4, '0.018*"good" + 0.015*"sounds" + 0.013*"germany" + 0.007*"deadline" + 0.006*"copenhagen"')
(5, '0.012*"gender" + 0.010*"mtg" + 0.010*"add" + 0.010*"see" + 0.008*"call"')
(6, '0.009*"mr" + 0.006*"would" + 0.006*"one" + 0.006*"party" + 0.005*"us"')
(7, '0.047*"call" + 0.021*"email" + 0.017*"thx" + 0.015*"today" + 0.012*"im"')
(8, '0.010*"bi" + 0.010*"message" + 0.009*"see" + 0.009*"received" + 0.008*"nixon"')
(9, '0.009*"said" + 0.009*"us" + 0.006*"new" + 0.006*"afghan" + 0.006*"israel"')
(10, '0.006*"would" + 0.006*"said" + 0.006*"new" + 0.006*"one" + 0.005*"percent"')
(11, '0.016*"yes" + 0.008*"right" + 0.005*"us" + 0.005*"thought" + 0.004*"would"')
(12, '0.013*"website" + 0.011*"alexander" + 0.007*"unlike" + 0.007*"shaun" + 0.006*"forgot"')
(13, '0.008*"us" + 0.007*"people" + 0.006*"states" + 0.006*"new" + 0.006*"would"')
(14, '0.009*"also" + 0.007*"wjc" + 0.007*"haiti" + 0.007*"see" + 0.006*"got"')
(15, '0.015*"know" + 0.011*"sent" + 0.010*"good" + 0.009*"would" + 0.008*"hope"')
(16, '0.017*"state" + 0.014*"lona" + 0.014*"negotiating" + 0.013*"assistant" + 0.012*"secretary"')
(17, '0.012*"us" + 0.011*"state" + 0.009*"house" + 0.007*"would" + 0.006*"said"')
(18, '0.007*"would" + 0.007*"time" + 0.007*"american" + 0.006*"us" + 0.006*"people"')
(19, '0.038*"pls" + 0.031*"ok" + 0.028*"print" + 0.021*"part" + 0.020*"release"')

Now use the model to infer the topic distribution of the first document:

print(lda.get_document_topics(corpus[0]))
[(0, 0.95865613)]
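A new, unseen text can be classified the same way, as long as it goes through the same cleaning, stopword, and doc2bow steps first. This is my own sketch; the email text is made up, not from the dataset:

# Infer the topic mix of a new, hypothetical email
new_email = "Pls print the schedule and call the office tomorrow."
tokens = [w for w in clean_email_text(new_email).lower().split() if w not in stoplist]
bow = dictionary.doc2bow(tokens)
print(lda.get_document_topics(bow))  # topic ids and probabilities vary run to run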

Keep going, keep going!
It's the last day of 2020; NLP, next year I'm declaring war on you!
