This article is translated from the Gensim documentation.
It describes the process of obtaining and processing Wikipedia, so that anyone can reproduce the results.
Preparing the corpus
1. First, download the dump of all Wikipedia articles from the Wikimedia dumps site (you want the file enwiki-latest-pages-articles.xml.bz2, or enwiki-YYYYMMDD-pages-articles.xml.bz2 for date-specific dumps). This is a single file of about 8 GB, containing all articles of the English Wikipedia.
2. Convert the articles to plain text (process the Wiki markup) and store the result as sparse TF-IDF vectors. In Python this is easy to do, and we don't even need to uncompress the whole archive to disk. Gensim includes a script that does exactly this; run it as follows:
$ python -m gensim.scripts.make_wiki
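The script also takes the dump file and an output file-name prefix as command-line arguments; the exact form below is an assumption (argument handling varies across Gensim versions), with the prefix wiki_en chosen to match the file names loaded in the next section:
$ python -m gensim.scripts.make_wiki enwiki-latest-pages-articles.xml.bz2 wiki_en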
This preprocessing step makes two passes over the 8.2 GB compressed wiki dump (one to build the dictionary, one to build and store the sparse TF-IDF matrix).
You will also need about 35 GB of free disk space to store the resulting sparse matrix; it is recommended to compress it immediately, since Gensim can work with compressed files directly.
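For example, the TF-IDF matrix produced above can be compressed in place with bzip2 (the resulting wiki_en_tfidf.mm.bz2 is exactly what the commented-out loading line below expects):
$ bzip2 wiki_en_tfidf.mm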
Latent Semantic Analysis
Let's load the corpus iterator and dictionary built above:
>>> import logging, gensim, bz2
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
>>> # load id->word mapping (the dictionary), one of the results of step 2 above
>>> id2word = gensim.corpora.Dictionary.load_from_text('wiki_en_wordids.txt')
>>> # load corpus iterator
>>> mm = gensim.corpora.MmCorpus('wiki_en_tfidf.mm')
>>> # mm = gensim.corpora.MmCorpus(bz2.BZ2File('wiki_en_tfidf.mm.bz2')) # use this if you compressed the TFIDF output (recommended)
>>> print(mm)
MmCorpus(3931787 documents, 100000 features, 756379027 non-zero entries)
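Building the LSI model itself is then a single call; a minimal sketch based on Gensim's LsiModel API, using the 400 topics referenced in the comparison with LDA below:
>>> # extract 400 LSI topics, using the default one-pass algorithm
>>> lsi = gensim.models.lsimodel.LsiModel(corpus=mm, id2word=id2word, num_topics=400)
>>> # print the words that contribute most (positively and negatively) to the first ten topics
>>> lsi.print_topics(10)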
Building the LSI model takes a few hours.
As you can see, the total processing time is dominated by the preprocessing step.
The algorithms used in Gensim only need to see each input document once, so they are suitable for documents that arrive as a non-repeatable stream.
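This streaming design also means new documents can be folded into an already-trained model; a minimal sketch, where more_documents is a hypothetical name for any additional bag-of-words corpus:
>>> # update the trained LSI model with another batch of documents
>>> lsi.add_documents(more_documents)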
Latent Dirichlet Allocation
As above, first load the corpus iterator and dictionary:
>>> import logging, gensim, bz2
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
>>> # load id->word mapping (the dictionary), one of the results of step 2 above
>>> id2word = gensim.corpora.Dictionary.load_from_text('wiki_en_wordids.txt')
>>> # load corpus iterator
>>> mm = gensim.corpora.MmCorpus('wiki_en_tfidf.mm')
>>> # mm = gensim.corpora.MmCorpus(bz2.BZ2File('wiki_en_tfidf.mm.bz2')) # use this if you compressed the TFIDF output
>>> print(mm)
MmCorpus(3931787 documents, 100000 features, 756379027 non-zero entries)
We will run online LDA: an algorithm that takes a chunk of documents, updates the LDA model, takes another chunk, updates the model again, and so on.
Online LDA can be contrasted with batch LDA, which processes the whole corpus in one full pass, then updates the model, then takes another full pass, then updates again, and so on.
The difference is that, given a reasonably stationary document stream, the online updates over smaller chunks (sub-corpora) are quite good in themselves, so the model estimate converges faster. As a result, we will perhaps only need a single full pass over the corpus: if the corpus contains 3 million articles and we update the model once every 10,000 articles, that means 300 updates over the whole pass, quite likely enough for a very accurate topic estimate:
>>> # extract 100 LDA topics, using 1 pass and updating once every 1 chunk (10,000 documents)
>>> lda = gensim.models.ldamodel.LdaModel(corpus=mm, id2word=id2word, num_topics=100, update_every=1, chunksize=10000, passes=1)
using serial LDA version on this node
running online LDA training, 100 topics, 1 passes over the supplied corpus of 3931787 documents, updating model once every 10000 documents
...
Unlike LSA, the topics coming out of LDA are easier to interpret:
>>> # print the most contributing words for 20 randomly selected topics
>>> lda.print_topics(20)
topic #0: 0.009*river + 0.008*lake + 0.006*island + 0.005*mountain + 0.004*area + 0.004*park + 0.004*antarctic + 0.004*south + 0.004*mountains + 0.004*dam
topic #1: 0.026*relay + 0.026*athletics + 0.025*metres + 0.023*freestyle + 0.022*hurdles + 0.020*ret + 0.017*divisão + 0.017*athletes + 0.016*bundesliga + 0.014*medals
topic #2: 0.002*were + 0.002*he + 0.002*court + 0.002*his + 0.002*had + 0.002*law + 0.002*government + 0.002*police + 0.002*patrolling + 0.002*their
topic #3: 0.040*courcelles + 0.035*centimeters + 0.023*mattythewhite + 0.021*wine + 0.019*stamps + 0.018*oko + 0.017*perennial + 0.014*stubs + 0.012*ovate + 0.011*greyish
topic #4: 0.039*al + 0.029*sysop + 0.019*iran + 0.015*pakistan + 0.014*ali + 0.013*arab + 0.010*islamic + 0.010*arabic + 0.010*saudi + 0.010*muhammad
topic #5: 0.020*copyrighted + 0.020*northamerica + 0.014*uncopyrighted + 0.007*rihanna + 0.005*cloudz + 0.005*knowles + 0.004*gaga + 0.004*zombie + 0.004*wigan + 0.003*maccabi
topic #6: 0.061*israel + 0.056*israeli + 0.030*sockpuppet + 0.025*jerusalem + 0.025*tel + 0.023*aviv + 0.022*palestinian + 0.019*ifk + 0.016*palestine + 0.014*hebrew
topic #7: 0.015*melbourne + 0.014*rovers + 0.013*vfl + 0.012*australian + 0.012*wanderers + 0.011*afl + 0.008*dinamo + 0.008*queensland + 0.008*tracklist + 0.008*brisbane
topic #8: 0.011*film + 0.007*her + 0.007*she + 0.004*he + 0.004*series + 0.004*his + 0.004*episode + 0.003*films + 0.003*television + 0.003*best
topic #9: 0.019*wrestling + 0.013*château + 0.013*ligue + 0.012*discus + 0.012*estonian + 0.009*uci + 0.008*hockeyarchives + 0.008*wwe + 0.008*estonia + 0.007*reign
topic #10: 0.078*edits + 0.059*notability + 0.035*archived + 0.025*clearer + 0.022*speedy + 0.021*deleted + 0.016*hook + 0.015*checkuser + 0.014*ron + 0.011*nominator
topic #11: 0.013*admins + 0.009*acid + 0.009*molniya + 0.009*chemical + 0.007*ch + 0.007*chemistry + 0.007*compound + 0.007*anemone + 0.006*mg + 0.006*reaction
topic #12: 0.018*india + 0.013*indian + 0.010*tamil + 0.009*singh + 0.008*film + 0.008*temple + 0.006*kumar + 0.006*hindi + 0.006*delhi + 0.005*bengal
topic #13: 0.047*bwebs + 0.024*malta + 0.020*hobart + 0.019*basa + 0.019*columella + 0.019*huon + 0.018*tasmania + 0.016*popups + 0.014*tasmanian + 0.014*modèle
topic #14: 0.014*jewish + 0.011*rabbi + 0.008*bgwhite + 0.008*lebanese + 0.007*lebanon + 0.006*homs + 0.005*beirut + 0.004*jews + 0.004*hebrew + 0.004*caligari
topic #15: 0.025*german + 0.020*der + 0.017*von + 0.015*und + 0.014*berlin + 0.012*germany + 0.012*die + 0.010*des + 0.008*kategorie + 0.007*cross
topic #16: 0.003*can + 0.003*system + 0.003*power + 0.003*are + 0.003*energy + 0.002*data + 0.002*be + 0.002*used + 0.002*or + 0.002*using
topic #17: 0.049*indonesia + 0.042*indonesian + 0.031*malaysia + 0.024*singapore + 0.022*greek + 0.021*jakarta + 0.016*greece + 0.015*dord + 0.014*athens + 0.011*malaysian
topic #18: 0.031*stakes + 0.029*webs + 0.018*futsal + 0.014*whitish + 0.013*hyun + 0.012*thoroughbred + 0.012*dnf + 0.012*jockey + 0.011*medalists + 0.011*racehorse
topic #19: 0.119*oblast + 0.034*uploaded + 0.034*uploads + 0.033*nordland + 0.025*selsoviet + 0.023*raion + 0.022*krai + 0.018*okrug + 0.015*hålogaland + 0.015*russiae
Note the differences between this LDA run and the LSA run: we asked LSA to extract 400 topics, but LDA only 100 (so the difference in speed is in fact even greater). Second, the LSA implementation in Gensim is truly online: if the nature of the input stream changes over time, the LSA model can re-orient itself to reflect these changes within a reasonably small number of updates. In contrast, LDA is not truly online, since the influence of later updates on the model gradually diminishes; if there is topic drift in the document stream, LDA will run into problems.
In short, be careful if you use LDA to incrementally add new documents over time. Batch usage of LDA, where the entire training corpus is either known beforehand or exhibits no topic drift, is fine.
To run batch LDA (no online updates), train the model as follows:
>>> # extract 100 LDA topics, using 20 full passes, no online updates
>>> lda = gensim.models.ldamodel.LdaModel(corpus=mm, id2word=id2word, num_topics=100, update_every=0, passes=20)
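A model trained with 20 full passes over Wikipedia is expensive to rebuild, so it is worth saving; a sketch using Gensim's standard save/load methods (the file name wiki_lda.model is an arbitrary choice):
>>> lda.save('wiki_lda.model')  # serialize the trained model to disk
>>> lda = gensim.models.ldamodel.LdaModel.load('wiki_lda.model')  # restore it later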
As usual, a trained model can be used to transform new, unseen documents (plain bag-of-words count vectors) into LDA topic distributions:
>>> doc_lda = lda[doc_bow]
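Here doc_bow is any document in bag-of-words format; a minimal sketch of producing one from raw text with the id2word dictionary loaded above (the example sentence and the naive whitespace tokenization are assumptions):
>>> # convert a new document to bag-of-words using the dictionary from step 2
>>> doc_bow = id2word.doc2bow("human computer interaction on wikipedia".split())
>>> doc_lda = lda[doc_bow]  # sparse list of (topic_id, probability) pairs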