Getting topic-word distributions from scikit-learn's LDA (Python)

Take a look at the documentation for components_:

components_ : array, [n_topics, n_features]
Topic word distribution. components_[i, j] represents word j in topic i.

Here is a minimal example:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

data = ['blah blah foo bar', 'foo foo foo foo bar', 'bar bar bar bar foo',
        'foo bar bar bar baz foo', 'foo foo foo bar baz', 'blah banana',
        'cookies candy', 'more text please', 'hey there are more words here',
        'bananas', 'i am a real boy', 'boy', 'girl']

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(data)
vocab = vectorizer.get_feature_names_out()  # use get_feature_names() on scikit-learn < 1.0

n_top_words = 5
k = 2

# the n_topics parameter was renamed to n_components in scikit-learn 0.19
model = LatentDirichletAllocation(n_components=k, random_state=100)
id_topic = model.fit_transform(X)

topic_words = {}
for topic, comp in enumerate(model.components_):
    # np.argsort(comp) returns the indices that would sort comp in ascending
    # order; e.g. for arr = [3, 7, 1, 0, 3, 6], np.argsort(arr) -> [3, 2, 0, 4, 5, 1].
    # Reversing with [::-1] makes the order descending, and [:n_top_words]
    # keeps the indices of the n_top_words highest-weighted words in the topic.
    word_idx = np.argsort(comp)[::-1][:n_top_words]
    # store the words most relevant to the topic
    topic_words[topic] = [vocab[i] for i in word_idx]
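Note that components_ holds unnormalized pseudo-counts rather than probabilities; per the scikit-learn docs, each row can be normalized to obtain the actual topic-word probability distribution. A small sketch on the fitted model above:

# each row of topic_word_dist sums to 1 after this normalization
topic_word_dist = model.components_ / model.components_.sum(axis=1)[:, np.newaxis]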

Check the results:
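One minimal sketch for printing the topic_words mapping built above (the exact formatting is an arbitrary choice):

for topic, words in topic_words.items():
    print('Topic: %d' % topic)
    print('  %s' % ', '.join(words))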

Obviously, you should try this code on a larger body of text, but this is one way to get the most informative words for a given number of topics.
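As a side note, the id_topic array returned by fit_transform above is the document-topic distribution, so you can also read off the dominant topic per document (a sketch reusing the variable names from the example):

# id_topic has shape (n_docs, k); each row is a document's topic mixture
dominant_topic = id_topic.argmax(axis=1)
for doc, t in zip(data, dominant_topic):
    print('%-35s -> topic %d' % (doc, t))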
