Cluster a given Chinese dataset with the KMeans algorithm:
(1) Tokenize with jieba
(2) Remove stopwords
(3) Build feature vectors
(4) Cluster with the KMeans algorithm
1. Tokenization, stopword removal, and feature vectors
# For the detailed steps, see 002_文本分析与挖掘
The input text is Dataset.txt; its contents are not reproduced here.
Code:
# (1) Tokenize and remove stopwords -----------------------------------
import jieba

# Read the corpus; each non-empty line is treated as one document.
# "ANSI" is not a Python codec name; Chinese text saved as ANSI on
# Windows is usually GBK, so adjust the encoding to match your file.
with open("Dataset.txt", "r", encoding="gbk") as fp:
    lines = [line.strip() for line in fp if line.strip()]

# Read the stopword list (one word per line) into a set for fast lookups.
with open("stopwords_all.txt", "r", encoding="utf-8") as fs:
    stopwords = set(fs.read().splitlines())

# Tokenize each document with jieba, then drop stopwords and whitespace.
# Joining with spaces produces the input format CountVectorizer expects.
new_words = []
for line in lines:
    tokens = [w for w in jieba.lcut(line) if w not in stopwords and w.strip()]
    new_words.append(" ".join(tokens))
# print("Result after stopword removal:", new_words)
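The stopword filter can be sketched on a tiny in-memory example (the tokens and stopword entries below are made up for illustration). Note that membership should be tested against a set of words, not against the raw stopword file string, since `token in big_string` is a substring search and silently drops valid words:

```python
# Hypothetical tokens, similar to what jieba.lcut would return
tokens = ["我们", "今天", "学习", "了", "聚类", "算法"]
# A tiny stand-in stopword set; real lists hold hundreds of entries
stopwords = {"我们", "了", "的"}

# Keep only tokens that are not stopwords
filtered = [w for w in tokens if w not in stopwords]
print(filtered)  # ['今天', '学习', '聚类', '算法']
```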
# (2) Build feature vectors -------------------------------------------
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer

# Compute TF-IDF: raw term counts first, then TF-IDF weighting
cv = CountVectorizer()
tt = TfidfTransformer()
tv_fit = tt.fit_transform(cv.fit_transform(new_words))

# Number of word features in the vocabulary
word = cv.get_feature_names_out()
print("Number of word features: {}".format(len(word)))

# Export the weights; this completes the vectorization. Each row of
# the matrix is the vector representation of one document.
tv_weight = tv_fit.toarray()
print("TF-IDF document-term matrix:\n", tv_weight)
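The vectorization step can be checked on a toy corpus; the three mini "documents" below are invented and already space-separated, as the jieba output would be after joining:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

# Three made-up, pre-tokenized documents (space-separated words)
docs = ["聚类 算法 文本", "文本 分词 停用词", "聚类 中心 算法"]

cv = CountVectorizer()
tt = TfidfTransformer()
weights = tt.fit_transform(cv.fit_transform(docs)).toarray()

# One row per document, one column per vocabulary term
print(weights.shape)  # (3, 6)
```

By default TfidfTransformer L2-normalizes each row, so every document vector has unit length.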
# (3) Cluster the vectors ---------------------------------------------
from sklearn.cluster import KMeans

# random_state fixes the seed so the clustering is reproducible;
# n_init reruns KMeans several times and keeps the best result
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
kmeans.fit(tv_weight)
labels = kmeans.labels_
centers = kmeans.cluster_centers_
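The clustering result on the document-term matrix depends on the data, but the mechanics of KMeans can be verified on two obvious blobs (the coordinates below are arbitrary):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated 2-D blobs of three points each
pts = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pts)

# All points in a blob share one label; the two blobs get different labels
print(km.labels_)
print(km.cluster_centers_.shape)  # (2, 2)
```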
2. Visualization
(1) The high-dimensional vectors must be reduced to two dimensions before plotting.
(2) Use the t-SNE algorithm to reduce the weights and the cluster centers (only the two parameters used here are described):
TSNE(perplexity=30.0, n_components=2)
- n_components: target dimensionality, default 2
- perplexity: float, roughly an estimate of how many close neighbors each point has; default 30, typically between 5 and 50
The perplexity parameter can be tuned by inspecting the resulting plot.
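A minimal check of the perplexity constraint (the random data below is only a stand-in for real document vectors): recent scikit-learn versions reject a perplexity that is not smaller than the number of samples, so small corpora need a value below the default 30:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
data = rng.normal(size=(20, 10))  # 20 samples, 10 features

# The default perplexity of 30 would be rejected here (only 20 samples),
# so a smaller value is used
emb = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(data)
print(emb.shape)  # (20, 2)
```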
# (4) Visualization ---------------------------------------------------
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Embed the document vectors and the cluster centers together so both
# land in the same 2-D space; running t-SNE twice, as separate fits,
# would produce two unrelated embeddings. perplexity must be smaller
# than the number of samples, so lower it for small corpora.
combined = np.vstack([tv_weight, centers])
tsne = TSNE(perplexity=30, n_components=2)
combined_fit = tsne.fit_transform(combined)
words_fit = combined_fit[:len(tv_weight)]
centers_fit = combined_fit[len(tv_weight):]

x, y = words_fit[:, 0], words_fit[:, 1]
a, b = centers_fit[:, 0], centers_fit[:, 1]
plt.scatter(x, y, c=labels, marker="x")
plt.scatter(a, b, c='red', marker='X', s=200, label='Centroids')
plt.title('K-means Clustering')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend()
plt.show()