Baidu news articles were scraped with BeautifulSoup, following the approach from an earlier post:
https://blog.csdn.net/weixin_41044499/article/details/94382539
Summarised as follows:
The news corpus is segmented with jieba, each document is vectorised with sklearn's feature_extraction module, and the resulting vectors are fed into a naive Bayes model. Classification is evaluated on three categories of text: military, automobile, and entertainment. The previous accuracy was 0.875
(https://blog.csdn.net/weixin_41044499/article/details/94591356)
Here the plain word-frequency features are upgraded to character n-grams of size 1-2.
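For reference, the earlier word-frequency pipeline worked roughly as sketched below. The three tiny documents and labels are made-up placeholders, not the real corpus; in the original pipeline each document is first cut with jieba and the tokens are joined with spaces, which is what the strings here stand in for:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy stand-ins for jieba output: tokens joined by spaces (not the real corpus)
docs = [
    "军事 演习 导弹 部队",      # military  -> label 0
    "汽车 发动机 轮胎 销量",    # automobile -> label 1
    "娱乐 明星 电影 演唱会",    # entertainment -> label 2
]
labels = [0, 1, 2]

# Default analyzer splits on whitespace, i.e. plain per-word counts
vec = CountVectorizer()
X = vec.fit_transform(docs)

clf = MultinomialNB().fit(X, labels)
# Classify a new pre-segmented snippet mentioning engine/tyre terms
print(clf.predict(vec.transform(["发动机 轮胎"])))  # expected: automobile (1)
```

With only word counts, each multi-character word is a single opaque feature; the n-gram change below lets partially overlapping character sequences contribute as well.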
#!/usr/bin/python
# -*- coding:utf-8 -*-
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# read_excel does not accept an encoding argument in recent pandas, so it is dropped
data = pd.read_excel('learning.xlsx')

vec = CountVectorizer(
    lowercase=True,        # lowercase the text
    analyzer='char_wb',    # character n-grams within word boundaries
    ngram_range=(1, 2),    # use n-grams of size 1 and 2
    max_features=1000      # keep the 1000 most frequent n-grams
)
cv_fit = vec.fit_transform(data['新闻内容'])
print(cv_fit.shape)

x_train, x_test, y_train, y_test = train_test_split(cv_fit, data['新闻类型'])

classifier = MultinomialNB()
classifier.fit(x_train, y_train)
y_test_prediction = classifier.predict(x_test)
for m, n in zip(y_test_prediction, y_test):
    print("Predicted: {0}, Actual: {1}".format(m, n))
print("Accuracy:", sum(y_test_prediction == y_test) / len(y_test))
Predicted: 0, Actual: 0
Predicted: 0, Actual: 0
Predicted: 1, Actual: 1
Predicted: 0, Actual: 0
Predicted: 2, Actual: 2
Predicted: 0, Actual: 0
Predicted: 0, Actual: 0
Predicted: 0, Actual: 0
Predicted: 2, Actual: 2
Predicted: 0, Actual: 0
Predicted: 2, Actual: 2
Predicted: 0, Actual: 0
Predicted: 2, Actual: 2
Predicted: 0, Actual: 0
Predicted: 0, Actual: 0
Predicted: 0, Actual: 0
Predicted: 0, Actual: 1
Predicted: 1, Actual: 0
Predicted: 1, Actual: 1
Predicted: 0, Actual: 0
Predicted: 1, Actual: 1
Predicted: 2, Actual: 2
Predicted: 0, Actual: 0
Predicted: 2, Actual: 2
Accuracy: 0.9166666666666666
The accuracy has indeed improved further!