Naive Bayes rests on the assumption of conditional independence between features.
Details: https://blog.csdn.net/qq_39422642/article/details/78389063
Example: news text classification. Given an arbitrary input sentence, predict its category.
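Before the sklearn walkthrough, the scoring rule itself can be sketched by hand: under conditional independence, P(c | w1..wn) ∝ P(c) · Π P(wi | c). The classes and probabilities below are made-up toy values, purely for illustration.

```python
# Naive Bayes scoring rule: pick the class maximizing
# log P(c) + sum_i log P(w_i | c)  (toy hand-set probabilities)
import math

priors = {"sports": 0.5, "tech": 0.5}
likelihood = {
    "sports": {"ball": 0.4, "gpu": 0.05},
    "tech":   {"ball": 0.05, "gpu": 0.4},
}

def score(words, c):
    # work in log-space to avoid underflow on long documents
    s = math.log(priors[c])
    for w in words:
        s += math.log(likelihood[c][w])
    return s

doc = ["gpu", "gpu"]
best = max(priors, key=lambda c: score(doc, c))
print(best)  # tech
```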
# -*- coding: utf-8 -*-
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train', categories=categories)
# Extract features: raw term counts (bag-of-words).
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
# Re-weight the raw counts as TF-IDF features.
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
# Train a multinomial Naive Bayes classifier and run a quick prediction.
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(X_train_tfidf, twenty_train.target)
docs_new = ['God is love', 'OpenGL on the GPU is fast']
X_new_counts = count_vect.transform(docs_new)
X_new_tfidf = tfidf_transformer.transform(X_new_counts)
predicted = clf.predict(X_new_tfidf)
for doc, category in zip(docs_new, predicted):
    print('%r => %s' % (doc, twenty_train.target_names[category]))
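The vectorizer, transformer, and classifier above can also be chained into a single sklearn Pipeline, so one fit/predict call handles raw text end to end. This is a minimal sketch on made-up toy documents and labels, not the 20newsgroups data.

```python
# Pipeline: raw text -> counts -> TF-IDF -> MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB

train_docs = ["graphics rendering gpu", "god faith church",
              "opengl shader gpu", "prayer church god"]
train_labels = [0, 1, 0, 1]  # toy labels: 0 = graphics, 1 = religion

text_clf = Pipeline([
    ("vect", CountVectorizer()),
    ("tfidf", TfidfTransformer()),
    ("clf", MultinomialNB()),
])
text_clf.fit(train_docs, train_labels)
print(text_clf.predict(["gpu shader rendering"]))  # [0]
```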
# Evaluate the model on the held-out test set.
from sklearn import metrics
import numpy as np
twenty_test = fetch_20newsgroups(subset='test', categories=categories, shuffle=True, random_state=42)
docs_test = twenty_test.data
X_test_counts = count_vect.transform(docs_test)
X_test_tfidf = tfidf_transformer.transform(X_test_counts)
predicted = clf.predict(X_test_tfidf)
print(metrics.classification_report(twenty_test.target, predicted, target_names=twenty_test.target_names))
print("accuracy\t" + str(np.mean(predicted == twenty_test.target)))
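The np.mean comparison above is exactly what sklearn exposes as metrics.accuracy_score, which can be checked on a small made-up example.

```python
# np.mean(predicted == target) and accuracy_score compute the same value.
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([0, 1, 1, 2])
y_pred = np.array([0, 1, 2, 2])  # 3 of 4 correct
print(np.mean(y_pred == y_true))       # 0.75
print(accuracy_score(y_true, y_pred))  # 0.75
```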
The three common Naive Bayes models: Gaussian, Multinomial, and Bernoulli.
- Gaussian: handles continuous-valued features.
- Multinomial: the most common; requires discrete features (e.g. word counts) and is widely used for text.
- Bernoulli: requires discrete boolean features, i.e. true/false or 1/0.
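A minimal sketch of the three variants on matching data types; all the feature matrices below are made-up toy data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

y = np.array([0, 0, 1, 1])

# Gaussian: continuous features
Xc = np.array([[1.0, 2.1], [1.2, 1.9], [7.8, 8.1], [8.0, 7.9]])
print(GaussianNB().fit(Xc, y).predict([[1.1, 2.0]]))    # [0]

# Multinomial: count features (e.g. word counts)
Xm = np.array([[3, 0, 1], [2, 0, 2], [0, 4, 0], [1, 5, 0]])
print(MultinomialNB().fit(Xm, y).predict([[2, 0, 1]]))  # [0]

# Bernoulli: binary presence/absence features
Xb = np.array([[1, 0, 1], [1, 0, 0], [0, 1, 0], [0, 1, 1]])
print(BernoulliNB().fit(Xb, y).predict([[1, 0, 1]]))    # [0]
```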
https://blog.csdn.net/a8530764/article/details/78894778