数学建模老哥 - Python Basics and Machine Learning (7, Part 2): Classification Algorithms

These are my personal study notes for the Bilibili course 数学建模老哥: 7 Python数学模型选择_哔哩哔哩_bilibili.

Dataset source: 数学建模老哥-python基础和机器学习(四)数据导入+数据理解+数据可视化-CSDN博客

Contents

2.4.3 Classification Algorithms

2.4.3.1 Linear Algorithms

2.4.3.1.1 Logistic Regression

2.4.3.1.2 Linear Discriminant Analysis (LDA)

2.4.3.2 Nonlinear Algorithms

2.4.3.2.1 K-Nearest Neighbors

2.4.3.2.2 Naive Bayes Classifier

2.4.3.2.3 Classification and Regression Trees

2.4.3.2.4 Support Vector Machines

2.4.3.3 Full Code


2.4.3 Classification Algorithms

2.4.3.1 Linear Algorithms
2.4.3.1.1 Logistic Regression

#Logistic Regression
from pandas import read_csv
from sklearn.model_selection import ShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
# Cross-validation parameters
n_splits = 10
test_size = 0.33
seed = 7
# Note: ShuffleSplit draws a fresh random train/test split on every iteration,
# so test sets may overlap across iterations. Setting train_size = 1 - test_size
# only approximates KFold behavior; it is not a strict K-fold split.
kfold = ShuffleSplit(n_splits=n_splits, train_size=1 - test_size, random_state=seed)
# Initialize the logistic regression model
model = LogisticRegression(multi_class='multinomial', max_iter=1100)
# Run cross-validation
result = cross_val_score(model, X, Y, cv=kfold)
# Print mean and standard deviation of the fold scores
print("Accuracy: %.3f%% (%.3f%%)" % (result.mean() * 100, result.std() * 100))

Output:

Accuracy: 76.378% (2.316%)
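
The features in pima.csv sit on very different scales (compare 'pedi' with 'test'), which is presumably why max_iter had to be raised to 1100 for the solver to converge. As a minimal sketch of an alternative (my own addition, not from the course), standardizing the features inside a Pipeline lets the solver converge with default settings, and the scaler is re-fit on each training split so there is no leakage:

#Sketch: logistic regression with standardized features (assumes the same pima.csv layout as above)
from pandas import read_csv
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
array = read_csv('pima.csv', names=names).values
X, Y = array[:, 0:8], array[:, 8]
cv = ShuffleSplit(n_splits=10, train_size=0.67, random_state=7)
# StandardScaler is fit on each training split inside cross_val_score
model = Pipeline([('scale', StandardScaler()), ('lr', LogisticRegression())])
print(cross_val_score(model, X, Y, cv=cv).mean())
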
2.4.3.1.2 Linear Discriminant Analysis (LDA)

#Linear Discriminant Analysis (LDA)
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_fold = 10
seed = 7
# Initialize KFold with the number of splits, a random seed, and shuffling enabled
kfold = KFold(n_splits=num_fold, random_state=seed, shuffle=True)
# Initialize the linear discriminant analysis model
model = LinearDiscriminantAnalysis()
# Cross-validate and report the mean accuracy across folds
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())

Output:

0.7669685577580315
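
Beyond classification, LDA can also act as supervised dimensionality reduction: it projects the data onto at most n_classes - 1 directions that best separate the classes. A short sketch (my addition, not from the course):

#Sketch: LDA as supervised dimensionality reduction
from pandas import read_csv
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
array = read_csv('pima.csv', names=names).values
X, Y = array[:, 0:8], array[:, 8]
lda = LinearDiscriminantAnalysis()
# With two classes the projection has at most n_classes - 1 = 1 dimension
X_1d = lda.fit_transform(X, Y)
print(X_1d.shape)  # (n_samples, 1)
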
2.4.3.2 Nonlinear Algorithms
2.4.3.2.1 K-Nearest Neighbors

#K-Nearest Neighbors -- classify a sample by the majority class of its K most similar neighbors
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_fold = 10
seed = 7
# Initialize KFold with the number of splits, a random seed, and shuffling enabled
kfold = KFold(n_splits=num_fold, random_state=seed, shuffle=True)
model = KNeighborsClassifier()
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())

Output:

0.7109876965140123
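
KNeighborsClassifier uses n_neighbors=5 by default, and the score above depends heavily on that choice. A sketch of tuning K with GridSearchCV over the same KFold (my addition, not from the course):

#Sketch: tuning n_neighbors with GridSearchCV
from pandas import read_csv
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
array = read_csv('pima.csv', names=names).values
X, Y = array[:, 0:8], array[:, 8]
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
# Try odd K values to avoid ties in the majority vote
grid = GridSearchCV(KNeighborsClassifier(), {'n_neighbors': list(range(1, 22, 2))}, cv=kfold)
grid.fit(X, Y)
print(grid.best_params_, grid.best_score_)
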
2.4.3.2.2 Naive Bayes Classifier

#Naive Bayes classifier -- probabilistic: applies Bayes' theorem with a naive feature-independence assumption
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_fold = 10
seed = 7
# Initialize KFold with the number of splits, a random seed, and shuffling enabled
kfold = KFold(n_splits=num_fold, random_state=seed, shuffle=True)
model = GaussianNB()
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())

Output:

0.7591421736158578
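
Because GaussianNB models each feature as a class-conditional Gaussian and combines them with Bayes' theorem, it can return posterior class probabilities rather than just labels. A brief sketch (my addition, not from the course):

#Sketch: posterior probabilities from GaussianNB
from pandas import read_csv
from sklearn.naive_bayes import GaussianNB
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
array = read_csv('pima.csv', names=names).values
X, Y = array[:, 0:8], array[:, 8]
model = GaussianNB()
model.fit(X, Y)
# One row per sample, one column per class; each row sums to 1
print(model.predict_proba(X[:5]))
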
2.4.3.2.3 Classification and Regression Trees

#Classification and Regression Tree (CART) -- splits on the Gini index via yes/no questions; involves tree growing and tree pruning
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_fold = 10
seed = 7
# Initialize KFold with the number of splits, a random seed, and shuffling enabled
kfold = KFold(n_splits=num_fold, random_state=seed, shuffle=True)
model = DecisionTreeClassifier()
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())

Output:

0.696719070403281
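
Two caveats on the score above (my notes, not from the course): a fully grown tree tends to overfit, and DecisionTreeClassifier breaks ties between equally good splits randomly, so the result can vary slightly between runs unless random_state is fixed. A sketch with a depth limit as simple pre-pruning:

#Sketch: pre-pruned CART with a fixed random_state
from pandas import read_csv
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
array = read_csv('pima.csv', names=names).values
X, Y = array[:, 0:8], array[:, 8]
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
# max_depth caps tree growth; 3 is an arbitrary illustrative value
model = DecisionTreeClassifier(max_depth=3, random_state=7)
print(cross_val_score(model, X, Y, cv=kfold).mean())
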
2.4.3.2.4 Support Vector Machines

#Support Vector Machine (SVM)
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_fold = 10
seed = 7
# Initialize KFold with the number of splits, a random seed, and shuffling enabled
kfold = KFold(n_splits=num_fold, random_state=seed, shuffle=True)
model = SVC()
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())

Output:

0.760457963089542
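
SVC defaults to an RBF kernel, which is sensitive to feature scales: large-valued features like 'test' can dominate the kernel distance. A sketch that adds standardization in a Pipeline (my addition, not from the course), which often helps an RBF-SVM on data like this:

#Sketch: SVC with standardized features
from pandas import read_csv
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
array = read_csv('pima.csv', names=names).values
X, Y = array[:, 0:8], array[:, 8]
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
# The scaler is re-fit on each training split inside cross_val_score
model = Pipeline([('scale', StandardScaler()), ('svc', SVC())])
print(cross_val_score(model, X, Y, cv=kfold).mean())
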
2.4.3.3 Full Code
#Logistic Regression
from pandas import read_csv
from sklearn.model_selection import ShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
# Cross-validation parameters
n_splits = 10
test_size = 0.33
seed = 7
# Note: ShuffleSplit draws a fresh random train/test split on every iteration,
# so test sets may overlap across iterations. Setting train_size = 1 - test_size
# only approximates KFold behavior; it is not a strict K-fold split.
kfold = ShuffleSplit(n_splits=n_splits, train_size=1 - test_size, random_state=seed)
# Initialize the logistic regression model
model = LogisticRegression(multi_class='multinomial', max_iter=1100)
# Run cross-validation
result = cross_val_score(model, X, Y, cv=kfold)
# Print mean and standard deviation of the fold scores
print("Accuracy: %.3f%% (%.3f%%)" % (result.mean() * 100, result.std() * 100))

#Linear Discriminant Analysis (LDA)
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_fold = 10
seed = 7
# Initialize KFold with the number of splits, a random seed, and shuffling enabled
kfold = KFold(n_splits=num_fold, random_state=seed, shuffle=True)
# Initialize the linear discriminant analysis model
model = LinearDiscriminantAnalysis()
# Cross-validate and report the mean accuracy across folds
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())

#K-Nearest Neighbors -- classify a sample by the majority class of its K most similar neighbors
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_fold = 10
seed = 7
# Initialize KFold with the number of splits, a random seed, and shuffling enabled
kfold = KFold(n_splits=num_fold, random_state=seed, shuffle=True)
model = KNeighborsClassifier()
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())

#Naive Bayes classifier -- probabilistic: applies Bayes' theorem with a naive feature-independence assumption
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_fold = 10
seed = 7
# Initialize KFold with the number of splits, a random seed, and shuffling enabled
kfold = KFold(n_splits=num_fold, random_state=seed, shuffle=True)
model = GaussianNB()
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())

#Classification and Regression Tree (CART) -- splits on the Gini index via yes/no questions; involves tree growing and tree pruning
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_fold = 10
seed = 7
# Initialize KFold with the number of splits, a random seed, and shuffling enabled
kfold = KFold(n_splits=num_fold, random_state=seed, shuffle=True)
model = DecisionTreeClassifier()
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())

#Support Vector Machine (SVM)
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
filename = 'pima.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_fold = 10
seed = 7
# Initialize KFold with the number of splits, a random seed, and shuffling enabled
kfold = KFold(n_splits=num_fold, random_state=seed, shuffle=True)
model = SVC()
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())
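
The six blocks above repeat the same load-and-score boilerplate. As a compact alternative (my own sketch, not part of the original notes), all six classifiers can be scored over one shared KFold so their means and standard deviations are directly comparable:

#Sketch: compare all six classifiers on one shared KFold
from pandas import read_csv
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
array = read_csv('pima.csv', names=names).values
X, Y = array[:, 0:8], array[:, 8]
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
models = {
    'LR': LogisticRegression(max_iter=1100),
    'LDA': LinearDiscriminantAnalysis(),
    'KNN': KNeighborsClassifier(),
    'NB': GaussianNB(),
    'CART': DecisionTreeClassifier(),
    'SVM': SVC(),
}
for name, model in models.items():
    result = cross_val_score(model, X, Y, cv=kfold)
    print('%s: %.4f (%.4f)' % (name, result.mean(), result.std()))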
