KNN classification (choosing the best value of K, and visualizing model accuracy as a function of n_neighbors)

import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


# Load the breast cancer dataset
cancer = load_breast_cancer()

# Split into training and test sets (stratified to preserve class balance)
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=66, stratify=cancer.target)

training_accuracy = []
test_accuracy = []

# n_neighbors ranges over [1, 10]
neighbors_settings = range(1, 11)

# Build and evaluate a model for each value of n_neighbors
for n_neighbors in neighbors_settings:
    clf = KNeighborsClassifier(n_neighbors=n_neighbors)
    clf.fit(X_train, y_train)
    training_accuracy.append(clf.score(X_train, y_train))  # record training accuracy
    test_accuracy.append(clf.score(X_test, y_test))  # record test (generalization) accuracy

# Print the training and test accuracy for each n_neighbors value
neighbor_dict = {}
for n_neighbors in neighbors_settings:
    neighbor_dict[n_neighbors] = [training_accuracy[n_neighbors - 1], test_accuracy[n_neighbors - 1]]
print(neighbor_dict)

# Plot model accuracy against n_neighbors
plt.plot(neighbors_settings, training_accuracy, label='training_accuracy')
plt.plot(neighbors_settings, test_accuracy, label='test_accuracy')
plt.xlabel('n_neighbors')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Output:
{1: [1.0, 0.9020979020979021], 
 2: [0.9765258215962441, 0.8881118881118881], 
 3: [0.9577464788732394, 0.9230769230769231], 
 4: [0.9553990610328639, 0.9230769230769231], 
 5: [0.9483568075117371, 0.9230769230769231], 
 6: [0.9460093896713615, 0.9370629370629371], 
 7: [0.9436619718309859, 0.9300699300699301], 
 8: [0.9413145539906104, 0.9300699300699301], 
 9: [0.9342723004694836, 0.916083916083916], 
 10: [0.9389671361502347, 0.916083916083916]}
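With the accuracies recorded, the best K on this split can be read off programmatically rather than by eye. A minimal, self-contained sketch — the `test_accuracy` list below uses the rounded test accuracies from the output above, not a fresh run:

```python
# Pick the n_neighbors value with the highest test accuracy.
# test_accuracy holds the rounded test scores printed above (K = 1..10).
test_accuracy = [0.902, 0.888, 0.923, 0.923, 0.923, 0.937, 0.930, 0.930, 0.916, 0.916]

# max over K, keyed by each K's test accuracy; ties resolve to the smallest K.
best_k = max(range(1, len(test_accuracy) + 1), key=lambda k: test_accuracy[k - 1])
print(best_k)  # 6
```

On this run the peak test accuracy (about 0.937) occurs at n_neighbors=6, which matches what the plot shows.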

[Figure: training and test accuracy plotted against n_neighbors]
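One caveat: choosing K by repeatedly scoring the same test split risks overfitting the choice to that split. A more robust variant — not part of the original post, sketched here as an alternative — is to cross-validate over K on the training set with scikit-learn's GridSearchCV, keeping the test set for a single final evaluation:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=66, stratify=cancer.target)

# 5-fold cross-validation over n_neighbors in [1, 10]
param_grid = {'n_neighbors': range(1, 11)}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
grid.fit(X_train, y_train)

print(grid.best_params_)          # K chosen by cross-validation
print(grid.score(X_test, y_test)) # accuracy of the refit model on the held-out test set
```

`grid.fit` refits a final model on the whole training set with the best K, so `grid.score` on the test set is an honest estimate of generalization.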
