Principle and Steps of a KNN Program

This article describes the process of making predictions with the KNN algorithm. First, data are randomly selected and split into a training set and a test set; next, a model is built on the training set with the KNN algorithm; then the X values of the test set are used for prediction, and the predictions are compared with the actual Y values. The example code uses the sklearn library to load the iris dataset, splits the data randomly, and classifies with KNeighborsClassifier.

Explanation of the KNN Program

Aim: use KNN to predict the target values of the test data.

1. Randomly draw a group of data and split it into two parts: training data and test data. In addition, every sample has an X value (features) and a Y value (label).

2. Use the training data to fit a model with the KNN algorithm.

3. Input the X values of the test data to make predictions.

4. Compare the predicted results with the Y values of the test data.
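The four steps above can be sketched on a small synthetic dataset. This is a minimal sketch, assuming made-up random features; it uses sklearn's train_test_split as an alternative to the manual permutation split used in the program below:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# step 1: a random group of data; each sample has X (features) and Y (label)
rng = np.random.RandomState(0)
X = rng.rand(100, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10, random_state=0)

# step 2: fit a model on the training data with KNN
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)

# step 3: predict from the X values of the test data
pred = knn.predict(X_test)

# step 4: compare predictions with the true Y values
print(pred == y_test)
```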

 

Program

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import numpy as np
from sklearn import datasets

iris = datasets.load_iris()
# load the iris dataset
iris_X = iris.data
iris_y = iris.target
print(np.unique(iris_y))  # the distinct class labels
# Split iris data in train and test data
# A random permutation, to split the data randomly
np.random.seed(0)
# permutation generates a random ordering of the indices
indices= np.random.permutation(len(iris_X))
# use the random permutation to split the data into a training set and a test set
iris_X_train = iris_X[indices[:-10]]
iris_y_train = iris_y[indices[:-10]]
iris_X_test = iris_X[indices[-10:]]
iris_y_test = iris_y[indices[-10:]]
print(iris_y_test)
# Create and fit a nearest-neighbor classifier
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier()
knn.fit(iris_X_train, iris_y_train)

# In an interactive session, fit() echoes the classifier's defaults:
# KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
#                      metric_params=None, n_jobs=1, n_neighbors=5, p=2,
#                      weights='uniform')
print(knn.predict(iris_X_test))  # predicted labels for the test set
print(iris_y_test)               # actual labels, for comparison
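To make the comparison in step 4 quantitative, the fraction of correct predictions can be computed with sklearn's accuracy_score. The sketch below repeats the load-and-split from the program so it runs on its own; knn.score(iris_X_test, iris_y_test) would report the same number:

```python
import numpy as np
from sklearn import datasets
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier

# same data and permutation split as in the program above
iris = datasets.load_iris()
np.random.seed(0)
indices = np.random.permutation(len(iris.data))
X_train, y_train = iris.data[indices[:-10]], iris.target[indices[:-10]]
X_test, y_test = iris.data[indices[-10:]], iris.target[indices[-10:]]

knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)

# fraction of test samples whose predicted label matches the true label
acc = accuracy_score(y_test, y_pred)
print(acc)
```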
