A NumPy-based Python Implementation of KNN

After learning KNN, I implemented the algorithm myself with NumPy instead of relying on scikit-learn's ready-made package. You can paste the code blocks below in order and run them yourself.

Import the required packages

import time  # to measure how long the algorithm takes to run
import numpy as np
import scipy as sp
import scipy.stats  # importing scipy alone does not load the stats submodule
import pandas as pd
# keep NumPy from printing in scientific notation
np.set_printoptions(suppress=True)

The main KNN code

class KNN(object):
    """
    Classifier implementing the k-nearest neighbors vote.
    Parameters
    ----------
    n_neighbors: int, required, default=None
        Number of neighbors to use by default for :meth:`kneighbors` queries.
    ----------
    
    function1-fit: Fit the model by k-nearest neighbors.
    Parameters
    ----------
    X_train: train dataset
    y_train: label of X_train
    ----------
    
    function2-predict: return x_test's classification result after KNN process.
    Parameters
    ----------
    y : x_test
    ----------
    """
    
    def __init__(self, n_neighbors):
        self.n_neighbors = n_neighbors
        self._X_train = None
        self._y_train = None

    def fit(self, X_train, y_train):
        self._X_train = X_train
        self._y_train = y_train
        return self

    def predict(self, X_test):
        # distance from every test point to every training point
        distances = [np.linalg.norm(x_test - self._X_train, ord=2, axis=1) for x_test in X_test.values]
        # sort by distance and keep only the n_neighbors closest training points
        distances_sort = [np.argsort(distance)[0:self.n_neighbors] for distance in distances]
        # look up the labels of those points
        target = [self._y_train[distance_s] for distance_s in distances_sort]
        # the most common label among the neighbors wins the vote
        class_result = sp.stats.mode(target, axis=1)[0].flatten()
        del distances, distances_sort, target  # free the intermediate lists
        return class_result
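To see what the distance step inside `predict` is doing, here is the same broadcasting trick on a tiny made-up dataset (the arrays below are invented for illustration, not from the post):

```python
import numpy as np

X_train = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
y_train = np.array([0, 0, 1])
x_test = np.array([0.5, 0.0])

# Broadcasting: x_test - X_train subtracts the query point from every
# training row; the L2 norm along axis=1 then gives one distance per row.
distances = np.linalg.norm(x_test - X_train, ord=2, axis=1)

# argsort ranks training points by distance; take the k nearest labels
k = 2
nearest = np.argsort(distances)[:k]
print(y_train[nearest])
```

The query sits 0.5 away from the two class-0 points and far from the class-1 point, so the two nearest labels are both 0.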

Drawback: the Python loops mean the performance is not fully optimized.
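One way to cut those loops is to compute every test-to-train distance in a single broadcast. This is a sketch of my own (`knn_predict_vectorized` is a made-up name, not from the post), and note the trade-off: the intermediate array costs n_test × n_train × n_features memory.

```python
import numpy as np
from scipy import stats

def knn_predict_vectorized(X_train, y_train, X_test, n_neighbors):
    """Predict labels for X_test with a fully vectorized k-NN vote."""
    X_train = np.asarray(X_train, dtype=float)
    X_test = np.asarray(X_test, dtype=float)
    # (n_test, 1, n_feat) - (1, n_train, n_feat) -> (n_test, n_train, n_feat)
    diff = X_test[:, None, :] - X_train[None, :, :]
    dist = np.linalg.norm(diff, axis=2)              # (n_test, n_train)
    nearest = np.argsort(dist, axis=1)[:, :n_neighbors]
    votes = np.asarray(y_train)[nearest]             # labels of the k nearest
    # ravel() smooths over the shape difference between old and new scipy
    return stats.mode(votes, axis=1)[0].ravel()

pred = knn_predict_vectorized(
    np.array([[0, 0], [0, 1], [5, 5], [5, 6]]),
    np.array([0, 0, 1, 1]),
    np.array([[0, 0.5], [5, 5.5]]),
    n_neighbors=3,
)
print(pred)
```

Each query point lands next to one cluster, so the vote returns class 0 for the first query and class 1 for the second.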

Test it on a dataset:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
iris.data = pd.DataFrame(iris.data)
iris.target = pd.DataFrame(iris.target)
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.4, random_state=1)

from sklearn.metrics import accuracy_score

start_time = time.time()

knn = KNN(n_neighbors=3)
knn.fit(X_train, y_train.values.ravel())
y_train_knn = knn.predict(X_train)
y_test_knn = knn.predict(X_test)
print("Use custom KNN algorithm\naccuracy on train set: ",accuracy_score(y_train,y_train_knn ),"\naccuracy on test set: ",
    accuracy_score(y_test,y_test_knn))

print("--- %s seconds ---" % (time.time() - start_time))

As you can see, the accuracy is above 90%.

Now compare it with sklearn's built-in KNeighborsClassifier:

from sklearn.neighbors import KNeighborsClassifier
start_time = time.time()

knn2 = KNeighborsClassifier(n_neighbors=3)
knn2.fit(X_train, y_train.values.ravel())
y_train_knn = knn2.predict(X_train)
y_test_knn = knn2.predict(X_test)
print("Use sklearn KNN algorithm\naccuracy on train set: ",accuracy_score(y_train,y_train_knn ),"\naccuracy on test set: ",
    accuracy_score(y_test,y_test_knn))

print("--- %s seconds ---" % (time.time() - start_time))

The accuracy is identical to the handwritten KNN, but sklearn runs about twice as fast. That is most likely down to the Python loops inside the custom algorithm; replacing them with matrix operations should close the gap.

Finally, look at the classification result on the test set with a confusion matrix:

import matplotlib.pyplot as plt
import seaborn as sns
con_matrix = pd.crosstab(pd.Series(y_test.values.flatten(), name='Actual'), pd.Series(y_test_knn, name='Predicted'))
plt.title("Test set Confusion Matrix on KNN")
sns.heatmap(con_matrix, cmap="Greys", annot=True, fmt='g')
plt.show()
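If you would rather not build the crosstab by hand, scikit-learn ships an equivalent helper, `sklearn.metrics.confusion_matrix`. A standalone sketch with its own toy labels (invented here for illustration, since this snippet doesn't reuse the variables above):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2])  # toy "actual" labels
y_pred = np.array([0, 0, 1, 2, 2, 2])  # one class-1 sample misclassified as 2

# rows = actual class, columns = predicted class
cm = confusion_matrix(y_true, y_pred)
print(cm)
```

The resulting matrix can be passed straight to `sns.heatmap` just like the crosstab.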


The core algorithm code is up on GitHub; if you are interested, feel free to take a look:
https://github.com/JuneYaooo/ml-algorithms
