K-Nearest Neighbors (KNN): a Python Implementation

  • The idea is extremely simple
  • It requires very little mathematics (almost none)
  • It works well in practice

The k-Nearest Neighbors algorithm

import numpy as np
from math import sqrt
from collections import Counter
from ml_utils.metrics import accuracy_score
class kNN_classify:
    def __init__(self, k):
        """Initialize the kNN classifier."""
        assert k >= 1, "k must be valid"
        self.k = k
        self._X_train = None
        self._y_train = None
    def fit(self, X_train, y_train):
        """Fit the kNN classifier with the training set X_train and y_train."""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        assert self.k <= X_train.shape[0], \
            "the size of X_train must be at least k"
        self._X_train = X_train
        self._y_train = y_train
        return self
    def predict(self, X_predict, weight='distance'):
        """Given a matrix X_predict of samples to classify, return the vector of predictions."""
        assert self._X_train is not None and self._y_train is not None, \
            "must fit before predict!"
        assert X_predict.shape[1] == self._X_train.shape[1], \
            "the feature number of X_predict must be equal to that of X_train"
        y_predict = [self._predict(x, weight) for x in X_predict]
        return np.array(y_predict)
    def _predict(self, x, weight):
        """Given a single sample x, return its predicted label."""
        assert x.shape[0] == self._X_train.shape[1], \
            "the feature number of x must be equal to that of X_train"
        # Euclidean distance from x to every training sample
        distances = [sqrt(np.sum((x_train - x) ** 2)) for x_train in self._X_train]
        # argsort returns the indices that sort the distances in ascending order
        nearest = np.argsort(distances)
        # labels of the k nearest training samples
        topk_y = [self._y_train[i] for i in nearest[:self.k]]
        if weight == 'distance':
            # distance-weighted vote: the closer a neighbor, the larger its weight
            topk_w = [1 / distances[i] for i in nearest[:self.k]]
            weight_by_class = {}
            for i in range(len(topk_y)):
                key = topk_y[i]
                if key not in weight_by_class:
                    weight_by_class[key] = topk_w[i]
                else:
                    weight_by_class[key] += topk_w[i]
            # return the class whose neighbors accumulate the largest total weight
            max_class = None
            max_weight = 0.0
            for c in weight_by_class:
                if weight_by_class[c] > max_weight:
                    max_weight = weight_by_class[c]
                    max_class = c
            return max_class

        # uniform vote: count the labels of the k nearest neighbors
        votes = Counter(topk_y)
        # return the most common label
        return votes.most_common(1)[0][0]
    def score(self, X_test, y_test, weight='distance'):
        """Evaluate the accuracy of the model on the test set X_test and y_test."""
        y_predict = self.predict(X_test, weight)
        return accuracy_score(y_test, y_predict)

    def __repr__(self):
        return "KNN(k=%d)" % self.k



The core of KNN is a distance metric. The Euclidean distance between two points $a$ and $b$ in the plane is $\sqrt{(x_1^a-x_1^b)^2+(x_2^a-x_2^b)^2}$, which generalizes in $n$ dimensions to $\sqrt{\sum_{i=1}^{n}(x_i^a-x_i^b)^2}$.
Manhattan distance: $\sum_{i=1}^{n}|X_i^a-X_i^b|$
Minkowski distance: $\left(\sum_{i=1}^{n}|X_i^a-X_i^b|^p\right)^{1/p}$
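
All three metrics are one-liners in NumPy; a minimal sketch with two made-up vectors:

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 6.0, 3.0])

euclidean = np.sqrt(np.sum((a - b) ** 2))          # sqrt(9 + 16 + 0) = 5.0
manhattan = np.sum(np.abs(a - b))                  # 3 + 4 + 0 = 7.0
p = 3
minkowski = np.sum(np.abs(a - b) ** p) ** (1 / p)  # 91 ** (1/3) ≈ 4.498

print(euclidean, manhattan, minkowski)

With p = 1 the Minkowski distance reduces to the Manhattan distance, and with p = 2 to the Euclidean distance.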

1. Compute the distance from every training sample to the sample to be predicted.

distances = [sqrt(np.sum((x_train - x) ** 2)) for x_train in self._X_train]

2. Find the labels of the k nearest samples.

# argsort returns the indices that sort the distances in ascending order
nearest = np.argsort(distances)
# labels of the k nearest training samples
topk_y = [self._y_train[i] for i in nearest[:self.k]]

3. Count the votes among those labels and return the most common class as the prediction.

# uniform vote: count the labels of the k nearest neighbors
votes = Counter(topk_y)
# return the most common label
return votes.most_common(1)[0][0]
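
Putting the three steps together outside the class, a small standalone sketch on toy data (k chosen arbitrarily):

import numpy as np
from math import sqrt
from collections import Counter

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1, 0])
x = np.array([1.1, 1.0])   # the sample to classify
k = 3

# 1. distance from x to every training sample
distances = [sqrt(np.sum((x_train - x) ** 2)) for x_train in X_train]
# 2. labels of the k nearest samples
nearest = np.argsort(distances)
topk_y = [y_train[i] for i in nearest[:k]]
# 3. majority vote
print(Counter(topk_y).most_common(1)[0][0])   # expected: 0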

Because the given data may be ordered by class and we only have one dataset, it has to be shuffled and split proportionally into training data, training labels, test data, and test labels.

def train_test_split(X, y, test_ratio=0.2, seed=None):
    """Split X and y into X_train, X_test, y_train, y_test according to test_ratio."""
    assert X.shape[0] == y.shape[0], \
        "the size of X must be equal to the size of y"
    assert 0.0 <= test_ratio <= 1.0, \
        "test_ratio must be valid"
    if seed is not None:
        # make the shuffle reproducible
        np.random.seed(seed)
    # a random permutation of the sample indices
    shuffle_indexes = np.random.permutation(len(X))
    # size of the test split
    test_size = int(len(X) * test_ratio)
    # split the shuffled indices into a test part and a train part
    test_indexes = shuffle_indexes[:test_size]
    train_indexes = shuffle_indexes[test_size:]
    # index back into the data with the shuffled indices
    x_train = X[train_indexes]
    y_train = y[train_indexes]
    x_test = X[test_indexes]
    y_test = y[test_indexes]
    return x_train, y_train, x_test, y_test

def accuracy_score(y_true, y_predict):
    """Compute the accuracy between y_true and y_predict."""
    assert y_true.shape[0] == y_predict.shape[0], \
        "the size of y_true must be equal to the size of y_predict"
    return sum(y_true == y_predict) / len(y_true)
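
A quick sketch of how the two helpers behave on a toy dataset (the values are made up; the split sizes are the point):

import numpy as np

X = np.arange(20).reshape(10, 2)   # 10 samples, 2 features
y = np.arange(10)

x_train, y_train, x_test, y_test = train_test_split(X, y, test_ratio=0.2, seed=42)
print(x_train.shape, x_test.shape)   # (8, 2) (2, 2)

# accuracy_score simply counts matching labels
print(accuracy_score(np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0])))  # 0.75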

Test

import matplotlib
import matplotlib.pyplot as plt
from sklearn import datasets

if __name__ == '__main__':
    digits = datasets.load_digits()
    x = digits.data
    y = digits.target
    # visualize one sample as an 8x8 image
    some_digit = x[777]
    some_digit_image = some_digit.reshape(8, 8)
    plt.imshow(some_digit_image, cmap=matplotlib.cm.binary)
    plt.show()
    x_train, y_train, x_test, y_test = train_test_split(x, y)
    knnclassify = kNN_classify(k=3)
    knnclassify.fit(x_train, y_train)
    print(knnclassify.score(x_test, y_test))
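
For comparison, a standalone sketch of the same experiment with scikit-learn's built-in KNeighborsClassifier (assuming scikit-learn is installed; note that sklearn's train_test_split returns its splits in a different order than the helper above):

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = datasets.load_digits()
X_tr, X_te, y_tr, y_te = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)

knn = KNeighborsClassifier(n_neighbors=3, weights='distance')
knn.fit(X_tr, y_tr)
print(knn.score(X_te, y_te))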

If each vote is instead weighted by the reciprocal of its distance, so that the farther a neighbor is, the lower its weight, _predict takes the following branch:

        if weight == 'distance':
            # distance-weighted vote: the closer a neighbor, the larger its weight
            topk_w = [1 / distances[i] for i in nearest[:self.k]]
            weight_by_class = {}
            for i in range(len(topk_y)):
                key = topk_y[i]
                if key not in weight_by_class:
                    weight_by_class[key] = topk_w[i]
                else:
                    weight_by_class[key] += topk_w[i]
            # return the class whose neighbors accumulate the largest total weight
            max_class = None
            max_weight = 0.0
            for c in weight_by_class:
                if weight_by_class[c] > max_weight:
                    max_weight = weight_by_class[c]
                    max_class = c
            return max_class

Tuning (hyperparameters)

  1. The Minkowski distance exponent p
  2. The vote weighting (uniform vs. distance-weighted); a grid-search sketch over both follows below
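
A sketch of such a search with scikit-learn's GridSearchCV, over the number of neighbors, the vote weighting, and the Minkowski exponent p (assuming scikit-learn is installed; the grid values are arbitrary):

from sklearn import datasets
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = datasets.load_digits()
X_tr, X_te, y_tr, y_te = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)

param_grid = [
    {'weights': ['uniform'], 'n_neighbors': list(range(1, 11))},
    {'weights': ['distance'], 'n_neighbors': list(range(1, 11)), 'p': [1, 2, 3]},
]
grid = GridSearchCV(KNeighborsClassifier(), param_grid, n_jobs=-1)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.best_score_)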

What can KNN do?

KNN is simple yet effective.

  • Classification (including multi-class)
  • Regression (KNeighborsRegressor)

Drawbacks:

  1. Inefficient: with m training samples and n features, predicting a single new point costs O(m*n)
  2. Highly dependent on the data
  3. Predictions are not interpretable
  4. Curse of dimensionality: as the number of dimensions grows, two "seemingly close" points get farther and farther apart (verified in the sketch below)

  • 1-D: the distance from 0 to 1 is 1
  • 2-D: from (0, 0) to (1, 1) it is 1.414
  • 3-D: from (0, 0, 0) to (1, 1, 1) it is 1.73
  • 64-D: from (0, 0, …, 0) to (1, 1, …, 1) it is 8
  • 10000-D: from (0, 0, …, 0) to (1, 1, …, 1) it is 100
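
These numbers are just the length of the diagonal of the d-dimensional unit hypercube, i.e. sqrt(d); a short check:

import numpy as np

for d in [1, 2, 3, 64, 10000]:
    # Euclidean distance from the origin to the all-ones corner of a d-dimensional unit cube
    print(d, np.sqrt(np.sum((np.ones(d) - np.zeros(d)) ** 2)))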