1. Hand-Written KNN for Classification and Regression
All of the code below was run on a Huawei Cloud HECS server.

yum install python-pip
pip install numpy

Use a domestic (China) PyPI mirror, otherwise downloads are painfully slow:

mkdir -p ~/.pip
vi ~/.pip/pip.conf

[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple

pip install numpy
pip config list

Much faster now. Don't forget to install this as well:

pip install scikit-learn
1.1 Classification Problem
from sklearn import datasets
from collections import Counter
from sklearn.model_selection import train_test_split
import numpy as np
1.1.1 Data Preprocessing

Standardization and normalization: you usually pick one of the two.
# load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=500)

from sklearn.preprocessing import StandardScaler, Normalizer

# # Standardization
# scaler = StandardScaler().fit(X_train)
# X_train = scaler.transform(X_train)
# X_test = scaler.transform(X_test)

# Normalization
scaler = Normalizer().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
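One subtlety worth flagging here (easy to miss): sklearn's Normalizer rescales each sample (each row) to unit L2 norm; it does not scale features column-wise the way StandardScaler does. A minimal sketch of the difference:

```python
import numpy as np
from sklearn.preprocessing import Normalizer, StandardScaler

X = np.array([[3.0, 4.0],
              [1.0, 0.0]])

# Normalizer: each ROW is divided by its own L2 norm
X_norm = Normalizer().fit_transform(X)

# StandardScaler: each COLUMN gets zero mean and unit variance
X_std = StandardScaler().fit_transform(X)

print(X_norm[0])                       # [0.6 0.8] -> row [3, 4] divided by 5
print(np.linalg.norm(X_norm, axis=1))  # every row now has norm 1
print(X_std.mean(axis=0))              # column means ~ 0
```

So with Normalizer, only the direction of each sample survives; whether that helps depends on the dataset.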
1.1.2 Hand-Written Distance Function

Euclidean distance | Manhattan distance | Chebyshev distance

All three are implemented through the single Minkowski distance formula.
def min_dis(instance1, instance2, p):
    # Chebyshev distance: the p -> infinity limit of the Minkowski distance
    if p == float('inf'):
        return np.max(np.abs(instance1 - instance2))
    # dist = np.sqrt(sum(instance1 - instance2)**2)                  # buggy: squares the sum of differences
    # dist = np.power((sum(np.abs(instance1 - instance2))**p), 1/p)  # buggy: raises the whole sum to p
    dist = np.power(np.sum(np.abs(instance1 - instance2) ** p), 1 / p)
    return dist
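A quick hand-checked sanity test of min_dis (a self-contained copy of the function above), with the vectors chosen so the answers are easy to verify by hand:

```python
import numpy as np

def min_dis(instance1, instance2, p):
    # Chebyshev distance is the p -> inf limit of the Minkowski distance
    if p == float('inf'):
        return np.max(np.abs(instance1 - instance2))
    return np.power(np.sum(np.abs(instance1 - instance2) ** p), 1 / p)

a = np.array([1.0, 2.0])
b = np.array([4.0, 6.0])            # |diffs| = [3, 4]
print(min_dis(a, b, 2))             # 5.0  (Euclidean: sqrt(9 + 16))
print(min_dis(a, b, 1))             # 7.0  (Manhattan: 3 + 4)
print(min_dis(a, b, float('inf')))  # 4.0  (Chebyshev: max(3, 4))
```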
1.1.3 Hand-Written Classifier

def knn_classify(X, y, testInstance, k):
    # p_type is the Minkowski order chosen globally below
    distances = [min_dis(x, testInstance, p_type) for x in X]
    kneighbors = np.argsort(distances)[:k]  # indices of the k nearest training points
    count = Counter(y[kneighbors])
    return count.most_common()[0][0]        # majority label
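To see how argsort plus Counter implement the majority vote, here is a toy run with made-up distances and labels:

```python
import numpy as np
from collections import Counter

distances = [0.9, 0.1, 0.5, 0.3, 0.7]  # distance from the query to each training point
y = np.array([0, 1, 1, 0, 1])          # training labels (invented for illustration)
k = 3

kneighbors = np.argsort(distances)[:k]  # indices of the 3 smallest distances: 1, 3, 2
votes = Counter(y[kneighbors])          # their labels: 1, 0, 1
print(kneighbors)                       # [1 3 2]
print(votes.most_common()[0][0])        # 1 -> the majority label wins
```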
1.1.4 Prediction Results

Define the hyperparameters, choose a distance formula, and use accuracy as the evaluation metric.
1) Euclidean distance

# Hyperparameters
p_types = {'ecu': 2, 'man': 1, 'xf': float('inf')}
p_type = p_types['ecu']
k = 3

predictions = [knn_classify(X_train, y_train, data, k) for data in X_test]
correct = np.count_nonzero(predictions == y_test)
print("Accuracy is: %.3f" % (correct / len(X_test)))

Results on raw, standardized, and normalized data, in that order:
2) Manhattan distance

# Hyperparameters
p_types = {'ecu': 2, 'man': 1, 'xf': float('inf')}
p_type = p_types['man']
k = 3

predictions = [knn_classify(X_train, y_train, data, k) for data in X_test]
correct = np.count_nonzero(predictions == y_test)
print("Accuracy is: %.3f" % (correct / len(X_test)))

Results on raw, standardized, and normalized data, in that order:
3) Chebyshev distance

# Hyperparameters
p_types = {'ecu': 2, 'man': 1, 'xf': float('inf')}
p_type = p_types['xf']
k = 3

predictions = [knn_classify(X_train, y_train, data, k) for data in X_test]
correct = np.count_nonzero(predictions == y_test)
print("Accuracy is: %.3f" % (correct / len(X_test)))

Results on raw, standardized, and normalized data, in that order:
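As a cross-check on the hand-written classifier, sklearn's built-in KNeighborsClassifier can be run on the same split (a sketch; n_neighbors=3 and p=2 match the Euclidean setting above):

```python
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=500)

clf = KNeighborsClassifier(n_neighbors=3, p=2)  # p=2 -> Euclidean metric
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print("sklearn accuracy: %.3f" % acc)
```

If the hand-written version disagrees noticeably, that usually points to a bug in the distance or voting code.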
1.2 Regression Problem

For regression, instead of taking a majority vote over the neighbors, we take the average of the k nearest neighbors' target values as the prediction.
from sklearn import datasets
from sklearn.model_selection import train_test_split
import numpy as np
1.2.1 Data Preprocessing

Standardization and normalization: you usually pick one of the two.
# load data
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=500)

from sklearn.preprocessing import StandardScaler, Normalizer

# # Standardization
# scaler = StandardScaler().fit(X_train)
# X_train = scaler.transform(X_train)
# X_test = scaler.transform(X_test)

# Normalization
scaler = Normalizer().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
1.2.2 Hand-Written Distance Function

Euclidean distance | Manhattan distance | Chebyshev distance

All three are implemented through the single Minkowski distance formula.
def min_dis(instance1, instance2, p):
    # Chebyshev distance: the p -> infinity limit of the Minkowski distance
    if p == float('inf'):
        return np.max(np.abs(instance1 - instance2))
    # dist = np.sqrt(sum(instance1 - instance2)**2)                  # buggy: squares the sum of differences
    # dist = np.power((sum(np.abs(instance1 - instance2))**p), 1/p)  # buggy: raises the whole sum to p
    dist = np.power(np.sum(np.abs(instance1 - instance2) ** p), 1 / p)
    return dist
1.2.3 Hand-Written Regressor

def knn_regression(X, y, testInstance, k):
    distances = [min_dis(x, testInstance, p_type) for x in X]
    kneighbors = np.argsort(distances)[:k]
    # average the neighbors' TARGET values, not their indices
    return np.mean(y[kneighbors])
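The averaging step is easy to get wrong: np.mean(kneighbors) would average the neighbor indices rather than their targets. A toy check with made-up distances and targets:

```python
import numpy as np

distances = np.array([0.9, 0.1, 0.5, 0.3])
y = np.array([10.0, 20.0, 30.0, 40.0])  # target values (invented for illustration)
k = 2

kneighbors = np.argsort(distances)[:k]  # nearest two points: indices 1 and 3
print(np.mean(kneighbors))              # 2.0  -> meaningless: the mean of the indices
print(np.mean(y[kneighbors]))           # 30.0 -> the mean of targets 20 and 40
```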
1.2.4 Prediction Results

Define the hyperparameters, choose a distance formula, and use RMSE as the evaluation metric.
Hand-written RMSE:

def rmse(instance1, instance2):
    instance1 = np.asarray(instance1)
    instance2 = np.asarray(instance2)
    # sqrt of the MEAN of squared errors
    return np.sqrt(np.mean((instance1 - instance2) ** 2))
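Squaring the sum of errors instead of summing the squared errors is an easy slip in this formula; a correct RMSE can be cross-checked against sklearn's mean_squared_error:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def rmse(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return np.sqrt(np.mean((a - b) ** 2))

pred = [2.0, 4.0, 6.0]
true = [1.0, 5.0, 7.0]                       # errors: 1, -1, -1

print(rmse(pred, true))                      # sqrt((1 + 1 + 1) / 3) = 1.0
print(np.sqrt(mean_squared_error(true, pred)))  # 1.0, matches
```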
1) Euclidean distance

# Hyperparameters
p_types = {'ecu': 2, 'man': 1, 'xf': float('inf')}
p_type = p_types['ecu']
k = 3

predictions = [knn_regression(X_train, y_train, data, k) for data in X_test]
res = rmse(predictions, y_test)
print("rmse is %.3f" % res)

Results on raw, standardized, and normalized data, in that order:
2) Manhattan distance

# Hyperparameters
p_types = {'ecu': 2, 'man': 1, 'xf': float('inf')}
p_type = p_types['man']
k = 3

predictions = [knn_regression(X_train, y_train, data, k) for data in X_test]
res = rmse(predictions, y_test)
print("rmse is %.3f" % res)

Results on raw, standardized, and normalized data, in that order:
3) Chebyshev distance

# Hyperparameters
p_types = {'ecu': 2, 'man': 1, 'xf': float('inf')}
p_type = p_types['xf']
k = 3

predictions = [knn_regression(X_train, y_train, data, k) for data in X_test]
res = rmse(predictions, y_test)
print("rmse is %.3f" % res)

Results on raw, standardized, and normalized data, in that order:
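Likewise, sklearn's KNeighborsRegressor provides a reference RMSE on the same diabetes split (a sketch; n_neighbors=3 and p=2 match the Euclidean setting):

```python
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

diabetes = datasets.load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(
    diabetes.data, diabetes.target, random_state=500)

reg = KNeighborsRegressor(n_neighbors=3, p=2)  # averages the 3 nearest targets
reg.fit(X_train, y_train)
pred = reg.predict(X_test)
res = np.sqrt(np.mean((pred - y_test) ** 2))
print("sklearn rmse: %.3f" % res)
```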
1.3 Q&A

1) A question I ran into: what is the difference between sum() and np.sum()? Why do the two variants of the distance formula give completely different Euclidean distances, while the Manhattan and Chebyshev results are identical?

To be answered. Discussion welcome.
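One plausible explanation (my guess, worth checking): for 1-D arrays, sum() and np.sum() actually agree; what differs between the two commented formulas is where the exponent is applied. sum(|d|)**p under the 1/p root collapses back to sum(|d|) for any p, so it coincides with the correct Minkowski distance only when p = 1 (Manhattan), while Chebyshev goes through the separate max branch and is untouched. A small demo:

```python
import numpy as np

d = np.abs(np.array([1.0, 2.0]) - np.array([4.0, 6.0]))  # |diffs| = [3, 4]

for p in (1, 2):
    wrong = (np.sum(d) ** p) ** (1 / p)  # exponent applied AFTER the sum: always sum(|d|)
    right = np.sum(d ** p) ** (1 / p)    # exponent applied BEFORE the sum: true Minkowski
    print(p, wrong, right)
# p=1: 7.0 7.0 -> Manhattan agrees either way
# p=2: 7.0 5.0 -> Euclidean differs
```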