CS231n: Image Classification with kNN (assignment1)
(http://cs231n.github.io/assignments2018/assignment1/)

I. Building the model
The code below defines the classifier; save it as knn.py.

import numpy as np

class KNearestNeighbor:

    def __init__(self):
        pass
    def train(self,X,y):
        self.X_train=X
        self.y_train=y

    def predict(self,X,k=1,num_loops=0):
        # Dispatch to one of three Euclidean-distance implementations. X is the
        # test data, k is the number of nearest neighbors to vote (default 1),
        # and num_loops selects which implementation to use (default: the fully
        # vectorized one). The resulting distances are stored in dists.
        if num_loops== 0:
            dists=self.compute_distances_no_loops(X)
        elif num_loops==1:
            dists=self.compute_distances_one_loop(X)
        elif num_loops==2:
            dists=self.compute_distances_two_loops(X)
        else:
            raise ValueError('Invalid value %d for num_loops' % num_loops)
        return self.predict_labels(dists,k=k)

    def compute_distances_two_loops(self,X):
        num_test=X.shape[0]
        num_train=self.X_train.shape[0]
        dists=np.zeros((num_test,num_train))
        for i in range(num_test):
            for j in range(num_train):
                dists[i,j]=np.sqrt(np.sum((X[i,:]-self.X_train[j,:])**2))
        return dists

    def compute_distances_one_loop(self,X):
        num_test=X.shape[0]
        num_train=self.X_train.shape[0]
        dists=np.zeros((num_test,num_train))
        for i in range(num_test):
            dists[i,:]=np.sqrt(np.sum(np.square(self.X_train-X[i,:]),axis=1))
        return dists

    def compute_distances_no_loops(self,X):
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        test_sum=np.sum(np.square(X),axis=1)  
        train_sum=np.sum(np.square(self.X_train),axis=1)
        inner_product=np.dot(X,self.X_train.T)
        dists=np.sqrt(-2*inner_product+test_sum.reshape(-1,1)+train_sum)
        return dists

    def predict_labels(self,dists,k=1):
        # Sort the distances and vote among the k closest training images.
        num_test=dists.shape[0]
        y_pred=np.zeros(num_test)
        for i in range(num_test):
            y_indices=np.argsort(dists[i,:])        # training indices, nearest first
            closest_y=self.y_train[y_indices[:k]]   # labels of the k nearest neighbors
            # Majority vote; ties go to the smaller label. With k=1 this reduces
            # to the label of the single nearest neighbor.
            y_pred[i]=np.argmax(np.bincount(closest_y))
        return y_pred
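A note on compute_distances_no_loops: it vectorizes the squared Euclidean distance via the expansion ||x - y||^2 = ||x||^2 - 2*x.y + ||y||^2, so the whole test-by-train distance matrix comes from two squared-norm vectors plus one matrix product, with broadcasting doing the sums. A minimal sketch (random data, arbitrary shapes) to check that the three implementations agree:

import numpy as np
from knn import KNearestNeighbor

# Small random data; the shapes are arbitrary, just for the check.
rng=np.random.RandomState(0)
clf=KNearestNeighbor()
clf.train(rng.rand(20,12),np.zeros(20,dtype=int))
x=rng.rand(8,12)

d2=clf.compute_distances_two_loops(x)
d1=clf.compute_distances_one_loop(x)
d0=clf.compute_distances_no_loops(x)
print(np.allclose(d2,d1),np.allclose(d2,d0))   # expect: True True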

II. Loading the data
Download the CIFAR-10 dataset (http://www.cs.toronto.edu/~kriz/cifar.html); choose the first, Python version of the dataset and unpack it into the cs231n/datasets folder. Save the following file as data_utils.py.

import pickle
import numpy as np
import os

def load_cifar_batch(filename):
    with open(filename,'rb') as f:
        datadict=pickle.load(f,encoding='latin1')
        x=datadict['data']
        y=datadict['labels']
        # Each row of x stores one image as 3072 bytes: 1024 red values, then
        # 1024 green, then 1024 blue, each channel in row-major order. Reshape
        # to (N, C, H, W) and transpose to the (N, H, W, C) layout that
        # matplotlib expects.
        x=x.reshape(10000,3,32,32).transpose(0,2,3,1).astype('float')
        y=np.array(y)
        return x,y

def load_cifar10(ROOT):
    xs=[]
    ys=[]
    for b in range(1,6):
        f=os.path.join(ROOT,'data_batch_%d' % (b,))
        x,y=load_cifar_batch(f)
        xs.append(x)
        ys.append(y)
    Xtrain=np.concatenate(xs)  # stack the 5 training batches into one array
    Ytrain=np.concatenate(ys)
    del x,y
    Xtest,Ytest=load_cifar_batch(os.path.join(ROOT,'test_batch'))  # load the single test batch
    return Xtrain,Ytrain,Xtest,Ytest
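The reshape and transpose in load_cifar_batch match the layout documented on the CIFAR-10 page: each 3072-byte row holds the 1024 red values first, then 1024 green, then 1024 blue, with each channel stored row-major. A minimal sanity check of that conversion (the batch path is an assumption; adjust it to wherever you unpacked the archive):

import pickle
import numpy as np

with open('cs231n/datasets/cifar-10-batches-py/data_batch_1','rb') as f:
    d=pickle.load(f,encoding='latin1')
raw=d['data']                                      # (10000, 3072), uint8
img=raw.reshape(10000,3,32,32).transpose(0,2,3,1)  # (N, H, W, C)

i,r,c,ch=0,5,7,2   # arbitrary image, row, column, channel (2 = blue)
# Channel ch of pixel (r, c) lives at offset ch*1024 + r*32 + c in the raw row.
assert img[i,r,c,ch]==raw[i,ch*1024+r*32+c]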

III. Training and prediction
1. Load the dataset into the model. The code for this part is as follows; save this file as knn_practice.py.

import numpy as np
from data_utils import load_cifar10
import matplotlib.pyplot as plt
from  knn import KNearestNeighbor
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
x_train,y_train,x_test,y_test=load_cifar10(cifar10_dir)

print('Training data shape: ', x_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', x_test.shape)
print('Test labels shape: ', y_test.shape)

The output is:
Training data shape: (50000, 32, 32, 3)
Training labels shape: (50000,)
Test data shape: (10000, 32, 32, 3)
Test labels shape: (10000,)
Next, we randomly pick samples_per_class images from each of the ten classes in the 50000 training images and display them:

classes=['plane','car','bird','cat','deer','dog','frog','horse','ship','truck']
num_classes=len(classes)
samples_per_class=7
for y,cls in enumerate(classes):
    idxs=np.flatnonzero(y_train==y)                 # indices of all images of class y
    idxs=np.random.choice(idxs,samples_per_class,replace=False)
    for i,idx in enumerate(idxs):
        plt_idx=i*num_classes+y+1                   # grid position: row i, column y
        plt.subplot(samples_per_class,num_classes,plt_idx)
        plt.imshow(x_train[idx].astype('uint8'))
        plt.axis('off')
        if i==0:
            plt.title(cls)
plt.show()

To speed up training, we keep only 5000 training images and 500 test images (you can later rerun with the full dataset):

num_training=5000
mask=list(range(num_training))
x_train=x_train[mask]
y_train=y_train[mask]
num_test=500
mask=list(range(num_test))
x_test=x_test[mask]
y_test=y_test[mask]

To compute Euclidean distances, we flatten each 32x32x3 image into a 3072-dimensional row vector:

x_train=np.reshape(x_train,(x_train.shape[0],-1))
x_test=np.reshape(x_test,(x_test.shape[0],-1))
print(x_train.shape,x_test.shape)

The output is:
(5000, 3072) (500, 3072)

2. Predict on the test set. To predict the class of each test image, we first compute the Euclidean distance between every test image and every training image:

classifier=KNearestNeighbor()
classifier.train(x_train,y_train)
dists=classifier.compute_distances_two_loops(x_test)
print(dists)

Here we use the two-loop implementation, compute_distances_two_loops.
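Before moving on, it is worth timing the three implementations; the fully vectorized one is typically faster by one to two orders of magnitude. A minimal sketch, reusing the classifier and x_test defined above:

import time

def time_function(f,*args):
    # Return how many seconds f(*args) takes to run.
    tic=time.time()
    f(*args)
    return time.time()-tic

print('two loops: %.2f s' % time_function(classifier.compute_distances_two_loops,x_test))
print('one loop:  %.2f s' % time_function(classifier.compute_distances_one_loop,x_test))
print('no loops:  %.2f s' % time_function(classifier.compute_distances_no_loops,x_test))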

The distance matrix printed by print(dists) looks like this:
[[ 3803.92350081  4210.59603857  5504.0544147  ...,  4007.64756434
   4203.28086142  4354.20256764]
 [ 6336.83367306  5270.28006846  4040.63608854 ...,  4829.15334194
   4694.09767687  7768.33347636]
 [ 5224.83913628  4250.64289255  3773.94581307 ...,  3766.81549853
   4464.99921613  6353.57190878]
 ...,
 [ 5366.93534524  5062.8772452   6361.85774755 ...,  5126.56824786
   4537.30613911  5920.94156364]
 [ 3671.92919322  3858.60765044  4846.88157479 ...,  3521.04515734
   3182.3673578   4448.65305458]
 [ 6960.92443573  6083.71366848  6338.13442584 ...,  6083.55504619
   4128.24744898  8041.05223214]]

With the distances in hand, we can predict the test labels:

y_test_pred = classifier.predict_labels(dists, k=1)

We use accuracy as the evaluation metric for the model:

num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))

The result:
got 137 / 500 correct => accuracy: 0.274

3. Choosing the best k with cross-validation.
In machine learning, cross-validation is a good model-selection method when data is limited. (Deep learning typically needs so much data that cross-validation is rarely used; it is too time-consuming.) In this part we use cross-validation to choose the value of k that gives the best prediction accuracy.

We use S-fold cross-validation: split the data into S equal parts, hold out one part for validation, and train on the rest. S=10 is common; here we set S=5, i.e. num_folds=5 in the code below.
The code is as follows:

num_folds=5
k_choices=[1,3,5,8,10,12,15,20,50,100]
x_train_folds=[]
y_train_folds=[]

y_train=y_train.reshape(-1,1)                    # column vector so the folds stack with vstack
x_train_folds=np.array_split(x_train,num_folds)  # split the data into 5 equal folds
y_train_folds=np.array_split(y_train,num_folds)

k_to_accuracies={}  # store the accuracies for each k in a dictionary

for k in k_choices:
    k_to_accuracies.setdefault(k,[])
for i in range(num_folds):   # for each fold: hold it out, train on the rest, measure accuracy
    classifier=KNearestNeighbor()
    x_val_train=np.vstack(x_train_folds[0:i]+x_train_folds[i+1:])   # all folds except fold i
    y_val_train=np.vstack(y_train_folds[0:i]+y_train_folds[i+1:])
    y_val_train=y_val_train[:,0]                                    # back to a 1-D label array
    classifier.train(x_val_train,y_val_train)
    for k in k_choices:
        y_val_pred=classifier.predict(x_train_folds[i],k=k)         # predict on the held-out fold i
        num_correct=np.sum(y_val_pred==y_train_folds[i][:,0])
        accuracy=float(num_correct)/len(y_val_pred)
        k_to_accuracies[k].append(accuracy)

for k in sorted(k_to_accuracies):   # print every accuracy and the mean accuracy for each k
    sum_accuracy=0
    for accuracy in k_to_accuracies[k]:
        print('k=%d, accuracy=%f' % (k,accuracy))
        sum_accuracy+=accuracy
    print('the average accuracy is :%f' % (sum_accuracy/num_folds))

The results are as follows:

k=1, accuracy=0.263000
k=1, accuracy=0.257000
k=1, accuracy=0.264000
k=1, accuracy=0.278000
k=1, accuracy=0.266000
the average accuracy is :0.265600
k=3, accuracy=0.239000
k=3, accuracy=0.249000
k=3, accuracy=0.240000
k=3, accuracy=0.266000
k=3, accuracy=0.254000
the average accuracy is :0.249600
k=5, accuracy=0.248000
k=5, accuracy=0.266000
k=5, accuracy=0.280000
k=5, accuracy=0.292000
k=5, accuracy=0.280000
the average accuracy is :0.273200
k=8, accuracy=0.262000
k=8, accuracy=0.282000
k=8, accuracy=0.273000
k=8, accuracy=0.290000
k=8, accuracy=0.273000
the average accuracy is :0.276000
k=10, accuracy=0.265000
k=10, accuracy=0.296000
k=10, accuracy=0.276000
k=10, accuracy=0.284000
k=10, accuracy=0.280000
the average accuracy is :0.280200
k=12, accuracy=0.260000
k=12, accuracy=0.295000
k=12, accuracy=0.279000
k=12, accuracy=0.283000
k=12, accuracy=0.280000
the average accuracy is :0.279400
k=15, accuracy=0.252000
k=15, accuracy=0.289000
k=15, accuracy=0.278000
k=15, accuracy=0.282000
k=15, accuracy=0.274000
the average accuracy is :0.275000
k=20, accuracy=0.270000
k=20, accuracy=0.279000
k=20, accuracy=0.279000
k=20, accuracy=0.282000
k=20, accuracy=0.285000
the average accuracy is :0.279000
k=50, accuracy=0.271000
k=50, accuracy=0.288000
k=50, accuracy=0.278000
k=50, accuracy=0.269000
k=50, accuracy=0.266000
the average accuracy is :0.274400
k=100, accuracy=0.256000
k=100, accuracy=0.270000
k=100, accuracy=0.263000
k=100, accuracy=0.256000
k=100, accuracy=0.263000
the average accuracy is :0.261600
Comparing the averages, k=10 gives the highest accuracy.
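Rather than reading the best k off the printed averages, you can also pick it programmatically from k_to_accuracies:

best_k=max(k_to_accuracies,key=lambda k:np.mean(k_to_accuracies[k]))
print('best k:',best_k)   # 10 on the run above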

To visualize the accuracies, we use matplotlib.pyplot.errorbar to plot the trend of the mean accuracy together with its standard deviation:

for k in k_choices:
    accuracies=k_to_accuracies[k]
    plt.scatter([k]*len(accuracies),accuracies) 

accuracies_mean=np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std=np.array([np.std(v) for k ,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices,accuracies_mean,yerr=accuracies_std)
plt.title('cross-validation on k')
plt.xlabel('k')
plt.ylabel('cross-validation accuracy')
plt.show()

Having chosen the best k (k=10) via cross-validation, we now use it for the final prediction:

# Use the best k for the final prediction
best_k=10
classifier=KNearestNeighbor()
classifier.train(x_train,y_train.ravel())   # y_train was reshaped to a column vector above; flatten it back to 1-D
y_test_pred=classifier.predict(x_test,k=best_k)

num_correct=np.sum(y_test_pred==y_test)
accuracy=float(num_correct)/num_test
print('got %d / %d correct => accuracy: %f' % (num_correct,num_test,accuracy))

The result:
got 141 / 500 correct => accuracy: 0.282000

This article is based on the Zhihu post https://zhuanlan.zhihu.com/p/28204173.
