Digit Recognizer on Kaggle: An Electrical Engineer's Journey into AI (1)

This post is based on the Digit Recognizer competition on Kaggle. The problem statement and dataset are available at https://www.kaggle.com/c/digit-recognizer/data/.

The original code is by Yassine Ghouzam and comes from his Kaggle kernel "Introduction to CNN Keras - 0.997 (top 6%)", available at https://www.kaggle.com/yassineghouzam/introduction-to-cnn-keras-0-997-top-6.

On top of the original code I have added some comments to make it easier to follow, and to help myself understand what each of these functions actually does.

The goal of the model is to recognize handwritten digits. My environment is Windows with JupyterLab, set up in advance.

The original author trained a five-layer convolutional neural network on the MNIST dataset, built with the Keras API (because it is very convenient).

The original author trained it for two and a half hours on an i5 CPU and reached 99.671% accuracy. If you have a decent GPU, GPU-accelerated TensorFlow will be much faster.
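
If you want to check whether TensorFlow can actually see your GPU before training, here is a minimal sketch (assuming a TensorFlow 1.x backend, which matches the standalone Keras imports used below):

import tensorflow as tf
from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())  # a usable GPU shows up with device_type "GPU"
print(tf.test.is_gpu_available())       # True only if TensorFlow was built with CUDA support and finds a GPU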

1. Import the required packages

import pandas as pd  # data frames, used to load the CSV files
import numpy as np  # numerical computation
import matplotlib.pyplot as plt  # MATLAB-style plotting functions
import matplotlib.image as mpimg  # image loading and display utilities
import seaborn as sns  # a higher-level plotting API built on top of matplotlib that makes nicer plots with less code
%matplotlib inline

np.random.seed(2)  # fix the random seed so the validation split below is reproducible

from sklearn.model_selection import train_test_split  # split the data into training and validation sets
from sklearn.metrics import confusion_matrix  # compute the confusion matrix
import itertools  # helpers for working with iterators

from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential  # the Sequential model API
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D  # the layer types that make up the network
from keras.optimizers import RMSprop  # optimizer with an adaptive, per-parameter learning rate
from keras.preprocessing.image import ImageDataGenerator  # image preprocessing and data augmentation
from keras.callbacks import ReduceLROnPlateau #Reduce learning rate when a metric has stopped improving.


sns.set(style='white', context='notebook', palette='deep')  # seaborn plotting defaults

2. Prepare the data
2.1 Load the data

# Load the data
train = pd.read_csv("../input/train.csv")  # change the path to wherever your dataset lives
test = pd.read_csv("../input/test.csv")
Y_train = train["label"]  # the "label" column: the target digit of each image

# Drop 'label' column
X_train = train.drop(labels = ["label"],axis = 1)  # all pixel columns, i.e. everything except "label"

# free some space
del train 

g = sns.countplot(Y_train)

Y_train.value_counts()

2.2 Check for null and missing values

# Check the data
X_train.isnull().any().describe()
test.isnull().any().describe()

2.3 Normalize the data

# Normalize the data: scale pixel values from 0-255 down to 0-1
X_train = X_train / 255.0
test = test / 255.0

2.4 Reshape the data

# Reshape images to 3 dimensions (height = 28px, width = 28px, channels = 1), i.e. turn each 784-value row back into a 28x28 grayscale image
X_train = X_train.values.reshape(-1,28,28,1)
test = test.values.reshape(-1,28,28,1)
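
A quick shape check (not in the original kernel) confirms the reshape; with the standard Kaggle files, train.csv contains 42,000 labelled images and test.csv contains 28,000:

print(X_train.shape)  # expected: (42000, 28, 28, 1)
print(test.shape)     # expected: (28000, 28, 28, 1)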

2.5 One-hot encode the labels

# Encode labels to one hot vectors (ex : 2 -> [0,0,1,0,0,0,0,0,0,0])
Y_train = to_categorical(Y_train, num_classes = 10)

2.6 Split the training and validation sets

# Set the random seed
random_seed = 2
# Split the train and the validation set for the fitting
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size = 0.1, random_state=random_seed)

Here 10% of the data is randomly held out as the validation set.
Let's look at one of the training images:

# Some examples
g = plt.imshow(X_train[0][:,:,0])

3. CNN
3.1 Define the model

# Set the CNN model 
# my CNN architechture is In -> [[Conv2D->relu]*2 -> MaxPool2D -> Dropout]*2 -> Flatten -> Dense -> Dropout -> Out

model = Sequential()

model.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same', 
                 activation ='relu', input_shape = (28,28,1))) # first layer; filters: the number of output filters in the convolution; kernel_size: the height and width of the 2D convolution window
model.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same', 
                 activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))


model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same', 
                 activation ='relu'))
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same', 
                 activation ='relu'))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2))) # max-pool over 2x2 windows with stride 2
model.add(Dropout(0.25))


model.add(Flatten())
model.add(Dense(256, activation = "relu")) # fully connected layer: number of units and activation
model.add(Dropout(0.5))
model.add(Dense(10, activation = "softmax")) # 10 output classes, one per digit
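
At this point you can print a summary to inspect the architecture; because padding = 'Same' keeps the spatial size at 28x28 through the convolutions, the two pooling layers reduce it to 14x14 and then 7x7, so the Flatten layer outputs 7*7*64 = 3136 values:

model.summary()  # prints each layer's output shape and trainable parameter count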

3.2 Set the optimizer

# Define the optimizer
optimizer = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
# Compile the model
model.compile(optimizer = optimizer , loss = "categorical_crossentropy", metrics=["accuracy"])
# Set a learning rate annealer
learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc', 
                                            patience=3, 
                                            verbose=1, 
                                            factor=0.5, 
                                            min_lr=0.00001)
epochs = 1 # Turn epochs to 30 to get 0.9967 accuracy
batch_size = 86
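
To make the annealer concrete: whenever val_acc has not improved for 3 consecutive epochs (patience=3), ReduceLROnPlateau multiplies the learning rate by factor=0.5, but never lets it fall below min_lr. A small illustrative sketch of that arithmetic (hypothetical, assuming the plateau is hit repeatedly):

lr = 0.001
for reduction in range(8):
    print("lr = {:.6f}".format(lr))
    lr = max(lr * 0.5, 0.00001)  # new_lr = max(old_lr * factor, min_lr)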

3.3 Data augmentation
By randomly rotating, shifting and zooming the images, each training image is turned into many slightly different variants, which improves accuracy and reduces overfitting. Flips are not applied, because flipping could turn a 6 into something that looks like a 9.

# Without data augmentation I obtained an accuracy of 0.98114
#history = model.fit(X_train, Y_train, batch_size = batch_size, epochs = epochs, 
#          validation_data = (X_val, Y_val), verbose = 2)
# With data augmentation to prevent overfitting (accuracy 0.99286)

datagen = ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        rotation_range=10,  # randomly rotate images in the range (degrees, 0 to 180)
        zoom_range = 0.1, # Randomly zoom image 
        width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
        horizontal_flip=False,  # randomly flip images
        vertical_flip=False)  # randomly flip images


datagen.fit(X_train)
# Fit the model
history = model.fit_generator(datagen.flow(X_train,Y_train, batch_size=batch_size),
                              epochs = epochs, validation_data = (X_val,Y_val),
                              verbose = 2, steps_per_epoch=X_train.shape[0] // batch_size
                              , callbacks=[learning_rate_reduction])

Train the model. This step takes quite a while, so feel free to grab a coffee, tea or soft drink while it runs.
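
As a side note, if you are curious what the augmented digits actually look like, here is a small preview sketch (not part of the original kernel) that draws one augmented batch and shows the first six images with their labels:

aug_images, aug_labels = next(datagen.flow(X_train, Y_train, batch_size=batch_size))
fig, axes = plt.subplots(2, 3)
for i, axis in enumerate(axes.flat):
    axis.imshow(aug_images[i][:, :, 0], cmap='gray')
    axis.set_title(str(np.argmax(aug_labels[i])))  # decode the one-hot label back to a digit
    axis.axis('off')
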
4. Evaluate the model
4.1 Plot the training and validation curves

# Plot the loss and accuracy curves for training and validation 
fig, ax = plt.subplots(2,1)
ax[0].plot(history.history['loss'], color='b', label="Training loss")
ax[0].plot(history.history['val_loss'], color='r', label="validation loss")
legend = ax[0].legend(loc='best', shadow=True)

ax[1].plot(history.history['acc'], color='b', label="Training accuracy")
ax[1].plot(history.history['val_acc'], color='r',label="Validation accuracy")
legend = ax[1].legend(loc='best', shadow=True)

4.2 Confusion matrix

# Look at confusion matrix 

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    # Normalize first so that both the colour map and the printed counts show the same matrix
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

# Predict the values from the validation dataset
Y_pred = model.predict(X_val)
# Convert prediction probabilities to class labels
Y_pred_classes = np.argmax(Y_pred,axis = 1) 
# Convert validation observations from one-hot vectors back to class labels
Y_true = np.argmax(Y_val,axis = 1) 
# compute the confusion matrix
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes) 
# plot the confusion matrix
plot_confusion_matrix(confusion_mtx, classes = range(10)) 
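
The diagonal of the confusion matrix holds the correctly classified samples, so per-class accuracy can be read off directly; a short sketch (not in the original kernel):

# Per-class accuracy: correct predictions (diagonal) divided by the number of true samples per class (row sums)
class_accuracy = confusion_mtx.diagonal().astype('float') / confusion_mtx.sum(axis=1)
for digit, acc in enumerate(class_accuracy):
    print("digit {}: {:.3f}".format(digit, acc))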

Finally, let's look at the most significant errors:

# Display some error results 

# Errors are difference between predicted labels and true labels
errors = (Y_pred_classes - Y_true != 0)

Y_pred_classes_errors = Y_pred_classes[errors]
Y_pred_errors = Y_pred[errors]
Y_true_errors = Y_true[errors]
X_val_errors = X_val[errors]

def display_errors(errors_index,img_errors,pred_errors, obs_errors):
    """ This function shows 6 images with their predicted and real labels"""
    n = 0
    nrows = 2
    ncols = 3
    fig, ax = plt.subplots(nrows,ncols,sharex=True,sharey=True)
    for row in range(nrows):
        for col in range(ncols):
            error = errors_index[n]
            ax[row,col].imshow((img_errors[error]).reshape((28,28)))
            ax[row,col].set_title("Predicted label :{}\nTrue label :{}".format(pred_errors[error],obs_errors[error]))
            n += 1

# Probabilities of the wrong predicted numbers
Y_pred_errors_prob = np.max(Y_pred_errors,axis = 1)

# Predicted probabilities of the true values in the error set
true_prob_errors = np.diagonal(np.take(Y_pred_errors, Y_true_errors, axis=1))
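# Note: an equivalent, more explicit form of the line above is
# Y_pred_errors[np.arange(len(Y_true_errors)), Y_true_errors],
# i.e. for each misclassified image, pick the probability the model assigned to its true label.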

# Difference between the probability of the predicted label and the true label
delta_pred_true_errors = Y_pred_errors_prob - true_prob_errors

# Sorted list of the delta prob errors
sorted_delta_errors = np.argsort(delta_pred_true_errors)

# Top 6 errors 
most_important_errors = sorted_delta_errors[-6:]

# Show the top 6 errors
display_errors(most_important_errors, X_val_errors, Y_pred_classes_errors, Y_true_errors)
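
Finally, the competition expects a CSV with ImageId and Label columns. A minimal sketch for writing the submission file from the test predictions, assuming the trained model and the reshaped test array from above (the output file name is arbitrary):

# Predict the test images and write the submission file
results = model.predict(test)
results = np.argmax(results, axis=1)  # probability vectors -> predicted digits
submission = pd.DataFrame({"ImageId": np.arange(1, len(results) + 1),
                           "Label": results})
submission.to_csv("cnn_mnist_submission.csv", index=False)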