Kaggle Competition Series: (2) Digit Recognizer

Competition Overview

Handwritten digit classification (0-9). Kaggle's official description calls this competition the "Hello World" of computer vision (though the data still come as structured .csv files). It is a classification task.
Dataset: https://pan.baidu.com/s/1Pil_YCn6x2CfkYdYONVAUg (access code: n54q)
Dataset description: each image is 28 pixels × 28 pixels (784 in total), and each pixel holds a single value between 0 and 255.
(1) train.csv: 785 columns; the first column is the label and the remaining columns are the pixel values. Pixels are named pixelx, where the index is serialized as x = i × 28 + j (0 ≤ i, j ≤ 27), with i the row and j the column. A small sketch of this mapping follows below.
(2) test.csv: same format as the training set, but without labels;
(3) sample_submission.csv: two columns, image ID and predicted label.
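
To make the serialization formula concrete, here is a minimal sketch (my addition, not from the original post) that round-trips between the flat pixelx index and the (i, j) grid position:

import numpy as np

def pixel_index(i, j):
    # flat index of the pixel at row i, column j in a 28x28 image
    return i * 28 + j

def pixel_position(x):
    # inverse mapping: recover (row, column) from the flat index
    return divmod(x, 28)

assert pixel_index(2, 5) == 61
assert pixel_position(61) == (2, 5)

# numpy's reshape applies the same row-major layout:
grid = np.arange(784).reshape(28, 28)
assert grid[2, 5] == 61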

Walkthrough by a Top Entrant

Original notebook: Deep Neural Network Keras Way (Poonam Ligade)

1. Import data-processing and modeling packages

import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)

import matplotlib.pyplot as plt
%matplotlib inline  # enables inline plotting in Jupyter

from keras.models import Sequential  # sequential model
from keras.layers import Dense, Dropout, Lambda, Flatten  # network layers
from keras.optimizers import Adam, RMSprop  # back-propagation optimizers
from sklearn.model_selection import train_test_split  # automated dataset splitting
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator  # data preprocessing (augmentation)

# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory

from subprocess import check_output
print(check_output(["ls", ".../digit-recognizer"]).decode("utf8"))

sample_submission.csv
test.csv
train.csv

2. Inspect the dataset

train = pd.read_csv(".../digit-recognizer/train.csv")
print(train.shape)
train.head()

(42000, 785)
The training set contains the label plus the pixel values.

test= pd.read_csv(".../digit-recognizer/test.csv")
print(test.shape)
test.head()

(28000, 784)
The test set contains only pixel values.

X_train = (train.iloc[:,1:].values).astype('float32')  # iloc indexes a DataFrame by position: DataFrame.iloc[rows, columns].values; astype() converts the dtype
y_train = train.iloc[:,0].values.astype('int32')  # iloc selects the first column (the labels are integers)
X_test = test.values.astype('float32')

Split features from labels

X_train = X_train.reshape(X_train.shape[0], 28, 28)  # reshape() turns each flat row of the training set into a 28x28 image
for i in range(9):
    plt.subplot(330 + (i+1))  # 3x3 grid of subplots, filled from the top left
    plt.imshow(X_train[i], cmap=plt.get_cmap('gray'))  # imshow(cmap=plt.get_cmap()) renders the original handwritten image rather than the raw pixel values
    # writing plt.plot(X_train[i], color='gray') instead would plot the pixel values as curves
    plt.title(y_train[i]);

[Figure: 3×3 grid of the first nine training digits, each titled with its label]

3. Data preprocessing

X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)  # add a channel dimension; grayscale images have a single channel
X_train.shape

(42000, 28, 28, 1)

X_test = X_test.reshape(X_test.shape[0], 28, 28,1)
X_test.shape

(28000, 28, 28, 1)

# feature standardization
mean_px = X_train.mean().astype(np.float32)
std_px = X_train.std().astype(np.float32)

def standardize(x): 
    return (x-mean_px)/std_px
# one-hot encoding
from keras.utils.np_utils import to_categorical
y_train= to_categorical(y_train)
num_classes = y_train.shape[1]
num_classes

10

plt.title(y_train[9])
plt.plot(y_train[9])
plt.xticks(range(10))

[Figure: plot of the one-hot vector y_train[9], a single peak at the true class index]

4. Modeling (fully connected network, single layer)

# part 1: imports
from keras.models import Sequential
from keras.layers.core import Lambda, Dense, Flatten, Dropout
from keras.callbacks import EarlyStopping
from keras.layers import BatchNormalization, Convolution2D, MaxPooling2D
from keras.optimizers import RMSprop
from keras.preprocessing import image
from sklearn.model_selection import train_test_split
gen = image.ImageDataGenerator()

# part 2: set a random seed so the modeling is reproducible
seed = 43
np.random.seed(seed)

# part 3: split the data (training set, validation set; training batches, validation batches)
X = X_train
y = y_train
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.10, random_state=42)  # random train/validation split
batches = gen.flow(X_train, y_train, batch_size=64)  # training batch_size = 64
val_batches = gen.flow(X_val, y_val, batch_size=64)  # validation batch_size = 64

# part 4: build the model
model = Sequential()
model.add(Lambda(standardize, input_shape=(28,28,1)))
model.add(Flatten())  # layer 1
model.add(Dense(10, activation='softmax'))  # layer 2
# print("input shape ", model.input_shape)   # (None, 28, 28, 1)
# print("output shape ", model.output_shape) # (None, 10)

model.compile(optimizer=RMSprop(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])

history = model.fit_generator(generator=batches, steps_per_epoch=batches.n, epochs=3,
                    validation_data=val_batches, validation_steps=val_batches.n)  # model.fit_generator() starts training
# note: steps_per_epoch=batches.n is questioned in discussion question 3 below

Epoch 1/3
37800/37800 [==============================] - 173s 5ms/step - loss: 0.2400 - acc: 0.9342 - val_loss: 0.3305 - val_acc: 0.9122
Epoch 2/3
37800/37800 [==============================] - 172s 5ms/step - loss: 0.2158 - acc: 0.9417 - val_loss: 0.3487 - val_acc: 0.9092
Epoch 3/3
37800/37800 [==============================] - 173s 5ms/step - loss: 0.2098 - acc: 0.9437 - val_loss: 0.3662 - val_acc: 0.9079

5. Visualize the training results (evaluation)

history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss_values, 'bo')
# b+ is for "blue crosses"
plt.plot(epochs, val_loss_values, 'b+')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.show()

Training loss keeps falling while validation loss rises, which indicates overfitting.

plt.clf()   # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc_values, 'bo')
plt.plot(epochs, val_acc_values, 'b+')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.show()

The accuracy plot mirrors the loss plot: training accuracy keeps improving while validation accuracy stalls, again pointing to overfitting.
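
The imports in section 4 already pull in EarlyStopping, although the kernel never uses it. As a minimal sketch (my addition, not the original author's), a callback like this would stop training once validation loss stops improving:

from keras.callbacks import EarlyStopping

# stop after one epoch with no improvement in val_loss (patience=1)
early_stop = EarlyStopping(monitor='val_loss', patience=1)
history = model.fit_generator(generator=batches, steps_per_epoch=len(batches),
                    epochs=10, validation_data=val_batches,
                    validation_steps=len(val_batches), callbacks=[early_stop])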

6. First model optimization (add a fully connected layer)

def get_fc_model():
    model = Sequential([
        Lambda(standardize, input_shape=(28,28,1)),  # standardize the input
        Flatten(),
        Dense(512, activation='relu'),  # the added fully connected layer
        Dense(10, activation='softmax')
        ])
    model.compile(optimizer='Adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])  # the optimizer is also switched to Adam
    return model

fc = get_fc_model()
fc.optimizer.lr=0.01
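
A caveat (my note, not the original author's): in Keras 2 the optimizer's learning rate is a backend variable, so the plain attribute assignment above may not reach the value the training function actually uses. K.set_value is the documented way to change it:

from keras import backend as K

# update the backend variable that training actually reads
K.set_value(fc.optimizer.lr, 0.01)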

history=fc.fit_generator(generator=batches, steps_per_epoch=batches.n, epochs=1, 
                    validation_data=val_batches, validation_steps=val_batches.n)

Epoch 1/1
37800/37800 [==============================] - 254s 7ms/step - loss: 0.1726 - acc: 0.9721 - val_loss: 0.5442 - val_acc: 0.9524

7. Second model optimization (convolutional neural network)

from keras.layers import Convolution2D, MaxPooling2D

def get_cnn_model():
    model = Sequential([
        Lambda(standardize, input_shape=(28,28,1)),  # standardize the input
        Convolution2D(32,(3,3), activation='relu'),  # 32 3x3 kernels with relu activation, two consecutive layers
        Convolution2D(32,(3,3), activation='relu'),
        MaxPooling2D(),
        Convolution2D(64,(3,3), activation='relu'),  # same kernel size, twice the depth
        Convolution2D(64,(3,3), activation='relu'),
        MaxPooling2D(),
        Flatten(),
        Dense(512, activation='relu'),
        Dense(10, activation='softmax')
        ])
    model.compile(Adam(), loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model = get_cnn_model()
model.optimizer.lr=0.01

history=model.fit_generator(generator=batches, steps_per_epoch=batches.n, epochs=1, 
                    validation_data=val_batches, validation_steps=val_batches.n)

Epoch 1/1
37800/37800 [==============================] - 1047s 28ms/step - loss: 0.0751 - acc: 0.9783 - val_loss: 0.1435 - val_acc: 0.9645

8. Third model optimization (CNN + data augmentation)

gen = ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
                         height_shift_range=0.08, zoom_range=0.08)
batches = gen.flow(X_train, y_train, batch_size=64)
val_batches = gen.flow(X_val, y_val, batch_size=64)
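
To eyeball what these augmentation parameters produce, here is a minimal sketch (my addition) that draws nine randomly augmented variants of the first training image:

# each draw applies a fresh random rotation/shift/shear/zoom within the configured ranges
aug = gen.flow(X_train[:1], y_train[:1], batch_size=1)
for i in range(9):
    plt.subplot(330 + (i+1))
    x_batch, _ = next(aug)
    plt.imshow(x_batch[0].reshape(28, 28), cmap=plt.get_cmap('gray'))
plt.show()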

model.optimizer.lr=0.001
history=model.fit_generator(generator=batches, steps_per_epoch=batches.n, epochs=1, 
                    validation_data=val_batches, validation_steps=val_batches.n)

Epoch 1/1
37800/37800 [==============================] - 1299s 34ms/step - loss: 0.1325 - acc: 0.9611 - val_loss: 0.1397 - val_acc: 0.9625

9. Fourth model optimization (CNN + data augmentation + batch normalization)

from keras.layers.normalization import BatchNormalization

def get_bn_model():
    model = Sequential([
        Lambda(standardize, input_shape=(28,28,1)),
        Convolution2D(32,(3,3), activation='relu'),
        BatchNormalization(axis=1),  # after the first conv layer; two consecutive conv layers plus a pooling layer can be viewed as one block
        # note: with channels_last input of shape (28,28,1), axis=1 normalizes along image rows;
        # the Keras default axis=-1 normalizes per channel, which is the conventional choice
        Convolution2D(32,(3,3), activation='relu'),
        MaxPooling2D(),
        BatchNormalization(axis=1),  # after the first pooling layer
        Convolution2D(64,(3,3), activation='relu'),
        BatchNormalization(axis=1),  # after the block's first conv layer
        Convolution2D(64,(3,3), activation='relu'),
        MaxPooling2D(),
        Flatten(),
        BatchNormalization(),  # after the block
        Dense(512, activation='relu'),
        BatchNormalization(),  # after the first fully connected layer
        Dense(10, activation='softmax')
        ])
    model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
    return model
    
model = get_bn_model()
model.optimizer.lr=0.01
gen = image.ImageDataGenerator()
batches = gen.flow(X, y, batch_size=64)
history = model.fit_generator(generator=batches, steps_per_epoch=batches.n, epochs=3)  # 3 epochs

Note: there is a pattern to where the BN layers are placed.
After adding BN layers the starting accuracy is lower, but training improves much faster:
acc = 96.55% after 2,000 iterations
acc = 98.80% after 26,000 iterations
The great 80/20 rule at work.

10. Submit predictions

predictions = model.predict_classes(X_test, verbose=0)
submissions=pd.DataFrame({"ImageId": list(range(1,len(predictions)+1)),
                         "Label": predictions})
submissions.to_csv("DR.csv", index=False, header=True)
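
A portability note (mine, not the author's): Sequential.predict_classes works in Keras 2 but was removed from later versions of tf.keras, where the equivalent is to take the argmax over the predicted class probabilities:

import numpy as np

# equivalent of predict_classes on newer Keras/tf.keras versions
probs = model.predict(X_test, verbose=0)
predictions = np.argmax(probs, axis=1)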

epoch: one complete pass over the training set.
batch_size: the number of samples fed into a single training step.
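
Under these definitions the numbers in the logs above check out; a quick sanity check with the split used in this post:

n_train = 37800          # 42000 samples minus the 10% validation split
batch_size = 64
steps_per_epoch = -(-n_train // batch_size)  # ceiling division
print(steps_per_epoch)   # 591 batches = one full pass over the training data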

11. Discussion highlights

Question 1: Why do we need to add the color-channel dimension to the dataset?
Answer: ImageDataGenerator.flow() expects 4-dimensional input, and the last dimension is the color channel.
Question 2: Can we skip the one-hot encoding?
Answer: Without the encoding, the output is a single probability in (0, 1) rather than one probability per class.
Question 3: On 'fit_generator' in linear NN model, you used 'batches.n' for 'steps_per_epoch' instead of 'len(batches)'? As 'batches.n' is equal to the length of 'X_train' (=37800) not 'batches' (=591), it loops over 37800.
Answer: (the author never replied.) steps_per_epoch is counted in batches, not samples: len(batches) (= 591) gives exactly one pass over the 37,800 training samples per epoch (591 × 64 ≈ 37800), while batches.n (= 37800) makes each "epoch" cover the data roughly 64 times. So len(batches) is the conventional choice, matching the step count computed above.
Closing thoughts: Keras really does simplify building neural networks; data augmentation, layers, loss functions, and optimizers all come neatly packaged, which suits competition work well. But the better the top-level packaging, the more the low-level mechanics are obscured. Different frameworks also expose different optimization options: with Keras I can choose among the built-in gradient-descent optimizers, but designing a new optimization algorithm inside Keras is harder. Of course, programming skill keeps raising that ceiling; you could always write your own optimizer module in Python, which is another matter entirely.
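
For illustration only (my sketch, not part of the original post): at its core an optimizer is just a parameter-update rule, which is easy to express in plain Python/numpy even if wiring it into a framework takes more work:

import numpy as np

def sgd_step(params, grads, lr=0.01):
    # vanilla gradient descent: p <- p - lr * dL/dp
    return [p - lr * g for p, g in zip(params, grads)]

# toy usage: minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3)
w = np.array([0.0])
for _ in range(100):
    [w] = sgd_step([w], [2 * (w - 3)], lr=0.1)
print(w)  # ~ [3.]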
