

[From Traditional Methods to Deep Learning] Image Classification

1. The Problem

Kaggle hosts an image classification competition called Digit Recognizer, whose dataset is the famous MNIST: each sample is a pre-segmented 28*28 grayscale image, where the handwritten digit pixels take grayscale values from 0 to 255 and the background pixels are 0.

from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train[0]  # .shape = (28, 28)
"""
[[  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0]
 ...
 [  0   0   0   0   0   0   0   0   0   0   0   0   3  18  18  18 126 136
  175  26 166 255 247 127   0   0   0   0]
 [  0   0   0   0   0   0   0   0  30  36  94 154 170 253 253 253 253 253
  225 172 253 242 195  64   0   0   0   0]
 ...
 [  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0]]
"""

The handwritten digit images look like this:

import matplotlib.pyplot as plt

plt.subplot(1, 3, 1)
plt.imshow(x_train[0], cmap='gray')
plt.subplot(1, 3, 2)
plt.imshow(x_train[1], cmap='gray')
plt.subplot(1, 3, 3)
plt.imshow(x_train[2], cmap='gray')
plt.show()

Handwritten digit recognition can thus be viewed as an image classification problem: classifying grayscale images represented as 2D arrays of pixel intensities.

2. Recognition

Rodrigo Benenson has compiled the error rates of 50 methods on MNIST. This article moves from traditional methods to deep learning and compares their accuracies. The following code is based on Python 3.6 + sklearn 0.18.1 + keras 2.0.4.

Traditional Methods

kNN

The idea is simple: flatten the 2D array into a 1D vector and judge the similarity between vectors with a distance metric. Obviously, this naive approach without any feature extraction throws away the most important information in the 2D array: the relationships between neighboring pixels. It still performs reasonably well on a relatively clean dataset like MNIST, reaching an accuracy of 96.927%. In addition, kNN is slow at prediction time, since as a lazy learner it defers all distance computations to inference.

from sklearn import neighbors
from sklearn.metrics import precision_score

# flatten each 28x28 image into a 784-dimensional vector
num_pixels = x_train[0].shape[0] * x_train[0].shape[1]
x_train = x_train.reshape((x_train.shape[0], num_pixels))
x_test = x_test.reshape((x_test.shape[0], num_pixels))

knn = neighbors.KNeighborsClassifier()
knn.fit(x_train, y_train)
pred = knn.predict(x_test)
precision_score(y_test, pred, average='macro')  # 0.96927533865705706

MLP

A Multi-Layer Perceptron (MLP), here a three-layer feedforward neural network, uses features similar to the kNN approach: the grayscale value of each pixel feeds one input-layer neuron, and the hidden layer has 700 neurons (generally a number between the sizes of the input and output layers). sklearn implements MLP classification with MLPClassifier; a keras-based MLP implementation is given below. Without much careful hyperparameter tuning, the accuracy is around 98.530%.

from keras.layers import Dense
from keras.models import Sequential
from keras.utils import np_utils

# normalization
num_pixels = 28 * 28
x_train = x_train.reshape(x_train.shape[0], num_pixels).astype('float32') / 255
x_test = x_test.reshape(x_test.shape[0], num_pixels).astype('float32') / 255
# one-hot encoding for the class labels
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_train.shape[1]

model = Sequential([
    Dense(700, input_dim=num_pixels, activation='relu', kernel_initializer='normal'),  # hidden layer
    Dense(num_classes, activation='softmax', kernel_initializer='normal')  # output layer
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=600, batch_size=200, verbose=2)
model.evaluate(x_test, y_test, verbose=0)  # [0.10381294689745164, 0.98529999999999995]
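For reference, the MLPClassifier mentioned above could be used along the following lines. This is a minimal sketch: the hidden-layer size mirrors the keras model, the remaining hyperparameters are illustrative, and it expects the flattened, normalized inputs together with the original integer labels rather than the one-hot vectors.

from sklearn.neural_network import MLPClassifier

# x_train/x_test: flattened, normalized images; y_train/y_test: integer labels 0-9
mlp = MLPClassifier(hidden_layer_sizes=(700,), activation='relu',
                    solver='adam', max_iter=50, verbose=True)
mlp.fit(x_train, y_train)
mlp.score(x_test, y_test)  # mean accuracy on the test set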

Deep Learning

In a paper published as early as 1989 [1], LeCun proposed using CNNs (Convolutional Neural Networks) for handwritten digit recognition, and later refined the approach into LeNet-5 [2], whose network structure is shown in the figure below:

Convolution, pooling, convolution, pooling, then two fully connected layers, and finally a Gaussian connections layer. As is well known, a CNN performs feature extraction by itself, so there is no need to hand-craft a feature extractor. An informal keras implementation of LeNet-5 follows:

import keras
from keras.datasets import mnist
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.models import Sequential
from keras.utils import np_utils

# reload the raw data, since x_train/y_train were reshaped, normalized and
# one-hot encoded in the sections above
(x_train, y_train), (x_test, y_test) = mnist.load_data()

img_rows, img_cols = 28, 28
# TensorFlow backend: image_data_format() == 'channels_last'
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1).astype('float32') / 255
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1).astype('float32') / 255
# one-hot encoding for the class labels
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_train.shape[1]

model = Sequential()
model.add(Conv2D(filters=6, kernel_size=(5, 5), padding='valid', input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Activation("sigmoid"))

model.add(Conv2D(16, kernel_size=(5, 5), padding='valid'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Activation("sigmoid"))
model.add(Dropout(0.25))
# full connection
model.add(Conv2D(120, kernel_size=(1, 1), padding='valid'))
model.add(Flatten())
# full connection
model.add(Dense(84, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.SGD(lr=0.08, momentum=0.9),
              metrics=['accuracy'])
model.summary()
model.fit(x_train, y_train, batch_size=32, epochs=8,
          verbose=1, validation_data=(x_test, y_test))
model.evaluate(x_test, y_test, verbose=0)
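As a quick sanity check (an added usage example, not part of the original benchmark), the trained model can be used to predict individual digits:

import numpy as np

probs = model.predict(x_test[:3])      # softmax probabilities, shape (3, 10)
preds = np.argmax(probs, axis=1)       # predicted digit for each image
truth = np.argmax(y_test[:3], axis=1)  # y_test was one-hot encoded above
print(preds, truth)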

The accuracies of the three methods above are as follows:

Features    Classifier                Accuracy
gray        kNN                       96.927%
gray        3-layer neural network    98.530%
gray        LeNet-5                   98.640%

3. References

[1] LeCun, Yann, et al. "Backpropagation applied to handwritten zip code recognition." Neural computation 1.4 (1989): 541-551.
[2] LeCun, Yann, et al. "Gradient-based learning applied to document recognition." Proceedings of the IEEE 86.11 (1998): 2278-2324.
[3] Taylor B. Arnold, Computer vision: LeNet-5, AlexNet, VGG-19, GoogLeNet.

If you wish to repost this article, please credit the author and source.
Author: Treant