Building Convolutional Neural Networks with Keras

I. Convolutional Neural Networks

1. Compared with a fully connected network, a CNN has two main advantages. (1) Fewer parameters: in a fully connected network every node in one layer connects to every node in the next, so the parameter count explodes, especially for image inputs. A CNN reduces the number of parameters by orders of magnitude, which speeds up convergence, lowers training cost, mitigates overfitting, and improves generalization. (2) Better suited to two-dimensional data: a fully connected network flattens the whole image into a vector, discarding its two-dimensional spatial structure. The arrangement of pixels along the x and y axes is meaningful, and these local features need to be extracted; the convolution operation handles exactly this kind of local feature extraction.
2. Strengths: sparse connectivity, parameter sharing, and equivariant representations (when the input shifts, the convolution still extracts the same features; parameter sharing is what gives the network its tolerance to translation).
3. A convolution is computed by multiplying corresponding elements and summing the products.
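The multiply-and-sum in point 3 can be sketched in plain NumPy. This is a minimal "valid" 2D convolution (no kernel flipping, as deep-learning frameworks implement it), written only to illustrate the operation:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image; at each position, multiply
    corresponding elements and sum the products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1  # 'valid': the window must fit inside the input
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3))
print(conv2d_valid(image, kernel))  # each entry is the sum of one 3x3 patch
```

With an all-ones kernel each output element is simply the sum of the patch under the window, which makes the "multiply then add" step easy to verify by hand.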

from keras import Input
from keras.models import Sequential
# from keras.layers import Dense,Activation,Dropout
# Three kinds of convolution layers, for 1D (sequences), 2D (images), and 3D (volumetric data) inputs
from keras.layers import Conv1D,Conv2D,Conv3D
model=Sequential()
'''
# Sequence data with a fixed length of 10; if seq_length is left as None, the input length is not fixed
# seq_length = 10
# model.add(Conv1D(64, 3, activation='relu', input_shape=(seq_length, 128)))
# Image input is a 3D array: here a 224x224 color image
# Conv2D: the first argument, 64, is the number of filters (required); the second, (3, 3), is the
#   kernel size, written kernel_size=(3, 3) or kernel_size=3 for a 3x3 kernel (required)
# strides: the step of the kernel along the width and height, default 1, same notation as kernel_size
# padding: takes only two values; padding='same' pads the input so that (with strides of 1) the output
#   has the same spatial size as the input; padding='valid' (the default) applies no padding
# activation: defaults to linear, i.e. the convolution result is output unchanged
# The input of the first convolution layer must be specified
'''
# Functional style: apply a layer directly to an Input tensor
input_img=Input(shape=(224,224,3))
conv_layer=Conv2D(64,(1,1))(input_img)
# Sequential style: the first layer of the model must be given input_shape
model.add(Conv2D(64,(3,3),activation='relu',input_shape=(224,224,3)))
'''
# Pooling layers: pool_size is the sliding-window size; strides is the step (it defaults to pool_size if not given)
# They take the previous layer's feature map as input and output the maximum (MaxPooling)
# or the average (AveragePooling) of each window as this layer's feature map
'''
from keras.layers import MaxPool2D
from keras.layers import AveragePooling2D
model.add(MaxPool2D(pool_size=(2,2),strides=1))

# Fully connected layers
'''
Used at the end of a convolutional network: once the convolution and pooling layers have extracted
and processed the features, the fully connected layers combine them to produce the final prediction.
They expect a one-dimensional input, so a Flatten layer is inserted between the conv/pooling layers
and the fully connected layers to "flatten" the multi-dimensional input into one dimension.
Flatten only acts on the feature dimensions; the batch dimension is left unchanged.
'''
from keras.layers import Dense,Flatten
model.add(Flatten())
model.add(Dense(10,activation='softmax'))
# outputs a prediction over 10 classes
# check the data_format setting
from keras import backend as K
if K.image_data_format()=='channels_last':
    bn_axis=3
else:
    bn_axis=1
print(bn_axis)
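The spatial sizes that Conv2D and pooling layers produce follow standard shape formulas, which can be checked with plain arithmetic (no Keras needed). A small sketch, assuming square inputs and windows:

```python
import math

def conv_out_size(n, k, s=1, padding='valid'):
    """Spatial output size of a conv/pooling layer on an n x n input
    with a k x k window and stride s."""
    if padding == 'valid':   # no padding: the window must fit inside the input
        return (n - k) // s + 1
    return math.ceil(n / s)  # 'same': output size depends only on the stride

# The MNIST model below: 28 -> Conv 3x3 -> 26 -> Conv 3x3 -> 24 -> MaxPool 2x2 (stride 2) -> 12
print(conv_out_size(28, 3))       # 26
print(conv_out_size(26, 3))       # 24
print(conv_out_size(24, 2, s=2))  # 12
print(12 * 12 * 64)               # 9216 units after Flatten
```

These values match the Output Shape column of the model summary printed later in this post.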
II. Using a Convolutional Neural Network for Handwritten Digit Classification
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense,Dropout,Flatten
from keras.layers import Conv2D,MaxPool2D
from keras import backend as K


# hyperparameters
batch_size=128
num_classes=10
epochs=12

# input image size
img_rows,img_cols=28,28
(x_train,y_train),(x_test,y_test)=mnist.load_data()

# make sure the code gets the correct data_format either way
if K.image_data_format()=='channels_first':
    x_train=x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape=(1,img_rows,img_cols)
else:
    x_train=x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape=(img_rows,img_cols,1)

# preprocessing: scale pixel values to [0, 1]
x_train=x_train.astype('float32')
x_test=x_test.astype('float32')
x_train/=255
x_test/=255
# convert the labels to one-hot (categorical) vectors
y_train=keras.utils.to_categorical(y_train,num_classes)
y_test=keras.utils.to_categorical(y_test,num_classes)
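What to_categorical does here is plain one-hot encoding, which can be sketched with NumPy alone (the labels below are made up for illustration):

```python
import numpy as np

labels = np.array([3, 0, 9])  # example digit labels
num_classes = 10
# row i of the identity matrix is the one-hot vector for class i
one_hot = np.eye(num_classes, dtype='float32')[labels]
print(one_hot.shape)  # (3, 10)
print(one_hot[0])     # 1.0 at index 3, zeros elsewhere
```

Each row has a single 1.0 at the position of the class index, which is the form categorical_crossentropy expects.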

# build the convolutional neural network
model=Sequential()
model.add(Conv2D(32,kernel_size=(3,3),activation='relu',input_shape=input_shape))# input layer
model.add(Conv2D(64,(3,3),activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes,activation='softmax'))
print(model.summary())

# # visualize the model architecture
# from keras.utils import plot_model
# plot_model(model,to_file='model.png')

# compile and train
# note: recent Keras versions default Adadelta to learning_rate=0.001, which converges slowly
# (as the log below shows); Adadelta(learning_rate=1.0) matches the classic Keras MNIST example
model.compile(loss=keras.losses.categorical_crossentropy,optimizer=keras.optimizers.Adadelta(),metrics=['accuracy'])
model.fit(x_train,y_train,batch_size=batch_size,epochs=epochs,verbose=1,validation_data=(x_test,y_test))

# evaluate the model on the test set
score=model.evaluate(x_test,y_test,verbose=0)
print('test loss:',score[0])
print('test accuracy:',score[1])

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 26, 26, 32)        320       
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 24, 24, 64)        18496     
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 12, 12, 64)        0         
_________________________________________________________________
dropout (Dropout)            (None, 12, 12, 64)        0         
_________________________________________________________________
flatten (Flatten)            (None, 9216)              0         
_________________________________________________________________
dense (Dense)                (None, 128)               1179776   
_________________________________________________________________
dropout_1 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290      
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/12
469/469 [==============================] - 78s 167ms/step - loss: 2.2852 - accuracy: 0.1757 - val_loss: 2.2475 - val_accuracy: 0.3065
Epoch 2/12
469/469 [==============================] - 79s 168ms/step - loss: 2.2207 - accuracy: 0.2906 - val_loss: 2.1707 - val_accuracy: 0.4844
Epoch 3/12
469/469 [==============================] - 80s 170ms/step - loss: 2.1409 - accuracy: 0.3968 - val_loss: 2.0703 - val_accuracy: 0.5835
Epoch 4/12
469/469 [==============================] - 78s 165ms/step - loss: 2.0348 - accuracy: 0.4717 - val_loss: 1.9353 - val_accuracy: 0.6532
Epoch 5/12
469/469 [==============================] - 76s 162ms/step - loss: 1.8967 - accuracy: 0.5258 - val_loss: 1.7613 - val_accuracy: 0.7023
Epoch 6/12
469/469 [==============================] - 76s 162ms/step - loss: 1.7293 - accuracy: 0.5703 - val_loss: 1.5583 - val_accuracy: 0.7390
Epoch 7/12
469/469 [==============================] - 76s 162ms/step - loss: 1.5520 - accuracy: 0.6059 - val_loss: 1.3491 - val_accuracy: 0.7705
Epoch 8/12
469/469 [==============================] - 76s 163ms/step - loss: 1.3812 - accuracy: 0.6368 - val_loss: 1.1568 - val_accuracy: 0.7939
Epoch 9/12
469/469 [==============================] - 79s 169ms/step - loss: 1.2330 - accuracy: 0.6641 - val_loss: 0.9980 - val_accuracy: 0.8077
Epoch 10/12
469/469 [==============================] - 77s 165ms/step - loss: 1.1182 - accuracy: 0.6842 - val_loss: 0.8751 - val_accuracy: 0.8202
Epoch 11/12
469/469 [==============================] - 77s 164ms/step - loss: 1.0237 - accuracy: 0.7052 - val_loss: 0.7818 - val_accuracy: 0.8301
Epoch 12/12
469/469 [==============================] - 78s 166ms/step - loss: 0.9473 - accuracy: 0.7223 - val_loss: 0.7095 - val_accuracy: 0.8389
test loss: 0.709541916847229
test accuracy: 0.8389000296592712