Building a Convolutional Neural Network

1 Preprocessing the Training and Test Images

Note: the images downloaded from the web come in all kinds of formats and sizes, so batch preprocessing is essential.
(1) Resize every image to a uniform 100*100; the code below can serve as a reference.

import os
from PIL import Image

def convertjpg(jpgfile, indir, outdir, width=100, height=100):
    # Resize one image to width x height and save it to outdir
    try:
        img = Image.open(os.path.join(indir, jpgfile))
        new_img = img.resize((width, height), Image.BILINEAR)
        new_img.save(os.path.join(outdir, os.path.basename(jpgfile)))
    except Exception as e:
        print(e)

indir = 'C:/Users/ASUS/Desktop/cat/暹罗猫'
for jpgfile in os.listdir(indir):
    print(jpgfile)
    convertjpg(jpgfile, indir, "./Xianluo")

(2) Normalize the image format and file names; the code below can serve as a reference.

# Unify the image extension and prefix each file name with its class label
def rename_jpg(filepath, kind):
    images = os.listdir(filepath)
    for name in images:
        new_name = kind + '_' + name.split('.')[0] + '.jpg'
        os.rename(os.path.join(filepath, name), os.path.join(filepath, new_name))
        print(name)
        print(name.split('.')[0])

rename_jpg('C:/Users/ASUS/Desktop/cat/英国短毛猫/', '3')

Why rename the files at all?
The training and test sets contain several hundred images in total, and during training we must feed the network not only each image but also its label. Labeling every image by hand would clearly be a huge amount of work, so the label is encoded into the file name instead. Convention: 0_xxx = Ragdoll, 1_xxx = Bombay, 2_xxx = Siamese, 3_xxx = British Shorthair.
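With this convention, the numeric label can be read straight out of a file name, which is exactly what the training script below does. A minimal sketch (the file names here are made up for illustration):

```python
# Label-to-breed mapping from the naming convention above
breeds = {0: 'Ragdoll', 1: 'Bombay', 2: 'Siamese', 3: 'British Shorthair'}

# Hypothetical file names following the "label_name.jpg" convention
filenames = ['0_cat001.jpg', '2_cat017.jpg', '3_cat042.jpg']
for fn in filenames:
    label = int(fn.split('_')[0])   # the digit before the first underscore
    print(fn, '->', label, breeds[label])
```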

2 Training and Testing the Model

import os
from PIL import Image
import numpy as np
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.optimizers import SGD

#--------------------------------------------------------------------------------------------
# Convert the training-set images to arrays
ima1 = os.listdir('./cat/train')
def read_image1(filename):
    img = Image.open('./cat/train/'+filename).convert('RGB')
    return np.array(img)

x_train = []

for i in ima1:
    x_train.append(read_image1(i))

x_train = np.array(x_train)

# Extract labels from the file names
y_train = []
for filename in ima1:
    y_train.append(int(filename.split('_')[0]))

y_train = np.array(y_train)
# -----------------------------------------------------------------------------------------
# Convert the test-set images to arrays
ima2 = os.listdir('./cat/test')
def read_image2(filename):
    img = Image.open('./cat/test/'+filename).convert('RGB')
    return np.array(img)

x_test = []

for i in ima2:
    x_test.append(read_image2(i))

x_test = np.array(x_test)

# Extract labels from the file names
y_test = []
for filename in ima2:
    y_test.append(int(filename.split('_')[0]))

y_test = np.array(y_test)
#-------------------------------------------------------------------------------------
# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)

# Rescale pixel values from 0~255 to 0~1 to help the network train
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

# Build the convolutional neural network
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(100, 100, 3)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4, activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=10, epochs=32)
model.save_weights('./cat/cat_weights.h5', overwrite=True)

score = model.evaluate(x_test, y_test, batch_size=10)
print(score)
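As a sanity check on the architecture, the size of the tensor reaching the Flatten layer can be computed by hand: each 3x3 convolution with 'valid' padding shrinks the spatial size by 2, and each 2x2 max-pool halves it. A small sketch of that arithmetic:

```python
def conv_out(size, kernel=3):
    # 'valid' convolution: output shrinks by kernel - 1
    return size - (kernel - 1)

def pool_out(size, pool=2):
    # non-overlapping 2x2 max pooling: output is floor(size / pool)
    return size // pool

s = 100                 # input images are 100x100x3
s = conv_out(s)         # Conv2D(32) -> 98
s = conv_out(s)         # Conv2D(32) -> 96
s = pool_out(s)         # MaxPooling -> 48
s = conv_out(s)         # Conv2D(64) -> 46
s = conv_out(s)         # Conv2D(64) -> 44
s = pool_out(s)         # MaxPooling -> 22
print(s, s * s * 64)    # 22x22 spatial map, 30976 units into Flatten
```

So the Dense(256) layer sits on top of 30976 flattened features, which is where most of this model's parameters live.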

Note: h5py must be installed, since it is what saves and loads the xx.h5 weight files.

Run the script to train the model and generate the weights file, which can then be used for prediction later on.
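For that later prediction step, one would rebuild the same model, call model.load_weights('./cat/cat_weights.h5'), and then interpret the output of model.predict: it returns one softmax vector per image, and the predicted breed is the index with the highest probability. A minimal sketch of that last step, using a made-up output vector in place of a real prediction:

```python
import numpy as np

# Index order matches the file-name labels 0..3
breeds = ['Ragdoll', 'Bombay', 'Siamese', 'British Shorthair']

# Made-up softmax output for one image; model.predict would return
# an array of shape (n_images, 4) that looks like this
probs = np.array([[0.05, 0.10, 0.70, 0.15]])

idx = int(np.argmax(probs[0]))      # index of the highest probability
print(breeds[idx])                  # the predicted breed name
```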
