Gesture Recognition with Neural Networks — Baidu Deep Learning 7-Day Camp, Day 02

Day 02: building the gesture-recognition neural network and implementing the model

- Data preprocessing

Import the required libraries
Split the images
Normalize the images
Set up the training and test data providers

- Training module

Build the neural-network model
Train with the model

- Testing module
- Prediction module

Solving an image-recognition problem with deep learning involves two main modules, training and testing, and three main steps: building the model, defining the loss function, and learning the parameters.

Data preprocessing:

Import the required libraries
import os
import time
import random
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import paddle
import paddle.fluid as fluid
import paddle.fluid.layers as layers
from multiprocessing import cpu_count
from paddle.fluid.dygraph import Pool2D,Conv2D
from paddle.fluid.dygraph import Linear
Split the images

Preprocess the images, taking 10% as the test set and 90% as the training set: walk the image folders, derive each label from its folder name, split test and training 1:9, and write train_data.list and test_data.list.

# Generate the image lists
data_path = '/home/aistudio/data/data23668/Dataset'
character_folders = os.listdir(data_path)
if os.path.exists('./train_data.list'):
    os.remove('./train_data.list')
if os.path.exists('./test_data.list'):
    os.remove('./test_data.list')

'''
Walk the folders, take 10% as the test set and 90% as the training set,
and write train_data.list and test_data.list.
'''
for character_folder in character_folders:

    with open('./train_data.list', 'a') as f_train:
        with open('./test_data.list', 'a') as f_test:
            if character_folder == '.DS_Store':
                continue
            character_imgs = os.listdir(os.path.join(data_path, character_folder))
            count = 0
            for img in character_imgs:
                if img == '.DS_Store':
                    continue
                if count % 10 == 0:  # every tenth image goes to the test set; the other nine go to training
                    f_test.write(os.path.join(data_path, character_folder, img) + '\t' + character_folder + '\n')
                else:
                    f_train.write(os.path.join(data_path, character_folder, img) + '\t' + character_folder + '\n')
                count += 1
print('Lists generated')
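The `count % 10` test above routes every tenth image to the test list and the rest to training. A minimal standalone sketch of the same split logic (the function name `split_by_modulo` is just for illustration):

```python
def split_by_modulo(items, test_every=10):
    """Send every test_every-th item to the test set, the rest to training."""
    train, test = [], []
    for count, item in enumerate(items):
        (test if count % test_every == 0 else train).append(item)
    return train, test

train, test = split_by_modulo(['img_%d.jpg' % i for i in range(100)])
print(len(train), len(test))  # 90 10
```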
Normalize the images

data_mapper(): reads an image, normalizes it, and returns the image together with its label.
data_reader(): reads images in batches according to train_data.list and test_data.list.

def data_mapper(sample):
    img, label = sample
    img = Image.open(img)
    img = img.resize((100, 100), Image.ANTIALIAS)
    img = np.array(img).astype('float32')
    # Transpose from HWC to CHW: the interleaved rgb,rgb,rgb... pixels become
    # separate rrr..., ggg..., bbb... planes, the channel-first layout the conv layers expect
    img = img.transpose((2, 0, 1))
    img = img / 255.0  # scale pixel values into [0, 1]
    return img, label

def data_reader(data_list_path):
    '''Read the samples listed in the given list file.'''
    def reader():
        with open(data_list_path, 'r') as f:
            lines = f.readlines()
            for line in lines:
                img, label = line.split('\t')
                yield img, int(label)
    return paddle.reader.xmap_readers(data_mapper, reader, cpu_count(), 512)
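What `data_mapper` does to each image can be checked with plain NumPy (a sketch on a random dummy image, not the course dataset):

```python
import numpy as np

# A dummy 100x100 RGB image in HWC layout with pixel values in 0..255
hwc = np.random.randint(0, 256, size=(100, 100, 3)).astype('float32')

chw = hwc.transpose((2, 0, 1))  # HWC -> CHW: channels become the leading axis
chw = chw / 255.0               # scale pixel values into [0, 1]

print(chw.shape)  # (3, 100, 100)
```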
 
Set up the training and test data providers

train_reader(): data provider for training; shuffles the samples and yields them in batches (shuffling matters for training).
test_reader(): data provider for testing.

# Data provider for training
train_reader = paddle.batch(reader=paddle.reader.shuffle(reader=data_reader('./train_data.list'), buf_size=256), batch_size=32)
# Data provider for testing
test_reader = paddle.batch(reader=data_reader('./test_data.list'), batch_size=32)
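A simplified picture of what `paddle.reader.shuffle` plus `paddle.batch` provide (this sketch shuffles the whole list at once rather than a sliding `buf_size` window, and drops the final partial batch):

```python
import random

def shuffled_batches(samples, batch_size=32, seed=0):
    """Shuffle the samples, then yield fixed-size batches."""
    buf = list(samples)
    random.Random(seed).shuffle(buf)
    for i in range(0, len(buf) - batch_size + 1, batch_size):
        yield buf[i:i + batch_size]

batches = list(shuffled_batches(range(100)))
print(len(batches), len(batches[0]))  # 3 32
```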

This completes the data-preparation work.

Training module

Building the neural-network model

The neural network is the most critical part of the model; its effectiveness largely determines the quality of the whole model. Deeper is not automatically better: deeper networks are more complex and run into problems such as overfitting and gradients failing to propagate through the depth, which is what architectures like ResNet and DenseNet were designed to mitigate.

Method 1: a LeNet-style model
class LeNet(fluid.dygraph.Layer):
    def __init__(self, training=True):
        super(LeNet, self).__init__()
        self.conv1 = Conv2D(num_channels=3, num_filters=32, filter_size=3, act='relu')
        self.pool1 = Pool2D(pool_size=2, pool_stride=2)

        self.conv2 = Conv2D(num_channels=32, num_filters=32, filter_size=3, act='relu')
        self.pool2 = Pool2D(pool_size=2, pool_stride=2)

        self.conv3 = Conv2D(num_channels=32, num_filters=64, filter_size=3, act='relu')
        self.pool3 = Pool2D(pool_size=2, pool_stride=2)

        self.fc1 = Linear(input_dim=6400, output_dim=4096, act='relu')
        self.drop_ratio1 = 0.5 if training else 0.0  # dropout only during training
        self.fc2 = Linear(input_dim=4096, output_dim=10)

    def forward(self, inputs):
        conv1 = self.conv1(inputs)  # 32 32 98 98
        pool1 = self.pool1(conv1)   # 32 32 49 49

        conv2 = self.conv2(pool1)   # 32 32 47 47
        pool2 = self.pool2(conv2)   # 32 32 23 23

        conv3 = self.conv3(pool2)   # 32 64 21 21
        pool3 = self.pool3(conv3)   # 32 64 10 10

        rs_1 = fluid.layers.reshape(pool3, [pool3.shape[0], -1])  # flatten to batch x 6400
        fc1 = self.fc1(rs_1)
        drop1 = fluid.layers.dropout(fc1, self.drop_ratio1)
        y = self.fc2(drop1)

        return y
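The shape comments in `forward` (and `fc1`'s `input_dim=6400`) can be verified with the standard conv/pool output-size formula; a quick check, assuming stride-1 unpadded 3x3 convolutions and 2x2 stride-2 pooling as configured above:

```python
def conv_out(size, filter_size, stride=1, padding=0):
    """Output spatial size of a convolution."""
    return (size + 2 * padding - filter_size) // stride + 1

def pool_out(size, pool_size, stride):
    """Output spatial size of an unpadded pooling layer."""
    return (size - pool_size) // stride + 1

s = 100  # input images are 100x100
for _ in range(3):  # three conv+pool stages
    s = conv_out(s, 3)
    s = pool_out(s, 2, 2)
print(s, 64 * s * s)  # 10 6400: 64 channels of 10x10 matches fc1's input_dim
```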
Method 2: a DNN model
class MyDNN(fluid.dygraph.Layer):
    def __init__(self):
        super(MyDNN, self).__init__()
        # Three hidden layers, each a 100 -> 100 linear layer with ReLU
        self.hidden1 = Linear(100, 100, act='relu')
        self.hidden2 = Linear(100, 100, act='relu')
        self.hidden3 = Linear(100, 100, act='relu')
        # Output layer: takes the flattened 3*100*100 vector and emits 10 class scores with softmax
        self.hidden4 = Linear(3*100*100, 10, act='softmax')

    def forward(self, input):
        x = self.hidden1(input)  # hidden1 consumes the input
        x = self.hidden2(x)
        x = self.hidden3(x)
        # The output layer's input shape does not match the previous layers' output,
        # so flatten each sample into one 3*100*100 vector first
        x = fluid.layers.reshape(x, shape=[-1, 3*100*100])
        y = self.hidden4(x)  # hidden4 is the output layer
        return y

The hardest part of configuring a network is keeping the dimensions consistent: each layer's output dimension must match the next layer's input dimension.
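The reshape in MyDNN's forward pass exists precisely to make those dimensions line up: each sample's 3x100x100 activation must be flattened into one 3*100*100 vector before the output layer. In NumPy terms:

```python
import numpy as np

x = np.zeros((32, 3, 100, 100), dtype='float32')  # a batch of 32 CHW activations
flat = x.reshape(-1, 3 * 100 * 100)  # same effect as fluid.layers.reshape(x, [-1, 3*100*100])
print(flat.shape)  # (32, 30000)
```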

Train with the network you built

Tune the hyperparameters sensibly.

with fluid.dygraph.guard():  # dygraph (imperative) context
    model = LeNet(True)  # instantiate the model in training mode
    model.train()  # switch to training mode
    # Define the optimizer; fluid.optimizer offers several alternatives (SGD, Momentum, Adam, ...)
    # opt = fluid.optimizer.SGDOptimizer(learning_rate=0.01, parameter_list=model.parameters())
    opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9, parameter_list=model.parameters())
    # Number of epochs; more is not always better, since too many can overfit.
    # ~60 epochs reaches 90%+ here; around 100 is a reasonable budget.
    epochs_num = 80

    # Feed the data to the model in a training loop
    for pass_num in range(epochs_num):
        for batch_id, data in enumerate(train_reader()):
            images = np.array([x[0].reshape(3, 100, 100) for x in data], np.float32)
            labels = np.array([x[1] for x in data]).astype('int64')
            labels = labels[:, np.newaxis]
            # Convert the numpy arrays into dygraph Variables so the model can consume them
            image = fluid.dygraph.to_variable(images)
            label = fluid.dygraph.to_variable(labels)
            logits = model(image)  # forward pass
            pred = fluid.layers.softmax(logits)  # class probabilities, used for accuracy
            # Cross-entropy loss against the ground-truth labels
            loss = fluid.layers.softmax_with_cross_entropy(logits, label)
            # Average over the batch; the smaller the loss, the closer the predictions are to the targets
            avg_loss = fluid.layers.mean(loss)
            acc = fluid.layers.accuracy(pred, label)  # batch accuracy

            if batch_id != 0 and batch_id % 50 == 0:
                print("train_pass:{},batch_id:{},train_loss:{},train_acc:{}".format(pass_num, batch_id, avg_loss.numpy(), acc.numpy()))
            avg_loss.backward()  # back-propagate
            opt.minimize(avg_loss)  # update the parameters
            model.clear_gradients()  # reset gradients before the next batch

    fluid.save_dygraph(model.state_dict(), 'LeNet')  # save the model
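The `softmax_with_cross_entropy` call combines the softmax and the cross-entropy in one numerically stable step; a NumPy sketch for a single sample (illustrative only, not Paddle's implementation):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Numerically stable -log(softmax(logits)[label]) for one sample."""
    shifted = logits - logits.max()  # subtract the max to avoid overflow in exp
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

# With uniform logits every class has probability 1/3, so the loss is ln(3)
print(softmax_cross_entropy(np.zeros(3), 1))  # ~1.0986
```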

LeNet model training results:
train_pass:0,batch_id:50,train_loss:[2.6033573],train_acc:[0.03125]
train_pass:1,batch_id:50,train_loss:[2.3130212],train_acc:[0.]
train_pass:2,batch_id:50,train_loss:[2.3115559],train_acc:[0.]
train_pass:3,batch_id:50,train_loss:[2.3080363],train_acc:[0.]
train_pass:4,batch_id:50,train_loss:[2.3054473],train_acc:[0.]
train_pass:5,batch_id:50,train_loss:[2.3077059],train_acc:[0.]
train_pass:6,batch_id:50,train_loss:[2.306113],train_acc:[0.]
train_pass:7,batch_id:50,train_loss:[2.3078835],train_acc:[0.]
train_pass:8,batch_id:50,train_loss:[2.3035762],train_acc:[0.]
train_pass:9,batch_id:50,train_loss:[2.3070734],train_acc:[0.]
train_pass:10,batch_id:50,train_loss:[2.307796],train_acc:[0.]
train_pass:11,batch_id:50,train_loss:[2.30613],train_acc:[0.]
train_pass:12,batch_id:50,train_loss:[2.2984164],train_acc:[0.0625]
train_pass:13,batch_id:50,train_loss:[2.2895367],train_acc:[0.0625]
train_pass:14,batch_id:50,train_loss:[2.2833552],train_acc:[0.03125]
train_pass:15,batch_id:50,train_loss:[2.2392092],train_acc:[0.21875]
train_pass:16,batch_id:50,train_loss:[2.2434943],train_acc:[0.40625]
train_pass:17,batch_id:50,train_loss:[2.2614546],train_acc:[0.34375]
train_pass:18,batch_id:50,train_loss:[2.5831435],train_acc:[0.03125]
train_pass:19,batch_id:50,train_loss:[2.2954307],train_acc:[0.03125]
train_pass:20,batch_id:50,train_loss:[2.2492304],train_acc:[0.5]
train_pass:21,batch_id:50,train_loss:[2.2516563],train_acc:[0.46875]
train_pass:22,batch_id:50,train_loss:[2.410196],train_acc:[0.]
train_pass:23,batch_id:50,train_loss:[2.1042356],train_acc:[0.53125]
train_pass:24,batch_id:50,train_loss:[2.2364726],train_acc:[0.125]
train_pass:25,batch_id:50,train_loss:[2.0535293],train_acc:[0.59375]
train_pass:26,batch_id:50,train_loss:[2.144804],train_acc:[0.21875]
train_pass:27,batch_id:50,train_loss:[2.2974925],train_acc:[0.0625]
train_pass:28,batch_id:50,train_loss:[2.1055684],train_acc:[0.1875]
train_pass:29,batch_id:50,train_loss:[1.9097934],train_acc:[0.59375]
train_pass:30,batch_id:50,train_loss:[1.9978399],train_acc:[0.1875]
train_pass:31,batch_id:50,train_loss:[2.6551652],train_acc:[0.34375]
train_pass:32,batch_id:50,train_loss:[1.8858719],train_acc:[0.21875]
train_pass:33,batch_id:50,train_loss:[1.8568519],train_acc:[0.28125]
train_pass:34,batch_id:50,train_loss:[1.6665683],train_acc:[0.34375]
train_pass:35,batch_id:50,train_loss:[1.684766],train_acc:[0.4375]
train_pass:36,batch_id:50,train_loss:[1.6913209],train_acc:[0.3125]
train_pass:37,batch_id:50,train_loss:[1.4125628],train_acc:[0.375]
train_pass:38,batch_id:50,train_loss:[1.4758303],train_acc:[0.40625]
train_pass:39,batch_id:50,train_loss:[1.4798434],train_acc:[0.375]
train_pass:40,batch_id:50,train_loss:[1.4191139],train_acc:[0.375]
train_pass:41,batch_id:50,train_loss:[1.0342803],train_acc:[0.65625]
train_pass:42,batch_id:50,train_loss:[1.015683],train_acc:[0.59375]
train_pass:43,batch_id:50,train_loss:[1.2713318],train_acc:[0.5625]
train_pass:44,batch_id:50,train_loss:[1.1245035],train_acc:[0.5625]
train_pass:45,batch_id:50,train_loss:[0.99236476],train_acc:[0.71875]
train_pass:46,batch_id:50,train_loss:[0.5824655],train_acc:[0.8125]
train_pass:47,batch_id:50,train_loss:[0.8661804],train_acc:[0.71875]
train_pass:48,batch_id:50,train_loss:[0.50539327],train_acc:[0.84375]
train_pass:49,batch_id:50,train_loss:[0.80336374],train_acc:[0.75]
train_pass:50,batch_id:50,train_loss:[0.52408004],train_acc:[0.84375]
train_pass:51,batch_id:50,train_loss:[0.58458644],train_acc:[0.8125]
train_pass:52,batch_id:50,train_loss:[0.49318528],train_acc:[0.78125]
train_pass:53,batch_id:50,train_loss:[0.4896847],train_acc:[0.875]
train_pass:54,batch_id:50,train_loss:[0.4816877],train_acc:[0.875]
train_pass:55,batch_id:50,train_loss:[0.7058243],train_acc:[0.75]
train_pass:56,batch_id:50,train_loss:[0.2857253],train_acc:[0.90625]
train_pass:57,batch_id:50,train_loss:[0.34035048],train_acc:[0.90625]
train_pass:58,batch_id:50,train_loss:[0.253412],train_acc:[0.90625]
train_pass:59,batch_id:50,train_loss:[0.24350622],train_acc:[0.90625]
train_pass:60,batch_id:50,train_loss:[0.14342564],train_acc:[0.96875]
train_pass:61,batch_id:50,train_loss:[0.196674],train_acc:[0.96875]
train_pass:62,batch_id:50,train_loss:[0.13154468],train_acc:[0.96875]
train_pass:63,batch_id:50,train_loss:[0.2230658],train_acc:[0.875]
train_pass:64,batch_id:50,train_loss:[0.15462069],train_acc:[0.96875]
train_pass:65,batch_id:50,train_loss:[0.102877],train_acc:[1.]
train_pass:66,batch_id:50,train_loss:[0.02462327],train_acc:[1.]
train_pass:67,batch_id:50,train_loss:[0.12254722],train_acc:[0.96875]
train_pass:68,batch_id:50,train_loss:[0.13074552],train_acc:[0.9375]
train_pass:69,batch_id:50,train_loss:[0.12016328],train_acc:[0.9375]
train_pass:70,batch_id:50,train_loss:[0.02072803],train_acc:[1.]
train_pass:71,batch_id:50,train_loss:[0.01553892],train_acc:[1.]
train_pass:72,batch_id:50,train_loss:[0.01331428],train_acc:[1.]
train_pass:73,batch_id:50,train_loss:[0.05098783],train_acc:[1.]
train_pass:74,batch_id:50,train_loss:[0.02384487],train_acc:[1.]
train_pass:75,batch_id:50,train_loss:[0.01114329],train_acc:[1.]
train_pass:76,batch_id:50,train_loss:[0.03901853],train_acc:[1.]
train_pass:77,batch_id:50,train_loss:[0.02505683],train_acc:[1.]
train_pass:78,batch_id:50,train_loss:[0.03033306],train_acc:[1.]
train_pass:79,batch_id:50,train_loss:[0.03308918],train_acc:[1.]

Testing

# Model validation
with fluid.dygraph.guard():
    accs = []
    model_dict, _ = fluid.load_dygraph('LeNet')
    model = LeNet(False)  # training=False disables dropout for evaluation
    model.load_dict(model_dict)  # load the model parameters
    model.eval()  # evaluation mode
    for batch_id, data in enumerate(test_reader()):  # iterate over the test set
        images = np.array([x[0].reshape(3, 100, 100) for x in data], np.float32)
        labels = np.array([x[1] for x in data]).astype('int64')
        labels = labels[:, np.newaxis]

        image = fluid.dygraph.to_variable(images)
        label = fluid.dygraph.to_variable(labels)

        predict = model(image)
        acc = fluid.layers.accuracy(predict, label)
        accs.append(acc.numpy()[0])
        avg_acc = np.mean(accs)
    print(avg_acc)

0.9117064
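`fluid.layers.accuracy` with its default `k=1` is just the fraction of samples whose highest-scoring class matches the label; the per-batch values are then averaged as above. A NumPy sketch:

```python
import numpy as np

def batch_accuracy(scores, labels):
    """Fraction of rows whose argmax equals the label (top-1 accuracy)."""
    return float((scores.argmax(axis=1) == labels).mean())

scores = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels = np.array([0, 1, 1])
print(batch_accuracy(scores, labels))  # 2 of 3 correct -> 0.666...
```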

Prediction

Load the image to predict and run inference.

def load_image(path):
    img = Image.open(path)
    img = img.resize((100, 100), Image.ANTIALIAS)
    img = np.array(img).astype('float32')
    img = img.transpose((2, 0, 1))
    img = img/255.0
    print(img.shape)
    return img

# Build the prediction pass in dygraph
with fluid.dygraph.guard():
    infer_path = '手势.JPG'
    model = LeNet(False)  # instantiate without dropout for inference
    model_dict, _ = fluid.load_dygraph('LeNet')
    model.load_dict(model_dict)  # load the model parameters
    model.eval()  # evaluation mode
    infer_img = load_image(infer_path)
    infer_img = np.array(infer_img).astype('float32')
    infer_img = infer_img[np.newaxis, :, :, :]  # add the batch dimension
    infer_img = fluid.dygraph.to_variable(infer_img)
    result = model(infer_img)
    display(Image.open('手势.JPG'))
    print(np.argmax(result.numpy()))  # predicted class index

