Teaching Myself Paddle (Part 2)

This post walks through building a convolutional neural network (CNN) with the PaddlePaddle framework to improve handwritten digit recognition accuracy. It first defines a simple CNN with convolution, pooling, and fully connected layers, then shows both a hand-written data loader and the built-in DataLoader. Training uses the SGD optimizer and a cross-entropy loss. Code for training and validation is provided, along with a way to evaluate model accuracy, and the post ends with a prediction example.

Project 1: Handwritten Digit Recognition

To recap: in the previous post we regressed the digit with a single-layer network, which clearly isn't good enough, so this time we increase the model's structural complexity to try to push accuracy up.

1. Model Architecture

Enter the CNN. I've always used PyTorch before, but I keep wanting to try new things, so I'm playing with Paddle to see how it differs.

import paddle
from paddle.nn import Conv2D,MaxPool2D,Linear
import paddle.nn.functional as F 

class MNIST(paddle.nn.Layer):
    def __init__(self):
        super(MNIST, self).__init__()
        # First conv block: 1 -> 20 channels, 5x5 kernel, padding keeps the 28x28 size
        self.conv1 = Conv2D(in_channels=1, out_channels=20, kernel_size=5, stride=1, padding=2)
        self.max_pool1 = MaxPool2D(kernel_size=2, stride=2)
        # Second conv block: 20 -> 20 channels
        self.conv2 = Conv2D(in_channels=20, out_channels=20, kernel_size=5, stride=1, padding=2)
        self.max_pool2 = MaxPool2D(kernel_size=2, stride=2)
        # Two 2x2 poolings take 28x28 down to 7x7, so 20 * 7 * 7 = 980 features
        self.fc = Linear(in_features=980, out_features=10)

    def forward(self, inputs):
        x = self.conv1(inputs)
        x = F.relu(x)
        x = self.max_pool1(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.max_pool2(x)
        # Flatten to [batch_size, 980] before the fully connected layer
        x = paddle.reshape(x, [x.shape[0], 980])
        x = self.fc(x)
        return x
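
As a quick sanity check (my own addition, not part of the original tutorial), you can push a dummy batch through the network to confirm that the flattened feature size really is 20 × 7 × 7 = 980 and that the output has one logit per digit class:

model = MNIST()
dummy = paddle.randn([4, 1, 28, 28], dtype='float32')  # a fake batch of 4 grayscale 28x28 images
out = model(dummy)
print(out.shape)  # expect [4, 10]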

2. Data Loading

The previous post used Paddle's built-in DataLoader to wrap the data; this time let's try writing a loader by hand.
This example is a nice way to illustrate what DataLoader does under the hood.

import json
import gzip
import random        # needed for shuffling the index list in train mode
import numpy as np   # needed for reshaping images and labels
def load_data(mode='train'):
	datafile='./work/mnist.json.gz'
	data=json.load(gzip.open(datafile))
	train_set,val_set,eval_set=data
	IMG_ROWS=28
	IMG_COLS=28
	if mode=='train':
		imgs,labels=train_set[0],train_set[1]
	elif mode=='valid':
		imgs,labels=val_set[0],val_set[1]
	elif mode=='eval':
		imgs,labels=eval_set[0],eval_set[1]
	else:
		raise Exception("mode can only be one of ['train','valid','eval']")
	
	imgs_length=len(imgs)
	assert len(imgs)==len(labels),"length of train_imgs({}) should be the same as train_labels({})".format(len(imgs),len(labels))
	index_list=list(range(imgs_length))
	BATCH_SIZE=100
	
	def data_generator():
		if mode=='train':
			random.shuffle(index_list)
		imgs_list=[]
		labels_list=[]
		for i in index_list:
			img=np.reshape(imgs[i],[1,IMG_ROWS,IMG_COLS]).astype('float32')
			label=np.reshape(labels[i],[1]).astype('int64')  # int64 labels, as required by cross_entropy later on
			imgs_list.append(img)
			labels_list.append(label)
			
			if len(imgs_list)==BATCH_SIZE:
				yield np.array(imgs_list),np.array(labels_list)
				imgs_list=[]
				labels_list=[]
		if len(imgs_list)>0:
			yield np.array(imgs_list),np.array(labels_list)
	return data_generator
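
A quick usage sketch (my own illustration, not in the original): load_data returns the generator function itself, so you call it to get a fresh iterator, e.g. once per epoch:

train_loader = load_data(mode='train')
for batch_id, (imgs, labels) in enumerate(train_loader()):
    print(batch_id, imgs.shape, labels.shape)  # (100, 1, 28, 28) and (100, 1) per batch
    break  # just peek at the first batch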

Of course, in practice I'll still use the ready-made DataLoader.
While we're at it, this is also a good place to review how Dataset is wrapped.

import paddle
import numpy as np
import json
import gzip
class MNISTDataset(paddle.io.Dataset):
	def __init__(self,mode):
		datafile='./work/mnist.json.gz'
		data=json.load(gzip.open(datafile))
		train_set,val_set,eval_set=data
		self.IMG_ROWS=28
		self.IMG_COLS=28
		if mode=='train':
			imgs,labels=train_set[0],train_set[1]
		elif mode=='valid':
			imgs,labels=val_set[0],val_set[1]
		elif mode=='eval':
			imgs,labels=eval_set[0],eval_set[1]
		else:
			raise Exception("mode can only be one of ['train','valid','eval']")
		assert len(imgs)==len(labels),"length of train_imgs({}) should be the same as train_labels({})".format(len(imgs),len(labels))
		self.imgs=imgs
		self.labels=labels
	
	def __getitem__(self,idx):
		img=np.reshape(self.imgs[idx],[1,self.IMG_ROWS,self.IMG_COLS]).astype('float32')
		label=np.reshape(self.labels[idx],[1]).astype('int64')
		return img,label
	
	def __len__(self):
		return len(self.imgs)

This is almost identical to PyTorch's Dataset; the only real difference is the base class, paddle.io.Dataset.
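
For reference, wrapping this dataset with paddle.io.DataLoader is a one-liner (the same call shows up again in the full script at the end); batch_size, shuffle and drop_last here are just example settings:

train_dataset = MNISTDataset(mode='train')
train_loader = paddle.io.DataLoader(train_dataset, batch_size=100, shuffle=True, drop_last=True)
for images, labels in train_loader:
    print(images.shape, labels.shape)  # [100, 1, 28, 28] and [100, 1]
    break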

3. Training Strategy

Still SGD:

opt=paddle.optimizer.SGD(learning_rate=0.01,parameters=model.parameters())
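
Paddle's optimizer module also has the usual alternatives; as a sketch (not used in this post), swapping in Momentum or Adam only changes this one line, and the hyperparameters below are just illustrative values:

# either of these is a drop-in replacement for the SGD line above
opt = paddle.optimizer.Momentum(learning_rate=0.01, momentum=0.9, parameters=model.parameters())
opt = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())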

4. Loss Function

Here I use cross entropy, which works much better than mean squared error for classification.
MSE looks like this:

import paddle.nn.functional as F 

loss=F.square_error_cost(predicts,labels)

Cross entropy:

import paddle.nn.functional as F 
loss=F.cross_entropy(predicts,labels)
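
A tiny sketch (my own, with made-up tensors) of the practical difference: cross_entropy takes raw logits plus int64 class indices and applies softmax internally, while square_error_cost compares two float tensors elementwise:

import paddle
import paddle.nn.functional as F

logits = paddle.randn([4, 10], dtype='float32')                    # raw network outputs, no softmax needed
class_ids = paddle.to_tensor([[3], [0], [7], [1]], dtype='int64')  # cross_entropy wants int64 class indices
ce_loss = paddle.mean(F.cross_entropy(logits, class_ids))          # softmax + negative log-likelihood

targets = paddle.randn([4, 10], dtype='float32')                   # square_error_cost compares float tensors
mse_loss = paddle.mean(F.square_error_cost(logits, targets))
print(ce_loss.numpy(), mse_loss.numpy())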

5. Training and Validation

The training script:

def train(model, train_loader):
    model.train()
    opt = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())
    EPOCH_NUM = 10
    for epoch_id in range(EPOCH_NUM):
        for batch_id, data in enumerate(train_loader):
            images, labels = data
            images = paddle.to_tensor(images)
            labels = paddle.to_tensor(labels)
            predicts = model(images)

            loss = F.cross_entropy(predicts, labels)
            avg_loss = paddle.mean(loss)

            # log every 200 batches
            if batch_id % 200 == 0:
                print("epoch:{}, batch:{}, loss:{}".format(epoch_id, batch_id, avg_loss.numpy()))

            avg_loss.backward()
            opt.step()
            opt.clear_grad()

    paddle.save(model.state_dict(), 'mnist.pdparams')

Now for the validation code.
This introduces something new:

paddle.metric.accuracy(input=pred,label=labels)

It looks a bit mysterious at first, and it's not obvious what's going on inside.
The function actually does two things: it computes ordinary classification accuracy, and it can also compute top-k accuracy.
A quick look at its underlying code makes it all clear, haha:

def accuracy(input, label, k=1, correct=None, total=None, name=None):
    """
    accuracy layer.
    Refer to the https://en.wikipedia.org/wiki/Precision_and_recall                                                                                           
 
    This function computes the accuracy using the input and label.
    If the correct label occurs in top k predictions, then correct will increment by one.
    Note: the dtype of accuracy is determined by input. the input and label dtype can be different.
 
    Args:
        input(Tensor): The input of accuracy layer, which is the predictions of network. A Tensor with type float32,float64.
            The shape is ``[sample_number, class_dim]`` .
        label(Tensor): The label of dataset. Tensor with type int32,int64. The shape is ``[sample_number, 1]`` .
        k(int, optional): The top k predictions for each class will be checked. Data type is int64 or int32.
        correct(Tensor, optional): The correct predictions count. A Tensor with type int64 or int32.
        total(Tensor, optional): The total entries count. A tensor with type int64 or int32.
        name(str, optional): The default value is None. Normally there is no need for
            user to set this property. For more information, please refer to :ref:`api_guide_Name`
 
    Returns:
        Tensor, the correct rate. A Tensor with type float32.
 
    Examples:
        .. code-block:: python
 
            import paddle
 
            predictions = paddle.to_tensor([[0.2, 0.1, 0.4, 0.1, 0.1], [0.2, 0.3, 0.1, 0.15, 0.25]], dtype='float32')
            label = paddle.to_tensor([[2], [0]], dtype="int64")
            result = paddle.metric.accuracy(input=predictions, label=label, k=1)
            # [0.5]
    """
    if in_dygraph_mode():
        if correct is None:
            correct = _varbase_creator(dtype="int32")
        if total is None:
            total = _varbase_creator(dtype="int32")

        topk_out, topk_indices = paddle.topk(input, k=k)
        _acc, _, _ = _C_ops.accuracy(topk_out, topk_indices, label, correct,
                                     total)
        return _acc

That said, judging from the correct parameter, the function supports accumulating counters as tensors in dynamic graph mode; I don't yet know what that parameter is actually useful for... LOL.
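
As a small illustration of the top-k branch (my own example, reusing the tensors from the docstring above), the true class only needs to appear among the k highest scores to count as correct:

import paddle

predictions = paddle.to_tensor([[0.2, 0.1, 0.4, 0.1, 0.1],
                                [0.2, 0.3, 0.1, 0.15, 0.25]], dtype='float32')
label = paddle.to_tensor([[2], [0]], dtype='int64')
print(paddle.metric.accuracy(input=predictions, label=label, k=1))  # 0.5: only the first sample's top-1 is right
print(paddle.metric.accuracy(input=predictions, label=label, k=3))  # 1.0: class 0 is in the second sample's top-3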
So how does this API fit into the validation code? Here it is:

def evaluation(model,data_loader):
	model.eval()
	acc_set=list()
	for batch_id,data in enumerate(data_loader()):
		images,labels=data
		images=paddle.to_tensor(images)
		labels=paddle.to_tensor(labels)
		pred=model(images)
		acc=paddle.metric.accuracy(input=pred,label=labels)
		acc_set.extend(acc.numpy())
	acc_val_mean=np.array(acc_set).mean()
	return acc_val_mean

Alright, finally let's pull all the code together:

import paddle
import numpy as np
import paddle.nn.functional as F
from dataset import MNISTDataset
from network import MNIST
from PIL import Image

def evaluation(model,data_loader):
    model.eval()
    acc_set=list()
    for batch_id,data in enumerate(data_loader()):
        images,labels=data
        images=paddle.to_tensor(images)
        labels=paddle.to_tensor(labels)
        pred=model(images)
        acc=paddle.metric.accuracy(input=pred,label=labels)
        acc_set.extend(acc.numpy())

    acc_val_mean=np.array(acc_set).mean()
    return acc_val_mean

def train(model,train_loader,val_loader):
    model.train()
    opt=paddle.optimizer.SGD(learning_rate=0.01,parameters=model.parameters())
    EPOCH_NUM=10
    for epoch_id in range(EPOCH_NUM):
        for batch_id,data in enumerate(train_loader):
            images,labels=data
            images=paddle.to_tensor(images)
            labels=paddle.to_tensor(labels)
            predicts=model(images)

            loss=F.cross_entropy(predicts,labels)
            avg_loss=paddle.mean(loss)

            if batch_id%200==0:
                print("epoch:{}, batch:{}, loss is :{}".format(epoch_id,batch_id,avg_loss.numpy()))

            avg_loss.backward()
            opt.step()
            opt.clear_grad()
        
        acc_train_mean = evaluation(model, train_loader)
        acc_val_mean = evaluation(model, val_loader)
        print('train_acc: {}, val_acc: {}'.format(acc_train_mean, acc_val_mean))
        model.train()  # evaluation() put the model into eval mode; switch back before the next epoch
    paddle.save(model.state_dict(),'mnist.pdparams')

def load_image(img_path):
    # read the image as grayscale and resize to the 28x28 input the model expects
    im=Image.open(img_path).convert('L')
    im=im.resize((28,28),Image.ANTIALIAS)
    im=np.array(im).reshape(1,1,28,28).astype(np.float32)
    # invert and normalize: MNIST digits are light strokes on a dark background
    im=1.0-im/255
    return im

model=MNIST()
train_dataset=MNISTDataset(mode='train')
train_loader=paddle.io.DataLoader(train_dataset,batch_size=100,shuffle=True,drop_last=True)
val_dataset=MNISTDataset(mode='valid')
val_loader=paddle.io.DataLoader(val_dataset,batch_size=128,drop_last=True)
train(model,train_loader,val_loader)

# 预测
params_file_path='mnist.pdparams'
img_path='work/example_1.png'
param_dict=paddle.load(params_file_path)
model.load_dict(param_dict)
model.eval()

tensor_img=load_image(img_path)
results=model(paddle.to_tensor(tensor_img))
lab=np.argsort(results.numpy())
print("Predicted digit:",lab[0][-1])
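
Since we only want the single most likely class, a slightly more direct alternative (just a style preference on my part, not from the original) is paddle.argmax:

# equivalent to taking the last index from argsort
pred_digit = paddle.argmax(results, axis=1).numpy()[0]
print("Predicted digit:", pred_digit)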

Keep at it!
