Course introduction:
"Learning AI with Yuge" (『跟着雨哥学AI』) is a series of courses recently launched for the high-level API of Baidu's open-source PaddlePaddle framework. Carefully built by several senior PaddlePaddle engineers, it covers the full workflow, from data processing to model building, model training, model evaluation, and inference deployment, and comes with a rich set of fun hands-on cases. The goal is to help developers gain a clear, comprehensive command of the PaddlePaddle framework and apply it flexibly in their own deep learning practice.
Preface: Hi everyone, I'm Yuge. It's graduation season again: many of you are leaving school in March or June this year, and whether you go on to further study or start job hunting, you need to prepare a solid résumé in advance. A résumé naturally calls for a good ID photo, yet every employer's requirements differ (some want a white background, others red), and the photographer's skill is hit or miss, sometimes worse than a selfie from your phone's front camera. So today, let's combine the theory we have already covered and make our own custom ID photo.
How does ID photo generation work?
Before we start, you may have a couple of questions: what does "making an ID photo" mean here, and how do we actually do it?
Our ID photos are generated from an ordinary front-facing phone selfie. In short, we build a system that takes a casual frontal selfie as input and outputs an ID photo with a white or blue background. See the idea? The core craft is matting: given a selfie, a deep learning model removes the cluttered background, cuts out the frontal portrait, and composites it onto a solid-color background, which quickly yields an ID photo we can be happy with.
By the way, since we are our own photographer here, there is no one else to blame if the photo turns out ugly~
In computer vision, matting can be implemented with image segmentation, the process of partitioning a digital image into multiple sub-regions. The purpose of segmentation is to simplify or change the representation of an image so that it is easier to understand and analyze. It is typically used to locate objects and boundaries (lines, curves, and so on). More precisely, image segmentation assigns a label to every pixel in an image such that pixels sharing the same label also share certain visual characteristics.
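To make "a label for every pixel" concrete, here is a toy sketch (a hypothetical 4×4 mask, not taken from any dataset):

```python
import numpy as np

# A hypothetical 4x4 segmentation mask: 0 = background, 1 = person.
# Pixels sharing a label form one region of the image.
mask = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])

# Region sizes follow directly from the label counts.
person_pixels = int((mask == 1).sum())      # 4
background_pixels = int((mask == 0).sum())  # 12
```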
Image segmentation has many application areas: autonomous driving, land-parcel detection, meter reading, and more.
What does the ID photo pipeline look like?
However the details vary, we follow the framework shown in the figure below. From left to right: dataset preparation, data preprocessing and loading, model building, model training, model testing, and model prediction. This tutorial briefly shows how to implement image segmentation, and hence ID photo generation, with the high-level API of the open-source PaddlePaddle framework.
Note: Some of you asked (OK, nobody actually asked) why the diagram is drawn in a U shape. Seems odd, right? Ahem, it is purely meant to echo what comes next. Allow me to introduce our guest.
In this tutorial we use U-Net, a network structure well known in the image segmentation field. It is a deep network that improves on FCN and consists of two stages: downsampling (an encoder for feature extraction) and upsampling (a decoder for resolution restoration). It is named U-Net because its structure resembles the letter U, as shown in the figure below.
Ronneberger O., Fischer P., Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, Cham, 2015.
Preparing the dataset
We use a portrait semantic-segmentation dataset that ships with AI Studio, consisting of frontal photos of people, as shown in the figure. First add this dataset when creating the project, then unzip it.
# Unzip the dataset
!unzip -q -o /home/aistudio/data/data59640/koto.zip
After unzipping, the directory structure looks like this:
koto/
├── imgs
│ ├── 00003-256.jpg
│ ├── ......
├── annos
│ ├── 00001-308.png
│ ├── ......
├── labels.txt
├── train_list.txt
└── valid_list.txt
The imgs and annos directories hold the original images and the label masks respectively; train_list.txt and valid_list.txt list the training and validation sets.
Loading the data
With the dataset in place, the next step is data loading. PaddlePaddle's unified scheme for this is Dataset (dataset definition) + DataLoader (multi-process data loading).
We start with the dataset definition: implement a new Dataset class that inherits from the parent class paddle.io.Dataset and overrides its two abstract methods, __getitem__ and __len__:
import paddle
paddle.__version__
'2.0.0-rc1'
Note that the label data in this dataset are single-channel black-and-white images whose pixel values range over 0-255. For this task we convert them to 0/1 labels, where 0 means background and 1 means person. Because the person is black in some masks and white in others, we decide the background color from the value of the first pixel and handle the two cases separately.
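The rule can be sanity-checked on a toy array before we bake it into the Dataset class (values below are made up for illustration):

```python
import numpy as np

# Toy 2x2 grayscale label: the first pixel is 0, so the background is black
# and the person is white; pixels above 128 become class 1 (person).
label = np.array([[0, 200],
                  [255, 30]])

if label[0][0] == 0:
    binary = (label > 128).astype('int64')   # black background, white person
else:
    binary = (label < 128).astype('int64')   # white background, black person
```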
import io
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image as PilImage
import paddle
from paddle.io import Dataset
from paddle.vision.transforms import transforms as T
from paddle.nn import functional as F
IMAGE_SIZE = (384, 384)
class PetDataset(Dataset):
"""
Dataset definition
"""
def __init__(self, mode='train'):
"""
Constructor
"""
self.image_size = IMAGE_SIZE
self.mode = mode.lower()
assert self.mode in ['train', 'valid', 'predict'], \
"mode should be 'train' or 'valid' or 'predict', but got {}".format(self.mode)
self.train_images = []
self.label_images = []
with open('koto/{}_list.txt'.format(self.mode), 'r') as f:
for line in f.readlines():
image, label = line.strip().split(' ')
image = image.replace('/mnt/d/data/', '')
label = label.replace('/mnt/d/data/', '')
self.train_images.append(image)
self.label_images.append(label)
def _load_img(self, path, color_mode='rgb', transforms=[]):
"""
Unified image-loading helper that normalizes image size and channels
"""
with open(path, 'rb') as f:
img = PilImage.open(io.BytesIO(f.read()))
if color_mode == 'grayscale':
# if image is not already an 8-bit, 16-bit or 32-bit grayscale image
# convert it to an 8-bit grayscale image.
if img.mode not in ('L', 'I;16', 'I'):
img = img.convert('L')
elif color_mode == 'rgba':
if img.mode != 'RGBA':
img = img.convert('RGBA')
elif color_mode == 'rgb':
if img.mode != 'RGB':
img = img.convert('RGB')
else:
raise ValueError('color_mode must be "grayscale", "rgb", or "rgba"')
return T.Compose([
T.Resize(self.image_size)
] + transforms)(img)
def __getitem__(self, idx):
"""
Return (image, label)
"""
train_image = self._load_img(self.train_images[idx],
transforms=[
T.Transpose(),
T.Normalize(mean=127.5, std=127.5)
]) # load the original image
label_image = self._load_img(self.label_images[idx],
color_mode='grayscale',
transforms=[T.Grayscale()]) # load the label image
train_image = np.array(train_image, dtype='float32')
label_image = np.array(label_image, dtype='int64')
# Convert the label image to 0/1 labels: 0 = background, 1 = person.
# The first pixel gives the background color; the comparison is vectorized
# instead of looping over every pixel.
if label_image[0][0] == 0:
# black background, white person
label_image = (label_image > 128).astype('int64')
else:
# white background, black person
label_image = (label_image < 128).astype('int64')
return train_image, label_image
def __len__(self):
"""
Return the total number of samples
"""
return len(self.train_images)
Building the model
For the network structure we reuse the U-Net introduced in the previous lesson, so we will not belabor it here; for the details, revisit "Learning AI with Yuge" episode 06: a fun case of pet image segmentation with U-Net (『跟着雨哥学AI』系列06:趣味案例——基于U-Net的宠物图像分割).
Defining the basic modules used in the network
Define the depthwise separable convolution, the encoder, and the decoder that the model needs.
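Before the code, a quick back-of-the-envelope check shows why the separable form saves parameters. The breakdown mirrors the SeparableConv2D defined next (depthwise filters without bias, a pointwise 1x1 convolution with bias), for the 32-to-64-channel, 3x3 case:

```python
# Parameter count of a depthwise separable conv (32 -> 64 channels, 3x3 kernel):
in_ch, out_ch, k = 32, 64, 3

depthwise = in_ch * 1 * k * k        # one 3x3 filter per input channel, no bias
pointwise = out_ch * in_ch * 1 * 1   # 1x1 conv that mixes the channels
bias = out_ch                        # bias only on the pointwise conv
separable_params = depthwise + pointwise + bias

# A standard Conv2D with the same in/out channels and kernel, for comparison.
standard_params = out_ch * in_ch * k * k + out_ch

print(separable_params, standard_params)  # 2400 18496
```

The 2,400 figure is exactly what the model summary later reports for SeparableConv2D-1, at roughly an eighth of the standard convolution's cost.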
from paddle.nn import functional as F
import numpy as np
# Depthwise separable convolution: the goal is to reduce the parameter count and computation
class SeparableConv2D(paddle.nn.Layer):
def __init__(self,
in_channels,
out_channels,
kernel_size,
stride=1,
padding=0,
dilation=1,
groups=None,
weight_attr=None,
bias_attr=None,
data_format="NCHW"):
super(SeparableConv2D, self).__init__()
self._padding = padding
self._stride = stride
self._dilation = dilation
self._in_channels = in_channels
self._data_format = data_format
# First (depthwise) convolution parameters; no bias
filter_shape = [in_channels, 1] + self.convert_to_list(kernel_size, 2, 'kernel_size')
self.weight_conv = self.create_parameter(shape=filter_shape, attr=weight_attr)
# Second (pointwise) convolution parameters
filter_shape = [out_channels, in_channels] + self.convert_to_list(1, 2, 'kernel_size')
self.weight_pointwise = self.create_parameter(shape=filter_shape, attr=weight_attr)
self.bias_pointwise = self.create_parameter(shape=[out_channels],
attr=bias_attr,
is_bias=True)
def convert_to_list(self, value, n, name, dtype=int):  # builtin int: the np.int alias is removed in NumPy >= 1.24
if isinstance(value, dtype):
return [value, ] * n
else:
try:
value_list = list(value)
except TypeError:
raise ValueError("The " + name +
"'s type must be list or tuple. Received: " + str(
value))
if len(value_list) != n:
raise ValueError("The " + name + "'s length must be " + str(n) +
". Received: " + str(value))
for single_value in value_list:
try:
dtype(single_value)
except (ValueError, TypeError):
raise ValueError(
"The " + name + "'s type must be a list or tuple of " + str(
n) + " " + str(dtype) + " . Received: " + str(
value) + " "
"including element " + str(single_value) + " of type" + " "
+ str(type(single_value)))
return value_list
def forward(self, inputs):
conv_out = F.conv2d(inputs,
self.weight_conv,
padding=self._padding,
stride=self._stride,
dilation=self._dilation,
groups=self._in_channels,
data_format=self._data_format)
out = F.conv2d(conv_out,
self.weight_pointwise,
bias=self.bias_pointwise,
padding=0,
stride=1,
dilation=1,
groups=1,
data_format=self._data_format)
return out
# Downsampling encoder
class Encoder(paddle.nn.Layer):
def __init__(self, in_channels, out_channels):
super(Encoder, self).__init__()
self.relus = paddle.nn.LayerList(
[paddle.nn.ReLU() for i in range(2)])
self.separable_conv_01 = SeparableConv2D(in_channels,
out_channels,
kernel_size=3,
padding='same')
self.bns = paddle.nn.LayerList(
[paddle.nn.BatchNorm2D(out_channels) for i in range(2)])
self.separable_conv_02 = SeparableConv2D(out_channels,
out_channels,
kernel_size=3,
padding='same')
self.pool = paddle.nn.MaxPool2D(kernel_size=3, stride=2, padding=1)
self.residual_conv = paddle.nn.Conv2D(in_channels,
out_channels,
kernel_size=1,
stride=2,
padding='same')
def forward(self, inputs):
previous_block_activation = inputs
y = self.relus[0](inputs)
y = self.separable_conv_01(y)
y = self.bns[0](y)
y = self.relus[1](y)
y = self.separable_conv_02(y)
y = self.bns[1](y)
y = self.pool(y)
residual = self.residual_conv(previous_block_activation)
y = paddle.add(y, residual)
return y
# Upsampling decoder
class Decoder(paddle.nn.Layer):
def __init__(self, in_channels, out_channels):
super(Decoder, self).__init__()
self.relus = paddle.nn.LayerList(
[paddle.nn.ReLU() for i in range(2)])
self.conv_transpose_01 = paddle.nn.Conv2DTranspose(in_channels,
out_channels,
kernel_size=3,
padding=1)
self.conv_transpose_02 = paddle.nn.Conv2DTranspose(out_channels,
out_channels,
kernel_size=3,
padding=1)
self.bns = paddle.nn.LayerList(
[paddle.nn.BatchNorm2D(out_channels) for i in range(2)]
)
self.upsamples = paddle.nn.LayerList(
[paddle.nn.Upsample(scale_factor=2.0) for i in range(2)]
)
self.residual_conv = paddle.nn.Conv2D(in_channels,
out_channels,
kernel_size=1,
padding='same')
def forward(self, inputs):
previous_block_activation = inputs
y = self.relus[0](inputs)
y = self.conv_transpose_01(y)
y = self.bns[0](y)
y = self.relus[1](y)
y = self.conv_transpose_02(y)
y = self.bns[1](y)
y = self.upsamples[0](y)
residual = self.upsamples[1](previous_block_activation)
residual = self.residual_conv(residual)
y = paddle.add(y, residual)
return y
Assembling the network
Assemble the overall network following the U-shaped layout: an initial stride-2 convolution plus three encoder (downsampling) blocks, then four decoder (upsampling) blocks.
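The downsampling/upsampling arithmetic can be verified in a few lines; assuming a 384x384 input, the resolutions match the shapes in the model summary further down:

```python
size = 384                 # input resolution
size //= 2                 # initial stride-2 Conv2D: 384 -> 192
down = []
for _ in range(3):         # three encoder blocks, each halving the resolution
    size //= 2
    down.append(size)
up = []
for _ in range(4):         # four decoder blocks, each doubling the resolution
    size *= 2
    up.append(size)

print(down, up)  # [96, 48, 24] [48, 96, 192, 384]
```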
class PetNet(paddle.nn.Layer):
def __init__(self, num_classes):
super(PetNet, self).__init__()
self.conv_1 = paddle.nn.Conv2D(3, 32,
kernel_size=3,
stride=2,
padding='same')
self.bn = paddle.nn.BatchNorm2D(32)
self.relu = paddle.nn.ReLU()
in_channels = 32
self.encoders = []
self.encoder_list = [64, 128, 256]
self.decoder_list = [256, 128, 64, 32]
# Define the encoder sublayers in a loop from the channel config to avoid repeating the same code
for out_channels in self.encoder_list:
block = self.add_sublayer('encoder_{}'.format(out_channels),
Encoder(in_channels, out_channels))
self.encoders.append(block)
in_channels = out_channels
self.decoders = []
# Define the decoder sublayers in a loop from the channel config to avoid repeating the same code
for out_channels in self.decoder_list:
block = self.add_sublayer('decoder_{}'.format(out_channels),
Decoder(in_channels, out_channels))
self.decoders.append(block)
in_channels = out_channels
self.output_conv = paddle.nn.Conv2D(in_channels,
num_classes,
kernel_size=3,
padding='same')
def forward(self, inputs):
y = self.conv_1(inputs)
y = self.bn(y)
y = self.relu(y)
for encoder in self.encoders:
y = encoder(y)
for decoder in self.decoders:
y = decoder(y)
y = self.output_conv(y)
return y
Model visualization
Once the model is assembled, we can call PaddlePaddle's summary interface to visualize it, which makes it easy to inspect and confirm the model structure and parameter counts.
num_classes = 2
model = paddle.Model(PetNet(num_classes))
model.summary((-1, 3, 384, 384))
-----------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
=============================================================================
Conv2D-1 [[1, 3, 384, 384]] [1, 32, 192, 192] 896
BatchNorm2D-1 [[1, 32, 192, 192]] [1, 32, 192, 192] 128
ReLU-1 [[1, 32, 192, 192]] [1, 32, 192, 192] 0
ReLU-2 [[1, 32, 192, 192]] [1, 32, 192, 192] 0
SeparableConv2D-1 [[1, 32, 192, 192]] [1, 64, 192, 192] 2,400
BatchNorm2D-2 [[1, 64, 192, 192]] [1, 64, 192, 192] 256
ReLU-3 [[1, 64, 192, 192]] [1, 64, 192, 192] 0
SeparableConv2D-2 [[1, 64, 192, 192]] [1, 64, 192, 192] 4,736
BatchNorm2D-3 [[1, 64, 192, 192]] [1, 64, 192, 192] 256
MaxPool2D-1 [[1, 64, 192, 192]] [1, 64, 96, 96] 0
Conv2D-2 [[1, 32, 192, 192]] [1, 64, 96, 96] 2,112
Encoder-1 [[1, 32, 192, 192]] [1, 64, 96, 96] 0
ReLU-4 [[1, 64, 96, 96]] [1, 64, 96, 96] 0
SeparableConv2D-3 [[1, 64, 96, 96]] [1, 128, 96, 96] 8,896
BatchNorm2D-4 [[1, 128, 96, 96]] [1, 128, 96, 96] 512
ReLU-5 [[1, 128, 96, 96]] [1, 128, 96, 96] 0
SeparableConv2D-4 [[1, 128, 96, 96]] [1, 128, 96, 96] 17,664
BatchNorm2D-5 [[1, 128, 96, 96]] [1, 128, 96, 96] 512
MaxPool2D-2 [[1, 128, 96, 96]] [1, 128, 48, 48] 0
Conv2D-3 [[1, 64, 96, 96]] [1, 128, 48, 48] 8,320
Encoder-2 [[1, 64, 96, 96]] [1, 128, 48, 48] 0
ReLU-6 [[1, 128, 48, 48]] [1, 128, 48, 48] 0
SeparableConv2D-5 [[1, 128, 48, 48]] [1, 256, 48, 48] 34,176
BatchNorm2D-6 [[1, 256, 48, 48]] [1, 256, 48, 48] 1,024
ReLU-7 [[1, 256, 48, 48]] [1, 256, 48, 48] 0
SeparableConv2D-6 [[1, 256, 48, 48]] [1, 256, 48, 48] 68,096
BatchNorm2D-7 [[1, 256, 48, 48]] [1, 256, 48, 48] 1,024
MaxPool2D-3 [[1, 256, 48, 48]] [1, 256, 24, 24] 0
Conv2D-4 [[1, 128, 48, 48]] [1, 256, 24, 24] 33,024
Encoder-3 [[1, 128, 48, 48]] [1, 256, 24, 24] 0
ReLU-8 [[1, 256, 24, 24]] [1, 256, 24, 24] 0
Conv2DTranspose-1 [[1, 256, 24, 24]] [1, 256, 24, 24] 590,080
BatchNorm2D-8 [[1, 256, 24, 24]] [1, 256, 24, 24] 1,024
ReLU-9 [[1, 256, 24, 24]] [1, 256, 24, 24] 0
Conv2DTranspose-2 [[1, 256, 24, 24]] [1, 256, 24, 24] 590,080
BatchNorm2D-9 [[1, 256, 24, 24]] [1, 256, 24, 24] 1,024
Upsample-1 [[1, 256, 24, 24]] [1, 256, 48, 48] 0
Upsample-2 [[1, 256, 24, 24]] [1, 256, 48, 48] 0
Conv2D-5 [[1, 256, 48, 48]] [1, 256, 48, 48] 65,792
Decoder-1 [[1, 256, 24, 24]] [1, 256, 48, 48] 0
ReLU-10 [[1, 256, 48, 48]] [1, 256, 48, 48] 0
Conv2DTranspose-3 [[1, 256, 48, 48]] [1, 128, 48, 48] 295,040
BatchNorm2D-10 [[1, 128, 48, 48]] [1, 128, 48, 48] 512
ReLU-11 [[1, 128, 48, 48]] [1, 128, 48, 48] 0
Conv2DTranspose-4 [[1, 128, 48, 48]] [1, 128, 48, 48] 147,584
BatchNorm2D-11 [[1, 128, 48, 48]] [1, 128, 48, 48] 512
Upsample-3 [[1, 128, 48, 48]] [1, 128, 96, 96] 0
Upsample-4 [[1, 256, 48, 48]] [1, 256, 96, 96] 0
Conv2D-6 [[1, 256, 96, 96]] [1, 128, 96, 96] 32,896
Decoder-2 [[1, 256, 48, 48]] [1, 128, 96, 96] 0
ReLU-12 [[1, 128, 96, 96]] [1, 128, 96, 96] 0
Conv2DTranspose-5 [[1, 128, 96, 96]] [1, 64, 96, 96] 73,792
BatchNorm2D-12 [[1, 64, 96, 96]] [1, 64, 96, 96] 256
ReLU-13 [[1, 64, 96, 96]] [1, 64, 96, 96] 0
Conv2DTranspose-6 [[1, 64, 96, 96]] [1, 64, 96, 96] 36,928
BatchNorm2D-13 [[1, 64, 96, 96]] [1, 64, 96, 96] 256
Upsample-5 [[1, 64, 96, 96]] [1, 64, 192, 192] 0
Upsample-6 [[1, 128, 96, 96]] [1, 128, 192, 192] 0
Conv2D-7 [[1, 128, 192, 192]] [1, 64, 192, 192] 8,256
Decoder-3 [[1, 128, 96, 96]] [1, 64, 192, 192] 0
ReLU-14 [[1, 64, 192, 192]] [1, 64, 192, 192] 0
Conv2DTranspose-7 [[1, 64, 192, 192]] [1, 32, 192, 192] 18,464
BatchNorm2D-14 [[1, 32, 192, 192]] [1, 32, 192, 192] 128
ReLU-15 [[1, 32, 192, 192]] [1, 32, 192, 192] 0
Conv2DTranspose-8 [[1, 32, 192, 192]] [1, 32, 192, 192] 9,248
BatchNorm2D-15 [[1, 32, 192, 192]] [1, 32, 192, 192] 128
Upsample-7 [[1, 32, 192, 192]] [1, 32, 384, 384] 0
Upsample-8 [[1, 64, 192, 192]] [1, 64, 384, 384] 0
Conv2D-8 [[1, 64, 384, 384]] [1, 32, 384, 384] 2,080
Decoder-4 [[1, 64, 192, 192]] [1, 32, 384, 384] 0
Conv2D-9 [[1, 32, 384, 384]] [1, 2, 384, 384] 578
=============================================================================
Total params: 2,058,690
Trainable params: 2,051,138
Non-trainable params: 7,552
-----------------------------------------------------------------------------
Input size (MB): 1.69
Forward/backward pass size (MB): 676.12
Params size (MB): 7.85
Estimated Total Size (MB): 685.67
-----------------------------------------------------------------------------
{'total_params': 2058690, 'trainable_params': 2051138}
Defining the evaluation metric
Since the framework does not ship a suitable metric for this task, we use the custom-metric approach covered in an earlier episode, "Learning AI with Yuge" episode 04: advanced usage of the PaddlePaddle framework in depth.
IoU (intersection over union) is the familiar ratio of the overlap between prediction and ground truth to their union, and it is the standard measure for semantic segmentation. Here we define it as a custom metric to gauge the model's predictions.
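On a toy pair of masks (made-up 2x2 values), the computation looks like this:

```python
import numpy as np

# Hypothetical 2x2 prediction and label masks (1 = person, 0 = background).
pred  = np.array([[1, 1],
                  [0, 0]])
label = np.array([[1, 0],
                  [1, 0]])

inter = int(np.sum(np.logical_and(label, pred)))  # 1 pixel where both are 1
union = int(np.sum(np.logical_or(label, pred)))   # 3 pixels where either is 1
iou = inter / union                               # 1/3
```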
from paddle.metric import Metric
import numpy as np
class IOU(Metric):
def __init__(self, name='iou', *args, **kwargs):
super(IOU, self).__init__(*args, **kwargs)
self._name = name
self.inter = 0 # running intersection pixel count
self.union = 0 # running union pixel count
def name(self):
return self._name
def update(self, preds, labels):
# take the most probable label along the class dimension (axis 1) of preds
preds = np.argmax(preds, axis=1)
# accumulate across batches; assigning instead of accumulating here
# would make accumulate() reflect only the last batch
self.inter += np.sum(np.logical_and(labels, preds))
self.union += np.sum(np.logical_or(labels, preds))
def accumulate(self):
return self.inter / self.union
def reset(self):
self.inter = 0
self.union = 0
Model training
Wrap the network in a Model instance, then call model.prepare to configure the optimizer, loss function, evaluation metric, and so on for the upcoming training. Once the initial setup is done, call model.fit to start training; you only need to pass in the training dataset, validation dataset, number of epochs, and batch size defined earlier.
In this case we also pass the save_dir and save_freq arguments, which set the checkpoint directory and saving frequency, so that training can resume from a checkpoint if it is interrupted.
We additionally use PaddlePaddle's EarlyStopping callback: if the validation IoU does not improve for 5 consecutive epochs, training stops early and the best model is saved under the name best_model.
train_dataset = PetDataset(mode='train') # training dataset
val_dataset = PetDataset(mode='valid') # validation dataset
print('Train set size: {}, validation set size: {}'.format(len(train_dataset), len(val_dataset)))
Train set size: 5666, validation set size: 1416
num_classes = 2
model = paddle.Model(PetNet(num_classes))
optim = paddle.optimizer.RMSProp(learning_rate=0.001,
rho=0.9,
momentum=0.0,
epsilon=1e-07,
centered=False,
parameters=model.parameters())
model.prepare(optim, paddle.nn.CrossEntropyLoss(axis=1), metrics=IOU())
callbacks = paddle.callbacks.EarlyStopping('iou',
mode='max',
patience=5,
verbose=1,
min_delta=0,
baseline=None,
save_best_model=True)
model.fit(train_dataset,
val_dataset,
epochs=100,
batch_size=64,
drop_last=True,
verbose=1,
save_dir='./ckpt/',
save_freq=5,
log_freq=1,
callbacks=[callbacks])
The loss value printed in the log is the current step, and the metric is the average value of previous step.
Epoch 1/100
step 88/88 [==============================] - loss: 1.4115 - iou: 0.0982 - 8s/step
save checkpoint at /home/aistudio/ckpt/0
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 23/23 [==============================] - loss: 0.5246 - iou: 0.4950 - 7s/step
Eval samples: 1416
Epoch 2/100
step 88/88 [==============================] - loss: 0.5149 - iou: 0.4484 - 8s/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 23/23 [==============================] - loss: 0.3571 - iou: 0.4504 - 7s/step
Eval samples: 1416
……
Epoch 25/100
step 88/88 [==============================] - loss: 0.1769 - iou: 0.8273 - 8s/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 23/23 [==============================] - loss: 0.0705 - iou: 0.9176 - 7s/step
Eval samples: 1416
Epoch 26/100
step 88/88 [==============================] - loss: 0.1179 - iou: 0.8665 - 8s/step
save checkpoint at /home/aistudio/ckpt/25
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 23/23 [==============================] - loss: 0.0618 - iou: 0.9094 - 7s/step
Eval samples: 1416
Epoch 26: Early stopping.
Best checkpoint has been saved at /home/aistudio/ckpt/best_model
save checkpoint at /home/aistudio/ckpt/final
Model prediction
After training, we call model.load to load the best-performing checkpoint, then call model.predict directly on the dataset; the prediction dataset is all it needs.
For this example we picked an arbitrary line from valid_list.txt and saved it as a separate predict_list.txt for testing.
predict_dataset = PetDataset(mode='predict')
num_classes = 2
model = paddle.Model(PetNet(num_classes))
# Load the model weights
model.load('ckpt/best_model')
model.prepare()
predict_results = model.predict(predict_dataset)
Predict begin...
step 1/1 [==============================] - 283ms/step
Predict samples: 1
Visualizing the prediction
With the prediction in hand, we visualize the original image, the label, and the predicted mask side by side.
import matplotlib.pyplot as plt
from PIL import Image as PilImage
from paddle.vision.transforms import transforms as T
plt.figure(figsize=(10, 10))
with open('koto/predict_list.txt', 'r') as f:
for line in f.readlines():
img_path, label_path = line.strip().split(' ')
img_path = img_path.replace('/mnt/d/data/', '')
label_path = label_path.replace('/mnt/d/data/', '')
resize_t = T.Compose([T.Resize(IMAGE_SIZE)])
img = resize_t(PilImage.open(img_path))
label = resize_t(PilImage.open(label_path))
img = np.array(img).astype('uint8')
label = np.array(label).astype('uint8')
plt.subplot(1, 3, 1)
plt.imshow(img)
plt.axis('off')
plt.title('Input Image')
plt.subplot(1, 3, 2)
plt.imshow(label, cmap='gray')
plt.axis('off')
plt.title('Label')
data = predict_results[0][0][0].transpose((1, 2, 0))
mask = np.argmax(data, axis=-1)
plt.subplot(1, 3, 3)
plt.imshow(mask.astype('uint8'), cmap='gray')
plt.axis('off')
plt.title('Predict')
plt.show()
<Figure size 720x720 with 3 Axes>
Making the ID photo
With the predicted mask, we can finally assemble the ID photo. A white background is used here; feel free to customize the background color.
# Create the background image; white is used here
bg = np.full([384, 384, 3], 255)
# Expand the single-channel prediction mask to three RGB channels
mask_rgb = []
for i in range(3):
mask_rgb.append(mask)
photo_mask = np.array(mask_rgb).transpose((1, 2, 0))
# Keep the background where the mask is 0
photo_bg = bg * (1 - photo_mask)
# Keep the person where the mask is 1
photo_per = img * photo_mask
# Composite the person onto the background
photo = photo_bg + photo_per
plt.imshow(photo.astype('uint8'))
plt.axis('off')
plt.show()
<Figure size 432x288 with 1 Axes>
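Only the bg array needs to change for a different background color. As a sketch with made-up 2x2 stand-ins for the real 384x384 image and mask, a blue background uses the same compositing formula:

```python
import numpy as np

img = np.full([2, 2, 3], 100, dtype='uint8')  # stand-in for the portrait photo
mask = np.array([[1, 0],
                 [0, 1]])                     # stand-in for the predicted mask
photo_mask = np.stack([mask] * 3, axis=-1)    # single channel -> 3 RGB channels

bg = np.zeros([2, 2, 3], dtype='int64')
bg[..., 2] = 255                              # blue background: (R, G, B) = (0, 0, 255)

photo = bg * (1 - photo_mask) + img * photo_mask  # same compositing formula as above
```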
Summary
In this lesson we built our own custom ID photo maker together. What fun case would you like to see next? Let me know in the comments and we will line it up in a future lesson. That's all for today. I'm Yuge; see you next time~
Follow the official PaddlePaddle high-level API account: 飞桨PaddleHapi
If you run into any problems, comment on this project or file an issue in the PaddlePaddle GitHub repository.
Previous episodes:
Episode 5: "Learning AI with Yuge" part 5: a quick, fun case with FashionMNIST
Episode 6: "Learning AI with Yuge" part 6: a fun case of pet image segmentation with U-Net
For more details on PaddlePaddle, see the following resources.
PaddlePaddle official website:
https://www.paddlepaddle.org.cn/
PaddlePaddle open-source framework repositories:
GitHub: https://github.com/PaddlePaddle/Paddle
Gitee: https://gitee.com/paddlepaddle/Paddle
Built on Baidu's years of deep learning research and business applications, PaddlePaddle is China's first open-source, technically advanced, feature-complete industrial-grade deep learning platform, comprising the open-source platform and the enterprise edition. The open-source platform includes the core framework, basic model libraries, end-to-end development kits, and tool components, continuously open-sourcing core capabilities as a foundation for industry, academia, and research innovation. The enterprise edition builds on the open-source platform with features for enterprise needs, including the zero-barrier AI development platform EasyDL and the full-featured AI development platform BML. EasyDL targets small and medium-sized businesses with a zero-barrier, convenient, efficient platform preloaded with rich networks and models; BML serves large enterprises with a comprehensive, flexibly customizable, deeply integrable development platform.