iFLYTEK Face Keypoint Detection Challenge -- Check-in 1

  • I recently joined the Coggle 30 Days of ML check-in activity and signed up for the CV competition. The task page is: https://challenge.xfyun.cn/topic/info?type=key-points-of-human-face&ch=dw-sq-1

  • Task overview:

    • Face recognition is a biometric technology that identifies people from facial features; finance and security are currently its two most widespread application areas. Facial keypoints are a core component of face recognition. Keypoint detection must locate specified positions on a face, such as the eyebrows, eyes, nose, mouth, and facial contour. Given a face image, the task is to find 4 facial keypoints, so it can be framed as a keypoint detection problem. Training set: 5,000 face images with keypoint annotations provided. Test set: about 2,000 face images for which contestants must predict the keypoint locations.
  • First, read the data, fill in the missing keypoint values, and reconstruct images from the array. Missing values are filled with the DataFrame's built-in forward fill (each gap takes the value from the previous row), and images are displayed with matplotlib. Code:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

train_df = pd.read_csv('./人脸关键点检测挑战赛_数据集/train.csv')
train_img = np.load('./人脸关键点检测挑战赛_数据集/train.npy/train.npy')
test_img = np.load('./人脸关键点检测挑战赛_数据集/test.npy/test.npy')

print(train_df.head())
print(train_img.shape)

### Missing-value filling on the DataFrame
print(train_df.isnull().sum())
train_df.ffill(inplace=True)  # fillna(method='ffill') is deprecated in newer pandas
print(train_df.isnull().sum())

### Displaying an image (images are stacked along the last axis)
plt.imshow(train_img[:, :, 1], cmap='gray')
plt.show()
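To make the forward fill concrete, here is a minimal sketch with made-up coordinate values (not the competition data) showing how each NaN inherits the value from the row above:

```python
import numpy as np
import pandas as pd

# Toy frame with a missing y-coordinate, mimicking the keypoint columns.
df = pd.DataFrame({
    "left_eye_x": [38.0, 40.0, 39.0],
    "left_eye_y": [42.0, np.nan, 41.0],
})

# Forward fill: each NaN is replaced by the value from the previous row.
filled = df.ffill()
print(filled["left_eye_y"].tolist())  # [42.0, 42.0, 41.0]
```

Note that forward fill borrows a label from a different face, so it is a crude baseline; interpolation or dropping incomplete rows are reasonable alternatives.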
  • The model combines convolutional and fully connected layers, implemented in PyTorch: four conv layers are defined (though only the first three are applied in the forward pass) plus four fully connected layers. Code:
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):

    def __init__(self, output_dims):
        super(MLP, self).__init__()
        self.output_dims = output_dims
        self.conv1 = nn.Conv2d(
                        in_channels=1, out_channels=16,
                        kernel_size=3, stride=2)
        self.conv2 = nn.Conv2d(
                        in_channels=16, out_channels=32,
                        kernel_size=3, stride=2)
        self.conv3 = nn.Conv2d(
                        in_channels=32, out_channels=64,
                        kernel_size=3, stride=2)
        # conv4 is defined but not used in forward(); applying it would
        # shrink fc1's input from 7744 to 64 * 5 * 5 = 1600
        self.conv4 = nn.Conv2d(
                        in_channels=64, out_channels=64,
                        kernel_size=3, stride=2)
        # 64 * 11 * 11 = 7744 features after three stride-2 convs
        self.fc1 = nn.Linear(7744, 1600)
        self.fc2 = nn.Linear(1600, 800)
        self.fc3 = nn.Linear(800, 100)
        self.fc4 = nn.Linear(100, self.output_dims)

    def forward(self, X):
        X = F.relu(self.conv1(X))
        X = F.relu(self.conv2(X))
        X = F.relu(self.conv3(X))
        X = X.reshape(X.shape[0], -1)  # flatten to (batch, features)
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        X = F.relu(self.fc3(X))
        out = self.fc4(X)
        return out
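The in_features of fc1 can be checked with a little conv arithmetic. Assuming 96×96 input images (an assumption on my part; the `print(train_img.shape)` above reveals the real size), each kernel-3, stride-2, no-padding conv maps a side length n to floor((n - 3) / 2) + 1:

```python
def conv_out(n, k=3, s=2, p=0):
    """Output spatial size of one conv layer: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

n = 96  # assumed input side length
sizes = [n]
for _ in range(4):
    n = conv_out(n)
    sizes.append(n)

print(sizes)               # [96, 47, 23, 11, 5]
print(64 * sizes[3] ** 2)  # 7744: fc1 input after three convs
print(64 * sizes[4] ** 2)  # 1600: fc1 input if conv4 were also applied
```

Three convs give 64 × 11 × 11 = 7744 features, matching fc1; applying conv4 as well would require fc1 to take 64 × 5 × 5 = 1600 inputs instead.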
  • Next, the data gets some simple preprocessing before being fed to the model: splitting into training and validation sets, and converting arrays to tensors:
from sklearn.model_selection import train_test_split
from torch.utils.data import Dataset, DataLoader, TensorDataset

# Split into training and validation sets; images go from (H, W, N) to (N, H, W).
Xtrain, Xtest, ytrain, ytest = train_test_split(
                    train_img.transpose(2, 0, 1),
                    train_df.values.astype(np.float32),
                    test_size=0.1)

def in_out_creat(inputData, outputData):
    # unsqueeze(1) adds the channel axis: (N, H, W) -> (N, 1, H, W)
    inputData = torch.FloatTensor(inputData).unsqueeze(1)
    outputData = torch.FloatTensor(outputData)
    return DataLoader(TensorDataset(inputData, outputData),
            batch_size=para.batch_size, shuffle=True)
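The transpose and unsqueeze steps are pure axis bookkeeping: the loaded array stacks images along the last axis, while Conv2d expects (N, C, H, W). A numpy-only sketch (the 96×96 size and the count of 5,000 follow from the task description, but the layout itself is an assumption about the npy file):

```python
import numpy as np

imgs = np.zeros((96, 96, 5000), dtype=np.float32)  # (H, W, N) as loaded
batch_first = imgs.transpose(2, 0, 1)              # -> (N, H, W)
with_channel = batch_first[:, None, :, :]          # -> (N, 1, H, W), like unsqueeze(1)
print(with_channel.shape)  # (5000, 1, 96, 96)
```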
  • Finally, model training: hyperparameters are stored in a class, a training function is written, and a cosine annealing learning-rate schedule is used:
from torch.optim.lr_scheduler import CosineAnnealingLR

class Params:
    """Training hyperparameters stored in a class (values here are illustrative)."""
    lr = 1e-3
    epochs = 20
    batch_size = 64

para = Params()

def train_model(model, trainLoader, validLoader, params):
    train_loss, valid_loss = [], []
    loss_func = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=params.lr)
    scheduler = CosineAnnealingLR(optimizer, T_max=10)
    for i in range(params.epochs):

        model.train()
        for ibatch, (X, y) in enumerate(trainLoader):
            optimizer.zero_grad()
            out = model(X)
            loss = loss_func(out, y)  # MSELoss convention is (input, target)
            loss.backward()
            optimizer.step()
            train_loss.append(loss.item())
        scheduler.step()  # advance the cosine schedule once per epoch

        model.eval()
        with torch.no_grad():  # no gradients needed for validation
            for iv, (X, y) in enumerate(validLoader):
                out = model(X)
                v_loss = loss_func(out, y)
                valid_loss.append(v_loss.item())

        if i % 2 == 0:
            print("train loss: {}, valid loss: {}".format(
                    loss.item(), v_loss.item()))

trainLoader = in_out_creat(Xtrain, ytrain)
validLoader = in_out_creat(Xtest, ytest)
model = MLP(output_dims=8)

train_model(model, trainLoader, validLoader, para)
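CosineAnnealingLR lowers the learning rate from its initial value toward eta_min (default 0) along half a cosine curve over T_max scheduler steps. A pure-Python sketch of the formula, with an illustrative base learning rate:

```python
import math

def cosine_lr(step, base_lr=1e-3, eta_min=0.0, t_max=10):
    """eta_t = eta_min + (base_lr - eta_min) * (1 + cos(pi * t / T_max)) / 2"""
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * step / t_max)) / 2

print(cosine_lr(0))   # 0.001  (starts at the base lr)
print(cosine_lr(5))   # 0.0005 (halfway through the cycle)
print(cosine_lr(10))  # 0.0    (reaches eta_min at T_max)
```

Since the training loop steps the scheduler once per epoch, the rate completes one half-cosine every 10 epochs with T_max=10.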
  • Training results, showing the loss decreasing:
train loss: 113.74918365478516, valid loss: 121.13554382324219
train loss: 123.68692779541016, valid loss: 129.83938598632812
train loss: 68.00350189208984, valid loss: 43.04329299926758
train loss: 46.17871856689453, valid loss: 75.8116226196289
train loss: 25.16992950439453, valid loss: 23.86362648010254