PFLD Paper Analysis and Reimplementation

PFLD Paper Analysis and Reimplementation (Part 2): Reimplementation

Foreword:

PFLD: a simple, fast, and highly accurate facial landmark detection algorithm

These days I happened to resolve some of the issues from my earlier posts on Mobilenet-SSD object detection, so while the training machine is busy running the Mobilenet-SSD model, I will use the spare time to share my reimplementation of the recently published paper "PFLD: A Practical Facial Landmark Detector".

The previous post (https://blog.csdn.net/Danbinbo/article/details/96718937) analyzed and summarized the core ideas of the PFLD paper; this one shares my own implementation. In short, PFLD is a model example of a practical facial landmark detection algorithm: the design of its loss function is the core element of the whole network, and the paper achieves three goals: high accuracy, high speed, and a small model. Paper: https://arxiv.org/pdf/1902.10859.pdf
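As a reminder of that core idea, the PFLD loss weights each sample's landmark error by its head-pose deviation (via the auxiliary network's Euler angles) and by per-category weights for hard cases such as profile or occluded faces. Below is a minimal NumPy sketch of that weighting; the function and argument names are my own, not from the paper's code:

```python
import numpy as np

def pfld_loss(landmarks_pred, landmarks_gt, angle_dev, class_weights):
    """Sketch of the PFLD loss over one mini-batch.

    landmarks_pred, landmarks_gt: (M, 2K) predicted / ground-truth coordinates
    angle_dev: (M, 3) deviation of the (yaw, pitch, roll) angles, in radians
    class_weights: (M,) per-sample weight (sum of the sample's category weights)
    """
    # Geometric penalty: sum over the three angles of (1 - cos theta)
    angle_term = np.sum(1.0 - np.cos(angle_dev), axis=1)          # (M,)
    # Squared L2 distance between predicted and ground-truth landmarks
    dist = np.sum((landmarks_pred - landmarks_gt) ** 2, axis=1)   # (M,)
    # Weighted mean over the batch
    return np.mean(class_weights * angle_term * dist)
```

Note how the geometric term vanishes when the angle deviation is zero, so easy frontal samples contribute little while large poses are up-weighted.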


Facial landmark detection: the deep learning framework used here is TensorFlow. The backbone network structure is given directly below; the Euler angles used in the paper are computed from the landmark coordinates of the face training data.
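For illustration, the in-plane rotation (roll) can be read directly off the eye landmarks; here is a toy NumPy sketch of just that component. (The full pipeline estimates yaw/pitch/roll by fitting the landmarks to a 3D face model, which is omitted here; the function name is my own.)

```python
import numpy as np

def roll_from_eyes(left_eye, right_eye):
    """Estimate roll (in-plane rotation, in degrees) from the two eye centers.

    left_eye, right_eye: (x, y) image coordinates.
    Only the roll component; yaw and pitch need a 3D model fit.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    # Quadrant-aware angle of the eye-to-eye line w.r.t. the horizontal
    return np.degrees(np.arctan2(dy, dx))
```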

#  Backbone network + auxiliary network
def Pfld_Network(input): # 112 * 112 * 3
    with tf.name_scope('Pfld_Network'):
        ##### Part 1: Major (backbone) network #####
        #layers1
        #input= [None,112,112,3]
        with tf.name_scope('layers1'):
            W_conv1 = weight_variable([3, 3, 3, 64],name='W_conv1')
            b_conv1 = bias_variable([64],name='b_conv1')
            x_image = tf.reshape(input, [-1, 112, 112, 3],name='input_X')
            x_image = batch_norm(x_image,is_training=True)
            h_conv_1 = conv2d(x_image,W_conv1,strides=[1,2,2,1],padding='SAME') + b_conv1
        # layers2
        with tf.name_scope('layers2'):
            W_conv2 = weight_variable([3,3,64,1],name='W_conv2')
            b_conv2 = bias_variable([64],name='b_conv2')
            h_conv_1 = batch_norm(h_conv_1,is_training=True)
            h_conv_2 = deepwise_conv2d(h_conv_1,W_conv2) + b_conv2 # 56 * 56 * 64
        # Bottleneck   input = [56*56*64]
        with tf.name_scope('Mobilenet-V2'):
            with tf.name_scope('bottleneck_1'):
                h_conv_b1 = make_bottleneck_block(h_conv_2, 2, 64, stride=[1, 2, 2, 1], kernel=(3, 3)) # 28*28*64
                h_conv_b1 = make_bottleneck_block(h_conv_b1, 2, 64, stride=[1, 1, 1, 1], kernel=(3, 3))  # 28*28*64
                h_conv_b1 = make_bottleneck_block(h_conv_b1, 2, 64, stride=[1, 1, 1, 1], kernel=(3, 3))  # 28*28*64
                h_conv_b1 = make_bottleneck_block(h_conv_b1, 2, 64, stride=[1, 1, 1, 1], kernel=(3, 3))  # 28*28*64
                h_conv_b1 = make_bottleneck_block(h_conv_b1, 2, 64, stride=[1, 1, 1, 1], kernel=(3, 3))  # 28*28*64
            with tf.name_scope('bottleneck_2'):
                h_conv_b2 = make_bottleneck_block(h_conv_b1,2,128,stride=[1,2,2,1],kernel=(3,3)) # 14*14*128
            with tf.name_scope('bottleneck_3'):
                h_conv_b3 = make_bottleneck_block(h_conv_b2, 4, 128, stride=[1, 1, 1, 1], kernel=(3, 3)) # 14*14*128
                h_conv_b3 = make_bottleneck_block(h_conv_b3, 4, 128, stride=[1, 1, 1, 1], kernel=(3, 3))  # 14*14*128
                h_conv_b3 = make_bottleneck_block(h_conv_b3, 4, 128, stride=[1, 1, 1, 1], kernel=(3, 3))  # 14*14*128
                h_conv_b3 = make_bottleneck_block(h_conv_b3, 4, 128, stride=[1, 1, 1, 1], kernel=(3, 3))  # 14*14*128
                h_conv_b3 = make_bottleneck_block(h_conv_b3, 4, 128, stride=[1, 1, 1, 1], kernel=(3, 3))  # 14*14*128
                h_conv_b3 = make_bottleneck_block(h_conv_b3, 4, 128, stride=[1, 1, 1, 1], kernel=(3, 3))  # 14*14*128
            with tf.name_scope('bottleneck_4'):
                h_conv_b4 = make_bottleneck_block(h_conv_b3,2,16,stride=[1,1,1,1],kernel=(3,3))  # 14*14*16
        # S1
        with tf.name_scope('S1'):
            h_conv_s1 = h_conv_b4 # 14 * 14 * 16
        # S2
        with tf.name_scope('S2'):
            W_conv_s2 = weight_variable([3,3,16,32],name='W_conv_s2')
            b_conv_s2 = bias_variable([32],name='b_conv_s2')
            h_conv_s1 = batch_norm(h_conv_s1, is_training=True)
            h_conv_s2 = conv2d(h_conv_s1,W_conv_s2,strides=[1,2,2,1],padding='SAME') + b_conv_s2 # 7 * 7 * 32
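The spatial sizes annotated in the comments above (112 → 56 → 28 → 14 → 7) follow from SAME-padding stride arithmetic, out = ceil(in / stride); only the stride-2 stages shrink the feature map. A quick sanity check:

```python
import math

def same_out(size, stride):
    """Output spatial size of a SAME-padded convolution: ceil(size / stride)."""
    return math.ceil(size / stride)

# Stride-2 stages on the path through the backbone above
size = 112
for stage in ('layers1', 'bottleneck_1', 'bottleneck_2', 'S2'):
    size = same_out(size, 2)
    print(stage, size)  # 56, 28, 14, 7
```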