Deep Learning, Round Two: Face Gender Recognition

Preface

The data used in this post comes from this GitHub repository: https://github.com/chenlinzhong/gender-recognition, and thanks go to its author. The original implementation was built with TensorFlow;

this post uses Keras to streamline the code.

Data Description

The dataset contains 399 photos of men and women. The data directory holds the images: the female folder stores female photos and the male folder stores male photos. Each image has shape (112, 92, 3).

Reading the Data

We read the female photos and label them 0, and the male photos and label them 1, reading each image as grayscale. The data is then shuffled and split into training and test sets at a 4:1 ratio.

import numpy as np
import os,cv2
import tensorflow as tf
from tensorflow import keras

# Path to the data
filename = '/gender-recognition-master/data/'

def read_data(filename=filename,rate=0.2):
    # Read female photos, labeled 0
    imgs_female = os.listdir(filename+'female/')
    labels_female = [0] * len(imgs_female)
    X_female = np.zeros(shape=[len(imgs_female),112,92,1])
    y_female = np.zeros(shape=[len(labels_female),1])
    for index,img in enumerate(imgs_female):
        # Read as grayscale: shape (112, 92)
        vector = cv2.imread(filename + 'female/' + img,0)
        # Normalize to [0, 1]
        vector = vector/255.0
        X_female[index,:,:,0] = vector
    for index,label in enumerate(labels_female):
        y_female[index,:] = label
    
    # Read male photos, labeled 1
    imgs_male = os.listdir(filename+'male/')
    labels_male = [1] * len(imgs_male)
    X_male = np.zeros(shape=[len(imgs_male),112,92,1])
    y_male = np.zeros(shape=[len(labels_male),1])
    for index,img in enumerate(imgs_male):
        vector = cv2.imread(filename +'male/'+ img,0)
        vector = vector/255.0
        X_male[index,:,:,0] = vector
    for index,label in enumerate(labels_male):
        y_male[index,:] = label
    
    # Concatenate
    # [399,112,92,1]
    X = np.r_[X_female,X_male]
    # [399,1]
    y = np.r_[y_female,y_male]
   
    # Shuffle (same permutation keeps images and labels aligned)
    index = np.random.permutation(len(X))
    X = X[index]
    y = y[index]
    
    # Split train:test = 8:2
    test_num = int(len(X)*rate)
    # [320,112,92,1]
    train = X[test_num:,:,:,:]
    # [320,1]
    train_labels  = y[test_num:,:]
    test = X[:test_num,:,:,:]
    test_labels  = y[:test_num,:]
    
    return train,train_labels,test,test_labels
train,train_labels,test,test_labels =  read_data() 
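The shuffle-and-split logic inside read_data can be exercised on dummy data without the image files; the shapes below are illustrative stand-ins, not the real dataset:

```python
import numpy as np

# Dummy stand-in for the dataset: 10 "images" with labels 0/1
X = np.arange(10).reshape(10, 1, 1, 1).astype(float)
y = np.array([[0]] * 5 + [[1]] * 5)

rate = 0.2
rng = np.random.default_rng(0)
index = rng.permutation(len(X))
X, y = X[index], y[index]          # same permutation keeps pairs aligned

test_num = int(len(X) * rate)
train, train_labels = X[test_num:], y[test_num:]
test, test_labels = X[:test_num], y[:test_num]

print(train.shape, test.shape)     # (8, 1, 1, 1) (2, 1, 1, 1)
```

Indexing X and y with the same permutation is what keeps each image attached to its label through the shuffle.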

Network Architecture

We use the network structure below. The fully connected layers use dropout with a rate of 0.4 to curb overfitting.

Input layer (grayscale image):                  -1 x 112 x 92 x 1
Conv layer 1, kernel (height, width, in, out):  (3, 3, 1, 16)
Feature map after pooling:                      -1 x 56 x 46 x 16
Conv layer 2, kernel (height, width, in, out):  (3, 3, 16, 32)
Feature map after pooling:                      -1 x 28 x 23 x 32
Conv layer 3, kernel (height, width, in, out):  (3, 3, 32, 64)
Feature map after pooling:                      -1 x 14 x 12 x 64
Fully connected layer 1 weight matrix:          10752 x 512
Fully connected layer 2 weight matrix:          512 x 128
Output layer weight matrix:                     128 x 1
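The feature-map sizes above follow from 'same'-padded pooling with stride 2, which rounds odd dimensions up (ceil division). A quick check:

```python
import math

h, w = 112, 92
for _ in range(3):
    # 'same' padding with stride 2 halves each dimension, rounding up
    h, w = math.ceil(h / 2), math.ceil(w / 2)
    print(h, w)

# Flattened size feeding the first dense layer
flat = h * w * 64
print(flat)  # 10752
```

The ceil shows up at the third pool: 23 halves to 12, not 11, which is why the flatten size is 14 x 12 x 64 = 10752.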

The code is as follows:

input_shape = (112,92,1)
model = keras.Sequential()
# Conv layer 1
model.add(keras.layers.Conv2D(16,kernel_size=(3,3),strides=(1,1),
                              padding='same',activation='relu',input_shape=input_shape))
# Pooling 1
model.add(keras.layers.MaxPool2D(strides=(2,2),padding='same'))

# Conv layer 2
model.add(keras.layers.Conv2D(32,kernel_size=(3,3),padding='same',activation='relu'))
# Pooling 2
model.add(keras.layers.MaxPool2D(strides=(2,2),padding='same'))

# Conv layer 3
model.add(keras.layers.Conv2D(64,kernel_size=(3,3),padding='same',activation='relu'))
# Pooling 3
model.add(keras.layers.MaxPool2D(strides=(2,2),padding='same'))

model.add(keras.layers.Flatten())
# Fully connected 1
model.add(keras.layers.Dense(512,activation='relu'))
model.add(keras.layers.Dropout(0.4))
# Fully connected 2
model.add(keras.layers.Dense(128,activation='relu'))
model.add(keras.layers.Dropout(0.4))
# Output
model.add(keras.layers.Dense(1,activation='sigmoid'))
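As a sanity check on the architecture, the trainable parameter counts can be computed by hand (conv layer: kh * kw * in * out + out biases; dense layer: in * out + out biases). The total should match what model.summary() reports:

```python
conv1  = 3 * 3 * 1 * 16 + 16       # 160
conv2  = 3 * 3 * 16 * 32 + 32      # 4640
conv3  = 3 * 3 * 32 * 64 + 64      # 18496
dense1 = 10752 * 512 + 512         # 5505536
dense2 = 512 * 128 + 128           # 65664
out    = 128 * 1 + 1               # 129

total = conv1 + conv2 + conv3 + dense1 + dense2 + out
print(total)  # 5594625
```

Almost all of the parameters sit in the first dense layer, which is typical for this flatten-then-dense pattern.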

Compiling and Training

The loss is binary cross-entropy, and accuracy is the evaluation metric.

Train for up to 20 epochs with a batch size of 32, holding out 20% of the training data as a validation set; when the validation loss stops decreasing, training stops early.

# Compile
model.compile(optimizer=keras.optimizers.Adam(),loss=keras.losses.binary_crossentropy,
              metrics=['accuracy'])

# Train, stopping early when val_loss has not improved for 2 epochs
early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss',patience=2)
model.fit(train,train_labels,batch_size=32,epochs=20,validation_split=0.2,callbacks=[early_stopping])
model.save('my_model.h5')
score = model.evaluate(test,test_labels,batch_size=32)
print(score)

Results

Train on 256 samples, validate on 64 samples
Epoch 1/20
256/256 [==============================] - 8s 33ms/sample - loss: 0.7270 - acc: 0.5781 - val_loss: 0.6565 - val_acc: 0.7656
Epoch 2/20
256/256 [==============================] - 6s 23ms/sample - loss: 0.6185 - acc: 0.7031 - val_loss: 0.5153 - val_acc: 0.8281
Epoch 3/20
256/256 [==============================] - 6s 23ms/sample - loss: 0.5115 - acc: 0.7617 - val_loss: 0.3600 - val_acc: 0.9219
Epoch 4/20
256/256 [==============================] - 6s 24ms/sample - loss: 0.3417 - acc: 0.8789 - val_loss: 0.2518 - val_acc: 0.9375
Epoch 5/20
256/256 [==============================] - 6s 23ms/sample - loss: 0.2953 - acc: 0.8945 - val_loss: 0.2789 - val_acc: 0.9375
Epoch 6/20
256/256 [==============================] - 6s 23ms/sample - loss: 0.2676 - acc: 0.9141 - val_loss: 0.2366 - val_acc: 0.9219
Epoch 7/20
256/256 [==============================] - 6s 22ms/sample - loss: 0.2253 - acc: 0.9297 - val_loss: 0.1872 - val_acc: 0.9375
Epoch 8/20
256/256 [==============================] - 6s 23ms/sample - loss: 0.1682 - acc: 0.9414 - val_loss: 0.1974 - val_acc: 0.9375
Epoch 9/20
256/256 [==============================] - 6s 22ms/sample - loss: 0.1917 - acc: 0.9336 - val_loss: 0.1892 - val_acc: 0.9219
79/79 [==============================] - 1s 11ms/sample - loss: 0.1384 - acc: 0.9494
[0.13842634951012045, 0.9493671]

Training stopped early at epoch 9, with test accuracy around 95%.

Let's try the model on a photo found online:

img = cv2.imread('/gender-recognition-master/female.jpg',0)
# cv2.resize takes (width, height), so this yields a 112x92 array
img = cv2.resize(img,(92,112))
cv2.namedWindow('my_img')
cv2.imshow('my_img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()

The photo:

Model prediction:

model = keras.models.load_model('my_model.h5')
img = cv2.imread('/gender-recognition-master/female.jpg',0)
# Resize to (width=92, height=112) and normalize, matching the training pipeline
img = cv2.resize(img,(92,112))
img = img.reshape((1,112,92,1))/255.0
print(model.predict(img))
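model.predict returns a sigmoid probability in [0, 1]. With the labeling used here (female = 0, male = 1), mapping it to a class name is a 0.5 threshold; the helper name below is mine, not from the original code:

```python
def to_label(prob, threshold=0.5):
    """Map a sigmoid output to a class name (female=0, male=1)."""
    return 'male' if prob > threshold else 'female'

print(to_label(0.93))  # male
print(to_label(0.12))  # female
```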

Result:

The model classified the photo correctly.

References:

https://github.com/chenlinzhong/gender-recognition
