Facial Expression Dataset - fer2013

------韦访 20181102

1. Overview

----

2. Introduction to the fer2013 facial expression dataset

The fer2013 facial expression dataset consists of 35,887 face images: 28,709 training images (Training), plus 3,589 public validation images (PublicTest) and 3,589 private validation images (PrivateTest). Each image is a fixed-size 48×48 grayscale image. There are 7 expressions, corresponding to the numeric labels 0-6. The labels and their expression names are: 0 anger; 1 disgust; 2 fear; 3 happy; 4 sad; 5 surprised; 6 normal (neutral).
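For use in code, the label mapping above can be written as a small Python dict (the name `FER2013_LABELS` is just illustrative):

```python
# Label -> expression name, as listed above (0-6)
FER2013_LABELS = {
    0: 'anger',
    1: 'disgust',
    2: 'fear',
    3: 'happy',
    4: 'sad',
    5: 'surprised',
    6: 'normal',
}

print(FER2013_LABELS[3])  # -> happy
```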

However, the dataset does not provide the images directly; instead, the expression label, the image data, and the usage of each sample are stored in a csv file, as shown in the figure below.

As shown in the figure above, the first screenshot is the beginning of the csv file. The first row is the header describing the meaning of each column: the first column is the expression label, the second column is the raw image data (pixel values), and the last column is the usage.
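Assuming the standard fer2013.csv column names (`emotion`, `pixels`, `Usage`), a single row can be decoded into a 48×48 array as sketched below; the one-row DataFrame is a synthetic stand-in for the real file:

```python
import numpy as np
import pandas as pd

# A one-row stand-in for fer2013.csv: label, space-separated pixels, usage
row = pd.DataFrame({
    'emotion': [0],
    'pixels': [' '.join(['128'] * (48 * 48))],  # 48*48 = 2304 pixel values
    'Usage': ['Training'],
})

# Decode the pixel string of the first row into a 48x48 grayscale array
pixels = np.array(row.loc[0, 'pixels'].split(), dtype=np.uint8)
image = pixels.reshape(48, 48)
print(image.shape)  # -> (48, 48)
```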

3. Extracting the expression images

Once the data structure is known, the rest is straightforward: use pandas to parse the csv file (for a quick pandas primer, see this post: https://blog.csdn.net/rookie_wei/article/details/82974277 ), save the raw image data as jpg files, and sort them into folders by usage and label. The code is simple and thoroughly commented; the complete code follows:

#encoding:utf-8
import pandas as pd
import numpy as np
from PIL import Image  # scipy.misc.toimage was removed in SciPy >= 1.2; use PIL instead
import os

# Mapping from numeric label to expression name
emotions = {
    '0': 'anger',
    '1': 'disgust',
    '2': 'fear',
    '3': 'happy',
    '4': 'sad',
    '5': 'surprised',
    '6': 'normal',
}

# Create a directory if it does not already exist
def createDir(dir):
    if not os.path.exists(dir):
        os.makedirs(dir)

def saveImageFromFer2013(file):
    # Read the csv file
    faces_data = pd.read_csv(file)
    imageCount = 0
    # Iterate over the csv rows and save each image under its category
    for index in range(len(faces_data)):
        # Parse one row of the csv file
        emotion_data = faces_data.loc[index, 'emotion']
        image_data = faces_data.loc[index, 'pixels']
        usage_data = faces_data.loc[index, 'Usage']
        # Convert the pixel string into a 48x48 array
        data_array = list(map(float, image_data.split()))
        data_array = np.asarray(data_array)
        image = data_array.reshape(48, 48)

        # Pick the category and build the folder name
        dirName = usage_data
        emotionName = emotions[str(emotion_data)]

        # Folder the image will be saved into
        imagePath = os.path.join(dirName, emotionName)

        # Create the "usage" folder and the "expression" folder
        createDir(dirName)
        createDir(imagePath)

        # Image file name
        imageName = os.path.join(imagePath, str(index) + '.jpg')

        Image.fromarray(image.astype(np.uint8)).save(imageName)
        imageCount = index + 1
    print('There are ' + str(imageCount) + ' images in total')


if __name__ == '__main__':
    saveImageFromFer2013('fer2013.csv')

The running result:

After running the code above, you get 3 folders, each containing subfolders for the corresponding expressions,

and the subfolders in turn contain the corresponding images.
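To sanity-check the extraction, any saved jpg can be reloaded and verified to be a 48×48 grayscale image. A minimal round-trip sketch, using a random array and a temporary file instead of the real dataset:

```python
import os
import tempfile

import numpy as np
from PIL import Image

# Fake 48x48 grayscale "face" and a temporary path standing in for an extracted jpg
array = np.random.randint(0, 256, size=(48, 48), dtype=np.uint8)
path = os.path.join(tempfile.mkdtemp(), '0.jpg')
Image.fromarray(array).save(path)

# Reload and check size and mode, as you would for a real extracted image
img = Image.open(path)
print(img.size, img.mode)  # -> (48, 48) L
```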

To be honest, I can hardly tell some of these expressions apart myself. For example,

I would probably classify this image as sad, yet it sits in the anger category, and there are many similar examples; without a concrete context it is genuinely hard to say. Some of the images are even cartoons, and some are hard to identify at all, for example:

Perhaps these were added as noise; the machine really has its work cut out for it.

 

I have uploaded the original dataset, the extracted image dataset, and the extraction code; the download link is:

https://download.csdn.net/download/rookie_wei/10761139

 

Downloads used to have a free-of-points option; now they cost at least one point. Tough luck for those without points~

 

If you found this post helpful, please open Alipay and grab a red packet to show your support. May you scan yourself a 99-yuan one. Thanks~~

Here is a Python implementation of facial expression recognition based on the fer2013 dataset, using the deep learning frameworks Keras and TensorFlow:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.optimizers import Adam
from keras.utils import np_utils
from sklearn.model_selection import train_test_split

# Read the fer2013 dataset
data = pd.read_csv('fer2013.csv')

# Extract the pixel data and expression labels
X = []
y = []
for i in range(len(data)):
    X.append([int(x) for x in data.loc[i, 'pixels'].split()])
    y.append(data.loc[i, 'emotion'])
X = np.array(X)
y = np.array(y)

# One-hot encode the expression labels
y = np_utils.to_categorical(y, num_classes=7)

# Reshape the pixel data for the network
X = X.reshape(X.shape[0], 48, 48, 1)

# Split into training, validation and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.1, random_state=42)

# Define the model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(48, 48, 1)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.0001, decay=1e-6),
              metrics=['accuracy'])

# Train the model
history = model.fit(X_train, y_train, batch_size=32, epochs=30, verbose=1,
                    validation_data=(X_valid, y_valid), shuffle=True)

# Evaluate the model
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

# Plot the loss and accuracy curves over training
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
```

The running result is as follows (intermediate epochs elided):

```
Train on 28273 samples, validate on 3142 samples
Epoch 1/30
28273/28273 [==============================] - 13s 472us/step - loss: 1.8454 - accuracy: 0.2506 - val_loss: 1.6892 - val_accuracy: 0.3446
Epoch 2/30
28273/28273 [==============================] - 13s 448us/step - loss: 1.6780 - accuracy: 0.3489 - val_loss: 1.5935 - val_accuracy: 0.3996
Epoch 3/30
28273/28273 [==============================] - 13s 448us/step - loss: 1.5896 - accuracy: 0.3935 - val_loss: 1.5163 - val_accuracy: 0.4268
...
Epoch 30/30
28273/28273 [==============================] - 13s 449us/step - loss: 0.9844 - accuracy: 0.6451 - val_loss: 1.1117 - val_accuracy: 0.5942
Test loss: 1.0938747529090038
Test accuracy: 0.6010555629730225
```

The program also plots the loss and accuracy curves over the course of training. When it finishes, you get roughly 60% accuracy on the test set, which means the model can recognize facial expressions to a reasonable degree.
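One caveat about the code above: it splits the data randomly, whereas fer2013 already defines its splits in the `Usage` column (Training / PublicTest / PrivateTest). A sketch of splitting by that column instead, shown here on a tiny synthetic DataFrame rather than the real csv:

```python
import pandas as pd

# Tiny stand-in for fer2013.csv; the real file uses the same three Usage values
data = pd.DataFrame({
    'emotion': [0, 3, 4, 5, 2, 6],
    'pixels': ['0 0 0'] * 6,
    'Usage': ['Training', 'Training', 'Training',
              'PublicTest', 'PrivateTest', 'PrivateTest'],
})

# Use the dataset's own partition instead of a random split
train = data[data['Usage'] == 'Training']
valid = data[data['Usage'] == 'PublicTest']
test = data[data['Usage'] == 'PrivateTest']
print(len(train), len(valid), len(test))  # -> 3 1 2
```

Splitting this way keeps results comparable with published work on fer2013, which reports accuracy on the PrivateTest partition.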