A Chinese Version of SemEval2019Task3_ERC

A short natural language processing write-up

Building on the SemEval2019Task3_ERC task, this project performs emotion analysis on three-turn Chinese dialogues (TensorFlow 2.6).

This is a small NLP experiment. Given a three-turn Chinese dialogue such as

A:你好,最近病恢复的怎么样? (Hi, how has your recovery been going lately?)

B:已经恢复好了! (I've fully recovered!)

A:那太好 (That's great)

the goal is to predict the emotion of A's final utterance. There are three emotion classes, happy, sad, and angry, plus an others class that is not counted in the evaluation metric. Since the dataset is collected from Weibo, I use Weibo word embeddings (I also tried other embeddings, but they trained worse than the Weibo ones). I also do not recommend applying a stopword list: my stopword list contained words that carry emotional information, and filtering them dropped the predicted F1 score to around 0.50. That said, you are encouraged to experiment with an improved stopword list yourself and see whether it helps performance.
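
For reference, each line of the data files holds the three turns separated by tabs, with the gold emotion label as a fourth tab-separated field in training files; this is the format load_data below parses. A made-up example line (not from the real dataset, tabs written as \t):

你好,最近病恢复的怎么样?\t已经恢复好了!\t那太好\thappy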

The training data was provided by my instructor, so I am not able to share it here. The full source code follows.

import re
import numpy as np
import io
import jieba
from tensorflow.keras import regularizers
from tensorflow.keras.layers import Input, Dense, Embedding, Concatenate, \
    Dropout, LSTM, Bidirectional, GaussianNoise
from tensorflow.keras.models import Model

def load_data(data, mode):
    # Read a tab-separated dialogue file; return texts (and labels in train mode)
    emotion2label = {"others": 0, "happy": 1, "sad": 2, "angry": 3}
    txt, labels = [], []
    with open(data, encoding='UTF8') as f:
        for lines in f.readlines():
            line = lines.split('\t')
            for i in range(3):
                line[i] = find_chinese(line[i])  # clean and segment each turn
            txt.append(line[0:3])
            if mode == "train":
                line[3] = line[3].strip()  # drop the trailing newline from the label
                labels.append(emotion2label[line[3]])
    if mode == "train":
        return np.array(txt), np.array(labels)
    return np.array(txt)

# Remove non-Chinese characters, then segment with jieba
def find_chinese(file):
    pattern = re.compile(r'[^\u4e00-\u9fa5]')
    chinese = re.sub(pattern, '', file)
    words = " ".join(jieba.cut(chinese, cut_all=False))
    return words
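
For example (the exact segmentation depends on jieba's dictionary, so the tokens shown are only illustrative):

print(find_chinese("你好,最近病恢复的怎么样?"))
# e.g. '你好 最近 病 恢复 的 怎么样' -- punctuation stripped, words space-separated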



# Load the pre-trained word vectors
def getEmbeddings(file):
    embeddingsIndex = {}  # maps each word to its embedding vector
    dim = 0
    with io.open(file, encoding="utf8") as f:
        for line in f:
            values = line.split()
            word = values[0]
            embeddingVector = np.asarray(values[1:], dtype='float32')
            embeddingsIndex[word] = embeddingVector
            dim = len(embeddingVector)
    return embeddingsIndex, dim
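
getEmbeddings assumes a plain-text word2vec-style file: one word per line followed by the components of its vector, whitespace-separated. A made-up line for a hypothetical 3-dimensional embedding:

开心 0.1072 -0.0336 0.4712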


def getEmbeddingMatrix(wordIndex, embeddings, dim):
    # Row 0 is reserved for padding; words without a pre-trained vector keep the zero vector
    embeddingMatrix = np.zeros((len(wordIndex) + 1, dim))
    for word, i in wordIndex.items():
        vector = embeddings.get(word)
        if vector is not None:
            embeddingMatrix[i] = vector
    return embeddingMatrix


def normal_LSTM(embeddings_matrix, sequence_length, lstm_dim, hidden_layer_dim, num_classes,
               noise=0.1, dropout_lstm=0.2, dropout=0.2):
    # Baseline: three independent unidirectional LSTMs, one per turn
    turn1_input = Input(shape=(sequence_length,), dtype='int32')
    turn2_input = Input(shape=(sequence_length,), dtype='int32')
    turn3_input = Input(shape=(sequence_length,), dtype='int32')

    embedding_dim = embeddings_matrix.shape[1]  # word-embedding dimension
    # Embedding layer initialized with the pre-trained vectors and frozen
    embeddingLayer = Embedding(embeddings_matrix.shape[0],
                               embedding_dim,
                               weights=[embeddings_matrix],
                               input_length=sequence_length,
                               trainable=False)

    turn1_branch = embeddingLayer(turn1_input)
    turn2_branch = embeddingLayer(turn2_input)
    turn3_branch = embeddingLayer(turn3_input)

    # Add Gaussian noise to the embeddings as regularization
    turn1_branch = GaussianNoise(noise)(turn1_branch)
    turn2_branch = GaussianNoise(noise)(turn2_branch)
    turn3_branch = GaussianNoise(noise)(turn3_branch)

    # One LSTM per turn; each returns its final hidden state
    lstm1 = LSTM(lstm_dim, dropout=dropout_lstm)
    lstm2 = LSTM(lstm_dim, dropout=dropout_lstm)
    lstm3 = LSTM(lstm_dim, dropout=dropout_lstm)
    turn1_branch = lstm1(turn1_branch)
    turn2_branch = lstm2(turn2_branch)
    turn3_branch = lstm3(turn3_branch)
    # Concatenate the three utterance encodings
    x = Concatenate(axis=-1)([turn1_branch, turn2_branch, turn3_branch])
    x = Dropout(dropout)(x)
    x = Dense(hidden_layer_dim, activation='relu')(x)

    output = Dense(num_classes, activation='softmax')(x)

    model = Model(inputs=[turn1_input, turn2_input, turn3_input], outputs=output)
    # Multi-class cross-entropy loss; track accuracy during training
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
    return model

def L2_LSTM(embeddings_matrix, sequence_length, lstm_dim, hidden_layer_dim, num_classes,
               noise=0.1, dropout_lstm=0.2, dropout=0.2):
    # Same as Bidirectional_LSTM below, but with L2 weight regularization
    # (batch, seq_len)
    turn1_input = Input(shape=(sequence_length,), dtype='int32')
    turn2_input = Input(shape=(sequence_length,), dtype='int32')
    turn3_input = Input(shape=(sequence_length,), dtype='int32')

    embedding_dim = embeddings_matrix.shape[1]  # word-embedding dimension
    # Embedding layer initialized with the pre-trained vectors and frozen
    embeddingLayer = Embedding(embeddings_matrix.shape[0],
                               embedding_dim,
                               weights=[embeddings_matrix],
                               input_length=sequence_length,
                               trainable=False)

    # (batch, seq_len, embed_size)
    turn1_branch = embeddingLayer(turn1_input)
    turn2_branch = embeddingLayer(turn2_input)
    turn3_branch = embeddingLayer(turn3_input)

    # Add Gaussian noise; shape unchanged: (batch, seq_len, embed_size)
    turn1_branch = GaussianNoise(noise)(turn1_branch)
    turn2_branch = GaussianNoise(noise)(turn2_branch)
    turn3_branch = GaussianNoise(noise)(turn3_branch)

    # Two bidirectional LSTMs with L2-regularized kernels
    lstm1 = Bidirectional(LSTM(lstm_dim, dropout=dropout_lstm, kernel_regularizer=regularizers.l2(0.01)))
    lstm2 = Bidirectional(LSTM(lstm_dim, dropout=dropout_lstm, kernel_regularizer=regularizers.l2(0.01)))

    # turn1 and turn3 come from the same speaker, so they share lstm1's weights.
    # Each BiLSTM returns its final hidden state, forward and backward halves
    # concatenated: (batch, lstm_dim*2 = 64*2 = 128)
    turn1_branch = lstm1(turn1_branch)
    turn2_branch = lstm2(turn2_branch)
    turn3_branch = lstm1(turn3_branch)
    # Concatenate the three hidden states: (batch, 128*3 = 384)
    x = Concatenate(axis=-1)([turn1_branch, turn2_branch, turn3_branch])
    x = Dropout(dropout)(x)

    # Dense layer: (batch, hidden_layer_dim = 30)
    x = Dense(hidden_layer_dim, kernel_regularizer=regularizers.l2(0.001), activation='relu')(x)

    # Output layer, softmax for multi-class classification: (batch, num_classes = 4)
    output = Dense(num_classes, activation='softmax')(x)

    model = Model(inputs=[turn1_input, turn2_input, turn3_input], outputs=output)
    # Multi-class cross-entropy loss; track accuracy during training
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
    return model

def Bidirectional_LSTM(embeddings_matrix, sequence_length, lstm_dim, hidden_layer_dim, num_classes,
               noise=0.1, dropout_lstm=0.2, dropout=0.2):
    # (batch, seq_len)
    turn1_input = Input(shape=(sequence_length,), dtype='int32')
    turn2_input = Input(shape=(sequence_length,), dtype='int32')
    turn3_input = Input(shape=(sequence_length,), dtype='int32')

    embedding_dim = embeddings_matrix.shape[1]  # word-embedding dimension
    # Embedding layer initialized with the pre-trained vectors and frozen
    embeddingLayer = Embedding(embeddings_matrix.shape[0],
                               embedding_dim,
                               weights=[embeddings_matrix],
                               input_length=sequence_length,
                               trainable=False)

    # (batch, seq_len, embed_size)
    turn1_branch = embeddingLayer(turn1_input)
    turn2_branch = embeddingLayer(turn2_input)
    turn3_branch = embeddingLayer(turn3_input)

    # Add Gaussian noise; shape unchanged: (batch, seq_len, embed_size)
    turn1_branch = GaussianNoise(noise)(turn1_branch)
    turn2_branch = GaussianNoise(noise)(turn2_branch)
    turn3_branch = GaussianNoise(noise)(turn3_branch)

    # Two bidirectional LSTMs
    lstm1 = Bidirectional(LSTM(lstm_dim, dropout=dropout_lstm))
    lstm2 = Bidirectional(LSTM(lstm_dim, dropout=dropout_lstm))

    # turn1 and turn3 come from the same speaker, so they share lstm1's weights.
    # Each BiLSTM returns its final hidden state, forward and backward halves
    # concatenated: (batch, lstm_dim*2 = 64*2 = 128)
    turn1_branch = lstm1(turn1_branch)
    turn2_branch = lstm2(turn2_branch)
    turn3_branch = lstm1(turn3_branch)
    # Concatenate the three hidden states: (batch, 128*3 = 384)
    x = Concatenate(axis=-1)([turn1_branch, turn2_branch, turn3_branch])
    x = Dropout(dropout)(x)

    # Dense layer: (batch, hidden_layer_dim = 30)
    x = Dense(hidden_layer_dim, activation='relu')(x)

    # Output layer, softmax for multi-class classification: (batch, num_classes = 4)
    output = Dense(num_classes, activation='softmax')(x)

    model = Model(inputs=[turn1_input, turn2_input, turn3_input], outputs=output)
    # Multi-class cross-entropy loss; track accuracy during training
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
    return model
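
Any of the three builders can be swapped in at the model-construction step in the main script below; for example, to use the L2-regularized variant instead:

model = L2_LSTM(embeddings_matrix, MAX_SEQUENCE_LENGTH, lstm_dim=64, hidden_layer_dim=30, num_classes=4)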



def write_file(predict, file):
    # Map predicted class indices back to emotion names, one per line
    EMOS_DIC = {0: 'others',
                1: 'happy',
                2: 'sad',
                3: 'angry'}
    with open("data/{}".format(file), 'w', encoding='UTF-8') as f:
        for i in predict:
            f.write(EMOS_DIC[i] + '\n')


from function import *  # the helper functions above live in function.py
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from sklearn.metrics import f1_score, precision_score, recall_score
from tensorflow.keras.callbacks import ModelCheckpoint,EarlyStopping
from sklearn.metrics import classification_report

# Label dictionaries used throughout the script
label_emotion = {0: "others", 1: "happy", 2: "sad", 3: "angry"}
emotion_label = {"others": 0, "happy": 1, "sad": 2, "angry": 3}

# Load the datasets (load_data is defined in function.py)
train_text, train_labels = load_data("data/train.txt", mode='train')
dev_text, dev_labels = load_data("data/dev.txt", mode='train')
test_text = load_data("data/test_without_label.txt", mode="test")

# Load the pre-trained Weibo word vectors
embeddings, dim = getEmbeddings('word_weibo.txt')

# Build the vocabulary directly from the pre-trained embedding words
tokenize = Tokenizer(filters='')
tokenize.fit_on_texts([' '.join(list(embeddings.keys()))])

wordIndex = tokenize.word_index  # word -> index mapping
# Embedding matrix initialized from the pre-trained vectors
embeddings_matrix = getEmbeddingMatrix(wordIndex, embeddings, dim)

MAX_SEQUENCE_LENGTH = 100  # every utterance is padded to a uniform length of 100

# The split below is only used to monitor accuracy during training; the model is
# actually fit on the whole training set, so val_acc is an optimistic estimate
X_train, X_val, y_train, y_val = train_test_split(train_text, train_labels, test_size=0.2, random_state=0, stratify=train_labels)

# Convert the labels to one-hot vectors
labels_categorical_train = to_categorical(np.asarray(train_labels))
labels_categorical_val = to_categorical(np.asarray(y_val))
labels_categorical_dev = to_categorical(np.asarray(dev_labels))


def get_sequences(texts, sequence_length):
    # Convert each turn's words to vocabulary indices and pad to a fixed length
    message_first = pad_sequences(tokenize.texts_to_sequences(texts[:, 0]), sequence_length)   # turn1
    message_second = pad_sequences(tokenize.texts_to_sequences(texts[:, 1]), sequence_length)  # turn2
    message_third = pad_sequences(tokenize.texts_to_sequences(texts[:, 2]), sequence_length)   # turn3
    return message_first, message_second, message_third
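
As a quick sanity check of the padding behavior (the indices 12 and 34 are made up; real values depend on the fitted vocabulary):

from tensorflow.keras.preprocessing.sequence import pad_sequences
print(pad_sequences([[12, 34]], 5))  # [[ 0  0  0 12 34]] -- zeros are prepended by default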

# For each dataset, map every turn's words to dictionary indices and pad to a uniform length
message_first_message_train, message_second_message_train, message_third_message_train = get_sequences(train_text, MAX_SEQUENCE_LENGTH)
message_first_message_val, message_second_message_val, message_third_message_val = get_sequences(X_val, MAX_SEQUENCE_LENGTH)
message_first_message_dev, message_second_message_dev, message_third_message_dev = get_sequences(dev_text, MAX_SEQUENCE_LENGTH)
message_first_message_test, message_second_message_test, message_third_message_test = get_sequences(test_text, MAX_SEQUENCE_LENGTH)



model = Bidirectional_LSTM(embeddings_matrix, MAX_SEQUENCE_LENGTH, lstm_dim=64, hidden_layer_dim=30, num_classes=4)

metrics_new = {
    "f1_e": (lambda y_test, y_pred:     # micro-F1 over the emotion classes only
             f1_score(y_test, y_pred, average='micro',
                      labels=[emotion_label['happy'],
                              emotion_label['sad'],
                              emotion_label['angry']
                              ])),
    "precision_e": (lambda y_test, y_pred:  # micro-precision over the emotion classes only
                    precision_score(y_test, y_pred, average='micro',
                                    labels=[emotion_label['happy'],
                                            emotion_label['sad'],
                                            emotion_label['angry']
                                            ])),
    "recall_e": (lambda y_test, y_pred:  # micro-recall over the emotion classes only
                 recall_score(y_test, y_pred, average='micro',
                              labels=[emotion_label['happy'],
                                      emotion_label['sad'],
                                      emotion_label['angry']
                                      ]))
}
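
To see what restricting labels does, here is a tiny check with toy labels (not from the dataset); class 0 (others) is excluded from the micro average:

from sklearn.metrics import f1_score
y_true = [0, 1, 2, 3]  # others, happy, sad, angry
y_pred = [1, 1, 2, 0]  # 2 correct among 3 predicted and 3 gold emotion labels
print(f1_score(y_true, y_pred, average='micro', labels=[1, 2, 3]))  # ~0.667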
data_sets = {}
# Dev set
data_sets["dev"] = [[message_first_message_dev, message_second_message_dev, message_third_message_dev],
                    np.array(labels_categorical_dev)]
# Validation split
data_sets["val"] = [[message_first_message_val, message_second_message_val, message_third_message_val],
                    np.array(labels_categorical_val)]

filepath='models/model.h5'

# Save the weights that achieve the best validation accuracy
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', save_best_only=True)


# Train
history = model.fit([message_first_message_train, message_second_message_train, message_third_message_train],
                    np.array(labels_categorical_train),
                    callbacks=[checkpoint],
                    validation_data=(
                        [message_first_message_val, message_second_message_val, message_third_message_val],
                        np.array(labels_categorical_val)
                    ),
                    epochs=5,
                    batch_size=200)
# Reload the best saved weights
model.load_weights(filepath)

# Predict on the dev set
y_pred = model.predict([message_first_message_dev, message_second_message_dev, message_third_message_dev])

# Compare dev predictions with the gold labels using the metrics defined above
# (emotion classes only, others excluded)
for title, metric in metrics_new.items():
    print(title, metric(labels_categorical_dev.argmax(axis=1), y_pred.argmax(axis=1)))
y_pred_test = model.predict([message_first_message_test, message_second_message_test, message_third_message_test])

# Write the test predictions to a txt file
write_file(y_pred_test.argmax(axis=1), "predict.txt")

# Overall classification report
print(classification_report(labels_categorical_dev.argmax(axis=1), y_pred.argmax(axis=1)))


After tuning the hyperparameters, the best result I obtained was approximately F1 = 0.70 (micro-F1 over the three emotion classes).

Finally, here is the reference this code builds on; for a detailed walkthrough of the original code, see the following article:

https://blog.csdn.net/sdu_hao/article/details/104283522
