[Learning Frameworks from Official Examples: TensorFlow/Keras] Using BERT for Natural Language Inference (NLI)

Keras official example link
TensorFlow official example link
Paddle official example link
PyTorch official example link

Note: This series is only meant to help you quickly understand and learn the relevant frameworks and use them independently for deep learning research; please study the theory on your own. The official classic examples of each framework are very well written and well worth studying. It is fair to say that, once you fully understand an official example, modifying it is enough to solve most common related tasks.

Abstract: Fine-tune a BERT model on the SNLI dataset to solve the natural language inference (NLI) task.


1 Introduction

Semantic similarity is the task of determining how similar two sentences are. In this example we use the Stanford Natural Language Inference (SNLI) corpus and Transformers to predict the relationship between sentence pairs. We will fine-tune a BERT model for this task: it takes two sentences as input and outputs how the two sentences are related.

2 Setup

Install the HuggingFace transformers library from the command line. This library really is excellent, and HuggingFace is committed to democratizing NLP.
pip install transformers

Import the required libraries

import numpy as np
import pandas as pd
import tensorflow as tf
import transformers

3 Configuration

Define the hyperparameters and the dataset labels

max_length = 128  # Maximum length of input sentence to the model.
batch_size = 32
epochs = 2

# Labels in our dataset.
labels = ["contradiction", "entailment", "neutral"]

4 Load the Data

!curl -LO https://raw.githubusercontent.com/MohamadMerchant/SNLI/master/data.tar.gz
!tar -xvzf data.tar.gz

If running locally, you can download the dataset from this link:

https://raw.githubusercontent.com/MohamadMerchant/SNLI/master/data.tar.gz
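
If you prefer to stay in Python, here is a minimal sketch for downloading and extracting the archive with the standard library (the target directory "." is an assumption; adjust paths to your environment):

import tarfile
import urllib.request

# Download the SNLI archive and extract the CSV files into the current directory.
url = "https://raw.githubusercontent.com/MohamadMerchant/SNLI/master/data.tar.gz"
archive_path = "data.tar.gz"
urllib.request.urlretrieve(url, archive_path)
with tarfile.open(archive_path, "r:gz") as tar:
    tar.extractall(".")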

Dataset overview:

  • sentence1: the premise
  • sentence2: the hypothesis
  • similarity: the label chosen by the majority of annotators; where no majority
    exists, the label '-' is used, and we will skip such samples with uncertain labels

Meaning of the similarity labels:

  • contradiction: the sentences contradict each other
  • entailment: the premise entails the hypothesis
  • neutral: neither entailment nor contradiction

At this point you may still be unsure what the sentence-pair task in NLI actually looks like. Don't worry; the three examples below will make it clear.

# There are more than 550k samples in total; we will use 100k for this example.
train_df = pd.read_csv("../input/SNLI_Corpus/snli_1.0_train.csv", nrows=100000)
valid_df = pd.read_csv("../input/SNLI_Corpus/snli_1.0_dev.csv")
test_df = pd.read_csv("../input/SNLI_Corpus/snli_1.0_test.csv")

# Shape of the data
print(f"Total train samples : {train_df.shape[0]}")
print(f"Total validation samples: {valid_df.shape[0]}")
print(f"Total test samples: {valid_df.shape[0]}")


  • Example 1
print(f"Sentence1: {train_df.loc[1, 'sentence1']}")
print(f"Sentence2: {train_df.loc[1, 'sentence2']}")
print(f"Similarity: {train_df.loc[1, 'similarity']}")

sentence1: A person on a horse jumps over a broken down airplane.
sentence2: A person is at a diner, ordering an omelette.
similarity: contradiction

In other words, given the premise "A person on a horse jumps over a broken down airplane.", the person cannot also be at a diner ordering an omelette, so the relationship between the two sentences is contradiction.

  • Example 2
print(f"Sentence1: {train_df.loc[2, 'sentence1']}")
print(f"Sentence2: {train_df.loc[2, 'sentence2']}")
print(f"Similarity: {train_df.loc[2, 'similarity']}")

sentence1: A person on a horse jumps over a broken down airplane.
sentence2: A person is outdoors, on a horse.
similarity: entailment

That is, from the premise "A person on a horse jumps over a broken down airplane." we can infer that "A person is outdoors, on a horse", so the relationship is entailment.

  • Example 3
print(f"Sentence1: {train_df.loc[3, 'sentence1']}")
print(f"Sentence2: {train_df.loc[3, 'sentence2']}")
print(f"Similarity: {train_df.loc[3, 'similarity']}")

sentence1: Children smiling and waving at camera
sentence2: They are smiling at their parents
similarity: neutral

That is, given the premise "Children smiling and waving at camera", we cannot tell whether "They are smiling at their parents" is true; it is neither a contradiction nor an entailment, so the label is neutral.

5 Preprocessing

Check for missing values

# We have some NaN entries in our train data, we will simply drop them.
print("Number of missing values")
print(train_df.isnull().sum())
train_df.dropna(axis=0, inplace=True)

Check the label distribution of the training and validation sets

print("Train Target Distribution")
print(train_df.similarity.value_counts())
print("Validation Target Distribution")
print(valid_df.similarity.value_counts())

As we can see, the classes are well balanced. A rare, well-curated dataset!

The dataset also contains the undetermined label '-'; let's remove those rows.

train_df = (
    train_df[train_df.similarity != "-"]
    .sample(frac=1.0, random_state=42)
    .reset_index(drop=True)
)
valid_df = (
    valid_df[valid_df.similarity != "-"]
    .sample(frac=1.0, random_state=42)
    .reset_index(drop=True)
)
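
As an optional sanity check (not part of the original example), confirm that the '-' label is gone:

# Only the three valid labels should remain after filtering.
print(train_df.similarity.unique())
print(valid_df.similarity.unique())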

One-hot encode the labels

train_df["label"] = train_df["similarity"].apply(
    lambda x: 0 if x == "contradiction" else 1 if x == "entailment" else 2
)
y_train = tf.keras.utils.to_categorical(train_df.label, num_classes=3)

valid_df["label"] = valid_df["similarity"].apply(
    lambda x: 0 if x == "contradiction" else 1 if x == "entailment" else 2
)
y_val = tf.keras.utils.to_categorical(valid_df.label, num_classes=3)

test_df["label"] = test_df["similarity"].apply(
    lambda x: 0 if x == "contradiction" else 1 if x == "entailment" else 2
)
y_test = tf.keras.utils.to_categorical(test_df.label, num_classes=3)
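
The chained conditional above simply maps each label name to its index in the labels list defined earlier. A sketch of an equivalent, more explicit encoding (assuming the '-' rows have already been dropped, as was done for train_df and valid_df above):

# Equivalent label encoding via an explicit name-to-index mapping.
label_to_id = {name: i for i, name in enumerate(labels)}
train_df["label"] = train_df["similarity"].map(label_to_id)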

6 Create a custom data generator

Create a custom data generator that produces batches of encoded sentence pairs

class BertSemanticDataGenerator(tf.keras.utils.Sequence):
    """Generates batches of data.

    Args:
        sentence_pairs: Array of premise and hypothesis input sentences.
        labels: Array of labels.
        batch_size: Integer batch size.
        shuffle: boolean, whether to shuffle the data.
        include_targets: boolean, whether to include the labels.

    Returns:
        Tuples `([input_ids, attention_mask, token_type_ids], labels)`
        (or just `[input_ids, attention_mask, token_type_ids]`
         if `include_targets=False`)
    """

    def __init__(
        self,
        sentence_pairs,
        labels,
        batch_size=batch_size,
        shuffle=True,
        include_targets=True,
    ):
        self.sentence_pairs = sentence_pairs
        self.labels = labels
        self.shuffle = shuffle
        self.batch_size = batch_size
        self.include_targets = include_targets
        # Load our BERT Tokenizer to encode the text.
        # We will use the bert-base-uncased pretrained model (here loaded from a local path).
        self.tokenizer = transformers.BertTokenizer.from_pretrained(
            "../input/uncased_L-12_H-768_A-12/", do_lower_case=True
        )
        self.indexes = np.arange(len(self.sentence_pairs))
        self.on_epoch_end()

    def __len__(self):
        # Denotes the number of batches per epoch.
        return len(self.sentence_pairs) // self.batch_size

    def __getitem__(self, idx):
        # Retrieve the batch at the given index.
        indexes = self.indexes[idx * self.batch_size : (idx + 1) * self.batch_size]
        sentence_pairs = self.sentence_pairs[indexes]

        # With the BERT tokenizer's batch_encode_plus, the two sentences in each
        # pair are encoded together, separated by the [SEP] token.
        encoded = self.tokenizer.batch_encode_plus(
            sentence_pairs.tolist(),
            add_special_tokens=True,
            max_length=max_length,
            return_attention_mask=True,
            return_token_type_ids=True,
            padding="max_length",  # replaces the deprecated pad_to_max_length=True
            return_tensors="tf",
        )

        # Convert batch of encoded features to numpy array.
        input_ids = np.array(encoded["input_ids"], dtype="int32")
        attention_masks = np.array(encoded["attention_mask"], dtype="int32")
        token_type_ids = np.array(encoded["token_type_ids"], dtype="int32")

        # Set to true if data generator is used for training/validation.
        if self.include_targets:
            labels = np.array(self.labels[indexes], dtype="int32")
            return [input_ids, attention_masks, token_type_ids], labels
        else:
            return [input_ids, attention_masks, token_type_ids]

    def on_epoch_end(self):
        # Shuffle indexes after each epoch if shuffle is set to True.
        if self.shuffle:
            np.random.RandomState(42).shuffle(self.indexes)
            
train_data = BertSemanticDataGenerator(
    train_df[["sentence1", "sentence2"]].values.astype("str"),
    y_train,
    batch_size=batch_size,
    shuffle=True,
)
valid_data = BertSemanticDataGenerator(
    valid_df[["sentence1", "sentence2"]].values.astype("str"),
    y_val,
    batch_size=batch_size,
    shuffle=False,
)

Take a look at the data format

train_data.sentence_pairs
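
To see what the generator actually feeds the model, you can also pull one batch and check its shapes (the expected shapes below assume max_length=128 and batch_size=32):

# Inspect the first batch produced by the generator.
(batch_input_ids, batch_attention_masks, batch_token_type_ids), batch_labels = train_data[0]
print(batch_input_ids.shape)   # expected: (32, 128)
print(batch_labels.shape)      # expected: (32, 3)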


7 Build the model

Load the BERT files from a local directory

from transformers import BertConfig, TFBertModel
import os

pretrained_path = "../input/uncased_L-12_H-768_A-12/"
config_path = os.path.join(pretrained_path, "bert_config.json")
checkpoint_path = os.path.join(pretrained_path, "bert_model.ckpt")
vocab_path = os.path.join(pretrained_path, "vocab.txt")
# Load the BERT config from the local json file.
config = BertConfig.from_json_file(config_path)

Build the model under a TensorFlow distribution strategy

# Create the model under a distribution strategy scope.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Encoded token ids from BERT tokenizer.
    input_ids = tf.keras.layers.Input(
        shape=(max_length,), dtype=tf.int32, name="input_ids"
    )
    # Attention masks indicate to the model which tokens should be attended to.
    attention_masks = tf.keras.layers.Input(
        shape=(max_length,), dtype=tf.int32, name="attention_masks"
    )
    # Token type ids are binary masks identifying different sequences in the model.
    token_type_ids = tf.keras.layers.Input(
        shape=(max_length,), dtype=tf.int32, name="token_type_ids"
    )
    # Loading pretrained BERT model.
    bert_model = TFBertModel.from_pretrained(pretrained_path, from_pt=True, config=config)
    
    # Freeze the BERT model to reuse the pretrained features without modifying them.
    bert_model.trainable = False
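    # Note (an assumption about your transformers version): newer releases return
    # an output object from the model call below; in that case use
    # bert_output.last_hidden_state and bert_output.pooler_output instead of
    # tuple unpacking.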

    sequence_output, pooled_output = bert_model(
        input_ids, attention_mask=attention_masks, token_type_ids=token_type_ids
    )
    # Add trainable layers on top of frozen layers to adapt the pretrained features on the new data.
    bi_lstm = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True)
    )(sequence_output)
    # Applying hybrid pooling approach to bi_lstm sequence output.
    avg_pool = tf.keras.layers.GlobalAveragePooling1D()(bi_lstm)
    max_pool = tf.keras.layers.GlobalMaxPooling1D()(bi_lstm)
    concat = tf.keras.layers.concatenate([avg_pool, max_pool])
    dropout = tf.keras.layers.Dropout(0.3)(concat)
    output = tf.keras.layers.Dense(3, activation="softmax")(dropout)
    model = tf.keras.models.Model(
        inputs=[input_ids, attention_masks, token_type_ids], outputs=output
    )

    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss="categorical_crossentropy",
        metrics=["acc"],
    )


print(f"Strategy: {strategy}")
model.summary()


8 Train the Model

First train only the top layers ("feature extraction"); this lets the model use the embeddings of the pretrained model without modifying them.

history = model.fit(
    train_data,
    validation_data=valid_data,
    epochs=epochs,
    use_multiprocessing=True,
    workers=-1,
)
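
Optionally, take a quick look at the training curves; a minimal sketch, assuming matplotlib is installed (the key "acc" matches the metric name passed to model.compile above):

import matplotlib.pyplot as plt

# Plot training and validation accuracy over the epochs.
plt.plot(history.history["acc"], label="train acc")
plt.plot(history.history["val_acc"], label="val acc")
plt.xlabel("epoch")
plt.legend()
plt.show()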

9 Fine-tuning

This step should only be performed after the feature-extraction model has been trained and has converged on the new data.

It is an optional final step in which bert_model is unfrozen and retrained with a very low learning rate. This can deliver meaningful improvements by incrementally adapting the pretrained features to the new data.

# Unfreeze the bert_model.
bert_model.trainable = True
# Recompile the model to make the change effective.
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-5),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
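
After unfreezing, the summary should report far more trainable parameters than before. If you want the number directly, a small sketch (not part of the original example):

# Count trainable parameters after unfreezing the BERT weights.
trainable_params = sum(
    tf.keras.backend.count_params(w) for w in model.trainable_weights
)
print(f"Trainable parameters: {trainable_params:,}")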

10 Train the entire model end-to-end

Train the entire model end to end.

history = model.fit(
    train_data,
    validation_data=valid_data,
    epochs=epochs,
    use_multiprocessing=True,
    workers=-1,
)

11 Evaluate model on the test set

Evaluate the model on the test set

test_data = BertSemanticDataGenerator(
    test_df[["sentence1", "sentence2"]].values.astype("str"),
    y_test,
    batch_size=batch_size,
    shuffle=False,
)
model.evaluate(test_data, verbose=1)
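
model.evaluate returns the loss and the metric defined at compile time; to get the numbers explicitly, a small variant:

# Unpack test loss and accuracy (the model was recompiled with metrics=["accuracy"]).
test_loss, test_acc = model.evaluate(test_data, verbose=1)
print(f"Test loss: {test_loss:.4f}, test accuracy: {test_acc:.4f}")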

12 Inference on custom sentences

Natural language inference demo on custom sentence pairs

def check_similarity(sentence1, sentence2):
    sentence_pairs = np.array([[str(sentence1), str(sentence2)]])
    test_data = BertSemanticDataGenerator(
        sentence_pairs, labels=None, batch_size=1, shuffle=False, include_targets=False,
    )

    proba = model.predict(test_data)[0]
    idx = np.argmax(proba)
    proba = f"{proba[idx] * 100: .2f}%"  # convert the probability to a percentage
    pred = labels[idx]
    return pred, proba
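
A usage sketch with an illustrative sentence pair (the predicted label and probability depend on your trained model):

sentence1 = "Two women are observing something together."
sentence2 = "Two women are standing with their eyes closed."
print(check_similarity(sentence1, sentence2))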

