BERT is a multi-layer bidirectional Transformer encoder built on the Transformer architecture proposed by Vaswani et al. (2017) in "Attention Is All You Need". In other words, BERT is simply an encoder for NLP: it can produce representations of single sentences as well as of pairs such as question-answer pairs. See the paper for details.
Paper title: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Published: October 11, 2018
Institution: Google AI
Problem addressed: proposes a language representation model called BERT (Bidirectional Encoder Representations from Transformers).
Key idea: unlike traditional language representation models, BERT uses deep bidirectional representations, conditioning on both left and right context in every layer.
Advantage: a pre-trained BERT model can be fine-tuned for a wide range of NLP tasks by adding just one extra output layer on top.
Results: state-of-the-art performance on 11 NLP tasks.
BERT is an open-source project; the source code can be downloaded from GitHub. Notably, since pre-training consumes a great deal of time and compute, Google has also kindly released the pre-trained weights (models), so all we need to do is fine-tune. A slight pity is that there is only one Chinese pre-trained model, BERT-Base Chinese (chinese_L-12_H-768_A-12).
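For reference, the code and the Chinese model can be fetched roughly like this (the download URL below is the one listed in the official BERT README; check there if it has moved):

git clone https://github.com/google-research/bert.git
cd bert
wget https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip
unzip chinese_L-12_H-768_A-12.zip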
After unzipping, the model files include:
bert_config.json
bert_model.ckpt.data-00000-of-00001
bert_model.ckpt.index
bert_model.ckpt.meta
vocab.txt (besides Chinese characters, it also contains many unusual special symbols)
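For orientation, bert_config.json stores the model's hyperparameters. For chinese_L-12_H-768_A-12 the key fields are roughly the following (standard BERT-Base values; the vocab_size is specific to the Chinese vocabulary):

{
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "type_vocab_size": 2,
  "vocab_size": 21128
}

The max_position_embeddings of 512 is where the 512-token input limit mentioned below comes from.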
1. Prepare data in the required format
class InputExample(object):
  """A single training/test example for simple sequence classification."""

  def __init__(self, guid, text_a, text_b=None, label=None):
    """Constructs a InputExample.

    Args:
      guid: Unique id for the example.
      text_a: string. The untokenized text of the first sequence. For single
        sequence tasks, only this sequence must be specified.
      text_b: (Optional) string. The untokenized text of the second sequence.
        Only must be specified for sequence pair tasks.
      label: (Optional) string. The label of the example. This should be
        specified for train and dev examples, but not for test examples.
    """
    self.guid = guid
    self.text_a = text_a
    self.text_b = text_b
    self.label = label
As you can see, the expected inputs are guid, text_a, text_b, and label, where text_b and label are optional. Since ours is a single-sentence classification task, we do not need text_b. One more thing: BERT has a length limit of 512 tokens per input, so longer Chinese texts must be truncated.
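As a quick illustration, a single-sentence training example could be built like this (the label 'AD' is one of the two classes used by the processor below; purely a sketch):

# Hypothetical example: one single-sentence input; text_b is omitted.
example = InputExample(guid='train-0',
                       text_a='这是一条用于分类的中文句子。',
                       label='AD')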
2. Override the DataProcessor class
class MyProcessor(DataProcessor):
  """Processor for my task: news classification."""

  def __init__(self):
    self.labels = ['AD', 'CTRL']

  def get_train_examples(self, data_dir):
    return self._create_examples(
        self._read_tsv(os.path.join(data_dir, 'traintext.csv')), 'train')

  def get_dev_examples(self, data_dir):
    return self._create_examples(
        self._read_tsv(os.path.join(data_dir, 'vaildtext.csv')), 'val')

  def get_test_examples(self, data_dir):
    return self._create_examples(
        self._read_tsv(os.path.join(data_dir, 'testtext.csv')), 'test')

  def get_labels(self):
    return self.labels

  def _create_examples(self, lines, set_type):
    """Creates examples for the training and validation sets."""
    examples = []
    for (i, line) in enumerate(lines):
      guid = '%s-%s' % (set_type, i)
      # Column 0 holds the label, column 1 holds the text.
      text_a = tokenization.convert_to_unicode(line[1])
      label = tokenization.convert_to_unicode(line[0])
      examples.append(InputExample(guid=guid, text_a=text_a, label=label))
    return examples
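Note that _read_tsv splits each line on tabs, so despite the .csv extension the data files must be tab-separated, with the label in column 0, the text in column 1, and no header row. Hypothetical example rows (the sentences are invented):

AD	患者最近经常忘记刚刚发生的事情。
CTRL	他每天早上都去公园散步。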
Once the processor is written, register it in the processors dict in the main function. The key must be lowercase, because run_classifier.py lowercases --task_name before looking it up:
processors = {
    "cola": ColaProcessor,
    "mnli": MnliProcessor,
    "mrpc": MrpcProcessor,
    "xnli": XnliProcessor,
    "my": MyProcessor,
}
3. Log the training loss
By default run_classifier.py does not print the training loss. To monitor it, attach a LoggingTensorHook in the do_train branch (the lines marked # modified):
if FLAGS.do_train:
  train_file = os.path.join(FLAGS.output_dir, "train.tf_record")
  file_based_convert_examples_to_features(
      train_examples, label_list, FLAGS.max_seq_length, tokenizer, train_file)
  tf.logging.info("***** Running training *****")
  tf.logging.info("  Num examples = %d", len(train_examples))
  tf.logging.info("  Batch size = %d", FLAGS.train_batch_size)
  tf.logging.info("  Num steps = %d", num_train_steps)
  train_input_fn = file_based_input_fn_builder(
      input_file=train_file,
      seq_length=FLAGS.max_seq_length,
      is_training=True,
      drop_remainder=True)
  tensors_to_log = {'train loss': 'loss/Mean:0'}  # modified
  logging_hook = tf.train.LoggingTensorHook(
      tensors=tensors_to_log, every_n_iter=20)  # modified
  estimator.train(input_fn=train_input_fn, hooks=[logging_hook],
                  max_steps=num_train_steps)  # modified
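The tensor name 'loss/Mean:0' depends on how the graph happens to be built; if TensorFlow reports that the tensor does not exist, a more robust variant is to give the loss an explicit name inside model_fn and log that instead (a sketch, assuming you also edit model_fn in run_classifier.py):

# Sketch: inside model_fn, right after total_loss is computed, pin a stable name.
total_loss = tf.identity(total_loss, name='train_loss')
# ...then the hook can reference that name instead of 'loss/Mean:0':
logging_hook = tf.train.LoggingTensorHook(
    tensors={'train loss': 'train_loss'}, every_n_iter=20)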
Training
A few notes on the flags: --task_name is the name of the processor registered above; --do_train, --do_eval, and --do_predict toggle the training, validation, and prediction (test) phases; --max_seq_length is the maximum text length, capped at 512; --output_dir is where the fine-tuned model is written.
# path to the Chinese pre-trained model
export BERT_BASE_DIR=./chinese_L-12_H-768_A-12
# path to the data
export DATA_DIR=.

python3 run_classifier.py \
  --task_name=my \
  --do_train=true \
  --do_eval=true \
  --do_predict=false \
  --data_dir=$DATA_DIR \
  --vocab_file=$BERT_BASE_DIR/vocab.txt \
  --bert_config_file=$BERT_BASE_DIR/bert_config.json \
  --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
  --max_seq_length=512 \
  --train_batch_size=4 \
  --learning_rate=2e-5 \
  --num_train_epochs=15 \
  --output_dir=./mymodel
Testing
export BERT_BASE_DIR=./chinese_L-12_H-768_A-12
export DATA_DIR=./mymodel
# TRAINED_CLASSIFIER is the output directory of the training run just finished.
# Point it at the directory rather than a specific checkpoint file; otherwise
# the classification results will be wrong.
export TRAINED_CLASSIFIER=./mymodel

python3 run_classifier.py \
  --task_name=my \
  --do_predict=true \
  --data_dir=$DATA_DIR \
  --vocab_file=$BERT_BASE_DIR/vocab.txt \
  --bert_config_file=$BERT_BASE_DIR/bert_config.json \
  --init_checkpoint=$TRAINED_CLASSIFIER \
  --max_seq_length=512 \
  --output_dir=./mymodel
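Prediction writes test_results.tsv into the output directory, one line per test example with tab-separated probabilities, one column per label in the order returned by get_labels(). A small sketch for mapping them back to label names (paths and label order assumed to match the processor above):

# Sketch: convert test_results.tsv probabilities back to label names.
import csv

labels = ['AD', 'CTRL']  # must match MyProcessor.get_labels()
with open('./mymodel/test_results.tsv') as f:
    for row in csv.reader(f, delimiter='\t'):
        probs = [float(p) for p in row]
        print(labels[probs.index(max(probs))], probs)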