Continuing from the previous chapter, the example code here shows concretely how to handle the following:
1. Process the original BIO dataset so that our labels can be aligned with the tokens produced by the BERT tokenizer (see the sample format right after this list);
2. Pad everything to a uniform length, making it easy to feed into the model;
3. Convert the DataFrame into a Dataset format that the model can consume correctly.
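Before the code, it helps to see the data format the function below assumes: each text is a string of tokens separated by single spaces, and each label is the matching space-separated BIO tag string (the single-space separation is exactly what the offset-based alignment below relies on). The sample values here are hypothetical, purely to illustrate the shape of the data:
# Hypothetical sample rows (not from the real dataset):
# tokens separated by single spaces, one BIO tag per token
train_text = ["张 三 住 在 北 京"]
train_label = ["B-PER I-PER O O B-LOC I-LOC"]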
import numpy as np

def encode_tags(label_list, inputs):
    # Convert each space-separated tag string into a list of label ids
    labels = [[label2id[tag] for tag in doc.split()] for doc in label_list]
    encoded_labels = []
    for doc_labels, doc_offset in zip(labels, inputs.offset_mapping):
        # Start from an all-zero array: [CLS] and [SEP] (and any token that is
        # not the first sub-token of a word) keep label 0
        doc_enc_labels = np.zeros(len(doc_offset), dtype=int)
        num = 0  # index into doc_labels
        for tupnum in range(len(doc_offset) - 1):
            # The next token starts a new word when exactly one space separates
            # it from the current token, or when the current token is a special
            # token (its end offset is 0); only that first sub-token gets the label
            if (doc_offset[tupnum + 1][0] - doc_offset[tupnum][1] == 1) or (
                    doc_offset[tupnum][1] == 0):
                doc_enc_labels[tupnum + 1] = doc_labels[num]
                num += 1
        encoded_labels.append(doc_enc_labels.tolist())
    # Pad (or truncate) every label sequence to MAX_INPUT_LENGTH so the labels
    # line up with the padded model inputs
    padded_labels = []
    for l in encoded_labels:
        if len(l) < MAX_INPUT_LENGTH:
            l += [0] * (MAX_INPUT_LENGTH - len(l))
        else:
            l = l[:MAX_INPUT_LENGTH]
        padded_labels.append(l)
    return padded_labels
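For reference, a minimal sketch of how the inputs passed to encode_tags might be produced, assuming tokenizer is the tokenizer loaded earlier and that train_text/train_label hold the texts and BIO tag strings (those variable names are this sketch's assumption). Note that return_offsets_mapping is only available on fast tokenizers (e.g. BertTokenizerFast):
# Tokenize with offset mapping so encode_tags can align the word-level tags
train_inputs = tokenizer(train_text, max_length=MAX_INPUT_LENGTH, truncation=True,
                         return_offsets_mapping=True)
train_labels = encode_tags(train_label, train_inputs)
# dev_labels is obtained the same way from dev_text / dev_label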
Converting the DataFrame into a Dataset format that can be fed to the model correctly:
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, input_ids, labels):
        # input_ids is the full tokenizer output (input_ids, attention_mask, ...);
        # labels is the list of padded label-id sequences from encode_tags
        self.input_ids = input_ids
        self.labels = labels

    def __getitem__(self, idx):
        # Return one sample as a dict of tensors, with the labels added under
        # the 'labels' key that Hugging Face models expect
        item = {key: torch.tensor(val[idx]) for key, val in self.input_ids.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)
Now we need to obtain inputs again, but this time without offset_mapping: we only needed it last time to align the labels, and now all that is left to do is padding.
train_inputs = tokenizer(train_text, max_length=MAX_INPUT_LENGTH, truncation=True, padding='max_length')
dev_inputs = tokenizer(dev_text, max_length=MAX_INPUT_LENGTH, truncation=True, padding='max_length')
With the code above the fine-tuning training set can be generated; how to feed it into the model and evaluate the results will be covered later.
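To tie the pieces together, a minimal sketch of constructing the final datasets; it assumes train_labels/dev_labels were produced by encode_tags as sketched earlier (those variable names are an assumption of this sketch, not the original code):
# Wrap the padded inputs and the aligned labels into Dataset objects
train_dataset = MyDataset(train_inputs, train_labels)
dev_dataset = MyDataset(dev_inputs, dev_labels)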