There are many significant benefits to using a pretrained model. It reduces computation costs and your carbon footprint, and lets you use state-of-the-art models without having to train one from scratch. Transformers provides thousands of pretrained models covering a wide range of tasks. When you use a pretrained model, you train it on a dataset specific to your task. This is known as fine-tuning, an incredibly powerful training technique. In this tutorial, you will fine-tune a pretrained model with the deep learning framework of your choice.
Transformers provides a Trainer class optimized for training Transformers models, which makes it easier to start training without manually writing your own training loop. The Trainer API supports a wide range of training options and features, such as logging, gradient accumulation, and mixed precision.
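These training options are collected in a TrainingArguments object that is later passed to the Trainer. As a rough sketch, with placeholder values rather than recommendations, the features just mentioned map to arguments like:

```python
from transformers import TrainingArguments

# Hypothetical values; tune them for your task and hardware.
training_args = TrainingArguments(
    output_dir="dti_finetune",      # where checkpoints and logs are written
    logging_steps=50,               # log training metrics every 50 steps
    gradient_accumulation_steps=4,  # accumulate gradients to simulate a larger batch
    fp16=True,                      # mixed-precision training (requires a CUDA GPU)
    num_train_epochs=3,
    per_device_train_batch_size=8,
)
```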
from transformers import T5Tokenizer, T5ForConditionalGeneration
def add_prefix_to_amino_acids(protein_sequence):
    # Prefix each amino acid with '<p>' so the tokenizer sees
    # per-residue protein tokens, e.g. "MQ" -> "<p>M<p>Q".
    amino_acids = list(protein_sequence)
    prefixed_amino_acids = ['<p>' + aa for aa in amino_acids]
    new_sequence = ''.join(prefixed_amino_acids)
    return new_sequence
tokenizer = T5Tokenizer.from_pretrained("/mnt/sdb/home/lrl/BioT5+/biot5-base-dti-human", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('/mnt/sdb/home/lrl/BioT5+/biot5-base-dti-human')
task_definition = 'Definition: Drug target interaction prediction task (a binary classification task) for the human dataset. If the given molecule and protein can interact with each other, indicate via "Yes". Otherwise, response via "No".\n\n'
selfies_input = '[C][/C][=C][Branch1][C][\\C][C][=Branch1][C][=O][O]'
protein_input = 'MQALRVSQALIRSFSSTARNRFQNRVREKQKLFQEDNDIPLYLKGGIVDNILYRVTMTLCLGGTVYSLYSLGWASFPRN'
protein_input = add_prefix_to_amino_acids(protein_input)
task_input = f'Now complete the following example -\nInput: Molecule: <bom>{selfies_input}<eom>\nProtein: <bop>{protein_input}<eop>\nOutput: '
model_input = task_definition + task_input
input_ids = tokenizer(model_input, return_tensors="pt").input_ids
generation_config = model.generation_config
generation_config.max_length = 8
generation_config.num_beams = 1
outputs = model.generate(input_ids, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
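If you want to score many drug–protein pairs, the prompt assembly above can be factored into a small helper. `build_dti_prompt` is a hypothetical name, and the template simply mirrors the strings used above:

```python
def build_dti_prompt(task_definition, selfies, protein):
    """Assemble a BioT5 DTI prompt from a SELFIES string and a
    '<p>'-prefixed protein sequence."""
    task_input = (
        f'Now complete the following example -\n'
        f'Input: Molecule: <bom>{selfies}<eom>\n'
        f'Protein: <bop>{protein}<eop>\nOutput: '
    )
    return task_definition + task_input
```

Each returned string can then be tokenized and passed to `model.generate` exactly as in the single-pair example.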
from transformers import Trainer

# training_args, small_train_dataset, small_eval_dataset, and
# compute_metrics are assumed to be defined earlier in your script.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)
trainer.train()
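The `compute_metrics` callback passed to the Trainer is user-defined. A minimal sketch that reports accuracy from model logits (this assumes a classification-style head with one logit per class, not the generative decoding shown earlier) could look like:

```python
import numpy as np

def compute_metrics(eval_pred):
    # The Trainer passes a (logits, labels) pair at evaluation time.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": float((predictions == labels).mean())}
```

The Trainer calls this at each evaluation step and includes the returned dictionary in its logs.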