Datawhale AI Summer Camp | Materials Science Track Task 3 | Transformer

Contents

Introduction to the Transformer

How to use the Transformer model

We may still end up using molecular fingerprints

Transformer code walkthrough

Create folders

Set up the environment

A first pass at data processing

The Transformer model

Adjusting the learning rate

Learning rate scheduling strategy

Training

Parameter explanations

Generating the result file


Drawbacks of RNNs: (1) weak memory for long sequences; (2) information is passed along step by step, which limits parallelism and makes training slow.

Introduction to the Transformer

The Transformer makes up for the shortcomings of recurrent and convolutional neural networks: when information is passed along step by step, each token depends on the previous one, which easily leads to errors and loss of information.

The Transformer instead models global dependencies across the sequence entirely through the attention mechanism, which improves both accuracy and computational efficiency.

It processes sequence pairs with an encoder-decoder architecture and relies purely on attention. On the encoder side, the input first goes through an embedding layer, positional encoding is added, and the result then passes through a stack of Transformer blocks, with the output of each block serving as the input to the next.
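
As a reference, the sinusoidal positional encoding mentioned above can be written as a small PyTorch module. This is a generic sketch for illustration only (note that the baseline model later in this post feeds the token embedding straight into the encoder without an explicit positional encoding):

import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Sinusoidal positional encoding from "Attention Is All You Need" (batch_first layout)."""
    def __init__(self, d_model, max_len=512):
        super().__init__()
        position = torch.arange(max_len, dtype=torch.float).unsqueeze(1)   # [max_len, 1]
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))                        # [1, max_len, d_model]

    def forward(self, x):
        # x: [batch_size, seq_len, d_model]; add the encoding for the first seq_len positions
        return x + self.pe[:, :x.size(1), :]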

A detailed introduction to the Transformer is available from Datawhale; you can also read 【超详细】【原理篇&实战篇】一文读懂Transformer, or look at the previous Datawhale machine-translation case study that uses a Transformer: 基于Transformer解决机器翻译任务.

How to use the Transformer model

From the introduction and diagrams above, the Transformer is a classic encoder-decoder model: the encoder turns the input into a vector representation that is then passed to the decoder. This means the encoder and decoder can be separated and used on their own.

Here we use only the encoder part of the Transformer to encode the SMILES expression, then pass the resulting vector through a linear head so that the output value is the yield of the chemical reaction.
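
A minimal sketch of this idea, using PyTorch's built-in encoder modules (the full baseline implementation, with its own embedding, pooling and regression head, is given in the code walkthrough below):

import torch
import torch.nn as nn

d_model = 512
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
yield_head = nn.Linear(d_model, 1)                # linear head mapping the sequence vector to a yield

def predict_yield(token_embeddings):              # [batch_size, seq_len, d_model]
    hidden = encoder(token_embeddings)            # [batch_size, seq_len, d_model]
    cls_vector = hidden[:, 0, :]                  # use the <CLS> position as the sequence representation
    return yield_head(cls_vector).squeeze(-1)     # [batch_size] predicted yields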

We may still end up using molecular fingerprints

From the modelling experiments in these two tasks, it is not hard to see that modelling SMILES as a sequence performs worse on the test set than vectorizing the molecules with molecular fingerprints and then applying traditional machine learning methods.
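
For comparison, here is a minimal sketch of the fingerprint route, assuming RDKit and scikit-learn are installed and using the same CSV columns as the code below (this is only an illustration, not the tuned baseline from the earlier task):

import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("../dataset/round1_train_data.csv")

def to_fp(smiles, n_bits=2048):
    # Morgan fingerprint (radius 2) as a 0/1 bit list
    mol = Chem.MolFromSmiles(smiles)
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits))

X = [to_fp(r1) + to_fp(r2) for r1, r2 in zip(df["Reactant1"], df["Reactant2"])]
y = df["Yield"].tolist()

model = RandomForestRegressor(n_estimators=100, n_jobs=-1)
model.fit(X, y)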

Transformer code walkthrough

Create folders

# folders for the model checkpoints and the final output
!mkdir ../model
!mkdir ../output

Set up the environment

!pip install torchtext
!pip install pandas

import pandas as pd
from torch.utils.data import Dataset, DataLoader, Subset
from typing import List, Tuple
import re
import torch
import torch.nn as nn
import time
import torch.optim as optim

A first pass at data processing

# Tokenizer: because of the special syntax of SMILES, we define our own tokenizer and vocabulary here
# The SMILES string is split into tokens with a regex and each token is mapped to its index in the vocabulary
class Smiles_tokenizer():
    def __init__(self, pad_token, regex, vocab_file, max_length):
        self.pad_token = pad_token
        self.regex = regex
        self.vocab_file = vocab_file
        self.max_length = max_length

        with open(self.vocab_file, "r") as f:
            lines = f.readlines()
        lines = [line.strip("\n") for line in lines]
        vocab_dic = {}
        for index, token in enumerate(lines):
            vocab_dic[token] = index
        self.vocab_dic = vocab_dic

    def _regex_match(self, smiles):
        regex_string = r"(" + self.regex + r"|"
        regex_string += r".)"
        prog = re.compile(regex_string)

        tokenised = []
        for smi in smiles:
            tokens = prog.findall(smi)
            if len(tokens) > self.max_length:
                tokens = tokens[:self.max_length]
            tokenised.append(tokens) # collect the token list for each SMILES
        return tokenised
    
    def tokenize(self, smiles):
        tokens = self._regex_match(smiles)
        # add the special start and end tokens: <CLS>, <SEP>
        tokens = [["<CLS>"] + token + ["<SEP>"] for token in tokens]
        tokens = self._pad_seqs(tokens, self.pad_token)
        token_idx = self._pad_token_to_idx(tokens)
        return tokens, token_idx

    def _pad_seqs(self, seqs, pad_token):
        pad_length = max([len(seq) for seq in seqs])
        padded = [seq + ([pad_token] * (pad_length - len(seq))) for seq in seqs]
        return padded

    def _pad_token_to_idx(self, tokens):
        idx_list = []
        new_vocab = []
        for token in tokens:
            tokens_idx = []
            for i in token:
                if i in self.vocab_dic.keys():
                    tokens_idx.append(self.vocab_dic[i])
                else:
                    new_vocab.append(i)
                    self.vocab_dic[i] = max(self.vocab_dic.values()) + 1
                    tokens_idx.append(self.vocab_dic[i])
            idx_list.append(tokens_idx)

        with open("../new_vocab_list.txt", "a") as f:
            for i in new_vocab:
                f.write(i)
                f.write("\n")

        return idx_list

    def _save_vocab(self, vocab_path):
        with open(vocab_path, "w") as f:
            for i in self.vocab_dic.keys():
                f.write(i)
                f.write("\n")
        print("update new vocab!")

# Data processing

def read_data(file_path, train=True):
    df = pd.read_csv(file_path)
    reactant1 = df["Reactant1"].tolist()
    reactant2 = df["Reactant2"].tolist()
    product = df["Product"].tolist()
    additive = df["Additive"].tolist()
    solvent = df["Solvent"].tolist()
    if train:
        react_yield = df["Yield"].tolist()
    else:
        react_yield = [0 for i in range(len(reactant1))]
    
    # Join the reactants (and optionally additive/solvent) with '.', then append the product after a '>' separator
    input_data_list = []
    for react1, react2, prod, addi, sol in zip(reactant1, reactant2, product, additive, solvent):
        # input_info = ".".join([react1, react2, addi, sol])
        input_info = ".".join([react1, react2])
        input_info = ">".join([input_info, prod])
        input_data_list.append(input_info)
    output = [(react, y) for react, y in zip(input_data_list, react_yield)]

    return output

This is the data processing step: read_data reads the CSV at the given path (train defaults to True). The reactants (and, if enabled, the additive and solvent) are concatenated with '.' between them, and the product is then appended after a '>' separator, so each reaction ends up represented as "react1.react2>product". For now the additive (catalyst) and solvent are ignored and only the two reactants and the product are used. The returned output list is a collection of tuples, each of the form (reaction string, corresponding yield); for the test set the yield is filled with 0.
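
To make the format concrete, here is what the joined string looks like for one hypothetical record (the SMILES here are made up for illustration):

# One hypothetical record, just to illustrate the string built by read_data
react1, react2, prod = "c1ccccc1B(O)O", "CCBr", "CCc1ccccc1"
input_info = ".".join([react1, react2])    # 'c1ccccc1B(O)O.CCBr'
input_info = ">".join([input_info, prod])  # 'c1ccccc1B(O)O.CCBr>CCc1ccccc1'
print(input_info)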

# Define the dataset
class ReactionDataset(Dataset):
    def __init__(self, data: List[Tuple[List[str], float]]):
        self.data = data
        
    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]
    
def collate_fn(batch):
    REGEX = r"\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\\\|\/|:|~|@|\?|>|\*|\$|\%[0-9]{2}|[0-9]"
    tokenizer = Smiles_tokenizer("<PAD>", REGEX, "../vocab_full.txt", 300)
    smi_list = []
    yield_list = []
    for i in batch:
        smi_list.append(i[0])
        yield_list.append(i[1])
    tokenizer_batch = torch.tensor(tokenizer.tokenize(smi_list)[1])
    yield_list = torch.tensor(yield_list)
    return tokenizer_batch, yield_list

The Transformer model

# Model
'''
A single Transformer encoder model is enough here.
'''
class TransformerEncoderModel(nn.Module):
    def __init__(self, input_dim, d_model, num_heads, fnn_dim, num_layers, dropout):
        super().__init__()
        self.embedding = nn.Embedding(input_dim, d_model)
        self.layerNorm = nn.LayerNorm(d_model)
        self.encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, 
                                                        nhead=num_heads, 
                                                        dim_feedforward=fnn_dim,
                                                        dropout=dropout,
                                                        batch_first=True,
                                                        norm_first=True # pre-layernorm
                                                        )
        self.transformer_encoder = nn.TransformerEncoder(self.encoder_layer, 
                                                         num_layers=num_layers,
                                                         norm=self.layerNorm)
        self.dropout = nn.Dropout(dropout)
        self.lc = nn.Sequential(nn.Linear(d_model, 256),
                                nn.Sigmoid(),
                                nn.Linear(256, 96),
                                nn.Sigmoid(),
                                nn.Linear(96, 1))

    def forward(self, src):
        # src shape: [batch_size, src_len]
        embedded = self.dropout(self.embedding(src))
        # embedded shape: [batch_size, src_len, d_model]
        outputs = self.transformer_encoder(embedded)
        # outputs shape: [batch_size, src_len, d_model]

        # first: take the <CLS> position as the sequence representation
        z = outputs[:,0,:]
        # z = torch.sum(outputs, dim=1)
        # print(z)
        # z shape: [bs, d_model]
        outputs = self.lc(z)
        # print(outputs)
        # outputs shape: [bs, 1]
        return outputs.squeeze(-1)
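
A quick shape check with a dummy batch helps confirm the model wiring (the numbers here are just illustrative and match the hyperparameters used in training below):

# Dummy forward pass: 8 sequences of length 130, token ids in [0, 292)
model = TransformerEncoderModel(input_dim=292, d_model=512, num_heads=4,
                                fnn_dim=1024, num_layers=4, dropout=0.2)
dummy_src = torch.randint(0, 292, (8, 130))   # [batch_size, src_len]
print(model(dummy_src).shape)                 # torch.Size([8]), one predicted yield per sample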

Adjusting the learning rate

def adjust_learning_rate(optimizer, epoch, start_lr):
    """Sets the learning rate to the initial LR decayed by 10 every 30 epochs"""
    lr = start_lr * (0.1 ** (epoch // 3))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr

This multiplies the initial learning rate (start_lr) by 0.1 every 3 epochs.

Learning rate scheduling strategy

When training a model, the learning rate usually has to shrink in the later stages: as training converges, the basin around the local minimum becomes narrower, and keeping the large early-stage learning rate would make gradient descent step over the minimum. Here lr = start_lr * (0.1 ** (epoch // 3)) is used to decay the learning rate; other scheduling strategies are available in PyTorch, as shown in the sketch below.
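
For reference, a minimal sketch of two built-in PyTorch schedulers: StepLR reproduces the "multiply by 0.1 every 3 epochs" rule above, and CosineAnnealingLR is a common alternative (the tiny linear model here is only a placeholder to make the snippet runnable):

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 1)                      # placeholder model, just to drive the optimizer
optimizer = optim.AdamW(model.parameters(), lr=1e-4)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)
# scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=40)

for epoch in range(10):
    optimizer.step()                          # stands in for one epoch of training updates
    scheduler.step()                          # update the learning rate once per epoch
    print(epoch, optimizer.param_groups[0]["lr"])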

Training

# 训练
def train():
    ## hyperparameters
    N = 10  # size of the debug subset; could also be a fraction of the dataset, e.g. int(len(dataset) * 0.1)
    INPUT_DIM = 292 # vocabulary size (num_embeddings for the embedding layer)
    D_MODEL = 512
    NUM_HEADS = 4
    FNN_DIM = 1024
    NUM_LAYERS = 4
    DROPOUT = 0.2
    CLIP = 1 # CLIP value
    N_EPOCHS = 40
    LR = 1e-4
    
    start_time = time.time()  # start timing
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    # device = 'cpu'
    data = read_data("../dataset/round1_train_data.csv")
    dataset = ReactionDataset(data)
    subset_indices = list(range(N))
    subset_dataset = Subset(dataset, subset_indices)
    train_loader = DataLoader(dataset, batch_size=128, shuffle=True, collate_fn=collate_fn)  # note: uses the full dataset; subset_dataset above is not used here

    model = TransformerEncoderModel(INPUT_DIM, D_MODEL, NUM_HEADS, FNN_DIM, NUM_LAYERS, DROPOUT)
    model = model.to(device)
    model.train()
    
    optimizer = optim.AdamW(model.parameters(), lr=LR, weight_decay=0.01)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10)
    criterion = nn.MSELoss()

    best_valid_loss = 10
    for epoch in range(N_EPOCHS):
        epoch_loss = 0
        # adjust_learning_rate(optimizer, epoch, LR) # dynamically adjust the learning rate
        for i, (src, y) in enumerate(train_loader):
            src, y = src.to(device), y.to(device)
            optimizer.zero_grad()
            output = model(src)
            loss = criterion(output, y)
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), CLIP)
            optimizer.step()
            epoch_loss += loss.detach().item()
            
            if i % 50 == 0:
                print(f'Step: {i} | Average Train Loss so far: {epoch_loss/(i+1):.4f}')
                
        loss_in_a_epoch = epoch_loss / len(train_loader)
        scheduler.step(loss_in_a_epoch)
        
        print(f'Epoch: {epoch+1:02} | Train Loss: {loss_in_a_epoch:.3f}')
        if loss_in_a_epoch < best_valid_loss:
            best_valid_loss = loss_in_a_epoch
            # save the model whenever the epoch loss improves
            torch.save(model.state_dict(), '../model/transformer.pth')
    end_time = time.time()  # stop timing
    # compute and print the total running time
    elapsed_time_minute = (end_time - start_time)/60
    print(f"Total running time: {elapsed_time_minute:.2f} minutes")

if __name__ == '__main__':
    train()

Parameter explanations
    N = 10  # size of the debug subset; could also be a fraction of the dataset, e.g. int(len(dataset) * 0.1)
    INPUT_DIM = 292 # vocabulary size (num_embeddings for the embedding layer)
    D_MODEL = 512
    NUM_HEADS = 4
    FNN_DIM = 1024
    NUM_LAYERS = 4
    DROPOUT = 0.2
    CLIP = 1 # CLIP value
    N_EPOCHS = 40
    LR = 1e-4

N is the number of samples used to build a small debug subset of indices (note that in the training code above, train_loader is built from the full dataset, so subset_dataset is defined but not actually used).

INPUT_DIM is the vocabulary size fed to the embedding layer; D_MODEL, NUM_HEADS, FNN_DIM and NUM_LAYERS set the hidden width, number of attention heads, feed-forward dimension and depth of the encoder; DROPOUT is the dropout rate and CLIP the gradient-clipping threshold.

N_EPOCHS is the number of epochs; within a reasonable range, more epochs give better results, but too many make overfitting likely.

LR is the learning rate.

Generating the result file

# Generate the submission file
def predicit_and_make_submit_file(model_file, output_file):
    INPUT_DIM = 292 # vocabulary size (num_embeddings for the embedding layer)
    D_MODEL = 512
    NUM_HEADS = 4
    FNN_DIM = 1024
    NUM_LAYERS = 4
    DROPOUT = 0.2
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    test_data = read_data("../dataset/round1_test_data.csv", train=False)
    test_dataset = ReactionDataset(test_data)
    test_loader = DataLoader(test_dataset, batch_size=128, shuffle=False, collate_fn=collate_fn) 

    model = TransformerEncoderModel(INPUT_DIM, D_MODEL, NUM_HEADS, FNN_DIM, NUM_LAYERS, DROPOUT).to(device)
    # load the best checkpoint
    model.load_state_dict(torch.load(model_file))
    model.eval()
    output_list = []
    for i, (src, y) in enumerate(test_loader):
        src = src.to(device)
        with torch.no_grad():
            output = model(src)
            output_list += output.detach().tolist()
    ans_str_lst = ['rxnid,Yield']
    for idx,y in enumerate(output_list):
        ans_str_lst.append(f'test{idx+1},{y:.4f}')
    with open(output_file,'w') as fw:
        fw.writelines('\n'.join(ans_str_lst))
    
predicit_and_make_submit_file("../model/transformer.pth",
                              "../output/result.txt")
