import torch
import torch.nn as nn

# model = BiLSTM_Attention(128, 64, 2, 40, True)
model = Transformer()
model = nn.DataParallel(model, device_ids=[0, 1, 2, 3]).cuda()  # Adding this line is enough to run in a multi-GPU environment
optimizer = torch.optim.Adam(model.parameters(), lr=opt.init_lr)
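A minimal, self-contained sketch of the same idea, using a tiny hypothetical `TinyNet` module instead of the article's `Transformer` (which is not shown here). `nn.DataParallel` splits each input batch across the listed GPUs and gathers the outputs; the sketch falls back to CPU when CUDA is unavailable so it stays runnable anywhere:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Hypothetical stand-in for the article's Transformer model."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
if torch.cuda.is_available():
    # Replicate the module on every visible GPU; each replica
    # receives a slice of the batch dimension.
    model = nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count()))).cuda()

x = torch.randn(4, 8)
if torch.cuda.is_available():
    x = x.cuda()

out = model(x)  # batch is scattered, forward runs in parallel, outputs are gathered
```

Note that after wrapping, the original module lives at `model.module`, so checkpoints saved from a `DataParallel` model carry a `module.` prefix in their state-dict keys.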
Running PyTorch on multiple GPUs