Saving and loading models across devices in PyTorch

0. Defining the device

The device determines whether the model and its parameters live on the CPU or the GPU; switching between the two has a few pitfalls worth noting.

The two options are simply 'cpu' and 'cuda'.
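As a minimal sketch (no GPU is required to run this), a device object can be constructed from either string and queried for its type and index:

```python
import torch

# Constructing a torch.device needs no actual GPU; 'cuda' may carry an index.
cpu_dev = torch.device('cpu')
gpu_dev = torch.device('cuda:0')

print(cpu_dev.type)                 # 'cpu'
print(gpu_dev.type, gpu_dev.index)  # 'cuda' 0

# A common pattern: use the GPU when one is available, else fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```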

1. Save on GPU, load on CPU

Use the map_location argument of torch.load().

# Specify a path to save to
PATH = "model.pt"

# Save
torch.save(net.state_dict(), PATH)

# Load
device = torch.device('cpu')
model = Net()
model.load_state_dict(torch.load(PATH, map_location=device))

2. Save on GPU, load on GPU

If you have multiple CUDA devices, specify which GPU to use (e.g. 'cuda:0').

# Save
torch.save(net.state_dict(), PATH)

# Load
device = torch.device("cuda")
model = Net()
model.load_state_dict(torch.load(PATH))
model.to(device)

Note: calling my_tensor.to(device) returns a new copy of my_tensor on the GPU; it does not move the original tensor in place (and the GPU copy does not share memory with the CPU original).
Therefore, you must reassign the result, as follows:

my_tensor = my_tensor.to(torch.device('cuda'))
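The same reassignment rule holds for any .to() conversion. A dtype change shows the behavior without needing a GPU (a minimal sketch):

```python
import torch

t = torch.zeros(3)       # float32 tensor on CPU
u = t.to(torch.float64)  # .to() returns a NEW tensor ...

print(t.dtype)           # torch.float32 -- the original is unchanged
print(u.dtype)           # torch.float64

# Device moves behave the same way, so always reassign:
t = t.to(torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
```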

3. Save on CPU, load on GPU

  1. Set the map_location argument of torch.load() to cuda:device_id.
  2. Call model.to(torch.device('cuda')) to convert the model's parameter tensors to CUDA tensors.
  3. Call .to(torch.device('cuda')) on all model inputs to prepare the data for the CUDA-optimized model.
# Save
torch.save(net.state_dict(), PATH)

# Load
device = torch.device("cuda")
model = Net()
# Choose whatever GPU device number you want
model.load_state_dict(torch.load(PATH, map_location="cuda:0"))
# Make sure to call input = input.to(device) on any input tensors that you feed to the model
model.to(device)
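Putting step 3 together: the inputs must live on the same device as the model's weights before the forward pass. A minimal sketch, assuming a toy Net with a single linear layer (the real Net is whatever model you trained), and falling back to CPU when no GPU is present:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the Net used throughout this post.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Net().to(device)

x = torch.randn(1, 4)
x = x.to(device)   # inputs must be on the same device as the weights
out = model(x)
print(out.shape)   # torch.Size([1, 2])
```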

4. Saving torch.nn.DataParallel models

torch.nn.DataParallel is a model wrapper that enables parallel GPU utilization.

To save a DataParallel model generically, save the model.module.state_dict(). This way, you have the flexibility to load the model any way you want to any device you want.

Example code:

# Save
torch.save(net.module.state_dict(), PATH)

# Load to whatever device you want
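Because the saved state_dict came from net.module, its keys carry no 'module.' prefix and load directly into a plain (non-parallel) model on any device. A hedged sketch, again assuming a hypothetical toy Net:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the Net used throughout this post.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

PATH = "model.pt"

# Save: unwrap the DataParallel module so the keys stay prefix-free.
net = nn.DataParallel(Net())
torch.save(net.module.state_dict(), PATH)

# Load to whatever device you want.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Net()
model.load_state_dict(torch.load(PATH, map_location=device))
model.to(device)
```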

5. References

The official PyTorch tutorials.
