1 Data Serialization
2 Loading Data: torch.load()
map_location: device mapping
map_location specifies the device mapping applied to storages as they are loaded; it can be a function, torch.device, string, or dict.
map_location can be given as a function; the torch docs provide these examples:
>>> torch.load('tensors.pt')
# Load all tensors onto the CPU
>>> torch.load('tensors.pt', map_location=torch.device('cpu'))
# Load all tensors onto the CPU, using a function
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
# Load all tensors onto GPU 1
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
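The string and dict forms can be exercised on a CPU-only machine as well; a minimal sketch (the file name `tensors.pt` is illustrative):

```python
import torch

# Save a tensor, then reload it with different map_location forms.
t = torch.arange(4.0)
torch.save(t, 'tensors.pt')

# String form: name the target device directly.
loaded = torch.load('tensors.pt', map_location='cpu')

# Dict form: remap storages saved on one device tag to another,
# e.g. {'cuda:1': 'cuda:0'}; for CPU-only data this mapping is a no-op.
loaded_dict = torch.load('tensors.pt', map_location={'cuda:1': 'cuda:0'})

print(loaded.device)  # cpu
```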
When a function is used to specify the mapping, its form is:
map_location = lambda storage, loc: dst_storage
where storage (<class 'torch._UntypedStorage'>) is the initial deserialization of the storage, residing on the CPU, and loc (str) is the location tag recorded at save time (i.e. the device the storage lived on when the file was written).
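The two arguments can be inspected directly by passing a named function instead of a lambda; a minimal sketch assuming a locally written file `tensors.pt`:

```python
import torch

t = torch.randn(3)
torch.save(t, 'tensors.pt')

seen = []

def keep_on_cpu(storage, loc):
    # loc is the tag of the device the storage was saved on, e.g. 'cpu' or 'cuda:0'.
    seen.append(loc)
    # Returning the storage itself keeps it on the CPU (same as the
    # lambda storage, loc: storage example above).
    return storage

loaded = torch.load('tensors.pt', map_location=keep_on_cpu)
print(seen)  # e.g. ['cpu'] for a tensor saved from the CPU
```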
3 Model Serialization
# saving
torch.save({'epoch': epoch,
            'model': model.state_dict(),
            'optimizer_state_dict': optimizer.state_dict(),
            'loss': loss,
            ...
            }, PATH)
# loading
model = TheModelClass(...)
optimizer = TheOptimizerClass(...)
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
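A runnable round trip of the pattern above, with a toy linear model and SGD optimizer standing in for TheModelClass / TheOptimizerClass:

```python
import torch
import torch.nn as nn

# Toy stand-ins for TheModelClass / TheOptimizerClass.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epoch, loss = 5, 0.25

# saving
torch.save({'epoch': epoch,
            'model': model.state_dict(),
            'optimizer_state_dict': optimizer.state_dict(),
            'loss': loss}, 'checkpoint.pt')

# loading: build fresh instances, then restore their state.
model2 = nn.Linear(4, 2)
optimizer2 = torch.optim.SGD(model2.parameters(), lr=0.1)
checkpoint = torch.load('checkpoint.pt')
model2.load_state_dict(checkpoint['model'])
optimizer2.load_state_dict(checkpoint['optimizer_state_dict'])

print(checkpoint['epoch'], checkpoint['loss'])  # 5 0.25
```

After restoring, call model2.train() to resume training or model2.eval() for inference.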