1、nn.BatchNorm2d: when the batch statistics differ between training and testing, set the parameters:
self.batch_norm = nn.BatchNorm2d(num_nodes, affine=False, track_running_stats=False, momentum=0)
###
forward:
self.batch_norm(X)
With these settings, the layer only normalizes the incoming 4-D batch (N, C, H, W) using that batch's own statistics; no affine scale or bias is applied, and no running statistics are tracked.
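A minimal sketch of the behavior described above (the channel count and tensor names are illustrative). With track_running_stats=False the layer has no running mean/variance, so it normalizes with the current batch's statistics even in eval() mode:

```python
import torch
import torch.nn as nn

# BatchNorm2d with affine=False and track_running_stats=False:
# no learnable scale/shift, no running statistics.
num_features = 3  # illustrative channel count
bn = nn.BatchNorm2d(num_features, affine=False,
                    track_running_stats=False, momentum=0)

x = torch.randn(8, num_features, 4, 4)  # (N, C, H, W)
bn.eval()  # even in eval mode, the batch's own statistics are used
y = bn(x)

# Each channel of the output has approximately zero mean and unit variance.
print(y.mean(dim=(0, 2, 3)))                  # close to 0
print(y.var(dim=(0, 2, 3), unbiased=False))   # close to 1
```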
2、Input requirements for F.nll_loss
F.nll_loss(pred, label)
where pred is the model output (log-probabilities, e.g. from F.log_softmax) and label holds the class indices.
pred: shape (L, 2) (one column per class), dtype float32; convert with:
tensor.float()
label: shape (L,), dtype long (int64); convert with:
tensor.long()
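A short sketch of the dtype/shape requirements above (the sample logits and labels are made up):

```python
import torch
import torch.nn.functional as F

# Hypothetical raw model output for 4 samples and 2 classes.
logits = torch.randn(4, 2)
labels = torch.tensor([0, 1, 1, 0])

# F.nll_loss expects log-probabilities (float) and class indices (long).
log_probs = F.log_softmax(logits.float(), dim=1)  # shape (L, 2), float32
loss = F.nll_loss(log_probs, labels.long())       # labels: shape (L,), int64

print(loss)
```

Note that log_softmax followed by nll_loss is equivalent to calling F.cross_entropy directly on the raw logits.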
3、Converting between tensor, numpy, and list in PyTorch
①CPU tensor -> numpy
# assume a is a tensor
a.numpy()
a.detach().numpy()  # when the tensor carries a gradient
GPU tensor -> numpy
a.cpu().numpy()
a.cpu().detach().numpy()  # when the tensor carries a gradient
②numpy <-> list
numpy.array(list)  # list -> numpy
list = array.tolist()  # numpy -> list
③tensor <-> list
# list -> tensor
torch.tensor(list, dtype=torch.float32)
# tensor -> list
a.numpy().tolist()
a.detach().numpy().tolist()
a.cpu().detach().numpy().tolist()
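The conversions above can be chained into a round trip; a minimal sketch (variable names are illustrative):

```python
import torch

# list -> tensor (note dtype must be a torch dtype, e.g. torch.float32)
data = [[1.0, 2.0], [3.0, 4.0]]
t = torch.tensor(data, dtype=torch.float32)
t.requires_grad_(True)

# A tensor that carries a gradient must be detached before .numpy().
arr = t.detach().numpy()   # tensor -> numpy (shares memory on CPU)
back = arr.tolist()        # numpy -> list

print(back)  # [[1.0, 2.0], [3.0, 4.0]]
```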
4、Saving intermediate variables with pickle
import pickle
import torch  # needed for torch.rand below
a_dict = {'da': 111, 2: [23, 1, 4], '23': {1: 2, 'd': 'sad'}, 1: [torch.rand(2, 3), 1]}
# save to a pickle file
with open('pickle_example.pickle', 'wb') as file:
    pickle.dump(a_dict, file)
# load from the pickle file
with open('pickle_example.pickle', 'rb') as file:
    a_dict1 = pickle.load(file)
print(a_dict1)
# output:
{'da': 111, 2: [23, 1, 4], '23': {1: 2, 'd': 'sad'}, 1: [tensor([[0.2898, 0.1825, 0.4143],
[0.5205, 0.5322, 0.7425]]), 1]}