Ways to load data
Load images one by one instead of loading a whole .npz file.
Reference: https://blog.csdn.net/aass6d/article/details/105230150
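A minimal sketch of the one-image-at-a-time idea with a custom Dataset (the folder path and the absence of labels are assumptions for illustration):
import os
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class ImageFolderDataset(Dataset):
    """Reads one image from disk per __getitem__ call instead of
    holding a whole .npz array in memory."""
    def __init__(self, root, transform=None):
        self.paths = [os.path.join(root, f) for f in sorted(os.listdir(root))]
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert("RGB")  # loaded lazily, per sample
        return self.transform(img) if self.transform else img

# train_loader = DataLoader(ImageFolderDataset("train_imgs", transforms.ToTensor()),
#                           batch_size=32, shuffle=True)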
Releasing memory in Python
After one train_loader has been used, release it before building the next one.
import gc

del train_loader  # drop the last reference to the loader
gc.collect()      # ask the garbage collector to reclaim it now
Reference: https://www.jianshu.com/p/ff08129382b8
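A sketch of that pattern when the data comes in several chunks (make_loader and the chunk file names are hypothetical):
import gc
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_loader(path):  # hypothetical helper: build a loader for one data chunk
    return DataLoader(TensorDataset(torch.load(path)), batch_size=32)

for path in ["chunk0.pt", "chunk1.pt"]:  # hypothetical chunk files
    train_loader = make_loader(path)
    for (data,) in train_loader:
        pass  # training step goes here
    del train_loader  # drop the reference ...
    gc.collect()      # ... and reclaim memory before loading the next chunk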
Implementing transforms.Normalize() by hand
import numpy as np
import torch
from torchvision import transforms

def preprocessImg(img):
    # Method 1: the torchvision pipeline (kept commented out)
    # preprocess = transforms.Compose([
    #     transforms.ToPILImage(),
    #     transforms.Resize(size=112),  # Pre-trained model uses 140x140 input images
    #     transforms.ToTensor(),
    #     transforms.Normalize(
    #         mean=[0.3880, 0.3880, 0.3880],
    #         std=[0.2171, 0.2171, 0.2171]),
    # ])
    # img = preprocess(img)
    # Method 2: the same steps written out by hand
    img = img / 255.                    # scale uint8 pixels to [0, 1]
    img = (img - 0.3880) / 0.2171       # subtract mean, divide by std
    img = np.transpose(img, (2, 0, 1))  # HWC -> CHW
    img = torch.Tensor(img)
    img = img.to(torch.device("cuda"))
    return img
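A quick sanity check that the manual math matches transforms.Normalize (a sketch with a random image; the 112×112 shape is just an assumption):
import numpy as np
import torch
from torchvision import transforms

img = np.random.randint(0, 256, (112, 112, 3), dtype=np.uint8)  # hypothetical HWC test image

manual = (img / 255. - 0.3880) / 0.2171                 # the manual path from preprocessImg
manual = torch.Tensor(np.transpose(manual, (2, 0, 1)))  # HWC -> CHW

pipeline = transforms.Compose([
    transforms.ToTensor(),  # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.3880] * 3, std=[0.2171] * 3),
])
print(torch.allclose(manual, pipeline(img), atol=1e-5))  # expect True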
Converting NCHW numpy data to a tensor and normalizing
Skip normalization during preprocessing and keep the data as integers, so more data fits in memory; normalize at training time instead.
normal_mean_var = {'mean': [0.456, 0.456, 0.456],
                   'std': [0.225, 0.225, 0.225]}
infer_transform = transforms.Compose([transforms.ToTensor(),  # ToTensor already rescales to [0, 1]
                                      transforms.Normalize(**normal_mean_var)])
face = infer_transform(x1_data)
# The above cannot normalize an NCHW batch, only one HWC image at a time; rewrite it as below,
# with mean/std in raw pixel units (roughly 0.456*255 and 0.225*255):
normal_mean_var = {'mean': [116., 116., 116.],
                   'std': [57., 57., 57.]}
infer_transform = transforms.Compose([transforms.Normalize(**normal_mean_var)])
face = infer_transform(torch.Tensor(x1_data))
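transforms.Normalize accepts a batched NCHW tensor in recent torchvision versions, which is what makes this work; a quick check with random data (the batch shape is an assumption):
import numpy as np
import torch
from torchvision import transforms

x1_data = np.random.randint(0, 256, (8, 3, 112, 112), dtype=np.uint8)  # hypothetical NCHW batch
normalize = transforms.Normalize(mean=[116.] * 3, std=[57.] * 3)
face = normalize(torch.Tensor(x1_data))
print(face.shape)  # torch.Size([8, 3, 112, 112]) -- the whole batch normalized in one call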
Problem: with a large dataset this step is very slow and blows up memory.
Fix: move the normalization to after the DataLoader is built; each batch is small, so memory is never exhausted.
def train_model(model, device, train_loader, optimizer, epoch):
    model.train()  # switch to training mode
    normal_mean_var = {'mean': [116., 116., 116.],
                       'std': [57., 57., 57.]}
    infer_transform = transforms.Compose([transforms.Normalize(**normal_mean_var)])  # build once, not per batch
    for batch_index, (data, target) in enumerate(train_loader):
        data = infer_transform(torch.Tensor(data))
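For context, a sketch of how the rest of the training step might look; the cross-entropy loss and the .to(device) moves are assumptions, since the snippet above stops after the normalization:
import torch
import torch.nn.functional as F
from torchvision import transforms

def train_model(model, device, train_loader, optimizer, epoch):
    model.train()
    normalize = transforms.Normalize(mean=[116.] * 3, std=[57.] * 3)
    for batch_index, (data, target) in enumerate(train_loader):
        data = normalize(torch.Tensor(data)).to(device)  # per-batch normalization as above
        target = target.to(device)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(data), target)  # assumed classification loss
        loss.backward()
        optimizer.step()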
Usage of x.mean()
x = torch.arange(24.).view(1, 4, 3, 2)  # note the float: mean() rejects integer dtypes
print(x.mean([2, 3]))  # tensor([[ 2.5000,  8.5000, 14.5000, 20.5000]])
This averages over dims 2 and 3: each of the four 3×2 matrices is reduced to its mean.
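A quick sketch confirming what the dim list does (flattening each 3×2 block and averaging gives the same numbers):
import torch

x = torch.arange(24.).view(1, 4, 3, 2)
print(x.mean([2, 3]))                      # reduce over dims 2 and 3 together
print(x.flatten(start_dim=2).mean(-1))     # same values: flatten each 3x2 block, then average
print(x.mean([2, 3], keepdim=True).shape)  # torch.Size([1, 4, 1, 1]); keepdim preserves rank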
Taking one column of an array; testing whether elements exceed a threshold
import numpy as np

index = np.array([[[1,2,3],[1,2,3],[1,2,3],[1,2,3]]])  # shape (1, 4, 3)
print(index[:, 0])  # [[1 2 3]] -- element 0 along axis 1, shape (1, 3)
print(index[:][0])  # index[:] is just the whole array, so this equals index[0]
print(index[0])     # shape (4, 3)
in1 = np.argwhere(index > 1)                  # (i, j, k) indices of every element > 1
in2 = index[in1[:, 0], in1[:, 1], in1[:, 2]]  # the elements themselves: [2 3 2 3 2 3 2 3]
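The same selection can be done in one step with a boolean mask, which is the more common idiom:
import numpy as np

index = np.array([[[1, 2, 3]] * 4])  # same (1, 4, 3) array as above
print(index[index > 1])              # [2 3 2 3 2 3 2 3] -- matches the argwhere version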