The strength of triplet loss is fine-grained discrimination: when two inputs are similar, it models their subtle differences better, effectively adding a measure of the difference between the two inputs and learning a better representation. It is commonly used in face recognition, where the goal is to distinguish extremely similar samples from different classes, for example telling two brothers apart.
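The loss itself is just max(d(a, p) - d(a, n) + margin, 0), pulling the anchor toward the positive and pushing it away from the negative. A minimal worked example with Euclidean distance (the embeddings and margin here are made-up illustration values, not from any real model):

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: max(d(a, p) - d(a, n) + margin, 0)."""
    d = lambda x, y: math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return max(d(anchor, positive) - d(anchor, negative) + margin, 0.0)

# A tight anchor/positive pair against a distant negative gives zero loss:
a, p, n = [0.0, 0.0], [0.1, 0.0], [3.0, 0.0]
print(triplet_loss(a, p, n))  # d(a,p)=0.1, d(a,n)=3.0 -> max(0.1-3.0+1.0, 0) = 0.0
```

When the negative is only slightly farther than the positive, the margin keeps the loss positive, which is exactly the "brothers" case the loss is designed for.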
When I applied it to my own data, training ran normally, but strangely the test set always failed. Other blog posts suggested changing the batch size and so on, but to my surprise I found that once the training loader was set to shuffle=True, training failed in exactly the same way as testing.
So my thinking was: triplet loss is a measure of the difference between inputs, and the batch composition produced by the loading order determines whether valid triplets exist; in the failing configuration the error `operation does not have an identity` is raised. In fact either loading order can break triplet mining: un-shuffled ImageFolder data is sorted by class, so a batch may contain only one class (no negative), while a shuffled batch that is small relative to the number of classes may contain no repeated class (no positive pair).
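The error message comes from reducing (min/max over) an empty tensor during hard-triplet mining, which happens whenever a batch cannot form any (anchor, positive, negative) triple. The condition can be checked on the batch labels alone; a minimal sketch (`has_valid_triplet` is a hypothetical helper name, not a PyTorch API):

```python
from collections import Counter

def has_valid_triplet(labels):
    """A batch can form a triplet only if some class appears >= 2 times
    (anchor + positive) AND at least one other class is present (negative)."""
    counts = Counter(labels)
    return len(counts) >= 2 and max(counts.values()) >= 2

# Un-shuffled ImageFolder batches are often a single class -> no negative:
print(has_valid_triplet([0, 0, 0, 0]))  # False: no negative
# A shuffled small batch may hold each class once -> no positive pair:
print(has_valid_triplet([0, 1, 2, 3]))  # False: no positive pair
print(has_valid_triplet([0, 0, 1, 2]))  # True
```

Both failure modes match the symptoms above: whether the error appears depends on shuffle and batch size only because they change which label combinations land in a batch.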
import os

import torch
from torchvision import datasets, transforms


def load_training(root_path, dir, batch_size):
    """Build a training DataLoader over an ImageFolder of grayscale images."""
    transform = transforms.Compose([
        transforms.Resize([224, 224]),
        transforms.Grayscale(1),
        transforms.ToTensor(),
        # single-channel mean/std
        transforms.Normalize([0.485], [0.229]),
    ])
    data = datasets.ImageFolder(root=os.path.join(root_path, dir), transform=transform)
    # drop_last avoids a final short batch with too few samples to mine triplets
    train_loader = torch.utils.data.DataLoader(
        data, batch_size=batch_size, shuffle=True, drop_last=True)
    class_label = data.classes
    return train_loader, class_label
def load_testing(root_path, dir, batch_size):
    """Build a test DataLoader with the same preprocessing as training."""
    transform = transforms.Compose([
        transforms.Resize([224, 224]),
        transforms.Grayscale(1),
        transforms.ToTensor(),
        transforms.Normalize([0.485], [0.229]),
    ])
    data = datasets.ImageFolder(root=os.path.join(root_path, dir), transform=transform)
    # shuffle also affects which label combinations land in each test batch
    test_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
    return test_loader