Image classification: batch-creating .txt class labels for a dataset

https://www.bilibili.com/video/BV1hE411t7RN?p=7&spm_id_from=pageDriver&vd_source=5b6e0605c1ed0f1db9c92503dd5994e0

```python
import os

root_dir = "dataset/train"
target_dir = "ants_image"
# fixed: the original was missing the closing ")" on this line
img_path = os.listdir(os.path.join(root_dir, target_dir))
label = target_dir.split('_')[0]  # "ants_image" -> "ants"
out_dir = "ant_label"
# make sure the output directory exists before writing label files
os.makedirs(os.path.join(root_dir, out_dir), exist_ok=True)
for i in img_path:
	file_name = i.split('.jpg')[0]
	# fixed: the original filename template was "{}.txt=" (stray "=")
	with open(os.path.join(root_dir, out_dir, "{}.txt".format(file_name)), 'w') as f:
		f.write(label)
```
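As a quick sanity check, every generated .txt file should contain just the class name. The sketch below is self-contained: it uses a temporary directory and a hypothetical file name (`0013035.txt`) instead of the real `dataset/train` layout.

```python
import os
import tempfile

# hypothetical stand-in for dataset/train/ant_label
root_dir = tempfile.mkdtemp()
out_dir = os.path.join(root_dir, "ant_label")
os.makedirs(out_dir)

# write one label file exactly the way the loop above does
with open(os.path.join(out_dir, "0013035.txt"), 'w') as f:
    f.write("ants")

# reading it back yields the class name
with open(os.path.join(out_dir, "0013035.txt")) as f:
    print(f.read())  # prints: ants
```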

Below is a code example that builds a DANN (Domain-Adversarial Neural Network) model for image classification with Python and the PyTorch framework. Assume the dataset contains two domains, a source domain and a target domain, each with 10 classes and 100 grayscale images of size 28x28 per class.

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torch.autograd import Function
from torch.utils.data import DataLoader
from torch.utils.data.dataset import Dataset


class CustomDataset(Dataset):
    # simple wrapper dataset (kept from the original sketch; the loaders
    # below use the torchvision datasets directly so that the Resize and
    # Normalize transforms are actually applied)
    def __init__(self, data, labels):
        self.data = data
        self.labels = labels

    def __getitem__(self, index):
        return self.data[index], self.labels[index]

    def __len__(self):
        return len(self.data)


class ReverseLayerF(Function):
    # gradient reversal layer: identity on the forward pass,
    # multiplies the gradient by -alpha on the backward pass
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.alpha, None


class DANN(nn.Module):
    def __init__(self):
        super(DANN, self).__init__()
        # 1x28x28 -> conv(5) -> 24 -> pool -> 12 -> conv(5) -> 8 -> pool -> 4
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(1, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 48, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(48 * 4 * 4, 100), nn.ReLU()
        )
        self.class_classifier = nn.Sequential(
            nn.Linear(100, 100), nn.ReLU(), nn.Linear(100, 10)
        )
        self.domain_classifier = nn.Sequential(
            nn.Linear(100, 100), nn.ReLU(), nn.Linear(100, 2)
        )

    def forward(self, x, alpha):
        features = self.feature_extractor(x)
        class_output = self.class_classifier(features)
        reverse_features = ReverseLayerF.apply(features, alpha)
        domain_output = self.domain_classifier(reverse_features)
        return class_output, domain_output


def train(model, dataloader):
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    criterion_class = nn.CrossEntropyLoss()
    criterion_domain = nn.CrossEntropyLoss()
    for epoch in range(10):
        for i, (source_data, source_labels) in enumerate(dataloader['source']):
            source_data = source_data.to(device)
            source_labels = source_labels.to(device)
            # draw one target batch per source batch
            target_data, _ = next(iter(dataloader['target']))
            target_data = target_data.to(device)
            # domain labels: 0 = source, 1 = target
            source_domain_labels = torch.zeros(source_data.size(0)).long().to(device)
            target_domain_labels = torch.ones(target_data.size(0)).long().to(device)
            optimizer.zero_grad()
            source_class_output, source_domain_output = model(source_data, 0.1)
            source_class_loss = criterion_class(source_class_output, source_labels)
            source_domain_loss = criterion_domain(source_domain_output, source_domain_labels)
            target_class_output, target_domain_output = model(target_data, 0.1)
            target_domain_loss = criterion_domain(target_domain_output, target_domain_labels)
            loss = source_class_loss + source_domain_loss + target_domain_loss
            loss.backward()
            optimizer.step()
            if i % 10 == 0:
                print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(
                    epoch + 1, 10, i + 1, len(dataloader['source']), loss.item()))


def test(model, dataloader):
    correct = 0
    total = 0
    with torch.no_grad():
        for data, labels in dataloader['target']:
            data, labels = data.to(device), labels.to(device)
            outputs, _ = model(data, 0)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('Accuracy of the network on the test images: %d %%' % (100 * correct / total))


if __name__ == '__main__':
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    transform = transforms.Compose([
        transforms.Resize((28, 28)),
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,))
    ])
    source_dataset = torchvision.datasets.MNIST(root='./data', train=True,
                                                download=True, transform=transform)
    target_dataset = torchvision.datasets.USPS(root='./data', train=True,
                                               download=True, transform=transform)
    # fixed: the original wrapped the raw .data tensors in CustomDataset,
    # which skipped the transforms and broke on USPS (16x16 numpy arrays);
    # iterating the torchvision datasets applies Resize/Normalize correctly
    source_loader = DataLoader(source_dataset, batch_size=64, shuffle=True)
    target_loader = DataLoader(target_dataset, batch_size=64, shuffle=True)
    dataloader = {'source': source_loader, 'target': target_loader}
    model = DANN().to(device)
    train(model, dataloader)
    test(model, dataloader)
```

In this example, MNIST and USPS serve as the source and target domains; both contain handwritten images of the ten digits 0-9. We load the data with PyTorch's MNIST and USPS dataset classes, convert the images into the tensor format PyTorch expects, and use the DataLoader class to build iterators for batched training and testing. Cross-entropy losses measure both the classification error and the domain-classification error, and a stochastic gradient descent (SGD) optimizer updates the model parameters. Each training step draws one batch from the source dataset and one from the target dataset and feeds them through the model. Domain adaptation is achieved with the gradient reversal layer, which flips the feature extractor's gradient so that the domain classifier cannot distinguish source-domain features from target-domain features. At test time, the target-domain data is fed through the trained model and the classification accuracy is computed.
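The snippet above fixes the gradient-reversal coefficient at alpha = 0.1. The original DANN paper instead ramps alpha from 0 to 1 over training, so the domain-adversarial signal is weak early on and grows as features stabilize. A minimal sketch of that schedule (the function name `dann_alpha` and gamma = 10 are assumptions here, not part of the code above):

```python
import math

def dann_alpha(p: float, gamma: float = 10.0) -> float:
    """Gradient-reversal coefficient schedule from the DANN paper:
    alpha = 2 / (1 + exp(-gamma * p)) - 1, rising from 0 to ~1
    as training progress p goes from 0 to 1."""
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0

# p would be (epoch * num_batches + batch_idx) / (num_epochs * num_batches);
# pass dann_alpha(p) to model(source_data, alpha) instead of the constant 0.1
print(round(dann_alpha(0.0), 4))  # 0.0 at the start of training
print(round(dann_alpha(1.0), 4))  # ~1.0 at the end of training
```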