Plotting tensor images (images from a DataLoader) with matplotlib in Python [learning notes]

I built a custom dataset and loaded it with a DataLoader, but I wasn't sure the images and labels were matched correctly, so I use matplotlib to plot each image and print its label for a quick sanity check.

The code is as follows:

```python
import torch
import numpy as np
import matplotlib.pyplot as plt

for data in dataloader_train:
    imgs, labels = data
    # Suppose we have a tensor representing one image, e.g.
    # tensor_image = torch.randn(3, 224, 224)  # a 224x224 image with 3 color channels
    tensor_image = imgs[0]

    # Convert the tensor to a numpy array,
    # moving the channel dimension to the last axis
    numpy_image = tensor_image.permute(1, 2, 0).numpy()

    # For a grayscale image no permute is needed; convert directly:
    # numpy_image = tensor_image.numpy()

    # Min-max scale the image so it displays more clearly
    numpy_image = (numpy_image - numpy_image.min()) / (numpy_image.max() - numpy_image.min())

    # Display the image with matplotlib
    plt.imshow(numpy_image)
    plt.show()
    print(labels[0])
```
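The permute-plus-scaling step above can also be exercised on its own. Below is a minimal sketch with random tensors standing in for a dataloader batch; the helper name `tensor_to_display` is illustrative, and it also covers the grayscale case mentioned in the comments:

```python
import torch
import numpy as np

def tensor_to_display(t: torch.Tensor) -> np.ndarray:
    """Convert a (C, H, W) or (H, W) image tensor into a
    min-max scaled numpy array that plt.imshow accepts."""
    if t.dim() == 3:                      # color image: move channels last
        arr = t.permute(1, 2, 0).numpy()
    else:                                 # grayscale: no permute needed
        arr = t.numpy()
    lo, hi = arr.min(), arr.max()
    if hi > lo:                           # avoid division by zero on a constant image
        arr = (arr - lo) / (hi - lo)
    return arr

rgb = tensor_to_display(torch.randn(3, 224, 224))   # shape becomes (224, 224, 3)
gray = tensor_to_display(torch.randn(224, 224))     # shape stays (224, 224)
print(rgb.shape, gray.shape)
```

After scaling, every pixel lies in [0, 1], so matplotlib renders the image without clipping warnings regardless of how the dataset was normalized.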

Feedback and discussion are welcome.

OK, here is a simple image style-transfer project example using Python and the PyTorch framework.

Step 1: import the necessary libraries

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
import os
```

Step 2: define hyperparameters and data loaders

```python
# Hyperparameters
content_weight = 1    # weight of the content loss
style_weight = 1000   # weight of the style loss
epochs = 20           # number of training epochs
batch_size = 4        # batch size

# Load the datasets
data_transform = transforms.Compose([
    transforms.Resize(256),      # scale to 256 pixels
    transforms.CenterCrop(256),  # center-crop to 256x256 pixels
    transforms.ToTensor(),       # convert to a tensor
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])  # normalize
])
content_dataset = datasets.ImageFolder('./content', transform=data_transform)
style_dataset = datasets.ImageFolder('./style', transform=data_transform)
content_loader = DataLoader(content_dataset, batch_size=batch_size, shuffle=True)
style_loader = DataLoader(style_dataset, batch_size=batch_size, shuffle=True)
```

Step 3: define the model

```python
# Style-transfer model
class StyleTransferModel(nn.Module):
    def __init__(self):
        super(StyleTransferModel, self).__init__()
        self.features = models.vgg19(pretrained=True).features[:35]  # pretrained VGG19 feature layers
        for param in self.parameters():
            param.requires_grad = False  # freeze the parameters
        self.content_loss = nn.MSELoss()  # content loss
        self.style_loss = nn.MSELoss()    # style loss
        self.content_feature = None  # features of the content image
        self.style_features = None   # features of the style image
        self.target_feature = None   # features of the target image

    def forward(self, x):
        # clone x so the stored features are not modified in place
        self.content_feature = self.features(x.clone())
        return x

    def compute_content_loss(self):
        loss = self.content_loss(self.target_feature, self.content_feature)
        return content_weight * loss

    def compute_style_loss(self):
        loss = 0
        for i in range(len(self.style_features)):
            target_gram = self.gram_matrix(self.target_feature[i])
            style_gram = self.gram_matrix(self.style_features[i])
            loss += self.style_loss(target_gram, style_gram)
        return style_weight * loss

    def gram_matrix(self, x):
        b, c, h, w = x.size()
        features = x.view(b * c, h * w)
        G = torch.mm(features, features.t())
        return G.div(b * c * h * w)

    def set_style_features(self, x):
        self.style_features = []
        for feature in self.features:
            x = feature(x)
            if isinstance(feature, nn.ReLU):
                feature.inplace = False
            if isinstance(feature, nn.MaxPool2d):
                self.style_features.append(x)
                if len(self.style_features) == 5:
                    return

    def set_target_feature(self, x):
        self.target_feature = self.features(x.clone())
```

Step 4: define the training function

```python
def train(model, content_loader, style_loader, epochs):
    optimizer = optim.Adam(model.parameters(), lr=0.001)  # optimizer
    for epoch in range(epochs):
        model.train()
        content_iter = iter(content_loader)
        style_iter = iter(style_loader)
        for i in range(len(content_loader)):
            content, _ = next(content_iter)
            style, _ = next(style_iter)
            model.set_style_features(style)    # record the style image features
            model.set_target_feature(content)  # record the target image features
            optimizer.zero_grad()              # clear gradients
            loss = model.compute_content_loss() + model.compute_style_loss()
            loss.backward()                    # backpropagate
            optimizer.step()                   # update parameters
        print("Epoch ", epoch + 1, " complete.")
```

Step 5: define the test function

```python
def test(model, content_path, style_path, output_path):
    content_image = Image.open(content_path)
    style_image = Image.open(style_path)
    transform = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])
    ])
    content = transform(content_image).unsqueeze(0)
    style = transform(style_image).unsqueeze(0)
    model.set_style_features(style)
    model.set_target_feature(content)
    output = model(content)
    output_image = output.squeeze().detach().numpy()
    output_image = np.transpose(output_image, (1, 2, 0))
    output_image = output_image * [0.229, 0.224, 0.225] + [0.485, 0.456, 0.406]
    output_image = np.clip(output_image, 0, 1)
    output_image = Image.fromarray((output_image * 255).astype(np.uint8))
    output_image.save(output_path)
```

Step 6: train the model

```python
model = StyleTransferModel()
train(model, content_loader, style_loader, epochs)
```

Step 7: test the model

```python
test(model, './test_content.jpg', './test_style.jpg', './output.jpg')
```

This is a simple image style-transfer project example; you can modify or extend it as needed.
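The Gram matrix at the heart of the style loss can be checked in isolation. A minimal self-contained sketch, mirroring the `gram_matrix` method above with an illustrative feature-map shape in place of real VGG19 activations:

```python
import torch

def gram_matrix(x: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a (B, C, H, W) feature map,
    normalized by the total number of elements."""
    b, c, h, w = x.size()
    features = x.view(b * c, h * w)       # flatten each channel into a row
    g = torch.mm(features, features.t())  # channel-by-channel inner products
    return g.div(b * c * h * w)

feat = torch.randn(1, 64, 32, 32)  # stand-in for one VGG19 feature map
g = gram_matrix(feat)
print(g.shape)  # torch.Size([64, 64])
```

Because each entry is an inner product between two channel activations, the result is a symmetric C-by-C matrix that captures channel correlations while discarding spatial layout, which is why it is used for the style (rather than content) comparison.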