PyTorch Deep Learning Basics: Clamping Tensors

Clamping a tensor's values can help prevent overfitting, and it is also an effective way to deal with exploding and vanishing gradients.

In PyTorch, clamp can be used to perform gradient clipping.

A.clamp(a, b) clamps the elements of A into the range [a, b]: elements smaller than a are set to a, and elements larger than b are set to b.
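As a quick reference, clamp also comes in a functional form and an in-place variant, and either bound may be omitted; a minimal sketch (all standard PyTorch calls, the tensor A here is just an illustration):

```python
import torch

A = torch.randn(3) * 10

torch.clamp(A, min=-5, max=5)  # functional form, returns a new tensor
A.clamp(min=-5)                # one-sided: only a lower bound
A.clamp(max=5)                 # one-sided: only an upper bound
A.clamp_(-5, 5)                # in-place variant (note the trailing underscore)
```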

Test code:

```python
import torch

a = torch.rand(2, 3) * 10   # random values in [0, 10)
print(a)
a = a.clamp(5, 8)           # values below 5 become 5, values above 8 become 8
print(a)
```
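Since gradient clipping was mentioned above, here is a minimal sketch of clipping gradients with clamp_ right before the optimizer step; the tiny linear model and random data are only placeholders to produce some gradients:

```python
import torch
import torch.nn as nn

# Toy model and random data, used only to generate gradients.
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
y = torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()

# Element-wise gradient clipping: force every gradient entry into [-1, 1].
for p in model.parameters():
    if p.grad is not None:
        p.grad.clamp_(-1.0, 1.0)

optimizer.step()

# PyTorch also ships ready-made helpers for this:
#   torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=1.0)
#   torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```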

Below is a PyTorch code example that computes the missed-detection rate for seedlings in images:

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.utils.data import DataLoader

# Define the data preprocessing pipeline
transform = transforms.Compose([
    transforms.Resize(224),          # resize the shorter side to 224
    transforms.CenterCrop(224),      # center-crop to 224x224
    transforms.ToTensor(),           # convert the image to a tensor
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),  # normalize
])

# Load the datasets (two classes, e.g. "background" / "seedling")
train_dataset = datasets.ImageFolder('train', transform=transform)
test_dataset = datasets.ImageFolder('test', transform=transform)
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False)

# Define the model (a VGG16-style network with a 2-class head)
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Flatten(),
    nn.Linear(25088, 4096), nn.ReLU(inplace=True), nn.Dropout(p=0.5),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(p=0.5),
    nn.Linear(4096, 2),
)

# Define the loss function and the optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# Train the model
for epoch in range(10):
    running_loss = 0.0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print('Epoch %d loss: %.3f' % (epoch + 1, running_loss / len(train_loader)))

# Test the model: overall accuracy
model.eval()  # disable dropout for evaluation
correct = 0
total = 0
with torch.no_grad():
    for inputs, labels in test_loader:
        outputs = model(inputs)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy: %.2f%%' % (100 * correct / total))

# Missed-detection rate: the fraction of true seedling images that the model
# classifies as the other class. Class index 1 is assumed to be "seedling"
# (ImageFolder orders classes alphabetically -- adjust if your folders differ).
missed = 0
seedling_total = 0
with torch.no_grad():
    for inputs, labels in test_loader:
        outputs = model(inputs)
        _, predicted = torch.max(outputs, 1)
        seedling_total += (labels == 1).sum().item()
        missed += ((labels == 1) & (predicted != 1)).sum().item()
print('Missed detection rate: %.2f%%' % (100 * missed / max(seedling_total, 1)))
```

In this example, we first define the preprocessing pipeline and load the datasets. We then build a convolutional network and train it with the SGD optimizer and cross-entropy loss, printing the average loss for each epoch. Finally, we evaluate the model's accuracy on the test set and compute the missed-detection rate, i.e. the fraction of true seedling images the model fails to detect.
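As a usage note, a single image can be run through the trained classifier as sketched below; this reuses the model and transform from the example above, 'some_field_photo.jpg' is a hypothetical path, and the class-index convention follows the same assumption:

```python
from PIL import Image
import torch

model.eval()
img = Image.open('some_field_photo.jpg').convert('RGB')  # hypothetical path
x = transform(img).unsqueeze(0)   # add a batch dimension: (1, 3, 224, 224)
with torch.no_grad():
    probs = torch.softmax(model(x), dim=1)
pred = probs.argmax(dim=1).item()
print('predicted class:', pred, 'confidence: %.2f' % probs[0, pred].item())
```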
