A Simple Implementation of ResNet Regression, with Common Errors Explained

Note: this write-up follows the book "Dive into Deep Learning" (《动手学深度学习》).

Reference pages for the error analysis:

1. https://blog.csdn.net/qq_35235032/article/details/107384079

2. https://betheme.net/news/txtlist_i43568v.html?action=onClick

To avoid copyright issues, the original runnable code is not reproduced here; please get it from the two links above (the code in both links is identical).

The fixes below cover most of the errors you will hit; take the code from those links and debug it alongside this write-up.

Common errors and their fixes

AttributeError: 'NoneType' object has no attribute 'parameters'

Cause: in the blog's code, the ResNet() function builds the model but never returns it (and the resnet_block layer is missing), so model ends up as None.
Fix: change the code inside ResNet() to the following (return the model; the resnet_block is added under the shape-mismatch error further below). Adding the return value resolves this error:

def ResNet():
    model = torch.nn.Sequential(
        torch.nn.Conv2d(1, 4, kernel_size=2, padding=1, stride=1), 
        torch.nn.BatchNorm2d(4),  
        torch.nn.ReLU(),
        torch.nn.MaxPool2d(kernel_size=2)
    )
 
    model.add_module("global_avg_pool", GlobalAvgPool2d())
    model.add_module("fc", torch.nn.Sequential(FlattenLayer(), torch.nn.Linear(32, 2)))
    return model
# the return statement added on the line above is what fixes this error
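
With the return statement in place, code that calls model.parameters() no longer receives None. A minimal sketch, assuming the GlobalAvgPool2d and FlattenLayer helpers from the blog's code are already defined; the Adam optimizer and learning rate below are placeholders, not necessarily what the original code uses:

import torch

model = ResNet()                                            # now an nn.Sequential instead of None
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # previously raised the AttributeError
print(model)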

AttributeError: type object 'datetime.time' has no attribute 'time'

Fix: import the standard-library time module with `import time` (the error shows that the name time currently refers to the datetime.time class, which has no time() function).
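
For context, a minimal sketch of the intended use of the standard-library time module for timing (the epoch body is only a placeholder):

import time   # the standard-library module; do not shadow it with `from datetime import time`

start = time.time()                                 # current time in seconds, as a float
# ... run one training epoch here ...
print('epoch took %.2f sec' % (time.time() - start))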

AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute 'next'

DataLoader is the standard PyTorch utility for batching data; the original author abbreviated the module name here. The un-abbreviated usage looks like this:

    train_data = TensorDataset(train_x_tensor, train_y_tensor)
    test_data = TensorDataset(train_x_tensor, train_y_tensor)   # note: the original blog also builds test_data from the training tensors

    tr_loader = torch.utils.data.DataLoader(train_data, batch_size, True)   # positional args: dataset, batch_size, shuffle
    test_loader = torch.utils.data.DataLoader(test_data, batch_size, True)

Solutions:

Option 1: use the full module path, i.e. change every DataLoader call to the following form:

# original code:

test_batch = Data.DataLoader(torch_testset, batch_size=batch_size, shuffle=False, num_workers=works_num)

# change it to:

test_batch = torch.utils.data.DataLoader(torch_testset, batch_size=batch_size, shuffle=False, num_workers=works_num)

Option 2: alternatively, import the abbreviated module names so that Data.DataLoader resolves:

import sys
from torch.utils.data import Dataset
import torch.utils.data as Data
import torch.nn.functional as F
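
A side note that is not in the original post: this particular AttributeError can also occur because recent PyTorch versions removed the Python-2-style .next() method from the DataLoader iterator, so any iter(loader).next() call has to be rewritten with the built-in next(). A minimal sketch with hypothetical tensors:

import torch
from torch.utils.data import TensorDataset, DataLoader

train_x_tensor = torch.randn(100, 1, 8, 8)   # hypothetical data, only to make the sketch runnable
train_y_tensor = torch.randn(100, 2)

train_data = TensorDataset(train_x_tensor, train_y_tensor)
tr_loader = DataLoader(train_data, batch_size=10, shuffle=True)

# old style, removed in recent PyTorch: batch = iter(tr_loader).next()
xb, yb = next(iter(tr_loader))               # use the built-in next() instead
print(xb.shape, yb.shape)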

RuntimeError: mat1 and mat2 shapes cannot be multiplied (100x4 and 32x2)

Cause: again, the blog's code does not include the resnet_block, so after global average pooling each sample has only 4 features while the final Linear layer expects 32.
The fix for both this error and the first one: change the code inside ResNet() to the following (return the model and add the resnet_block):

def ResNet():
    model = torch.nn.Sequential(
        torch.nn.Conv2d(1, 4, kernel_size=2, padding=1, stride=1), 
        torch.nn.BatchNorm2d(4),  
        torch.nn.ReLU(),
        torch.nn.MaxPool2d(kernel_size=2),
        resnet_block(4, 32, 2) # the resnet_block added here is the key fix !!!
    )
 
    model.add_module("global_avg_pool", GlobalAvgPool2d())
    model.add_module("fc", torch.nn.Sequential(FlattenLayer(), torch.nn.Linear(32, 2)))
    return model
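
To see where the 100x4 side of the shape mismatch comes from, here is a minimal sketch of the stem without the resnet_block; the input shape (100, 1, 8, 8) is an assumption, and AdaptiveAvgPool2d plus flatten stand in for the blog's GlobalAvgPool2d and FlattenLayer:

import torch

stem = torch.nn.Sequential(
    torch.nn.Conv2d(1, 4, kernel_size=2, padding=1, stride=1),
    torch.nn.BatchNorm2d(4),
    torch.nn.ReLU(),
    torch.nn.MaxPool2d(kernel_size=2),
)
x = torch.randn(100, 1, 8, 8)                              # hypothetical batch of 100 single-channel inputs
feats = stem(x)                                            # (100, 4, H, W): still only 4 channels
pooled = torch.nn.AdaptiveAvgPool2d(1)(feats).flatten(1)   # (100, 4) -> cannot feed Linear(32, 2)
print(pooled.shape)                                        # torch.Size([100, 4])
# With resnet_block(4, 32, 2) in place the channel count becomes 32, matching Linear(32, 2).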

TypeError: to() received an invalid combination of arguments - got (type), but expected one of:
* (torch.device device, torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
* (torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
* (Tensor tensor, bool non_blocking, bool copy, *, torch.memory_format memory_format)

Traceback excerpt:

model = model.to(device)
File "E:\software\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 908, in to
device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(*args, **kwargs)

Fix: make sure device is an actual torch.device object before calling .to(device) (the traceback shows that .to() received a type instead). Add the line

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

at this point in the code:

# 3. Train and validate the model
# 3.1 Train the model
def train(model, train_batch, test_batch, batch_size, optimizer, device, num_epochs):
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = model.to(device)
    print("run in ", device)

Run results: (output screenshot omitted)

2023.4.24 

This post is adapted from:

Resnet网络回归的简单实现_resnet回归_飘满红楼的博客-CSDN博客

It only fixes the problems in that author's code, using solutions collected from elsewhere on the internet; it is for reference only, and will be taken down on request in case of infringement.
