大佬们,在重复<20天吃掉pytorch>3-3高阶API中DNN二分类问题时,这个错误怎么解决?

该文介绍了使用PyTorch进行深度神经网络(DNN)二分类问题的解决过程。作者创建了二元分类数据集,定义了一个包含多个全连接层的DNN模型,并使用BCEWithLogitsLoss作为损失函数,Adam优化器进行训练。文章还涉及到了学习率调度器、模型检查点和早停策略作为训练回调,以及训练后模型预测结果的可视化展示。
摘要由CSDN通过智能技术生成

如上所述,最近在用<20天吃掉pytorch>这本书进行pytorch的学习,3-3高阶API中DNN二分类问题时,这个错误如何解决?感谢各位大佬了!

代码如下:

import matplotlib.pyplot as plt
import torch
from torch import nn
import numpy as np
import torchkeras
from torchmetrics import Accuracy
from torchkeras import KerasModel, LightModel
import pytorch_lightning as pl
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
# 2.DNN二分类模型
# 2.1 准备数据
n_positive, n_negative = 2000, 2000

# 正样本
r_p = 5.0 + torch.normal(0.0, 1.0, size=[n_positive, 1])
theta_p = 2 * np.pi * torch.rand([n_positive, 1])
Xp = torch.cat([r_p * torch.cos(theta_p), r_p * torch.sin(theta_p)], axis=1)
Yp = torch.ones_like(r_p)

r_n = 8.0 + torch.normal(0.0, 1.0, size=[n_negative, 1])
theta_n = 2 * np.pi * torch.rand([n_negative, 1])
Xn = torch.cat([r_n * torch.cos(theta_n), r_n * torch.sin(theta_n)], axis=1)
Yn = torch.zeros_like(r_n)

# 汇总样本
X = torch.cat([Xp, Xn], axis=0)
Y = torch.cat([Yp, Yn], axis=0)

print("X的shape是:", X.shape)
print("Y的shape是:", Y.shape)

# 构建输入数据管道
ds = TensorDataset(X.clone(), Y.clone())
ds_train, ds_valid = torch.utils.data.random_split(ds, [int(len(ds) * 0.7), len(ds) - int(len(ds) * 0.7)])
dl_train = DataLoader(ds_train, batch_size=100, shuffle=True, num_workers=0)
dl_valid = DataLoader(ds_valid, batch_size=100, num_workers=0)

features, labels = next(iter(dl_train))


# 2.2 定义模型
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 4)
        self.fc2 = nn.Linear(4, 8)
        self.fc3 = nn.Linear(8, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        y = torch.sigmoid(self.fc3(x))

        return y


net = Net()
loss_fc = nn.BCEWithLogitsLoss()
metrics_dict = {"acc": Accuracy()}
optimizer = torch.optim.Adam(net.parameters(), lr=0.05)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.0001)

model = LightModel(net=net, loss_fn=nn.MSELoss(), metrics_dict={"acc": Accuracy()},
                   optimizer=optimizer, lr_scheduler=lr_scheduler)

torchkeras.summary(model, input_data=features)

# 2.3训练模型
# # 设置回调函数
model_ckpt = pl.callbacks.ModelCheckpoint(monitor="val_acc", save_top_k=1, mode="max")
early_stopping = pl.callbacks.EarlyStopping(monitor="val_acc", patience=3, mode="max")
trainer = pl.Trainer(logger=True, min_epochs=3, max_epochs=20, gpus=0,
                     callbacks=[model_ckpt, early_stopping], enable_progress_bar=True)
#
# # 启动循环
trainer.fit(model, dl_train, dl_valid)

# 可视化
plt.figure(figsize=(12, 8), dpi=300)
plt.subplot(121)
plt.scatter(Xp[:, 0], Xp[:, 1], color="r", label="positive")
plt.scatter(Xn[:, 0], Xn[:, 1], color="g", label="negative")
plt.legend(["positive", "negative"])
plt.title("y_true")

Xp_pred = X[torch.squeeze(net.forward(X) >= 0.5)]
Xn_pred = X[torch.squeeze(net.forward(X) < 0.5)]

plt.subplot(122)
plt.scatter(Xp_pred[:, 0], Xp_pred[:, 1], color="r", label="positive")
plt.scatter(Xn_pred[:, 0], Xn_pred[:, 1], color="g", label="negative")
plt.legend(["positive", "negative"])
plt.title("y_pred")
plt.show()

评论 1
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值