Logistic Regression with PyTorch

Binary classification

Preface

Today happens to be Python's birthday, so it's the perfect day to write some Python.
The 500,000 training epochs alone kept me sipping tea for quite a while.
These hands-on examples come from a book, but only by actually writing the code myself did I discover how many problems the book's code has.

Approach

First fit a decision boundary (not necessarily a linear function; it can be nonlinear), then relate the boundary to a probability, which yields the probability of each class.
The overall setup resembles a linear regression model, but logistic regression adds a Sigmoid function that maps the output of wx+b to (0,1). Here 0.5 is the cutoff: outputs greater than or equal to 0.5 are the positive class, and outputs below 0.5 are the negative class.
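
As a quick illustration of that decision rule (a minimal sketch; the weights w and bias b below are made up for illustration, not taken from any trained model):

import torch
w = torch.tensor([1.0, -0.5])  # hypothetical weights
b = torch.tensor(0.2)          # hypothetical bias
x = torch.tensor([2.0, 1.0])   # one sample with two features
p = torch.sigmoid(w @ x + b)   # probability of the positive class
label = int(p >= 0.5)          # 0.5 cutoff: 1 = positive, 0 = negative
print(p.item(), label)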

Code

# Sigmoid function
import torch
import matplotlib.pyplot as plt
x = torch.linspace(-10, 10, 1000)
y = 1 / (1 + torch.exp(-x))  # sigmoid: 1 / (1 + e^(-x))
plt.plot(x.data.numpy(),y.data.numpy())
plt.show()

[Figure: sigmoid curve over x in [-10, 10]]
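
As a sanity check, this hand-rolled sigmoid should agree with PyTorch's built-in torch.sigmoid (a minimal sketch):

import torch
x = torch.linspace(-10, 10, 1000)
manual = 1 / (1 + torch.exp(-x))
builtin = torch.sigmoid(x)
print(torch.allclose(manual, builtin))  # expected: True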

# Construct the training data: 365 points whose first coordinate (x1) is
# Gaussian around +1.5 and whose second coordinate (x2) is Gaussian around -1.5
x1 = torch.randn(365) + 1.5
x2 = torch.randn(365) - 1.5
data = zip(x1.data.numpy(), x2.data.numpy())
pos = []
neg = []
# Split the points on a noisy threshold around x1 = 1.5; the random +/-0.1 jitter
# makes the two classes overlap slightly near the boundary
def classification(data):
    for i in data:
        if i[0] > 1.5 + 0.1 * torch.rand(1).item() * (-1) ** torch.randint(1, 10, (1, 1)).item():
            pos.append(i)
        else:
            neg.append(i)
classification(data)
pos_x = [i[0] for i in pos]
pos_y = [i[1] for i in pos]
neg_x = [i[0] for i in neg]
neg_y = [i[1] for i in neg]
plt.scatter(pos_x,pos_y,c='r',marker="*")
plt.scatter(neg_x,neg_y,c='b',marker="^")
plt.show()

[Figure: scatter of the two classes (red stars = positive, blue triangles = negative)]
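
The split above is random, so the class sizes vary from run to run; a quick check (a minimal sketch):

print("positive samples:", len(pos), "negative samples:", len(neg))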

# Define the model
import torch.nn as nn
class LogisticRegression(nn.Module):
    def __init__(self):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(2, 1)  # two input features -> one logit
        self.sigmoid = nn.Sigmoid()
    def forward(self, x):
        return self.sigmoid(self.linear(x))
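# Aside (not from the book): dropping the Sigmoid here and training on raw
# logits with nn.BCEWithLogitsLoss is numerically more stable; the code below
# keeps the book's Sigmoid + BCELoss setup.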
# Training
model = LogisticRegression()
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
epochs = 500000
features = [[i[0], i[1]] for i in pos]
features.extend([[i[0], i[1]] for i in neg])
features = torch.Tensor(features)
label = [1 for i in range(len(pos))]
label.extend([0 for i in range(len(neg))])
label = torch.Tensor(label).unsqueeze(1)
#--------------------------
import numpy as np
loss_value = np.inf
acc_holder = []
step = 0
#--------------------------
for epoch in range(epochs):
    out = model(features)
    # 0.5 is the decision threshold: outputs >= 0.5 are the positive class (label 1),
    # outputs < 0.5 the negative class (label 0)
    loss = criterion(out, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Training-set classification accuracy
    acc = (out.ge(0.5).float().squeeze(1) == label.squeeze(1)).sum().float() / features.size()[0]
    #----------------------------
    if loss.item() < loss_value:
        torch.save(model, 'model_logistic.ckpt')  # save the best model so far
        loss_value = loss.item()  # .item() so we don't keep the graph alive
    if epoch % 1000 == 0:
        step += 1
        acc_holder.append([step, acc.item()])
        print("Epoch:[{}/{}],loss:[{:.6f}],acc:[{:.4f}]".format(epoch+1, epochs, loss.item(), acc.item()))
    #----------------------------
    if epoch % 10000 == 0:
        plt.scatter(pos_x, pos_y, c='r', marker="*")
        plt.scatter(neg_x, neg_y, c='b', marker="^")
        weight = model.linear.weight[0]
        w0 = weight[0]
        w1 = weight[1]
        b = model.linear.bias.data[0]
        # Draw the current decision boundary: w0*x + w1*y + b = 0
        test_x = torch.linspace(-10, 10, 500)
        test_y = (-w0 * test_x - b) / w1
        plt.plot(test_x.data.numpy(), test_y.data.numpy(), c="pink")
        plt.title("acc:{:.4f},loss:{:.4f}".format(acc.item(), loss.item()))
        plt.ylim(-5, 3)
        plt.xlim(-3, 5)
        plt.show()

Epoch:[1/500000],loss:[0.810626],acc:[0.3644]
[Figure: initial decision boundary (epoch 1)]

Epoch:[1001/500000],loss:[0.392639],acc:[0.8630]
Epoch:[2001/500000],loss:[0.325437],acc:[0.8849]
Epoch:[3001/500000],loss:[0.286039],acc:[0.9068]
Epoch:[4001/500000],loss:[0.258092],acc:[0.9233]
Epoch:[5001/500000],loss:[0.236918],acc:[0.9425]
Epoch:[6001/500000],loss:[0.220228],acc:[0.9534]
Epoch:[7001/500000],loss:[0.206680],acc:[0.9616]
Epoch:[8001/500000],loss:[0.195423],acc:[0.9699]
Epoch:[9001/500000],loss:[0.185890],acc:[0.9726]
Epoch:[10001/500000],loss:[0.177690],acc:[0.9753]
[Figure: decision boundary after 10000 epochs]

Epoch:[11001/500000],loss:[0.170542],acc:[0.9808]
Epoch:[12001/500000],loss:[0.164241],acc:[0.9836]
Epoch:[13001/500000],loss:[0.158635],acc:[0.9836]
Epoch:[14001/500000],loss:[0.153604],acc:[0.9836]
Epoch:[15001/500000],loss:[0.149058],acc:[0.9863]
Epoch:[16001/500000],loss:[0.144923],acc:[0.9890]
Epoch:[17001/500000],loss:[0.141141],acc:[0.9890]
Epoch:[18001/500000],loss:[0.137664],acc:[0.9918]
Epoch:[19001/500000],loss:[0.134454],acc:[0.9918]
Epoch:[20001/500000],loss:[0.131478],acc:[0.9918]
[Figure: decision boundary after 20000 epochs]

Epoch:[21001/500000],loss:[0.128708],acc:[0.9918]
Epoch:[22001/500000],loss:[0.126123],acc:[0.9918]
Epoch:[23001/500000],loss:[0.123701],acc:[0.9918]
Epoch:[24001/500000],loss:[0.121426],acc:[0.9918]
Epoch:[25001/500000],loss:[0.119285],acc:[0.9918]
Epoch:[26001/500000],loss:[0.117264],acc:[0.9918]
Epoch:[27001/500000],loss:[0.115352],acc:[0.9945]
Epoch:[28001/500000],loss:[0.113540],acc:[0.9945]
Epoch:[29001/500000],loss:[0.111819],acc:[0.9945]
Epoch:[30001/500000],loss:[0.110181],acc:[0.9945]
[Figure: decision boundary after 30000 epochs]

Epoch:[31001/500000],loss:[0.108621],acc:[0.9945]
Epoch:[32001/500000],loss:[0.107132],acc:[0.9945]
Epoch:[33001/500000],loss:[0.105709],acc:[0.9945]
Epoch:[34001/500000],loss:[0.104347],acc:[0.9945]
Epoch:[35001/500000],loss:[0.103041],acc:[0.9945]
Epoch:[36001/500000],loss:[0.101789],acc:[0.9945]
Epoch:[37001/500000],loss:[0.100586],acc:[0.9945]
Epoch:[38001/500000],loss:[0.099429],acc:[0.9945]
Epoch:[39001/500000],loss:[0.098314],acc:[0.9945]
Epoch:[40001/500000],loss:[0.097240],acc:[0.9945]
[Figure: decision boundary after 40000 epochs]

Epoch:[41001/500000],loss:[0.096205],acc:[0.9945]
Epoch:[42001/500000],loss:[0.095205],acc:[0.9945]
Epoch:[43001/500000],loss:[0.094240],acc:[0.9945]
Epoch:[44001/500000],loss:[0.093305],acc:[0.9945]
Epoch:[45001/500000],loss:[0.092401],acc:[0.9945]
Epoch:[46001/500000],loss:[0.091525],acc:[0.9973]
Epoch:[47001/500000],loss:[0.090677],acc:[0.9973]
Epoch:[48001/500000],loss:[0.089854],acc:[0.9973]
Epoch:[49001/500000],loss:[0.089055],acc:[0.9973]
(omitted…)
Epoch:[489001/500000],loss:[0.032381],acc:[0.9973]
Epoch:[490001/500000],loss:[0.032352],acc:[0.9973]
[Figure: decision boundary after 490000 epochs]

Epoch:[491001/500000],loss:[0.032322],acc:[0.9973]
Epoch:[492001/500000],loss:[0.032293],acc:[0.9973]
Epoch:[493001/500000],loss:[0.032263],acc:[0.9973]
Epoch:[494001/500000],loss:[0.032234],acc:[0.9973]
Epoch:[495001/500000],loss:[0.032205],acc:[0.9973]
Epoch:[496001/500000],loss:[0.032175],acc:[0.9973]
Epoch:[497001/500000],loss:[0.032146],acc:[0.9973]
Epoch:[498001/500000],loss:[0.032117],acc:[0.9973]
Epoch:[499001/500000],loss:[0.032088],acc:[0.9973]

Results

import pandas as pd
fig = plt.figure(figsize=(20, 15))
acc_df = pd.DataFrame(acc_holder, columns=["step", "acc"])  # the second column holds accuracy, not loss
print("The step (x1000 epochs) with the highest accuracy is",
      acc_df["step"][acc_df["acc"].astype('float64').idxmax()] * 1000)
x_times = acc_df["step"].values
plt.ylabel("Acc")
plt.xlabel("Times")
plt.plot(acc_df["acc"].values)
plt.xticks(x_times)
plt.show()

[Figure: accuracy vs. training step]

In fact, the loss keeps decreasing throughout training and never oscillates, even at the very end; the curve is remarkably smooth. The accuracy also rises steadily and levels off after about 47,000 epochs.
As you can see, the final fit is very successful.
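
Since the best model was saved during training, we can reload the checkpoint and classify new points (a minimal sketch; the sample coordinates are hypothetical, and on newer PyTorch versions loading a whole pickled module may require passing weights_only=False to torch.load):

import torch
# note: the LogisticRegression class must be defined/importable when loading
model = torch.load('model_logistic.ckpt')  # restores the whole saved module
model.eval()
with torch.no_grad():
    new_points = torch.Tensor([[3.0, -1.0], [0.0, -2.0]])  # hypothetical samples
    probs = model(new_points)                # probabilities of the positive class
    print(probs.squeeze(1))
    print(probs.ge(0.5).int().squeeze(1))    # predicted labels with the 0.5 cutoff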
