Linear regression:
Affine model:
$\hat{y} = x * \omega + b$
Loss function:
$loss = (\hat{y} - y)^2 = (x * \omega + b - y)^2$
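A minimal numeric sketch of the two formulas above (the sample values x = 1.0, y = 2.0, ω = 1.5, b = 0.5 are made up purely for illustration):

import torch

# Made-up sample and parameters, only to evaluate the formulas above
x = torch.tensor(1.0)
y = torch.tensor(2.0)
omega = torch.tensor(1.5)
b = torch.tensor(0.5)

y_hat = x * omega + b             # affine model: y_hat = x * omega + b
loss = (y_hat - y) ** 2           # squared-error loss: (y_hat - y)^2
print(y_hat.item(), loss.item())  # 2.0 0.0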
Classification task: the MNIST dataset
The MNIST database of handwritten digits:
- Training set: 60,000 examples
- Test set: 10,000 examples
- Classes: 10
import torchvision

# Download MNIST (if needed) into ../dataset/mnist and load the two splits
train_set = torchvision.datasets.MNIST(root='../dataset/mnist', train=True, download=True)
test_set = torchvision.datasets.MNIST(root='../dataset/mnist', train=False, download=True)
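A quick sanity check of the splits just loaded (assuming the download succeeded; with no transform specified, each sample is a PIL image plus an integer label):

print(len(train_set))  # 60000
print(len(test_set))   # 10000

image, label = train_set[0]  # (PIL.Image, int) pair
print(image.size, label)     # (28, 28) and a digit in 0..9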
In a classification task, the model's output is the probability that the input belongs to each class.
How do we classify?
Apply the classification (sigmoid) function:
$\sigma(x) = \frac{1}{1 + e^{-x}}$
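A short sketch showing how the sigmoid squashes any real number into (0, 1), which is what lets the output be read as a probability:

import torch

z = torch.tensor([-4.0, -1.0, 0.0, 1.0, 4.0])
print(torch.sigmoid(z))
# tensor([0.0180, 0.2689, 0.5000, 0.7311, 0.9820])
# large negative inputs map near 0, large positive inputs near 1, and 0 maps to 0.5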
Logistic regression:
Loss function for a binary classification task (binary cross-entropy):
$loss = -\left(y\log\hat{y} + (1 - y)\log(1 - \hat{y})\right)$
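A small numeric check of the single-sample loss (the values y = 1 and ŷ = 0.8 are made up for illustration):

import torch

y = torch.tensor(1.0)      # true label
y_hat = torch.tensor(0.8)  # predicted probability

# loss = -(y * log(y_hat) + (1 - y) * log(1 - y_hat))
loss = -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat))
print(loss.item())  # about 0.2231, i.e. -log(0.8)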
Mini-batch loss function for a binary classification task:
$loss = -\frac{1}{N}\sum_{n=1}^{N}\left[y_n\log\hat{y}_n + (1 - y_n)\log(1 - \hat{y}_n)\right]$
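Before the full training script below, a quick sanity check (with made-up predictions) that this mini-batch formula matches torch.nn.BCELoss, which averages over the batch by default (reduction='mean'):

import torch

y = torch.tensor([[0.0], [0.0], [1.0]])
y_hat = torch.tensor([[0.2], [0.4], [0.9]])

manual = -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat)).mean()
builtin = torch.nn.BCELoss(reduction='mean')(y_hat, y)
print(manual.item(), builtin.item())  # both print the same value (about 0.2798)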
import torch

# Data: inputs x and binary labels y (0 = fail, 1 = pass)
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[0], [0], [1]])

# Design the model as a class
class LogisticRegressionModel(torch.nn.Module):
    def __init__(self):
        super(LogisticRegressionModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        # Sigmoid turns the linear output into a probability
        y_pred = torch.sigmoid(self.linear(x))
        return y_pred

model = LogisticRegressionModel()

criterion = torch.nn.BCELoss(reduction='sum')            # loss: summed binary cross-entropy
optimizer = torch.optim.SGD(model.parameters(), lr=0.7)  # optimizer

# Training loop
for epoch in range(100):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
Result:
import numpy as np
import matplotlib.pyplot as plt

# Evaluate the trained model on 200 evenly spaced points in [0, 10]
x = np.linspace(0, 10, 200)
x_t = torch.Tensor(x).view((200, 1))
y_t = model(x_t)
y = y_t.data.numpy()

# Plot the predicted probability curve and the 0.5 decision threshold
plt.plot(x, y)
plt.plot([0, 10], [0.5, 0.5], c='r')
plt.xlabel('Hours')
plt.ylabel('Probability')
plt.grid()
plt.show()
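A brief usage example: once training has finished, the model can be queried for a single input (the value 4.0 hours here is arbitrary):

# Probability predicted for one input value
with torch.no_grad():
    p = model(torch.Tensor([[4.0]]))
print(p.item())  # > 0.5 means the positive class is predicted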