PyTorch 02

Basic Networks

import torch
import torch.nn as nn

x = torch.tensor([[6,2],[5,2],[1,3],[7,6]]).float()
y = torch.tensor([1,5,2,5]).float()

We want to find a function, depending on some parameters, that takes us from x to y. A first attempt is to compose two linear layers:

M1 = nn.Linear(2,8,bias=False)
M2 = nn.Linear(8,1,bias=False)
# pass x through both layers in sequence
M2(M1(x)).squeeze()
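Here M1 maps the 4×2 input to a 4×8 tensor, M2 maps that to 4×1, and squeeze() drops the trailing dimension so the result has shape (4,), matching y.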

To optimize these weights, we first wrap the two layers in an nn.Module:

class MyNeuralNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.Matrix1 = nn.Linear(2,8,bias=False)
        self.Matrix2 = nn.Linear(8,1,bias=False)
    def forward(self,x):
        x = self.Matrix1(x)
        x = self.Matrix2(x)
        return x.squeeze() # drop the trailing dimension so the output matches y's shape
f = MyNeuralNet()
yhat = f(x)
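Calling f(x) invokes forward through nn.Module's __call__, so yhat is the network's prediction, again with shape (4,).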

We now adjust the parameters a (the entries of both weight matrices) so that yhat and y are similar. Mean squared error is a standard choice of loss:

L = nn.MSELoss()
L(yhat, y)
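MSELoss simply averages the squared differences, so a quick sanity check computes the same value by hand:

torch.mean((yhat - y) ** 2)  # identical to L(yhat, y)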

Gradient descent: repeatedly nudge each parameter a in the direction of the negative gradient, a ← a − lr · ∂L/∂a, until L reaches a minimum.
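As a minimal sketch of what one such update looks like by hand, using the same f and L as above (the learning rate 0.001 mirrors the optimizer used below):

lr = 0.001
loss = L(f(x), y)
loss.backward()              # populates p.grad for every parameter
with torch.no_grad():        # update weights without tracking gradients
    for p in f.parameters():
        p -= lr * p.grad
        p.grad = None        # clear the gradient for the next pass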

Each pass of the full dataset x is called an epoch. The SGD optimizer packages the update step above, so the training loop becomes:

from torch.optim import SGD
import matplotlib.pyplot as plt

opt = SGD(f.parameters(), lr=0.001)
losses = []
for _ in range(50):
    opt.zero_grad() # flush the previous epoch's gradients
    loss_value = L(f(x), y) # compute the loss
    loss_value.backward() # compute gradients
    opt.step() # take one gradient-descent step
    losses.append(loss_value.item())
plt.plot(losses)
plt.ylabel(r'Loss $L(y,\hat{y};a)$')
plt.xlabel('Epochs')
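After training, the predictions can be compared with the targets; the exact values depend on the random weight initialization:

print(f(x))  # should now be closer to tensor([1., 5., 2., 5.])
print(y)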