PyTorch Notes: Building a Deep Neural Network with the Quick-Build Method

The quick-build method stacks layers with torch.nn.Sequential instead of subclassing torch.nn.Module. The script below trains a small fully connected network (despite the file name cnn.py, there are no convolutional layers) to learn the XNOR function.

import torch

x = torch.Tensor([[1, 1], [1, 0], [0, 1], [0, 0]])  # training inputs
y = torch.Tensor([[1], [0], [0], [1]])               # labels: the XNOR truth table
# Quick-build method: stack the layers in torch.nn.Sequential
net=torch.nn.Sequential(
    torch.nn.Linear(2,10),
    torch.nn.ReLU(),
    torch.nn.Linear(10,1),
    torch.nn.Sigmoid()
)
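# Architecture: 2 inputs -> Linear(2, 10) -> ReLU -> Linear(10, 1) -> Sigmoid,
# so the single output is a probability in (0, 1), which is what BCELoss expects.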
print('--------------------------------------')
print('Current network:')
print(net)
loss_function = torch.nn.BCELoss()  # BCELoss requires its inputs (the predicted probabilities, not the labels) to lie in (0, 1); values outside that range raise an error
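# An optional alternative (not in the original post): torch.nn.BCEWithLogitsLoss
# fuses the Sigmoid and BCELoss into one numerically stable operation. To use it,
# remove the final torch.nn.Sigmoid() layer from the Sequential model and write:
#     loss_function = torch.nn.BCEWithLogitsLoss()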
print('--------------------------------------')
print('Loss function:')
print(loss_function)
optimizer = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9)  # momentum does not adjust the learning rate; it accumulates past gradients to speed up convergence
print('--------------------------------------')
print('Optimizer:')
print(optimizer)
for i in range(100):
    out = net(x)                  # forward pass: the network's output for the training inputs
    loss = loss_function(out, y)  # error between the output and the target
    print("loss is %f" % loss.item())
    optimizer.zero_grad()         # clear old gradients; otherwise they accumulate across iterations
    loss.backward()               # backpropagate the error
    optimizer.step()              # update the parameters
print(out)  # final predictions
print(y)    # targets
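For comparison, here is a minimal sketch of the same model written the standard way, by subclassing torch.nn.Module. The class name Net and the attribute names hidden and out are illustrative choices, not part of the original post:

class Net(torch.nn.Module):  # illustrative name, not from the original post
    def __init__(self):
        super().__init__()
        self.hidden = torch.nn.Linear(2, 10)  # same layers as the Sequential version
        self.out = torch.nn.Linear(10, 1)

    def forward(self, x):
        x = torch.relu(self.hidden(x))     # activations are applied in forward()
        return torch.sigmoid(self.out(x))

net = Net()  # interchangeable with the Sequential model above

Sequential saves this boilerplate whenever the model is a plain chain of layers; the subclass form becomes necessary once the forward pass branches or reuses layers.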

Output:

H:\ProgramData\Anaconda3\python.exe D:/PycharmProjects/untitled/cnn.py
--------------------------------------
Current network:
Sequential(
  (0): Linear(in_features=2, out_features=10, bias=True)
  (1): ReLU()
  (2): Linear(in_features=10, out_features=1, bias=True)
  (3): Sigmoid()
)
--------------------------------------
Loss function:
BCELoss()
--------------------------------------
Optimizer:
SGD (
Parameter Group 0
    dampening: 0
    lr: 0.1
    momentum: 0.9
    nesterov: False
    weight_decay: 0
)
loss is 0.714357
loss is 0.712673
loss is 0.709792
loss is 0.706330
loss is 0.702862
loss is 0.699791
loss is 0.697298
loss is 0.695333
loss is 0.693696
loss is 0.692191
loss is 0.690599
loss is 0.688796
loss is 0.687048
loss is 0.684861
loss is 0.682292
loss is 0.679705
loss is 0.676963
loss is 0.674167
loss is 0.671422
loss is 0.668478
loss is 0.665219
loss is 0.661502
loss is 0.657435
loss is 0.653398
loss is 0.648618
loss is 0.643798
loss is 0.637802
loss is 0.630523
loss is 0.622880
loss is 0.614692
loss is 0.606852
loss is 0.598942
loss is 0.590934
loss is 0.581764
loss is 0.571460
loss is 0.561200
loss is 0.550226
loss is 0.538725
loss is 0.529060
loss is 0.518752
loss is 0.506867
loss is 0.494870
loss is 0.482732
loss is 0.471275
loss is 0.459368
loss is 0.448011
loss is 0.434552
loss is 0.423201
loss is 0.413056
loss is 0.403062
loss is 0.392866
loss is 0.382960
loss is 0.372989
loss is 0.365298
loss is 0.355998
loss is 0.345202
loss is 0.336765
loss is 0.330361
loss is 0.319713
loss is 0.310890
loss is 0.301235
loss is 0.291678
loss is 0.281841
loss is 0.274955
loss is 0.261970
loss is 0.251758
loss is 0.240970
loss is 0.230600
loss is 0.220190
loss is 0.208684
loss is 0.198305
loss is 0.187689
loss is 0.178409
loss is 0.166813
loss is 0.157619
loss is 0.148150
loss is 0.139574
loss is 0.132218
loss is 0.124349
loss is 0.117668
loss is 0.111024
loss is 0.104253
loss is 0.098694
loss is 0.093254
loss is 0.087893
loss is 0.082427
loss is 0.078319
loss is 0.074564
loss is 0.070603
loss is 0.066771
loss is 0.062862
loss is 0.060213
loss is 0.057664
loss is 0.055006
loss is 0.052303
loss is 0.049804
loss is 0.048023
loss is 0.046113
loss is 0.043984
loss is 0.042172
tensor([[0.9545],
        [0.0267],
        [0.0605],
        [0.9679]], grad_fn=<SigmoidBackward>)
tensor([[1.],
        [0.],
        [0.],
        [1.]])

Process finished with exit code 0
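The final outputs (roughly 0.95, 0.03, 0.06, 0.97) already sit close to the targets. As one possible follow-up, not in the original script, the sigmoid probabilities can be thresholded at 0.5 to obtain hard 0/1 predictions:

with torch.no_grad():              # inference only, no gradients needed
    pred = (net(x) > 0.5).float()  # threshold the probabilities at 0.5
print(pred)  # for this run: tensor([[1.], [0.], [0.], [1.]]), matching y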
