Implementing a single-neuron NN with forward pass, backpropagation, and gradient computation

This post walks through a hand-rolled implementation of a very simple neural network: a sigmoid activation for a binary-classification problem, the BCE loss function, and SGD for parameter updates. Training and testing step by step, we watch the loss fall and end up with decent predictions.

Bonus post today! A copyright header has been added, and the full code is at the very end.

# -*- coding: utf-8 -*-
#
# Copyright (C) 2021 #
# @Time    : 2021/8/10 下午8:29
# @Author  : Erebor 
# @Email   : ******@gmail.com
# @File    : NN.py
# @Software: PyCharm

It is built on the following two imports (in the end numpy was never used; this implementation really is that hand-rolled):

import numpy as np
import math

Training data and test data (obfuscated here; feel free to plug in your own numbers):

# train
# each entry is [input x, label y]
train_list = [
    [0.6, 1],...
]
# test
test_list = [[0.66, 1], ...]

First, this is clearly a binary-classification problem, so we use the sigmoid function as the activation, i.e., we map the linear output to a probability.

def sigmoid(x):
    #sigmoid = 1/(1+e^-x)
    return 1/(1+math.pow(math.e,-x))
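
A side note from me rather than the original post: math.pow(math.e, -x) overflows for large negative x. A numerically stable sketch (sigmoid_stable is a hypothetical name, not used anywhere in the post's code) splits the two cases:

def sigmoid_stable(x):
    # hypothetical variant: avoid computing e^-x when x is very negative
    if x >= 0:
        return 1 / (1 + math.exp(-x))
    z = math.exp(x)
    return z / (1 + z)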

Next, the loss. We use BCE (binary cross-entropy), which pairs naturally with sigmoid (oops, looks like "func" got misspelled below).

def loss_fuc(yp,y):
    #binary cross-entropy (BCE) loss (note: log base 2 here)
    return -(y * math.log(yp,2) + (1 - y) * math.log((1-yp),2))
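
One caveat, again mine and not the original's: if yp ever reaches exactly 0 or 1, math.log raises a ValueError. A clipped sketch (loss_fuc_safe and eps are hypothetical names):

def loss_fuc_safe(yp, y, eps=1e-12):
    # hypothetical variant: keep yp strictly inside (0, 1) before taking the log
    yp = min(max(yp, eps), 1 - eps)
    return -(y * math.log(yp, 2) + (1 - y) * math.log(1 - yp, 2))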

With the loss in place, we can compute its gradient with respect to the parameters. In this implementation the input x is one-dimensional, so a single linear function y = wx + b is all we need.

def backward_forw(x,y,output):
    #backprop w.r.t. w
    first = output - 1
    second = sigmoid(y) * (1 - sigmoid(y))
    third = x

    return first*second*third

def backward_forb(x,y,output):
    #backprop w.r.t. b
    first = output - 1
    second = sigmoid(y) * (1 - sigmoid(y))
    third = -1

    return first*second*third
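
If you want to sanity-check a hand-derived gradient, a common trick is a finite-difference check. The sketch below is my own addition (grad_check_w is a hypothetical helper, not part of the post): nudge w a little, measure the numerical slope of the loss, and compare it with whatever the analytic formula returns.

def grad_check_w(w, b, x, y, h=1e-5):
    # numerical dLoss/dw via central differences around the current w
    loss_plus = loss_fuc(sigmoid((w + h) * x + b), y)
    loss_minus = loss_fuc(sigmoid((w - h) * x + b), y)
    return (loss_plus - loss_minus) / (2 * h)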

With the gradients in hand, we use SGD to update the parameters:

def optimizer(theta,learning_rate,gradient):
    #optimizer: SGD
    return theta - learning_rate * gradient
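
For example, a single update step with made-up numbers (purely illustrative, not taken from the actual training run):

w = 3.0
w = optimizer(w, 0.01, 0.25)  # 3.0 - 0.01 * 0.25 = 2.9975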

Finally, the model definition itself:

class Net():
    def __init__(self):
        self.w = 3
        self.b = 0
        self.w_best = 0 #stores the best values found during training
        self.b_best = 0


    def train(self,train_list):
        best_loss = 10000
        for i in enumerate(train_list):
            print("now we use x" + str(i) + "to train")
            output = self.forward(i[1][0])  #get the output

            loss = loss_fuc(output,i[1][1]) #compute the loss
            if loss < best_loss:
                best_loss = loss
                self.w_best = self.w
                self.b_best = self.b
                print("saved!")
            print("now loss is :" , loss)

            gradient_w = backward_forw(i[1][0],i[1][1],output)#gradient of the loss w.r.t. the parameters
            gradient_b = backward_forb(i[1][0],i[1][1],output)

            self.w = optimizer(self.w,0.01,gradient_w) #compute the new parameter values
            self.b = optimizer(self.b,0.01,gradient_b)


    def forward(self,x):
        output = self.w * x + self.b
        return sigmoid(output)

Define a test function:

def test(net, test_list):
    for i, test in enumerate(test_list):
        y = net.forward(test[0])
        label = 0
        if(y > 0.7) : #predict label = 1 when the output probability > 0.7
            label = 1
        print("case {}, label {}, result {}".format(i + 1, test[1], label))

Finally, the top-level calls:

net = Net()
net.train(train_list)
net.w = net.w_best
net.b = net.b_best
test(net, test_list)

The full code is as follows:

import math

# train: each entry is [input x, label y]
train_list = [
    [0.6, 1],...
]
# test
test_list = [[0.66, 1], ...]


def sigmoid(x):
    #sigmoid = 1/(1+e^-x)
    return 1/(1+math.pow(math.e,-x))


def loss_fuc(yp,y):
    #binary cross-entropy (BCE) loss (note: log base 2 here)
    return -(y * math.log(yp,2) + (1 - y) * math.log((1-yp),2))

def backward_forw(x,y,output):
    #backprop w.r.t. w
    first = output - 1
    second = sigmoid(y) * (1 - sigmoid(y))
    third = x

    return first*second*third

def backward_forb(x,y,output):
    #backprop w.r.t. b
    first = output - 1
    second = sigmoid(y) * (1 - sigmoid(y))
    third = -1

    return first*second*third

def optimizer(theta,learning_rate,gradient):
    #optimizer: SGD
    return theta - learning_rate * gradient


class Net():
    def __init__(self):
        self.w = 3
        self.b = 0
        self.w_best = 0
        self.b_best = 0


    def train(self,train_list):
        best_loss = 10000
        for i in enumerate(train_list):
            print("now we use x" + str(i) + "to train")
            output = self.forward(i[1][0])  #get the output

            loss = loss_fuc(output,i[1][1]) #compute the loss
            if loss < best_loss:
                best_loss = loss
                self.w_best = self.w
                self.b_best = self.b
                print("saved!")
            print("now loss is :" , loss)

            gradient_w = backward_forw(i[1][0],i[1][1],output)#gradient of the loss w.r.t. the parameters
            gradient_b = backward_forb(i[1][0],i[1][1],output)

            self.w = optimizer(self.w,0.01,gradient_w) #compute the new parameter values
            self.b = optimizer(self.b,0.01,gradient_b)


    def forward(self,x):
        output = self.w * x + self.b
        return sigmoid(output)



# test helper
def test(net, test_list):
    for i, test in enumerate(test_list):
        y = net.forward(test[0])
        label = 0
        if(y > 0.7) :
            label = 1
        print("case {}, label {}, result {}".format(i + 1, test[1], label))


net = Net()
net.train(train_list)
net.w = net.w_best
net.b = net.b_best
test(net, test_list)

Experimental results:

now we use x(0, [0.6, 1])to train
saved!
now loss is : 0.2207000400730107
now we use x(1, [0.7, 1])to train
saved!
now loss is : 0.1666849049759194
now we use x(2, [0.8, 1])to train
saved!
now loss is : 0.12530681773190233
now we use x(3, [0.9, 1])to train
saved!
now loss is : 0.09386105442655676
now we use x(4, [1.0, 1])to train
saved!
now loss is : 0.0701118547623072
now we use x(5, [1.1, 1])to train
saved!
now loss is : 0.052261336324321735
now we use x(6, [1.2, 1])to train
saved!
now loss is : 0.038893474290203414
now we use x(7, [1.3, 1])to train
saved!
now loss is : 0.028910291824353805
now we use x(8, [1.4, 1])to train
saved!
now loss is : 0.021470338574518008
now we use x(9, [1.5, 1])to train
saved!
now loss is : 0.015934376960938645
...
(the rest of the training log is omitted)
case 1, label 1, result 1
case 2, label 1, result 1
case 3, label 1, result 1
case 4, label 1, result 1
case 5, label 1, result 1

Got a little stronger today; calling it a day!
