[Backpropagation (BP) Algorithm]

(Figure: the forward path through w0_00)
This post computes the gradient of only one weight, w0_00, so the figure shows only the forward path that involves w0_00. The point of the BP (backpropagation) algorithm is to compute the gradients of a network's parameters more conveniently, by reusing intermediate results through the chain rule.

Network definition

  • z1=x0*w0_00
  • a1=sigmoid(z1)
  • z2=a1*w1_00
  • a2=sigmoid(z2)
  • p1=a2
  • z3=a1*w1_01
  • a3=sigmoid(z3)
  • p2=a3
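
For the network just defined, applying the chain rule to the single weight w0_00 gives the expression below. This is a sketch assuming the squared-error loss L = (p1 - y1)^2 + (p2 - y2)^2, which is what the code below effectively uses; the NumPy code then evaluates it term by term.

\frac{\partial L}{\partial w_{0,00}} = \Big[\, 2(p_1 - y_1)\,\sigma'(z_2)\,w_{1,00} + 2(p_2 - y_2)\,\sigma'(z_3)\,w_{1,01} \,\Big]\,\sigma'(z_1)\,x_0, \qquad \sigma'(z) = \sigma(z)\bigl(1 - \sigma(z)\bigr)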
import numpy as np

W0 = np.array([[0.1, 0.8], [0.4, 0.6]])
W1 = np.array([[0.1, 0.8], [0.4, 0.6]])

X = np.array([[0.35, 0.9]])   # input
y = np.array([[0.5, 0.5]])    # target output

def sigmoid(x, deriv=False):
    # With deriv=True, x is assumed to already be a sigmoid output,
    # so sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)) = x * (1 - x).
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

# Forward pass
z1 = np.dot(X, W0)
a1 = sigmoid(z1)
z2 = np.dot(a1, W1)
a2 = sigmoid(z2)
y_hat = a2
error = (y_hat - y) ** 2

# Backward pass, following the chain rule for W0[0][0] only
error_a2_delta = 2 * (y_hat[0][0] - y[0][0])                 # dL/dp1
error_a3_delta = 2 * (y_hat[0][1] - y[0][1])                 # dL/dp2

error_z2_delta = error_a2_delta * sigmoid(a2[0][0], True)    # dL/dz2
error_z3_delta = error_a3_delta * sigmoid(a2[0][1], True)    # dL/dz3
error_a1_delta = error_z2_delta * W1[0][0] + error_z3_delta * W1[0][1]   # dL/da1

error_z1_delta = error_a1_delta * sigmoid(a1[0][0], True)    # dL/dz1
W0_delta_0 = error_z1_delta * X[0][0]                        # dL/dW0[0][0]
print(W0_delta_0)
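
The same backward pass can also be written in vectorized form for the full weight matrices. The sketch below is not from the original post; it reuses the arrays defined above, and its grad_W0[0][0] entry should reproduce W0_delta_0.

# Vectorized backward pass (sketch); reuses X, y, W1, a1, a2 and sigmoid from above
delta2 = 2 * (a2 - y) * sigmoid(a2, True)      # dL/dz2, shape (1, 2)
grad_W1 = a1.T @ delta2                        # dL/dW1, shape (2, 2)
delta1 = (delta2 @ W1.T) * sigmoid(a1, True)   # dL/dz1, shape (1, 2)
grad_W0 = X.T @ delta1                         # dL/dW0, shape (2, 2)
print(grad_W0[0][0], W0_delta_0)               # the two numbers should agree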

TensorFlow 2 implementation

import numpy as np
import tensorflow as tf

W0 = tf.Variable(np.array([[0.1, 0.8], [0.4, 0.6]]))
W1 = tf.Variable(np.array([[0.1, 0.8], [0.4, 0.6]]))
X = np.array([[0.35, 0.9]])   # input layer
y = np.array([[0.5, 0.5]])    # target output

def net(W0, W1, X):
    # Same two-layer forward pass wrapped as a function (kept for reference;
    # the tape below rebuilds it step by step so z1, a1, z2, a2 are named).
    layer1 = tf.matmul(X, W0)
    a1 = tf.nn.sigmoid(layer1)
    layer2 = tf.matmul(a1, W1)
    a2 = tf.nn.sigmoid(layer2)
    return a2

def loss(y_hat, y):
    return (y_hat - y) ** 2

# persistent=True allows t.gradient() to be called more than once
with tf.GradientTape(persistent=True) as t:
    # Variables W0 and W1 are watched automatically; intermediate tensors
    # such as a1 are recorded on the tape, so no explicit t.watch() is needed.
    z1 = tf.matmul(X, W0)
    a1 = tf.nn.sigmoid(z1)
    z2 = tf.matmul(a1, W1)
    a2 = tf.nn.sigmoid(z2)
    print(a1)                  # debug: inspect the hidden activations
    l = loss(a2, y)

grads1 = t.gradient(z2, [a1])  # gradient of (the summed) z2 w.r.t. the hidden activation a1
grads = t.gradient(l, [W0])    # dL/dW0; its [0][0] entry matches W0_delta_0 above
print(grads)
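
Once the gradients come back from the tape, they can be applied directly to the variables. A minimal sketch of one gradient-descent step, assuming a hand-picked learning rate of 0.5 (not part of the original post):

lr = 0.5                       # assumed learning rate, for illustration only
W0.assign_sub(lr * grads[0])   # one plain gradient-descent step on W0
print(W0)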
