Neural Network Learning — A Hand-Written Neural Network in NumPy

1 Initial Definitions

import numpy as np
N, data_in, hidden, data_out = 64, 100, 100, 100
epochs = 1000

2 Initializing the Variables

# Model: h = x · w1, h_relu = relu(h), y = h_relu · w2
x = np.random.randn(N, data_in)
y_pred = np.random.randn(N, data_out)  # here y_pred holds the target (see the note in section 3)
w1 = np.random.randn(data_in, hidden)
w2 = np.random.randn(hidden, data_out)
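The shapes implied by the commented model can be checked in isolation before writing the training loop. A minimal sketch (the fixed seed is my own addition, not from the original):

```python
import numpy as np

np.random.seed(0)  # assumption: seeded only for reproducibility
N, data_in, hidden, data_out = 64, 100, 100, 100

x = np.random.randn(N, data_in)
w1 = np.random.randn(data_in, hidden)
w2 = np.random.randn(hidden, data_out)

h = x.dot(w1)              # (N, hidden)
h_relu = np.maximum(0, h)  # (N, hidden), elementwise ReLU
y = h_relu.dot(w2)         # (N, data_out)

print(h.shape, h_relu.shape, y.shape)
```

Tracing the shapes this way makes the gradient shapes in the backward pass easy to verify: each `grad_*` must match the shape of the array it differentiates with respect to.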

3 The Neural Network by Hand

# It depends on how you read the names: given x and y_pred, for supervised
# training here, y_pred acts as the label, and the generated output y should
# get arbitrarily close to y_pred.
# With conventional naming it would be the reverse: given x, w1, w2, the
# predicted y_pred should get arbitrarily close to the label y.
import numpy as np
N, data_in, hidden, data_out = 16, 1000, 100, 10
epochs = 1000

x = np.random.randn(N, data_in)
y_pred = np.random.randn(N, data_out)
w1 = np.random.randn(data_in, hidden)
w2 = np.random.randn(hidden, data_out)

# learning rate
learning_rate = 1e-6
for i in range(epochs):
    # forward
    h = x.dot(w1)            # N * hidden
    a = np.maximum(0, h)     # N * hidden
    y = a.dot(w2)            # N * data_out

    # loss
    loss = np.square(y - y_pred).sum()
    if i % 10 == 0:
        print(i, loss)

    # backward
    grad_y = 2 * (y - y_pred)  # N * data_out
    grad_w2 = a.T.dot(grad_y)  # hidden * data_out
    grad_a = grad_y.dot(w2.T)  # N * hidden
    grad_h = grad_a.copy()
    grad_h[h < 0] = 0          # N * hidden; ReLU passes gradient only where h > 0
    grad_w1 = x.T.dot(grad_h)  # data_in * hidden
    
    # learning
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
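One way to gain confidence in the hand-derived gradients above is a finite-difference check: perturb a single weight and compare the numeric slope of the loss with the analytic gradient. A minimal sketch (the tiny dimensions, the seed, and eps are assumptions, not from the original post):

```python
import numpy as np

np.random.seed(1)  # assumption: seeded for reproducibility
N, data_in, hidden, data_out = 4, 5, 6, 3
x = np.random.randn(N, data_in)
y_target = np.random.randn(N, data_out)
w1 = np.random.randn(data_in, hidden)
w2 = np.random.randn(hidden, data_out)

def loss_fn(w1, w2):
    h = x.dot(w1)
    a = np.maximum(0, h)
    y = a.dot(w2)
    return np.square(y - y_target).sum()

# analytic gradients, same formulas as the training loop
h = x.dot(w1)
a = np.maximum(0, h)
y = a.dot(w2)
grad_y = 2 * (y - y_target)
grad_w2 = a.T.dot(grad_y)
grad_a = grad_y.dot(w2.T)
grad_h = grad_a * (h > 0)   # equivalent to copying and zeroing where h < 0
grad_w1 = x.T.dot(grad_h)

# numeric gradient for one entry of w1 via central differences
eps = 1e-5
w1p = w1.copy(); w1p[0, 0] += eps
w1m = w1.copy(); w1m[0, 0] -= eps
numeric = (loss_fn(w1p, w2) - loss_fn(w1m, w2)) / (2 * eps)
print(abs(numeric - grad_w1[0, 0]))
```

The difference should be tiny; a large gap usually points at a mistake like masking the ReLU gradient by the sign of the gradient instead of the sign of h.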