Study Notes on Tariq Rashid's "Make Your Own Neural Network" (《Python神经网络》)

These are study notes on Tariq Rashid's "Make Your Own Neural Network". They cover the basic structure of a neural network, including activation functions such as the sigmoid, the roles of the input and hidden layers, the importance of weights, and how a network can be expressed with matrices. They also cover error backpropagation and gradient-descent weight updates, outlining how a neural network is trained.

Activation function: acts as a threshold; the node only produces an output once the combined input reaches the threshold. The simplest example is the step function; a smoother choice is the sigmoid function

$$y = \frac{1}{1+e^{-x}}$$
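A minimal sketch contrasting the two, assuming a threshold of 0.0 for the step function (an illustrative choice); scipy.special.expit computes the sigmoid and is what the class below uses:

import numpy
import scipy.special

def step(x, threshold=0.0):
    # fires 1 only when the input reaches the threshold, else 0
    return (x >= threshold).astype(float)

x = numpy.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(step(x))                 # [0. 0. 1. 1. 1.]
print(scipy.special.expit(x))  # smooth values strictly between 0 and 1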

Input layer: merely represents the input signals; no activation function is applied;

Hidden layer: first forms the weighted sum of its inputs, then applies the activation function.

Weights: the weight applied to each input is what the neural network learns; continually refining the weights is what drives the network toward the best result.

Matrix form: the weighted sums for a whole layer can be written as a single matrix product, $X = WI$:

$$X = WI = \begin{pmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \end{pmatrix} \begin{pmatrix} \text{input}_1 \\ \text{input}_2 \end{pmatrix}$$
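As a quick check, a minimal numpy sketch of the same product (the weight and input values are made up for illustration):

import numpy

W = numpy.array([[0.9, 0.3],   # w11, w12
                 [0.2, 0.8]])  # w21, w22
I = numpy.array([[1.0],        # input1
                 [0.5]])       # input2
X = numpy.dot(W, I)            # combined, weighted inputs into the next layer
print(X)  # [[1.05] [0.6]]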

Adjusting for errors: the error at each output is propagated backwards through the network via the weights, i.e. backpropagation.

Backpropagating the error from multiple output nodes: each output node's error is split among the links feeding it, in proportion to the link weights, and recombined at each hidden node.

[Figures from the book illustrating the error splitting are omitted.]
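In matrix form this splitting and recombining is just a multiplication by the transposed weight matrix (the book notes that dropping the normalising fractions still works, leaving a plain transpose), which is exactly what train() does below. A minimal sketch with made-up weights and errors:

import numpy

who = numpy.array([[2.0, 1.0],   # weights from the two hidden nodes into output 1
                   [3.0, 4.0]])  # ... and into output 2
output_errors = numpy.array([[0.8],
                             [0.5]])
# each hidden node collects a share of the error from every output it feeds,
# proportional to the connecting weight
hidden_errors = numpy.dot(who.T, output_errors)
print(hidden_errors)  # [[3.1] [2.8]]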

Updating the weights: gradient descent. With error $E = (t_k - o_k)^2$ at output node $k$, the gradient with respect to weight $w_{jk}$ is

$$\frac{\partial E}{\partial w_{jk}} = \frac{\partial (t_k - o_k)^2}{\partial w_{jk}}$$
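Expanding with the chain rule, using $o_k = \sigma\!\left(\sum_j w_{jk}\,o_j\right)$ and $\sigma'(x) = \sigma(x)\,(1-\sigma(x))$, a sketch of the derivation the book walks through:

$$\frac{\partial E}{\partial w_{jk}} = -2\,(t_k - o_k)\; o_k (1 - o_k)\; o_j$$

Dropping the constant factor 2 and descending the gradient with learning rate $\alpha$ gives the update that train() implements below:

$$w_{jk} \leftarrow w_{jk} + \alpha\,(t_k - o_k)\; o_k (1 - o_k)\; o_j$$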


Making a Neural Network in Python

  1. Initialise the neural network
import numpy
import scipy.special

# neural network class definition
class neuralNetwork:

    # initialise the neural network
    def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):  # number of input, hidden and output nodes, plus the learning rate
        # set number of nodes in each input, hidden, output layer
        self.inodes = inputnodes
        self.hnodes = hiddennodes
        self.onodes = outputnodes

        # link weight matrices, wih and who
        # weights inside the arrays are w_i_j, where link is from node i to node j in the next layer
        # w11 w21
        # w12 w22 etc
        self.wih = numpy.random.normal(0.0, pow(self.inodes, -0.5), (self.hnodes, self.inodes))
        self.who = numpy.random.normal(0.0, pow(self.hnodes, -0.5), (self.onodes, self.hnodes))
        # weights are sampled from a normal distribution centred at 0.0,
        # with standard deviation 1 / sqrt(number of incoming links)
        # learning rate
        self.lr = learningrate

        # the sigmoid function is used as the activation function
        self.activation_function = lambda x: scipy.special.expit(x)

        pass

    # train the neural network
    def train(self, inputs_list, targets_list):
        # (filled in in step 2 below)
        pass

    # query the neural network
    def query(self, inputs_list):
        # convert inputs list to 2d array
        inputs = numpy.array(inputs_list, ndmin=2).T

        # calculate signals into hidden layer
        hidden_inputs = numpy.dot(self.wih, inputs) #X=WI
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)

        # calculate signals into final output layer
        final_inputs = numpy.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)

        return final_outputs

input_nodes = 3
hidden_nodes = 3
output_nodes = 3

# learning rate is 0.3
learning_rate = 0.3

# create instance of neural network
n = neuralNetwork(input_nodes,hidden_nodes,output_nodes, learning_rate)

print(n.query([1, 0.5, -1.5]))

Output: [[0.48701032] [0.42533232] [0.60665723]]. With random, untrained weights, all three outputs are, as expected, close to 0.5.

  2. Train the neural network
    def train(self, inputs_list, targets_list):
        # convert inputs list to 2d array
        inputs = numpy.array(inputs_list, ndmin=2).T
        # convert to column vectors: the forward pass produces column vectors, and the error is the difference of two column vectors
        targets = numpy.array(targets_list, ndmin=2).T
        
        # calculate signals into hidden layer
        hidden_inputs = numpy.dot(self.wih, inputs)
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)
        
        # calculate signals into final output layer
        final_inputs = numpy.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)
        
        # output layer error is the (target - actual)
        output_errors = targets - final_outputs
        # hidden layer error is the output_errors, split by weights, recombined at hidden nodes
        hidden_errors = numpy.dot(self.who.T, output_errors) 
        
        # update the weights for the links between the hidden and output layers
        self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)), numpy.transpose(hidden_outputs))
        
        # update the weights for the links between the input and hidden layers
        self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), numpy.transpose(inputs))
        
        pass
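
A minimal usage sketch, assuming the train() method above has been pasted into the neuralNetwork class (the inputs and targets are made up; targets of 0.01 and 0.99 are used rather than 0 and 1 because the sigmoid can never quite reach 0 or 1):

n = neuralNetwork(3, 3, 3, 0.3)
for _ in range(100):                # repeat so the weights can settle
    n.train([1.0, 0.5, -1.5], [0.01, 0.99, 0.01])
print(n.query([1.0, 0.5, -1.5]))    # the second output should now be largest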