A Step by Step Backpropagation Example

Background

Backpropagation is a common method for training a neural network. There is no shortage of papers online that attempt to explain how backpropagation works, but few include an example with actual numbers. This post is my attempt to explain how it works with a concrete example that folks can compare their own calculations to in order to ensure they understand backpropagation correctly.

If this kind of thing interests you, you should sign up for my newsletter, where I post about the AI-related projects I'm working on.

Backpropagation in Python

You can play around with a Python script that I wrote that implements the backpropagation algorithm in this Github repo.

Backpropagation Visualization

For an interactive visualization showing a neural network as it learns, check out my Neural Network visualization.

Additional Resources

If you find this tutorial useful and want to continue learning about neural networks, machine learning, and deep learning, I highly recommend checking out Adrian Rosebrock's new book, Deep Learning for Computer Vision with Python. I really enjoyed the book and will have a full review up soon.

Overview

For this tutorial, we're going to use a neural network with two inputs, two hidden neurons, and two output neurons. Additionally, the hidden and output neurons will each include a bias.

Here's the basic structure:

[Figure: diagram of the 2-2-2 network structure]

In order to have some numbers to work with, here are the initial weights, the biases, and the training inputs/outputs:

[Figure: the network annotated with the initial weights and biases]

The goal of backpropagation is to optimize the weights so that the neural network can learn how to correctly map arbitrary inputs to outputs.

For the rest of this tutorial we're going to work with a single training set: given inputs 0.05 and 0.10, we want the neural network to output 0.01 and 0.99.

The Forward Pass

To begin, let's see what the neural network currently predicts given the weights and biases above and inputs of 0.05 and 0.10. To do this we'll feed those inputs forward through the network.

We figure out the total net input to each hidden layer neuron, squash the total net input using an activation function (here we use the logistic function), then repeat the process with the output layer neurons.

Total net input is also referred to as just net input by some sources.

Here's how we calculate the total net input for h_1:

net_{h1} = w_1 * i_1 + w_2 * i_2 + b_1 * 1

net_{h1} = 0.15 * 0.05 + 0.2 * 0.1 + 0.35 * 1 = 0.3775

We then squash it using the logistic function to get the output of h_1:

out_{h1} = \frac{1}{1 + e^{-net_{h1}}} = \frac{1}{1 + e^{-0.3775}} = 0.593269992

Carrying out the same process for h_2 we get:

out_{h2} = 0.596884378
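
If you want to check these numbers yourself, here is a minimal sketch of the hidden-layer forward pass in plain Python (the variable names are mine, chosen to mirror the notation above rather than taken from the script at the end of this post):

import math

def logistic(x):
    # Squash a total net input into the range (0, 1)
    return 1 / (1 + math.exp(-x))

i1, i2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
b1 = 0.35

net_h1 = w1 * i1 + w2 * i2 + b1 * 1   # 0.3775
out_h1 = logistic(net_h1)             # 0.593269992...
net_h2 = w3 * i1 + w4 * i2 + b1 * 1   # 0.3925
out_h2 = logistic(net_h2)             # 0.596884378...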

We repeat this process for the output layer neurons, using the output from the hidden layer neurons as inputs.

Here's the output for o_1:

net_{o1} = w_5 * out_{h1} + w_6 * out_{h2} + b_2 * 1

net_{o1} = 0.4 * 0.593269992 + 0.45 * 0.596884378 + 0.6 * 1 = 1.105905967

out_{o1} = \frac{1}{1 + e^{-net_{o1}}} = \frac{1}{1 + e^{-1.105905967}} = 0.75136507

And carrying out the same process for o_2 we get:

out_{o2} = 0.772928465
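
The output-layer pass follows the same pattern; here is a quick sketch, reusing the (rounded) hidden outputs computed above as constants:

import math

out_h1, out_h2 = 0.593269992, 0.596884378   # hidden-layer outputs from above
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b2 = 0.60

net_o1 = w5 * out_h1 + w6 * out_h2 + b2 * 1   # 1.105905967
out_o1 = 1 / (1 + math.exp(-net_o1))          # 0.75136507...
net_o2 = w7 * out_h1 + w8 * out_h2 + b2 * 1   # 1.224921404
out_o2 = 1 / (1 + math.exp(-net_o2))          # 0.772928465...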

Calculating the Total Error

We can now calculate the error for each output neuron using the squared error function and sum them to get the total error:

E_{total} = \sum \frac{1}{2}(target - output)^{2}

Some sources refer to the target as the ideal and the output as the actual.
The \frac{1}{2} is included so that the exponent is cancelled when we differentiate later on. The result is eventually multiplied by a learning rate anyway, so it doesn't matter that we introduce a constant here [1].

For example, the target output for o_1 is 0.01 but the neural network output 0.75136507, therefore its error is:

E_{o1} = \frac{1}{2}(target_{o1} - out_{o1})^{2} = \frac{1}{2}(0.01 - 0.75136507)^{2} = 0.274811083

Repeating this process for o_2 (remembering that the target is 0.99) we get:

E_{o2} = 0.023560026

The total error for the neural network is the sum of these errors:

E_{total} = E_{o1} + E_{o2} = 0.274811083 + 0.023560026 = 0.298371109
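
As a quick numeric check, the same error calculation in plain Python, using the outputs computed above:

out_o1, out_o2 = 0.75136507, 0.772928465
target_o1, target_o2 = 0.01, 0.99

E_o1 = 0.5 * (target_o1 - out_o1) ** 2   # 0.274811083
E_o2 = 0.5 * (target_o2 - out_o2) ** 2   # 0.023560026
E_total = E_o1 + E_o2                    # 0.298371109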

The Backwards Pass

Our goal with backpropagation is to update each of the weights in the network so that they cause the actual output to be closer to the target output, thereby minimizing the error for each output neuron and the network as a whole.

Output Layer

Consider w_5. We want to know how much a change in w_5 affects the total error, that is, \frac{\partial E_{total}}{\partial w_{5}}.

\frac{\partial E_{total}}{\partial w_{5}} is read as "the partial derivative of E_{total} with respect to w_{5}". You can also say "the gradient with respect to w_{5}".

By applying the chain rule we know that:

\frac{\partial E_{total}}{\partial w_{5}} = \frac{\partial E_{total}}{\partial out_{o1}} * \frac{\partial out_{o1}}{\partial net_{o1}} * \frac{\partial net_{o1}}{\partial w_{5}}

Visually, here's what we're doing:

[Figure: the chain-rule path from the total error back to w_5 through o_1]

We need to figure out each piece in this equation.

First, how much does the total error change with respect to the output?

E_{total} = \frac{1}{2}(target_{o1} - out_{o1})^{2} + \frac{1}{2}(target_{o2} - out_{o2})^{2}

\frac{\partial E_{total}}{\partial out_{o1}} = 2 * \frac{1}{2}(target_{o1} - out_{o1})^{2 - 1} * -1 + 0

\frac{\partial E_{total}}{\partial out_{o1}} = -(target_{o1} - out_{o1}) = -(0.01 - 0.75136507) = 0.74136507

-(target - out) is sometimes expressed as out - target.
When we take the partial derivative of the total error with respect to out_{o1}, the quantity \frac{1}{2}(target_{o2} - out_{o2})^{2} becomes zero because out_{o1} does not affect it, which means we're taking the derivative of a constant, which is zero.

Next, how much does the output of o_1 change with respect to its total net input?

The partial derivative of the logistic function is the output multiplied by 1 minus the output:

out_{o1} = \frac{1}{1 + e^{-net_{o1}}}

\frac{\partial out_{o1}}{\partial net_{o1}} = out_{o1}(1 - out_{o1}) = 0.75136507(1 - 0.75136507) = 0.186815602

Finally, how much does the total net input of o_1 change with respect to w_5?

net_{o1} = w_5 * out_{h1} + w_6 * out_{h2} + b_2 * 1

\frac{\partial net_{o1}}{\partial w_{5}} = 1 * out_{h1} * w_5^{(1 - 1)} + 0 + 0 = out_{h1} = 0.593269992

Putting it all together:

\frac{\partial E_{total}}{\partial w_{5}} = \frac{\partial E_{total}}{\partial out_{o1}} * \frac{\partial out_{o1}}{\partial net_{o1}} * \frac{\partial net_{o1}}{\partial w_{5}}

\frac{\partial E_{total}}{\partial w_{5}} = 0.74136507 * 0.186815602 * 0.593269992 = 0.082167041
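
The same chain-rule product written out as a quick check in plain Python (the intermediate values are the ones derived above):

target_o1, out_o1, out_h1 = 0.01, 0.75136507, 0.593269992

dE_total_wrt_out_o1 = -(target_o1 - out_o1)    # 0.74136507
dout_o1_wrt_net_o1 = out_o1 * (1 - out_o1)     # 0.186815602
dnet_o1_wrt_w5 = out_h1                        # 0.593269992

dE_total_wrt_w5 = dE_total_wrt_out_o1 * dout_o1_wrt_net_o1 * dnet_o1_wrt_w5   # 0.082167041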

You'll often see this calculation combined in the form of the delta rule:

\frac{\partial E_{total}}{\partial w_{5}} = -(target_{o1} - out_{o1}) * out_{o1}(1 - out_{o1}) * out_{h1}

Alternatively, we have \frac{\partial E_{total}}{\partial out_{o1}} and \frac{\partial out_{o1}}{\partial net_{o1}}, which can be written as \frac{\partial E_{total}}{\partial net_{o1}}, aka \delta_{o1} (the Greek letter delta), aka the node delta. We can use this to rewrite the calculation above:

\delta_{o1} = \frac{\partial E_{total}}{\partial out_{o1}} * \frac{\partial out_{o1}}{\partial net_{o1}} = \frac{\partial E_{total}}{\partial net_{o1}}

\delta_{o1} = -(target_{o1} - out_{o1}) * out_{o1}(1 - out_{o1})

Therefore:

\frac{\partial E_{total}}{\partial w_{5}} = \delta_{o1} out_{h1}

Some sources extract the negative sign from \delta, in which case it would be written as:

\frac{\partial E_{total}}{\partial w_{5}} = -\delta_{o1} out_{h1}

To decrease the error, we then subtract this value from the current weight (optionally multiplied by some learning rate, eta, which we'll set to 0.5):

w_5^{+} = w_5 - \eta * \frac{\partial E_{total}}{\partial w_{5}} = 0.4 - 0.5 * 0.082167041 = 0.35891648

Some sources use \alpha (alpha) to represent the learning rate, others use \eta (eta), and others even use \epsilon (epsilon).

We can repeat this process to get the new weights w_6, w_7, and w_8:

w_6^{+} = 0.408666186

w_7^{+} = 0.511301270

w_8^{+} = 0.561370121
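
Here is a small sketch of the full output-layer update in plain Python. The gradients for w_6, w_7, and w_8 follow the same delta-rule pattern, pairing each output neuron's delta with the hidden output feeding that weight; the constants are the values computed above and the variable names are my own:

eta = 0.5
out_h1, out_h2 = 0.593269992, 0.596884378
out_o1, out_o2 = 0.75136507, 0.772928465
target_o1, target_o2 = 0.01, 0.99

delta_o1 = -(target_o1 - out_o1) * out_o1 * (1 - out_o1)   # ≈  0.1384986
delta_o2 = -(target_o2 - out_o2) * out_o2 * (1 - out_o2)   # ≈ -0.0380982

w5_new = 0.40 - eta * delta_o1 * out_h1   # 0.35891648
w6_new = 0.45 - eta * delta_o1 * out_h2   # 0.408666186
w7_new = 0.50 - eta * delta_o2 * out_h1   # 0.511301270
w8_new = 0.55 - eta * delta_o2 * out_h2   # 0.561370121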

We perform the actual updates in the neural network after we have the new weights leading into the hidden layer neurons (i.e., we use the original weights, not the updated weights, when we continue the backpropagation algorithm below).

Hidden Layer

Next, we'll continue the backwards pass by calculating new values for w_1, w_2, w_3, and w_4.

Big picture, here's what we need to figure out:

\frac{\partial E_{total}}{\partial w_{1}} = \frac{\partial E_{total}}{\partial out_{h1}} * \frac{\partial out_{h1}}{\partial net_{h1}} * \frac{\partial net_{h1}}{\partial w_{1}}

Visually:

[Figure: the chain-rule paths from both output errors back to w_1 through h_1]

We're going to use a process similar to the one we used for the output layer, but slightly different to account for the fact that the output of each hidden layer neuron contributes to the output (and therefore error) of multiple output neurons. We know that out_{h1} affects both out_{o1} and out_{o2}, therefore \frac{\partial E_{total}}{\partial out_{h1}} needs to take into consideration its effect on both output neurons:

\frac{\partial E_{total}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial out_{h1}} + \frac{\partial E_{o2}}{\partial out_{h1}}

Starting with \frac{\partial E_{o1}}{\partial out_{h1}}:

\frac{\partial E_{o1}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial net_{o1}} * \frac{\partial net_{o1}}{\partial out_{h1}}

We can calculate \frac{\partial E_{o1}}{\partial net_{o1}} using values we calculated earlier:

\frac{\partial E_{o1}}{\partial net_{o1}} = \frac{\partial E_{o1}}{\partial out_{o1}} * \frac{\partial out_{o1}}{\partial net_{o1}} = 0.74136507 * 0.186815602 = 0.138498562

And \frac{\partial net_{o1}}{\partial out_{h1}} is equal to w_5:

net_{o1} = w_5 * out_{h1} + w_6 * out_{h2} + b_2 * 1

\frac{\partial net_{o1}}{\partial out_{h1}} = w_5 = 0.40

Plugging them in:

\frac{\partial E_{o1}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial net_{o1}} * \frac{\partial net_{o1}}{\partial out_{h1}} = 0.138498562 * 0.40 = 0.055399425

Following the same process for \frac{\partial E_{o2}}{\partial out_{h1}}, we get:

\frac{\partial E_{o2}}{\partial out_{h1}} = -0.019049119

Therefore:

\frac{\partial E_{total}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial out_{h1}} + \frac{\partial E_{o2}}{\partial out_{h1}} = 0.055399425 + -0.019049119 = 0.036350306
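
In code, the same sum over both output neurons looks like this (w_7 is the weight from h_1 to o_2; delta_o1 is the value \frac{\partial E_{o1}}{\partial net_{o1}} computed above, and delta_o2 is computed the same way from o_2's target and output):

target_o2, out_o2 = 0.99, 0.772928465
w5, w7 = 0.40, 0.50

delta_o1 = 0.138498562                                     # ∂E_o1/∂net_o1, from above
delta_o2 = -(target_o2 - out_o2) * out_o2 * (1 - out_o2)   # ∂E_o2/∂net_o2 ≈ -0.0380982

dE_o1_wrt_out_h1 = delta_o1 * w5                            #  0.055399425
dE_o2_wrt_out_h1 = delta_o2 * w7                            # -0.019049119
dE_total_wrt_out_h1 = dE_o1_wrt_out_h1 + dE_o2_wrt_out_h1   #  0.036350306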

Now that we have \frac{\partial E_{total}}{\partial out_{h1}}, we need to figure out \frac{\partial out_{h1}}{\partial net_{h1}} and then \frac{\partial net_{h1}}{\partial w} for each weight:

out_{h1} = \frac{1}{1 + e^{-net_{h1}}}

\frac{\partial out_{h1}}{\partial net_{h1}} = out_{h1}(1 - out_{h1}) = 0.59326999(1 - 0.59326999) = 0.241300709

We calculate the partial derivative of the total net input to h_1 with respect to w_1 the same way as we did for the output neuron:

net_{h1} = w_1 * i_1 + w_2 * i_2 + b_1 * 1

\frac{\partial net_{h1}}{\partial w_1} = i_1 = 0.05

Putting it all together:

\frac{\partial E_{total}}{\partial w_{1}} = \frac{\partial E_{total}}{\partial out_{h1}} * \frac{\partial out_{h1}}{\partial net_{h1}} * \frac{\partial net_{h1}}{\partial w_{1}}

\frac{\partial E_{total}}{\partial w_{1}} = 0.036350306 * 0.241300709 * 0.05 = 0.000438568
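
And the hidden-layer chain-rule product as a quick numeric check (all constants are taken from the calculations above):

i1 = 0.05
out_h1 = 0.593269992
dE_total_wrt_out_h1 = 0.036350306   # from the sum over both output neurons

dout_h1_wrt_net_h1 = out_h1 * (1 - out_h1)   # 0.241300709
dnet_h1_wrt_w1 = i1                          # 0.05

dE_total_wrt_w1 = dE_total_wrt_out_h1 * dout_h1_wrt_net_h1 * dnet_h1_wrt_w1   # 0.000438568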

You might also see this written as:

\frac{\partial E_{total}}{\partial w_{1}} = (\sum\limits_{o}{\frac{\partial E_{total}}{\partial out_{o}} * \frac{\partial out_{o}}{\partial net_{o}} * \frac{\partial net_{o}}{\partial out_{h1}}}) * \frac{\partial out_{h1}}{\partial net_{h1}} * \frac{\partial net_{h1}}{\partial w_{1}}

\frac{\partial E_{total}}{\partial w_{1}} = (\sum\limits_{o}{\delta_{o} * w_{ho}}) * out_{h1}(1 - out_{h1}) * i_{1}

\frac{\partial E_{total}}{\partial w_{1}} = \delta_{h1} i_{1}

We can now update w_1:

w_1^{+} = w_1 - \eta * \frac{\partial E_{total}}{\partial w_{1}} = 0.15 - 0.5 * 0.000438568 = 0.149780716

Repeating this for w_2, w_3, and w_4:

w_2^{+} = 0.19956143

w_3^{+} = 0.24975114

w_4^{+} = 0.29950229
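
Here is a sketch of all four hidden-layer updates in plain Python. Note that \frac{\partial E_{total}}{\partial out_{h2}} is not written out in the text above, so it is computed here the same way as for h_1, by summing each output neuron's delta times the weight from h_2 to that neuron (w_6 and w_8); the variable names are mine:

eta = 0.5
i1, i2 = 0.05, 0.10
out_h1, out_h2 = 0.593269992, 0.596884378
target_o2, out_o2 = 0.99, 0.772928465
w6, w8 = 0.45, 0.55

delta_o1 = 0.138498562                                     # ∂E_o1/∂net_o1, from the output layer section
delta_o2 = -(target_o2 - out_o2) * out_o2 * (1 - out_o2)   # ∂E_o2/∂net_o2 ≈ -0.0380982

dE_total_wrt_out_h1 = 0.036350306                          # from above
dE_total_wrt_out_h2 = delta_o1 * w6 + delta_o2 * w8        # ≈ 0.0413703

delta_h1 = dE_total_wrt_out_h1 * out_h1 * (1 - out_h1)
delta_h2 = dE_total_wrt_out_h2 * out_h2 * (1 - out_h2)

w1_new = 0.15 - eta * delta_h1 * i1   # 0.149780716
w2_new = 0.20 - eta * delta_h1 * i2   # 0.19956143
w3_new = 0.25 - eta * delta_h2 * i1   # 0.24975114
w4_new = 0.30 - eta * delta_h2 * i2   # 0.29950229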

Finally, we've updated all of our weights! When we originally fed forward the 0.05 and 0.1 inputs, the error on the network was 0.298371109. After this first round of backpropagation, the total error is now down to 0.291027924. It might not seem like much, but after repeating this process 10,000 times, for example, the error plummets to 0.0000351085. At that point, when we feed forward 0.05 and 0.1, the two output neurons generate 0.015912196 (vs 0.01 target) and 0.984065734 (vs 0.99 target).

If you've made it this far and found any errors in the above, or can think of any ways to make it clearer for future readers, don't hesitate to drop me a note. Thanks!

#coding:utf-8
import random
import math

#
#   Notation used in the code:
#   "pd_"  : prefix for a partial derivative
#   "d_"   : prefix for a derivative
#   "w_ho" : index of a weight from the hidden layer to the output layer
#   "w_ih" : index of a weight from the input layer to the hidden layer

class NeuralNetwork:
    LEARNING_RATE = 0.5

    def __init__(self, num_inputs, num_hidden, num_outputs, hidden_layer_weights = None, hidden_layer_bias = None, output_layer_weights = None, output_layer_bias = None):
        self.num_inputs = num_inputs

        self.hidden_layer = NeuronLayer(num_hidden, hidden_layer_bias)
        self.output_layer = NeuronLayer(num_outputs, output_layer_bias)

        self.init_weights_from_inputs_to_hidden_layer_neurons(hidden_layer_weights)
        self.init_weights_from_hidden_layer_neurons_to_output_layer_neurons(output_layer_weights)

    def init_weights_from_inputs_to_hidden_layer_neurons(self, hidden_layer_weights):
        weight_num = 0
        for h in range(len(self.hidden_layer.neurons)):
            for i in range(self.num_inputs):
                if not hidden_layer_weights:
                    self.hidden_layer.neurons[h].weights.append(random.random())
                else:
                    self.hidden_layer.neurons[h].weights.append(hidden_layer_weights[weight_num])
                weight_num += 1

    def init_weights_from_hidden_layer_neurons_to_output_layer_neurons(self, output_layer_weights):
        weight_num = 0
        for o in range(len(self.output_layer.neurons)):
            for h in range(len(self.hidden_layer.neurons)):
                if not output_layer_weights:
                    self.output_layer.neurons[o].weights.append(random.random())
                else:
                    self.output_layer.neurons[o].weights.append(output_layer_weights[weight_num])
                weight_num += 1

    def inspect(self):
        print('------')
        print('* Inputs: {}'.format(self.num_inputs))
        print('------')
        print('Hidden Layer')
        self.hidden_layer.inspect()
        print('------')
        print('* Output Layer')
        self.output_layer.inspect()
        print('------')

    def feed_forward(self, inputs):
        hidden_layer_outputs = self.hidden_layer.feed_forward(inputs)
        return self.output_layer.feed_forward(hidden_layer_outputs)

    def train(self, training_inputs, training_outputs):
        self.feed_forward(training_inputs)

        # 1. Output neuron deltas
        pd_errors_wrt_output_neuron_total_net_input = [0] * len(self.output_layer.neurons)
        for o in range(len(self.output_layer.neurons)):

            # ∂E/∂zⱼ
            pd_errors_wrt_output_neuron_total_net_input[o] = self.output_layer.neurons[o].calculate_pd_error_wrt_total_net_input(training_outputs[o])

        # 2. Hidden neuron deltas
        pd_errors_wrt_hidden_neuron_total_net_input = [0] * len(self.hidden_layer.neurons)
        for h in range(len(self.hidden_layer.neurons)):

            # dE/dyⱼ = Σ ∂E/∂zⱼ * ∂z/∂yⱼ = Σ ∂E/∂zⱼ * wᵢⱼ
            d_error_wrt_hidden_neuron_output = 0
            for o in range(len(self.output_layer.neurons)):
                d_error_wrt_hidden_neuron_output += pd_errors_wrt_output_neuron_total_net_input[o] * self.output_layer.neurons[o].weights[h]

            # ∂E/∂zⱼ = dE/dyⱼ * dyⱼ/dzⱼ
            pd_errors_wrt_hidden_neuron_total_net_input[h] = d_error_wrt_hidden_neuron_output * self.hidden_layer.neurons[h].calculate_pd_total_net_input_wrt_input()

        # 3. Update output layer weights
        for o in range(len(self.output_layer.neurons)):
            for w_ho in range(len(self.output_layer.neurons[o].weights)):

                # ∂Eⱼ/∂wᵢⱼ = ∂E/∂zⱼ * ∂zⱼ/∂wᵢⱼ
                pd_error_wrt_weight = pd_errors_wrt_output_neuron_total_net_input[o] * self.output_layer.neurons[o].calculate_pd_total_net_input_wrt_weight(w_ho)

                # Δw = α * ∂Eⱼ/∂wᵢ
                self.output_layer.neurons[o].weights[w_ho] -= self.LEARNING_RATE * pd_error_wrt_weight

        # 4. Update hidden layer weights
        for h in range(len(self.hidden_layer.neurons)):
            for w_ih in range(len(self.hidden_layer.neurons[h].weights)):

                # ∂Eⱼ/∂wᵢ = ∂E/∂zⱼ * ∂zⱼ/∂wᵢ
                pd_error_wrt_weight = pd_errors_wrt_hidden_neuron_total_net_input[h] * self.hidden_layer.neurons[h].calculate_pd_total_net_input_wrt_weight(w_ih)

                # Δw = α * ∂Eⱼ/∂wᵢ
                self.hidden_layer.neurons[h].weights[w_ih] -= self.LEARNING_RATE * pd_error_wrt_weight

    def calculate_total_error(self, training_sets):
        total_error = 0
        for t in range(len(training_sets)):
            training_inputs, training_outputs = training_sets[t]
            self.feed_forward(training_inputs)
            for o in range(len(training_outputs)):
                total_error += self.output_layer.neurons[o].calculate_error(training_outputs[o])
        return total_error

class NeuronLayer:
    def __init__(self, num_neurons, bias):

        # Every neuron in a layer shares the same bias term
        self.bias = bias if bias else random.random()

        self.neurons = []
        for i in range(num_neurons):
            self.neurons.append(Neuron(self.bias))

    def inspect(self):
        print('Neurons:', len(self.neurons))
        for n in range(len(self.neurons)):
            print(' Neuron', n)
            for w in range(len(self.neurons[n].weights)):
                print('  Weight:', self.neurons[n].weights[w])
            print('  Bias:', self.bias)

    def feed_forward(self, inputs):
        outputs = []
        for neuron in self.neurons:
            outputs.append(neuron.calculate_output(inputs))
        return outputs

    def get_outputs(self):
        outputs = []
        for neuron in self.neurons:
            outputs.append(neuron.output)
        return outputs

class Neuron:
    def __init__(self, bias):
        self.bias = bias
        self.weights = []

    def calculate_output(self, inputs):
        self.inputs = inputs
        self.output = self.squash(self.calculate_total_net_input())
        return self.output

    def calculate_total_net_input(self):
        total = 0
        for i in range(len(self.inputs)):
            total += self.inputs[i] * self.weights[i]
        return total + self.bias

    # The logistic (sigmoid) activation function, used to squash the total net input
    def squash(self, total_net_input):
        return 1 / (1 + math.exp(-total_net_input))


    # ∂E/∂net = ∂E/∂output * ∂output/∂net (the output neuron's delta)
    def calculate_pd_error_wrt_total_net_input(self, target_output):
        return self.calculate_pd_error_wrt_output(target_output) * self.calculate_pd_total_net_input_wrt_input()

    # The error for each neuron is calculated with the squared error function
    def calculate_error(self, target_output):
        return 0.5 * (target_output - self.output) ** 2

    
    # Partial derivative of the squared error with respect to the neuron's output: -(target - output)
    def calculate_pd_error_wrt_output(self, target_output):
        return -(target_output - self.output)

    
    # Derivative of the neuron's output with respect to its total net input (logistic derivative): output * (1 - output)
    def calculate_pd_total_net_input_wrt_input(self):
        return self.output * (1 - self.output)


    # The total net input is a weighted sum, so its partial derivative with respect to a weight is that weight's input
    def calculate_pd_total_net_input_wrt_weight(self, index):
        return self.inputs[index]


# The example from the article:

nn = NeuralNetwork(2, 2, 2, hidden_layer_weights=[0.15, 0.2, 0.25, 0.3], hidden_layer_bias=0.35, output_layer_weights=[0.4, 0.45, 0.5, 0.55], output_layer_bias=0.6)
for i in range(10000):
    nn.train([0.05, 0.1], [0.01, 0.99])
    print(i, round(nn.calculate_total_error([[[0.05, 0.1], [0.01, 0.99]]]), 9))


# Another example (XOR); comment out the block above and uncomment this to run it instead:

# training_sets = [
#     [[0, 0], [0]],
#     [[0, 1], [1]],
#     [[1, 0], [1]],
#     [[1, 1], [0]]
# ]

# nn = NeuralNetwork(len(training_sets[0][0]), 5, len(training_sets[0][1]))
# for i in range(10000):
#     training_inputs, training_outputs = random.choice(training_sets)
#     nn.train(training_inputs, training_outputs)
#     print(i, nn.calculate_total_error(training_sets))

The results match the hand calculations above; after one training step, inspect() reports:

------
* Inputs: 2
------
Hidden Layer
Neurons: 2
 Neuron 0
  Weight: 0.1497807161327628
  Weight: 0.19956143226552567
  Bias: 0.35
 Neuron 1
  Weight: 0.24975114363236958
  Weight: 0.29950228726473915
  Bias: 0.35
------
* Output Layer
Neurons: 2
 Neuron 0
  Weight: 0.35891647971788465
  Weight: 0.4086661860762334
  Bias: 0.6
 Neuron 1
  Weight: 0.5113012702387375
  Weight: 0.5613701211079891
  Bias: 0.6
------

 

Reposted from: https://my.oschina.net/lwaif/blog/3061048
