Competitive Learning Neural Networks Explained


Competitive neural networks are a class of recurrent networks, and they are trained with unsupervised learning algorithms such as competitive learning.

In competitive learning, the output neurons of a neural network compete among themselves to become active (to be fired).
In a multilayer perceptron, by contrast, several output neurons may be active simultaneously.

There are three basic elements needed to build a network with a competitive learning rule, a standard technique for this class of artificial neural networks:

  1. A set of neurons that are all identical in structure except for randomly initialized synaptic weights, so that the neurons respond differently to a given set of input samples.
  2. A limit imposed on the "strength" of each neuron.
  3. A mechanism that permits the neurons to compete for the right to respond to a given subset of inputs, such that only one output neuron is active at a time. The neuron that wins the competition is called a winner-takes-all neuron.
    The key to competitive learning is the presence of inhibitory connections in the network.
    In the simplest form of competitive learning, the network has a single output layer, and every output neuron is fully connected to the input nodes. The network may also include feedback connections between the output neurons.
    In this network, the feedback connections perform lateral inhibition: each neuron tends to inhibit the other neurons in the same layer. The feedforward synaptic connections, in contrast, are all excitatory.
    For a given input sample $X = (x_1, x_2, x_3, \dots, x_n)^T$, the neuron with the largest net input $net_k$ wins the competition.
    The winning neuron $k$ outputs $y_k = 1$; all other output neurons output 0.
    $net_k$ denotes the combined action of all the forward and feedback inputs to neuron $k$.
    Let $w_{jk}$ denote the synaptic weight connecting input node $j$ to neuron $k$.
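As a quick sketch of the winner-takes-all computation (the weight matrix and input below are made up for illustration, and only the feedforward part of $net_k$ is shown):

```python
import numpy as np

# Hypothetical 3-input, 3-output competitive layer.
# W[j, k] is the synaptic weight from input node j to output neuron k.
W = np.array([[0.2, 0.3, 0.5],
              [0.6, 0.7, 0.1],
              [0.4, 0.0, 0.8]])
x = np.array([1.0, 0.0, 1.0])   # input sample

net = x @ W                      # forward net input of each output neuron
winner = int(np.argmax(net))     # winner-takes-all: largest net input wins

y = np.zeros_like(net)
y[winner] = 1.0                  # winner outputs 1, all other neurons output 0
print(net, winner)
```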

The learning principle: a neuron learns by shifting synaptic weight from its inactive input nodes to its active input nodes. If a particular neuron wins the competition, each input node of that neuron relinquishes some proportion of its synaptic weight, and the relinquished weight is then distributed among the active input nodes.

I do not fully understand this mechanism yet; if you know it well, you are welcome to leave a comment and explain.
Following this principle, the competitive learning rule is:

$$\Delta w_{jk} = \begin{cases} \eta(x_j - w_{jk}) & \text{if neuron } k \text{ wins the competition} \\ 0 & \text{if neuron } k \text{ loses the competition} \end{cases}$$

where $\eta$ is the learning rate. The rule has the overall effect of moving the synaptic weight vector of the winning neuron toward the input pattern $X$.
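The overall effect stated above can be sketched in a few lines (the weight vector, input pattern, and learning rate are made up for illustration): if the same neuron keeps winning on the same pattern, the rule $\Delta w = \eta(x - w)$ drives its weight vector all the way to the pattern.

```python
import numpy as np

eta = 0.2
w = np.array([0.3, 0.7, 0.0])   # winning neuron's weight vector (made up)
x = np.array([1.0, 0.0, 1.0])   # input pattern it keeps winning on

for _ in range(50):
    w += eta * (x - w)           # winner: Δw_j = η(x_j − w_j); losers get Δw = 0

print(w)                         # w has moved essentially onto x
```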
Two figures (omitted here) showed the initial state and the final state of the network during competitive learning.
Clearly, each output neuron discovers a cluster of input samples by moving its synaptic weight vector to the center of gravity of the discovered cluster.
During the competitive-learning process, similar samples are grouped by the network and represented by a single artificial neuron at the output.

Competitive (or winner-takes-all) neural networks are often used to cluster input data when the number of output clusters is given in advance.
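A minimal clustering sketch along these lines, with synthetic 2-D data and two output neurons (all numbers are made up; winner selection here uses minimum distance, which is equivalent to maximum net input when weights and inputs are normalized):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic 2-D clusters around (0, 0) and (1, 1).
data = np.vstack([rng.normal([0.0, 0.0], 0.1, size=(50, 2)),
                  rng.normal([1.0, 1.0], 0.1, size=(50, 2))])

eta = 0.1
# One weight column per output neuron; the number of clusters (2) is fixed in advance.
W = np.array([[0.25, 0.75],
              [0.25, 0.75]])

for epoch in range(20):
    for x in rng.permutation(data):
        # The output neuron whose weight vector is closest to x wins.
        winner = np.argmin(np.linalg.norm(W.T - x, axis=1))
        W[:, winner] += eta * (x - W[:, winner])   # move the winner toward x

print(W.T)   # each row ends up near one cluster's center of gravity
```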

A classic example of a competitive network is the Hamming network, introduced below.
1. The Hamming network consists of two layers.

  • First layer: a standard feedforward layer that computes the correlation between the input vector and each of the stored prototype vectors.
  • Second layer: a recurrent layer that performs a competition to determine which of the prototype vectors is closest to the input vector. The index of the second-layer neuron with a stable, positive output (the winner of the competition) is the index of the prototype vector that best matches the input.
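A minimal sketch of these two layers (the ±1 prototype vectors are made up; the recurrent competition is implemented as the usual "MAXNET"-style iteration that suppresses all but the largest output):

```python
import numpy as np

def poslin(x):
    """Positive linear transfer function: max(0, x)."""
    return np.maximum(0.0, x)

def hamming(p, prototypes, eps=0.1, steps=100):
    """Sketch of a two-layer Hamming network."""
    P = np.asarray(prototypes, dtype=float)
    n = P.shape[1]
    # Layer 1 (feedforward): correlation with each prototype, plus a bias
    # of n so the outputs are never negative for +/-1 inputs.
    a = P @ p + n
    # Layer 2 (recurrent): lateral inhibition until one output stays positive.
    W2 = np.eye(len(P)) - eps * (np.ones((len(P), len(P))) - np.eye(len(P)))
    for _ in range(steps):
        a = poslin(W2 @ a)
        if np.count_nonzero(a) <= 1:
            break
    return int(np.argmax(a))   # index of the best-matching prototype

prototypes = [[1, -1, 1], [-1, 1, -1]]   # hypothetical prototype vectors
print(hamming(np.array([1, 1, 1]), prototypes))   # closer to prototype 0
```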

Although competitive learning can perform efficient classification, it has some limitations. One problem left a deep impression on me:
a competitive-learning process always produces exactly as many clusters as there are output neurons. This may not be acceptable for some applications, especially when the number of clusters is not known, or is difficult to estimate, in advance.

Next is a worked example of a competitive network:
This competitive network has three inputs and three outputs and is fully connected: there are connections between all inputs and outputs, and there are also lateral connections between the output nodes. Some feedback weights are equal to zero, so those edges are not drawn in the network diagram.
[Figures omitted: the network diagram with its weights, and the computation of the net inputs $net_1^*$, $net_2^*$, $net_3^*$ for the input sample.]
Note:
1. $net_1^*$ takes only the feedforward connections into account.
2. $net_1$ is then computed from $net_1^*$, $net_2^*$, and $net_3^*$; that is:

$$net_1 = net_1^* + 0.5 \cdot net_2^* + 0.6 \cdot net_3^* = 0 + 0.5 \cdot 0.3 + 0.6 \cdot (-0.2)$$

3. In the end, because $net_2$ is the largest, the competitive learning rule says that only the synaptic weights into that neuron are modified. With the learning rate $\eta$ set to 0.2:
$$\Delta w_{12} = \eta(x_1 - w_{12}) = 0.2\,(1 - 0.3)$$
$$\Delta w_{22} = \eta(x_2 - w_{22}) = 0.2\,(0 - 0.7)$$
$$\Delta w_{32} = \eta(x_3 - w_{32}) = 0.2\,(1 - 0)$$
Note that $w_{32} = 0$; because this weight is zero, the corresponding edge is not drawn in the network diagram.
The weights for the next round are therefore:

$$w_{12} \leftarrow w_{12} + \Delta w_{12} = 0.3 + 0.2\,(1 - 0.3) = 0.44$$
$$w_{22} \leftarrow w_{22} + \Delta w_{22} = 0.7 + 0.2\,(0 - 0.7) = 0.56$$
$$w_{32} \leftarrow w_{32} + \Delta w_{32} = 0 + 0.2\,(1 - 0) = 0.2$$
All other weights remain unchanged.
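The arithmetic of this example can be checked with a short script, using the input sample and the winning neuron's weights as quoted above:

```python
import numpy as np

eta = 0.2                         # learning rate from the example
x = np.array([1.0, 0.0, 1.0])     # input sample
w2 = np.array([0.3, 0.7, 0.0])    # weights into the winning neuron 2

delta = eta * (x - w2)            # Δw_j2 = η(x_j − w_j2)
w2_new = w2 + delta
print(delta)                      # delta == [0.14, -0.14, 0.2] (up to float rounding)
print(w2_new)                     # w2_new == [0.44, 0.56, 0.2]
```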

Author: Chenglin_Yu