Error Back-Propagation (BP) with Good and Bad Watermelons

Baidu Baike: the BP (Error Back Propagation) algorithm consists of two phases of the learning process: forward propagation of the signal and backward propagation of the error. Because multi-layer feedforward networks are usually trained with error back-propagation, people often simply call a multi-layer feedforward network a "BP network".

How the BP process works

Chapter 5, "Neural Networks", of the watermelon book (Zhou Zhihua's Machine Learning) introduces the BP process (calling it an "algorithm" feels above my pay grade, so I'll just call it a process ^ _ ^). The BP process is used to train multi-layer neural networks; when people say "BP network", they usually mean a multi-layer feedforward network trained with the BP process. It has two phases: forward propagation and backward propagation.

The main idea: the network produces an output o from the input data, and o is compared with the sample's true label y. The error between the two is computed and propagated backwards to adjust the network's parameters, so that the error keeps shrinking until it falls below a preset threshold.
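As a minimal illustration of this idea, here is a single sigmoid neuron trained by the same error-driven loop (the toy sample and the names w, b, lr are illustrative, not from the book):

```python
import numpy as np

# One sigmoid neuron trained by the forward/backward loop described above
x = np.array([1.0, -1.0])  # one input sample
y_true = 1.0               # its true label
w = np.zeros(2)            # weights, start at zero
b = 0.0                    # bias
lr = 0.5                   # learning rate

for _ in range(200):
    o = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # forward: compute output o
    delta = (y_true - o) * o * (1.0 - o)           # backward: error times sigmoid slope
    w += lr * delta * x                            # adjust parameters to shrink the error
    b += lr * delta

print(float(o))  # close to 1, i.e. close to y_true
```

After a couple hundred updates the output is driven towards the true label, which is exactly what the full network below does, just with more layers and all 17 samples at once.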

Derivations are usually done for a single sample (x, y); with many samples, D = {(x1,y1), (x2,y2), …, (xm,ym)}, everything can be expressed and computed with matrices. (I'll skip the derivation; a quick search turns up plenty. ~ ~)

Good melon or bad melon

Below we feed the watermelon dataset from Zhou Zhihua's Machine Learning into BP.
First, the dataset:
[figure: the watermelon dataset table]
A watermelon is judged good or bad from six attributes: color, root, knock, texture, navel and touch. There are 17 samples in total. The attributes are converted to numeric values as follows (I have no idea whether this encoding is principled, but the code runs, haha).

Each watermelon in the dataset has six attributes: color (色泽), root (根蒂), knock (敲声), texture (纹理), navel (脐部) and touch (触感).
For color: green, dark, light are encoded as 1, 0, -1.
For root: curled, slightly curled, stiff are encoded as -1, 0, 1.
For knock: dull, muffled, crisp are encoded as 1, 0, -1.
For texture: clear, slightly blurry, blurry are encoded as 1, 0, -1.
For navel: sunken, slightly sunken, flat are encoded as 1, 0, -1.
For touch: hard-smooth, soft-sticky are encoded as 1, -1.
For the label: a good melon is 1, a bad melon is 0.
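The mapping above can be written as a small lookup table; a sketch of the encoding (the dict and function names are mine; note that the numeric data list below does not appear to follow this table exactly in every column):

```python
# A sketch of the encoding table above (dict/function names are illustrative)
encoding = {
    "color":   {"green": 1, "dark": 0, "light": -1},
    "root":    {"curled": -1, "slightly curled": 0, "stiff": 1},
    "knock":   {"dull": 1, "muffled": 0, "crisp": -1},
    "texture": {"clear": 1, "slightly blurry": 0, "blurry": -1},
    "navel":   {"sunken": 1, "slightly sunken": 0, "flat": -1},
    "touch":   {"hard-smooth": 1, "soft-sticky": -1},
}

def encode(sample):
    """Turn one melon's attribute dict into a numeric row, bias input appended."""
    row = [encoding[attr][value] for attr, value in sample.items()]
    return row + [1]  # trailing 1 is the bias input

melon = {"color": "green", "root": "curled", "knock": "dull",
         "texture": "clear", "navel": "sunken", "touch": "hard-smooth"}
print(encode(melon))  # → [1, -1, 1, 1, 1, 1, 1]
```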

The network is designed as a three-layer network: an input layer, a hidden layer and an output layer. The input layer has seven neurons: the six attributes plus one bias input fixed at 1. The hidden layer has five neurons: four hidden units plus one bias input fixed at 1. The output layer has one neuron. The network diagram:
[figure: network diagram]
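The layer sizes can be sanity-checked with a quick forward pass on dummy data (a sketch; X, W_IH, W_HO are my names, the code below uses data_set, w_IH, w_HO):

```python
import numpy as np

# Dummy arrays just to verify the shapes of the architecture described above
X = np.zeros((17, 7))        # 17 samples: 6 attributes + bias input
W_IH = np.zeros((7, 4))      # input layer -> 4 hidden units
W_HO = np.zeros((5, 1))      # 4 hidden outputs + hidden bias -> output

h = X @ W_IH                 # hidden pre-activations, shape (17, 4)
h = np.c_[h, np.ones(17)]    # append the hidden bias column -> (17, 5)
out = h @ W_HO               # network output, shape (17, 1)
print(h.shape, out.shape)    # → (17, 5) (17, 1)
```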
The dataset is converted into a 17×7 input matrix:

import numpy as np
data = [1,-1,1,1,1,0,1,
0,-1,0,1,1,0,1,
0,-1,1,1,1,0,1,
1,-1,0,1,1,0,1,
1,-1,1,1,1,0,1,
1,0,1,1,0,-1,1,
0,0,1,1,0,-1,1,
0,0,1,0,0,0,1,
0,0,0,1,0,0,1,
1,1,-1,0,-1,-1,1,
-1,1,-1,-1,-1,0,1,
-1,-1,1,-1,-1,-1,1,
1,0,1,0,1,0,1,
-1,0,0,0,1,0,1,
0,0,1,1,0,-1,1,
-1,-1,1,-1,-1,0,1,
0,-1,0,0,0,0,1]
data_set = np.array(data).reshape(17,7)
results = [1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0]
y = np.array(results).reshape(17,1)

The input-to-hidden weights form a 7×4 matrix; the hidden-to-output weights form a 5×1 matrix. (The bias is treated as an extra input fixed at 1 with its own weight, so every neuron gets its own bias value; because of this, pay attention to the matrix sizes during back-propagation.)
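The bias-as-input trick described here is just the identity w·x + b = [w, b]·[x, 1]; a small sketch (the values are illustrative):

```python
import numpy as np

# w.x + b equals [w, b].[x, 1]: the bias becomes just another weight
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.3, 0.7, -0.2])
b = 0.9

x_aug = np.append(x, 1.0)   # constant bias input of 1
w_aug = np.append(w, b)     # bias folded into the weight vector

assert np.isclose(np.dot(w, x) + b, np.dot(w_aug, x_aug))
print(np.dot(w_aug, x_aug))
```

This is why the data rows end in 1 and why the hidden activations get a column of ones appended before the second layer.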

w1 = []
w2 = []
np.random.seed(1)#fix the random seed
#randomly initialize input-to-hidden weights
for i in range(7):
	for j in range(1,5):
		w = np.random.randn()
		w1.append(w)
#randomly initialize hidden-to-output weights
for d in range(1,6):
	w = np.random.randn()
	w2.append(w)
#reshape the weight lists into matrices
w_IH = np.array(w1).reshape(7,4)
w_HO = np.array(w2).reshape(5,1)
#learning rate 0.2
learning_rate = 0.2

No functions, no classes, just brute-force math ~ ~ ~

while True:
	# forward propagation
	z1 = np.dot(data_set,w_IH)
	f1 = 1/(1 + np.exp(-z1))
	f = np.c_[f1,np.ones(17)]
	z2 = np.dot(f,w_HO)
	o = 1/(1 + np.exp(-z2))
	# compute the error
	err = (np.dot(np.transpose(y - o),(y - o)))/(17*2)
	# stop updating the weights once the error drops below 0.015
	if err < 0.015:
		break
	# otherwise back-propagate and update the weights
	else:
		w_IH += learning_rate*np.dot(np.transpose(data_set),(np.dot(((y-o)*(o*(1-o))),(np.transpose(np.delete(w_HO,4,axis=0))))*(f1*(1-f1))))#update input-to-hidden weights (np.delete drops the bias weight)
		w_HO += np.array(learning_rate/17*(((y-o)*(o*(1-o))*(f)).sum(axis=0))).reshape(5,1)#update hidden-to-output weights
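Hand-derived updates like the two lines above are easy to get wrong; a finite-difference gradient check is a standard sanity test. A sketch for a single sigmoid layer with the same squared-error loss (toy data; all names here are mine, not from the article):

```python
import numpy as np

# Finite-difference check of the squared-error/sigmoid gradient
np.random.seed(0)
X = np.random.randn(5, 3)                          # toy batch: 5 samples, 3 inputs
y = np.random.randint(0, 2, (5, 1)).astype(float)  # random 0/1 labels
W = np.random.randn(3, 1)

def loss(W):
    o = 1 / (1 + np.exp(-X @ W))                   # sigmoid outputs
    return float(((y - o) ** 2).sum() / (2 * len(X)))

# Analytic gradient, same form as the update rule in the loop above
o = 1 / (1 + np.exp(-X @ W))
grad = -X.T @ ((y - o) * o * (1 - o)) / len(X)

eps = 1e-6
for i in range(3):
    Wp, Wm = W.copy(), W.copy()
    Wp[i] += eps
    Wm[i] -= eps
    num = (loss(Wp) - loss(Wm)) / (2 * eps)        # numerical gradient
    assert abs(num - grad[i, 0]) < 1e-5            # must match the analytic one
print("gradient check passed")
```

If the analytic and numerical gradients disagree, the update formula (or one of its transposes) is wrong.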

Training on the watermelon data gives the following results.
Weights:

w_IH = np.array([3.20862089,-3.81943859,-6.01408043,-3.88638662,
0.71817546,-4.0745906,0.22332237,0.86457833,
-3.39514462,5.15128278,-6.87937574,-7.58202415,
-9.45771449,10.19349352,2.37789088,1.51604257,
3.08741488,-5.21858851,-8.16311227,-7.56774607,
-1.20399196,1.42302572,-3.53150881,-1.97686336,
0.73261353,-0.97218052,2.24491647,3.09080145]).reshape(7,4)
w_HO = np.array([-4.44471455,6.79479746,-5.71851347,-5.50830956,-0.84194372]).reshape(5,1)

The trend of the error during training:
[figure: error vs. iteration count]
The error keeps shrinking, but the weight updates slow down more and more; a threshold of 0.014 takes ages to reach. With the threshold set to 0.015, the program ran for 5.30681239 seconds and updated the weights 21557 times.
Let's see how the trained model performs, validated on the same training set (not scientific, I know).

import numpy as np
w_IH = np.array([3.20862089,-3.81943859,-6.01408043,-3.88638662,
0.71817546,-4.0745906,0.22332237,0.86457833,
-3.39514462,5.15128278,-6.87937574,-7.58202415,
-9.45771449,10.19349352,2.37789088,1.51604257,
3.08741488,-5.21858851,-8.16311227,-7.56774607,
-1.20399196,1.42302572,-3.53150881,-1.97686336,
0.73261353,-0.97218052,2.24491647,3.09080145]).reshape(7,4)
 
w_HO = np.array([-4.44471455,6.79479746,-5.71851347,-5.50830956,-0.84194372]).reshape(5,1)

data_1 = []
for i in range(0,17):
	for j in range(i*7,(i+1)*7):
		data_1.append(data[j])
	data_set = np.array(data_1).reshape(1,7)
	z1 = np.dot(data_set,w_IH)
	f1 = 1/(1 + np.exp(-z1))
	f = np.c_[f1,np.ones(1)]
	z2 = np.dot(f,w_HO)
	o = 1/(1 + np.exp(-z2))
	print(data_set)
	if o <= 0.5:
		print("the predicted value is " + str(float(o)),"\n","This is a bad watermelon")
	else:
		print("the predicted value is "+str(float(o)),"\n","This is a good watermelon")
	data_1 = []
	print()

The results:

[[ 1 -1  1  1  1  0  1]]
the predicted value is 0.997390280493305
 This is a good watermelon

[[ 0 -1  0  1  1  0  1]]
the predicted value is 0.9966512709670499
 This is a good watermelon

[[ 0 -1  1  1  1  0  1]]
the predicted value is 0.9974071075852207
 This is a good watermelon

[[ 1 -1  0  1  1  0  1]]
the predicted value is 0.9965712422712244
 This is a good watermelon

[[ 1 -1  1  1  1  0  1]]
the predicted value is 0.997390280493305
 This is a good watermelon

[[ 1  0  1  1  0 -1  1]]
the predicted value is 0.9971545992949601
 This is a good watermelon

[[ 0  0  1  1  0 -1  1]]
the predicted value is 0.5000007892450851
 This is a good watermelon

[[0 0 1 0 0 0 1]]
the predicted value is 0.9956983413894498
 This is a good watermelon

[[0 0 0 1 0 0 1]]
the predicted value is 0.005679044215452493
 This is a bad watermelon

[[ 1  1 -1  0 -1 -1  1]]
the predicted value is 6.79857766330772e-08
 This is a bad watermelon

[[-1  1 -1 -1 -1  0  1]]
the predicted value is 6.744708747162627e-08
 This is a bad watermelon

[[-1 -1  1 -1 -1 -1  1]]
the predicted value is 0.00019832587289876006
 This is a bad watermelon

[[1 0 1 0 1 0 1]]
the predicted value is 0.0059409704755534745
 This is a bad watermelon

[[-1  0  0  0  1  0  1]]
the predicted value is 0.0003025115347280729
 This is a bad watermelon

[[ 0  0  1  1  0 -1  1]]
the predicted value is 0.5000007892450851
 This is a good watermelon

[[-1 -1  1 -1 -1  0  1]]
the predicted value is 0.0007473591454174494
 This is a bad watermelon

[[ 0 -1  0  0  0  0  1]]
the predicted value is 0.001359640263762882
 This is a bad watermelon

The fifteenth sample is misclassified.
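That 16-out-of-17 result can be checked directly from the printed predictions (values copied from the output above):

```python
import numpy as np

# Predicted values printed above, in order, against the true labels
preds = np.array([0.997390280493305, 0.9966512709670499, 0.9974071075852207,
                  0.9965712422712244, 0.997390280493305, 0.9971545992949601,
                  0.5000007892450851, 0.9956983413894498, 0.005679044215452493,
                  6.79857766330772e-08, 6.744708747162627e-08,
                  0.00019832587289876006, 0.0059409704755534745,
                  0.0003025115347280729, 0.5000007892450851,
                  0.0007473591454174494, 0.001359640263762882])
y = np.array([1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0])

correct = (preds > 0.5).astype(int) == y
print(int(correct.sum()), "/", len(y))  # → 16 / 17, only sample 15 is wrong
```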

The complete code that produced the results and the plot above:

import numpy as np
from matplotlib import pyplot as plt
import time
start = time.perf_counter()#time.clock() was removed in Python 3.8

data = [1,-1,1,1,1,0,1,
0,-1,0,1,1,0,1,
0,-1,1,1,1,0,1,
1,-1,0,1,1,0,1,
1,-1,1,1,1,0,1,
1,0,1,1,0,-1,1,
0,0,1,1,0,-1,1,
0,0,1,0,0,0,1,
0,0,0,1,0,0,1,
1,1,-1,0,-1,-1,1,
-1,1,-1,-1,-1,0,1,
-1,-1,1,-1,-1,-1,1,
1,0,1,0,1,0,1,
-1,0,0,0,1,0,1,
0,0,1,1,0,-1,1,
-1,-1,1,-1,-1,0,1,
0,-1,0,0,0,0,1]

results = [1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0]

data_set = np.array(data).reshape(17,7)
y = np.array(results).reshape(17,1)
#print(data_set,"\n",y)
w1 = []
w2 = []
np.random.seed(1)
for i in range(7):
	for j in range(1,5):
		w = np.random.randn()
		w1.append(w)
#print(w1)
for d in range(1,6):
	w = np.random.randn()
	w2.append(w)
w_IH = np.array(w1).reshape(7,4)
w_HO = np.array(w2).reshape(5,1)

a = []
b = []
i = 0
"""back-propagation"""
learning_rate = 0.2
#plt.ion()
while True:
	"""forward propagation"""
	z1 = np.dot(data_set,w_IH)
	f1 = 1/(1 + np.exp(-z1))
	f = np.c_[f1,np.ones(17)]
	z2 = np.dot(f,w_HO)
	o = 1/(1 + np.exp(-z2))
	"""compute the error"""
	err = (np.dot(np.transpose(y - o),(y - o)))/(17*2)
	i += 1
	a.append(i)
	b.append(float(err))
	print(err)
	if err < 0.015:
		break
	else:
		w_IH += learning_rate*np.dot(np.transpose(data_set),(np.dot(((y-o)*(o*(1-o))),(np.transpose(np.delete(w_HO,4,axis=0))))*(f1*(1-f1))))
		w_HO += np.array(learning_rate/17*(((y-o)*(o*(1-o))*(f)).sum(axis=0))).reshape(5,1)
elapsed = (time.perf_counter() - start)
print("the program runs for " + str(elapsed) + " seconds")
print(i)
print(w_IH)
print(w_HO)
fig = plt.figure()
ax=fig.add_subplot(1,1,1)
plt.xlabel("generation(end when generation reach " + str(i) + ")")
plt.ylabel("err")
plt.plot(a,b)
plt.title("the err varies with generation(end when err<0.015)")
plt.show()

# ---- verify the trained model on the training set ----
import numpy as np

data = [1,-1,1,1,1,0,1,
0,-1,0,1,1,0,1,
0,-1,1,1,1,0,1,
1,-1,0,1,1,0,1,
1,-1,1,1,1,0,1,
1,0,1,1,0,-1,1,
0,0,1,1,0,-1,1,
0,0,1,0,0,0,1,
0,0,0,1,0,0,1,
1,1,-1,0,-1,-1,1,
-1,1,-1,-1,-1,0,1,
-1,-1,1,-1,-1,-1,1,
1,0,1,0,1,0,1,
-1,0,0,0,1,0,1,
0,0,1,1,0,-1,1,
-1,-1,1,-1,-1,0,1,
0,-1,0,0,0,0,1]
	
w_IH = np.array([3.20862089,-3.81943859,-6.01408043,-3.88638662,
0.71817546,-4.0745906,0.22332237,0.86457833,
-3.39514462,5.15128278,-6.87937574,-7.58202415,
-9.45771449,10.19349352,2.37789088,1.51604257,
3.08741488,-5.21858851,-8.16311227,-7.56774607,
-1.20399196,1.42302572,-3.53150881,-1.97686336,
0.73261353,-0.97218052,2.24491647,3.09080145]).reshape(7,4)
 
w_HO = np.array([-4.44471455,6.79479746,-5.71851347,-5.50830956,-0.84194372]).reshape(5,1)

data_1 = []
for i in range(0,17):
	for j in range(i*7,(i+1)*7):
		data_1.append(data[j])
	data_set = np.array(data_1).reshape(1,7)
	z1 = np.dot(data_set,w_IH)
	f1 = 1/(1 + np.exp(-z1))
	f = np.c_[f1,np.ones(1)]
	z2 = np.dot(f,w_HO)
	o = 1/(1 + np.exp(-z2))
	print(data_set)
	if o <= 0.5:
		print("the predicted value is " + str(float(o)),"\n","This is a bad watermelon")
	else:
		print("the predicted value is "+str(float(o)),"\n","This is a good watermelon")
	data_1 = []
	print()

That's all from me, I'm off ~ ~ ~
