Implementing an RBF Neural Network with TensorFlow

This article walks through building an RBF (Radial Basis Function) neural network with TensorFlow, including defining the hidden layer, computing the radial basis activations, and applying softmax at the output layer. A worked example shows the RBF network classifying data and verifies its accuracy.

This article implements an RBF neural network with TensorFlow.
Contents:
1. Steps to implement an RBF neural network
2. RBF network classification with TensorFlow

**1. Steps to implement an RBF neural network**
(1) Set the number of hidden neurons to hidden = 20 (the count is chosen arbitrarily) and pick a center point center for each neuron. To choose the centers, split the gap between each feature's maximum max and minimum min over the input samples x into hidden equal steps of size (max - min)/hidden, so the i-th center sits at min + i * (max - min)/hidden. For example, with hidden = 5 and a feature spanning 0 to 10, the centers fall at 0, 2, 4, 6, and 8:

# Find the per-feature maximum and minimum of x
t_max = np.max(x, axis=0)
t_min = np.min(x, axis=0)
# Split each feature's range into self.hidden equal steps to spread the centers evenly
center = []
for i in range(self.hidden):
    center.append(i * ((t_max - t_min)/self.hidden) + t_min)
center = np.array(center)

(2) Compute the hidden-layer values with the radial basis function.
(3) Compute the output-layer values with softmax (in the code below the softmax is folded into the cross-entropy loss); a NumPy sketch of both steps follows.
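
To make steps (2) and (3) concrete, here is a minimal NumPy sketch of the full forward pass; the function rbf_forward and its parameter names are illustrative, not identifiers from the TensorFlow code below:

import numpy as np

def rbf_forward(x, centers, beta, weight, bias):
    # Hidden layer: exp(-beta * ||x - c||^2) for every sample/center pair
    diff = x[:, None, :] - centers[None, :, :]   # (n_samples, hidden, n_features)
    hidden = np.exp(-beta * np.sum(diff ** 2, axis=2))
    # Output layer: linear scores, then a numerically stable softmax over classes
    logits = hidden @ weight + bias
    exp_scores = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp_scores / exp_scores.sum(axis=1, keepdims=True)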

**2. RBF network classification with TensorFlow**

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder
# I installed TensorFlow 2.x; the line below prevents errors when running 1.x-style code under 2.x
tf.compat.v1.disable_eager_execution()


class RbfClassification:
    def __init__(self, learning_rate=0.01, hidden=10):
        # learning rate
        self.lr = learning_rate
        # number of hidden neurons
        self.hidden = hidden

    def center_mat(self, x):
        # Find the per-feature maximum and minimum of x
        t_max = np.max(x, axis=0)
        t_min = np.min(x, axis=0)
        # Split each feature's range into self.hidden equal steps to spread the centers evenly
        center = []
        for i in range(self.hidden):
            center.append(i * ((t_max - t_min)/self.hidden) + t_min)
        center = np.array(center)

        # Compute ||x - c||^2 for every sample/center pair and store the results in mat
        mat = []
        for i in x:
            temp = []
            for j in center - i:
                temp.append(np.dot(j, j.T))
            mat.append(temp)
        return np.array(mat)

    def run(self, x, y):
        # 1. Define the inputs, labels, and distance matrix as placeholders
        self.x = tf.compat.v1.placeholder(tf.float32, [None, x.shape[1]])
        self.y = tf.compat.v1.placeholder(tf.float32, [None, y.shape[1]])
        self.mat = tf.compat.v1.placeholder(tf.float32, [None, self.hidden])

        # 2. Build the input-to-hidden layer
        beta = tf.Variable(tf.random.normal([1, self.hidden]))  # radial basis function: e^(-beta * ||x-c||^2)
        L1 = tf.math.exp(-beta * self.mat)  # hidden-layer values via the radial basis function

        # 3. Build the hidden-to-output layer
        weight = tf.Variable(tf.random.normal([self.hidden, self.y.shape[1]]))  # weights
        bias = tf.Variable(tf.random.normal([1, self.y.shape[1]]))  # bias
        output = tf.matmul(L1, weight) + bias  # output logits

        # 4. Loss function: softmax cross-entropy between the labels and the logits
        loss = tf.reduce_mean(tf.compat.v1.nn.softmax_cross_entropy_with_logits(labels=self.y, logits=output))

        # 5. Gradient descent training step
        train_step = tf.compat.v1.train.GradientDescentOptimizer(self.lr).minimize(loss)

        with tf.compat.v1.Session() as sess:
            # Initialize the variables
            sess.run(tf.compat.v1.global_variables_initializer())
            # Compute the squared-distance matrix once and store it in m1
            m1 = self.center_mat(x)
            for step in range(20):
                sess.run(train_step, feed_dict={self.x: x, self.y: y, self.mat: m1})
                result = sess.run([loss, output], feed_dict={self.x: x, self.y: y, self.mat: m1})
                # print the loss at each step
                print(f"step:{step}  loss:{result[0]}")
            # after the 20 training steps, print the final output values
            print(f"output:{result[1]}")

            # compute the accuracy: fraction of samples whose predicted class matches the label
            acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(self.y, 1), tf.argmax(result[1], 1)), tf.float32))
            re = sess.run(acc, feed_dict={self.y: y})
            print(f"accuracy: {re}")

Validate the program with data

def main():
    # Generate the data: two Gaussian clusters of 20 points each, centered near (5, 0) and (6, 1)
    x = []
    y = []
    for i in range(2):
        for _ in range(20):
            a = np.random.normal(i + 5, 0.2)
            b = np.random.normal(i, 0.2)
            x.append([a, b])
            y.append([i])

    x = np.array(x)
    # One-hot encode y and convert the result to a dense array
    oht = OneHotEncoder()
    y = oht.fit_transform(y).toarray()
	
    # Run the RBF classifier defined above
    lc = RbfClassification(0.2, 20)
    lc.run(x, y)


if __name__ == "__main__":
    main()
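
matplotlib is imported at the top but never used. As an optional sanity check (an addition, assuming x built as in main() above), the two generated clusters can be plotted before training:

# Optional visualization: the first 20 rows of x are class 0 (centered near
# (5, 0)), the last 20 rows are class 1 (centered near (6, 1)).
plt.scatter(x[:20, 0], x[:20, 1], label="class 0")
plt.scatter(x[20:, 0], x[20:, 1], label="class 1")
plt.legend()
plt.show()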

Program output:

step:0  loss:0.009847460314631462
step:1  loss:0.0063354759477078915
step:2  loss:0.004663033410906792
step:3  loss:0.00367681123316288
step:4  loss:0.003026006743311882
step:5  loss:0.002564833965152502
step:6  loss:0.0022214394994080067
step:7  loss:0.001956184161826968
step:8  loss:0.0017454007174819708
step:9  loss:0.0015740934759378433
step:10  loss:0.0014322480419650674
step:11  loss:0.0013130262959748507
step:12  loss:0.0012114696437492967
step:13  loss:0.0011240066960453987
step:14  loss:0.0010479569900780916
step:15  loss:0.0009812603238970041
step:16  loss:0.0009223271044902503
step:17  loss:0.000869911746121943
step:18  loss:0.0008229954401031137
step:19  loss:0.0007807918009348214
output:[[ -5.6905594  -23.674717  ]
 [ -8.617398   -38.798042  ]
 [ -4.1444187  -15.849449  ]
 [ -3.6620772  -13.654512  ]
 [ -8.246327   -37.815224  ]
 [ -4.8025513  -19.621326  ]
 [ -4.6801014  -19.308613  ]
 [ -4.043401   -12.66901   ]
 [ -3.275519   -11.271512  ]
 [ -8.364632   -36.719036  ]
 [ -3.449597   -12.38932   ]
 [ -4.537184   -18.557596  ]
 [ -3.8763468  -14.292524  ]
 [ -4.3595047  -16.981241  ]
 [ -4.385331   -17.0463    ]
 [ -3.290766   -11.16351   ]
 [ -6.2468643  -27.3551    ]
 [ -4.4771547  -18.061419  ]
 [ -7.7329607  -33.897655  ]
 [ -4.5932465  -18.70145   ]
 [ -4.0903406    2.3125944 ]
 [-33.720146     8.19898   ]
 [ -6.0941734    2.4650843 ]
 [ -7.911108     8.0174885 ]
 [-14.664846     9.801966  ]
 [ -3.646073     1.3106902 ]
 [ -3.3612907    0.48594096]
 [-26.83491      9.89762   ]
 [ -5.46649      5.226462  ]
 [-17.203758    11.449418  ]
 [ -5.746872     5.3967757 ]
 [ -7.02769      6.4736304 ]
 [ -4.9688196    4.3449683 ]
 [ -8.87957      8.680702  ]
 [ -9.566407     9.235068  ]
 [-40.259476     6.399553  ]
 [ -9.986042     7.9442725 ]
 [ -7.6550283    7.5912905 ]
 [ -5.986595     5.7204523 ]
 [-11.090441     9.66671   ]]
accuracy: 1.0