A Brief Introduction to RBF Neural Networks, with a MATLAB Implementation


An Intuitive Look at RBF Networks

Many articles online already explain the theory of RBF networks in detail, so I will not repeat it here; instead, this post shares a few intuitive observations about them.

1 An RBF network has two layers

y_j = \sum_{i=1}^n w_{ij} \phi(\Vert x - u_i\Vert), \quad j = 1,\dots,p

2 The RBF hidden layer is a nonlinear mapping

The most commonly used activation function for the RBF hidden layer is the Gaussian:
\phi(\Vert x - u\Vert) = e^{-\frac{\Vert x-u\Vert^2}{\sigma^2}}

3 The basic idea of RBF: map the data into a high-dimensional space where it becomes linearly separable

The RBF hidden layer maps the data into another space (usually of higher dimension), on the assumption that there exists some high-dimensional space in which the data become linearly separable; the output layer can therefore be linear. This is exactly the same idea as in kernel methods. An example from my teacher's lecture slides illustrated this point (the figure is not reproduced here).
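In the same spirit, here is a minimal NumPy sketch (hypothetical centers at (0,0) and (1,1), sigma = 1) showing that the four XOR points, which are not linearly separable in the plane, become linearly separable after the Gaussian mapping:

```python
import numpy as np

# The four XOR points: class 1 on the main diagonal, class 0 off it
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])

# Two Gaussian RBF units with hypothetical centers and sigma = 1
centers = np.array([[0., 0.], [1., 1.]])
sigma = 1.0

# phi(||x - u||) = exp(-||x - u||^2 / sigma^2), as defined in the text
dist2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
Phi = np.exp(-dist2 / sigma**2)

# In the 2-D feature space, the sum phi_1 + phi_2 already separates the classes
score = Phi.sum(axis=1)
print(score)  # class-1 points score ~1.135, class-0 points score ~0.736
```

A single threshold on phi_1 + phi_2 (say 0.9) now separates the two classes, which is exactly the kind of decision a linear output layer can learn.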

The RBF Learning Algorithm

1. Use the k-means algorithm to find the center vectors u_i.
2. Use the K-nearest-neighbor (kNN) rule to compute \sigma:
\sigma_i = \sqrt{\frac{1}{K}\sum_{k=1}^K \Vert u_k - u_i\Vert^2}
where the u_k are the K nearest neighbors of u_i (in the code below, the K nearest training points to center i).
3. Solve for the weights W by least squares.
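Step 3 has a closed form. Writing the hidden-layer outputs of all training points as the kernel matrix \Phi, with entries \Phi_{mi} = \phi(\Vert x_m - u_i\Vert), the least-squares problem and its solution are:

```latex
\min_W \lVert \Phi W - y \rVert^2
\quad\Longrightarrow\quad
\Phi^{\top}(\Phi W - y) = 0
\quad\Longrightarrow\quad
W = (\Phi^{\top}\Phi)^{-1}\Phi^{\top} y
```

This is exactly the `W = pinv(k_mat'*k_mat)*k_mat'*label` line in the MATLAB code below.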

Implementing an RBF Network in MATLAB

demo.m trains and tests an RBF network on XOR-like data, walking through the whole pipeline. The last few lines repeat training and prediction using the wrapped functions.

clc;
clear all;
close all;

%% ---- Build a training set of a similar version of XOR
c_1 = [0 0];
c_2 = [1 1];
c_3 = [0 1];
c_4 = [1 0];

n_L1 = 20; % number of label 1
n_L2 = 20; % number of label 2

A = zeros(n_L1*2, 3);
A(:,3) = 1;
B = zeros(n_L2*2, 3);
B(:,3) = 0;

% create random points
for i=1:n_L1
    A(i, 1:2) = c_1 + rand(1,2)/2;
    A(i+n_L1, 1:2) = c_2 + rand(1,2)/2;
end
for i=1:n_L2
    B(i, 1:2) = c_3 + rand(1,2)/2;
    B(i+n_L2, 1:2) = c_4 + rand(1,2)/2;
end

% show points
scatter(A(:,1), A(:,2),[],'r');
hold on
scatter(B(:,1), B(:,2),[],'g');
X = [A;B];
data = X(:,1:2);
label = X(:,3);

%% Using kmeans to find center vectors
n_center_vec = 10;
rng(1);
[idx, C] = kmeans(data, n_center_vec);
hold on
scatter(C(:,1), C(:,2), 'b', 'LineWidth', 2);

%% Calculate sigma
n_data = size(X,1);

% calculate K
K = zeros(n_center_vec, 1);
for i=1:n_center_vec
    K(i) = numel(find(idx == i));
end

% Use knnsearch to find the K nearest neighbors of each center vector,
% then calculate sigma
sigma = zeros(n_center_vec, 1);
for i=1:n_center_vec
    n = knnsearch(data, C(i,:), 'k', K(i));
    L2 = (bsxfun(@minus, data(n,:), C(i,:)).^2);
    L2 = sum(L2(:));
    sigma(i) = sqrt(1/K(i)*L2);
end

%% Calculate weights
% kernel matrix
k_mat = zeros(n_data, n_center_vec);

for i=1:n_center_vec
    r = bsxfun(@minus, data, C(i,:)).^2;
    r = sum(r,2);                        % r is already the squared distance,
    k_mat(:,i) = exp(-r/(2*sigma(i)^2)); % so it must not be squared again
end

W = pinv(k_mat'*k_mat)*k_mat'*label;
y = k_mat*W;
%y(y>=0.5) = 1;
%y(y<0.5) = 0;

%% training function and predict function
[W1, sigma1, C1] = RBF_training(data, label, 10);
y1 = RBF_predict(data, W1, sigma1, C1);
[W2, sigma2, C2] = lazyRBF_training(data, label, 2);
y2 = RBF_predict(data, W2, sigma2, C2);


RBF_training.m wraps the training procedure from demo.m into a function.

function [ W, sigma, C ] = RBF_training( data, label, n_center_vec )
%RBF_TRAINING Train an RBF network: k-means centers, kNN-based sigmas,
%   and least-squares output weights.

% Using kmeans to find center vectors
rng(1);
[idx, C] = kmeans(data, n_center_vec);

% Calculate sigma
n_data = size(data,1);

% calculate K
K = zeros(n_center_vec, 1);
for i=1:n_center_vec
    K(i) = numel(find(idx == i));
end

% Use knnsearch to find the K nearest neighbors of each center vector,
% then calculate sigma
sigma = zeros(n_center_vec, 1);
for i=1:n_center_vec
    n = knnsearch(data, C(i,:), 'k', K(i));
    L2 = (bsxfun(@minus, data(n,:), C(i,:)).^2);
    L2 = sum(L2(:));
    sigma(i) = sqrt(1/K(i)*L2);
end
% Calculate weights
% kernel matrix
k_mat = zeros(n_data, n_center_vec);

for i=1:n_center_vec
    r = bsxfun(@minus, data, C(i,:)).^2;
    r = sum(r,2);                        % r is already the squared distance,
    k_mat(:,i) = exp(-r/(2*sigma(i)^2)); % so it must not be squared again
end

W = pinv(k_mat'*k_mat)*k_mat'*label;
end



RBF_lazytraning.m implements the lazy RBF: the center vectors are simply the training points themselves, and the kernel matrix is built from them. Since \Phi is guaranteed to be invertible (the Gaussian kernel matrix of distinct points is positive definite), the fast '\' operator can be used instead of computing an explicit inverse.

function [ W, sigma, C ] = lazyRBF_training( data, label, sigma )
%LAZYRBF_TRAINING Lazy RBF training: use every training point as a center
%   and solve the square interpolation system for the weights.
if nargin < 3
    sigma = 1;
end

n_data = size(data,1);
C = data;

% make kernel matrix
k_mat = zeros(n_data);
for i=1:n_data
    L2 = sum((data - repmat(data(i,:), n_data, 1)).^2, 2);
    k_mat(i,:) = exp(-L2'/(2*sigma^2)); % note the minus sign and sigma^2
end

W = k_mat\label;
end
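As a quick sanity check on the lazy scheme, here is a small NumPy sketch (hypothetical grid data and sigma = 0.7) confirming that when every training point is a center, solving \Phi W = y interpolates the training labels exactly:

```python
import numpy as np

# A tiny hypothetical training set: a 3x3 grid with XOR-style labels
data = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
label = np.array([(i + j) % 2 for i in range(3) for j in range(3)], dtype=float)

sigma = 0.7  # hypothetical width, chosen so Phi is well conditioned

# Lazy RBF: every training point is a center, so Phi is square (9 x 9)
d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(axis=2)
Phi = np.exp(-d2 / (2 * sigma**2))

# Solve Phi @ w = label directly (the role of MATLAB's '\' operator)
w = np.linalg.solve(Phi, label)

# On the training set the interpolation is exact, up to rounding error
residual = np.abs(Phi @ w - label).max()
```

Because the Gaussian kernel matrix of distinct points is positive definite, `np.linalg.solve` (like MATLAB's `\`) succeeds without forming an explicit inverse; new points are then evaluated against all training points as centers.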



RBF_predict.m performs prediction.

function [ y ] = RBF_predict( data, W, sigma, C )
%RBF_PREDICT Evaluate the RBF network outputs for the given data.
n_data = size(data, 1);
n_center_vec = size(C, 1);
if numel(sigma) == 1
    sigma = repmat(sigma, n_center_vec, 1);
end

% kernel matrix
k_mat = zeros(n_data, n_center_vec);
for i=1:n_center_vec
    r = bsxfun(@minus, data, C(i,:)).^2;
    r = sum(r,2);                        % r is already the squared distance,
    k_mat(:,i) = exp(-r/(2*sigma(i)^2)); % so it must not be squared again
end

y = k_mat*W;
end

