Neural Networks, Part 1

Implementing simple logic operations with a Perceptron and a Multilayer Perceptron. XOR is not linearly separable, so a single perceptron cannot learn it on its own; AND and OR can each be learned by a single perceptron.

1. XOR

a. Method one (using built-in functions):

function [ output_args ] = xor1( input_args )
%XOR1 Summary of this function goes here
%  Detailed explanation goes here
net=network;%create a network
net.numInputs=1; % one input source
net.numLayers=2;           % two layers
net.biasConnect=[1;1];     % give both layers a bias connection
net.outputConnect=[0 1];   % connect layer 2 to the network output
net.targetConnect=[0 1];   % give layer 2 a target connection
net.inputConnect=[1;0];    % input weight connection from input 1 to layer 1
net.layerConnect=[0 0;1 0];% layer weight connection from layer 1 to layer 2
net.inputs{1}.range=[0 1;0 1]; % range of the two input elements
net.layers{1}.size=2;          % number of neurons in the first layer
net.layers{1}.transferFcn='tansig';
net.layers{1}.initFcn='initnw';
net.layers{2}.size=1;
net.layers{2}.transferFcn='tansig';
net.layers{2}.initFcn='initnw';
net.adaptFcn='trains';
net.performFcn='mse'; %mean squared error
net.trainFcn='trainlm';
net.initFcn='initlay';
net=init(net);
p=[0 0 1 1 ;0 1 0 1];
t=[0 1 1 0];
net.trainParam.epochs=800;
net=train(net,p,t);
Y=sim(net,p)
plotpv(p,t); % plot the input vectors with their target classes
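Since both layers use tansig, Y holds continuous values near 0 and 1 rather than exact logic levels. A minimal post-processing sketch, assuming the training above has converged, is to round the simulated outputs back to 0/1:

Yc = round(sim(net,p))  % rounding maps the continuous tansig outputs to 0/1 logic values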

 

b. Method two (hand-written code):

 

function [ output_args ] = xor( input_args )
%XOR Summary of this function goes here
%  Detailed explanation goes here
% Logical XOR training with single-layer perceptrons:
% XOR is split into two linearly separable subproblems, one per perceptron.
x=[-1,-1,-1,-1;0,0,1,1;0,1,0,1]; % input patterns; the first row is a fixed -1 bias input
d1=[-1,1,-1,-1];
d2=[1,1,-1,1];                   % teacher (target) signals for the two perceptrons
w10=[-0.1,0.1,0.1]';
w20=[-0.1,0.1,0.1]';             % initial weights
r1=[1,1,1,1];
r2=[1,1,1,1];                    % output errors
j=0;
while any(r1~=0) || any(r2~=0)
    o1(1)=sign(w10'*x(:,1));
    o2(1)=sign(w20'*x(:,1));
    r1(1)=d1(1)-o1(1);
    r2(1)=d2(1)-o2(1);
    w1(:,1)=w10+0.1*(d1(1)-o1(1))*x(:,1);
    w2(:,1)=w20+0.1*(d2(1)-o2(1))*x(:,1);
    for i=2:4
        o1(i)=sign(w1(:,i-1)'*x(:,i));
        o2(i)=sign(w2(:,i-1)'*x(:,i));
        r1(i)=d1(i)-o1(i);
        r2(i)=d2(i)-o2(i);
        w1(:,i)=w1(:,i-1)+0.1*(d1(i)-o1(i))*x(:,i);
        w2(:,i)=w2(:,i-1)+0.1*(d2(i)-o2(i))*x(:,i);
    end
    w10=w1(:,end);
    w20=w2(:,end);
    j=j+1;                       % count training passes
end
w1,r1,j,w2,r2                    % display final weights, errors, and number of passes
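The two target vectors split XOR into two linearly separable subproblems: with the column order (0,0),(0,1),(1,0),(1,1), d1 is +1 only for (0,1) (roughly NOT x1 AND x2), while d2 is -1 only for (1,0) (roughly NOT (x1 AND NOT x2)). One possible way to read XOR off the two trained perceptrons is sketched below, assuming the weights w10 and w20 have converged to those targets:

% In the +1/-1 coding used above, XOR(x1,x2) is true exactly when o1 = +1 or o2 = -1.
xr=zeros(1,4);
for i=1:4
    o1=sign(w10'*x(:,i));
    o2=sign(w20'*x(:,i));
    xr(i)=(o1==1)|(o2==-1);   % 1 = true, 0 = false
end
xr                            % expected: 0 1 1 0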

 

2. AND logic

a. Using built-in functions:

 

function [ output_args ] = and2( input_args )
%AND2 Summary of this function goes here
%  Detailed explanation goes here
net = newp([0 1; 0 1],1);   % single perceptron, both inputs in [0,1]
P = [0 0 1 1; 0 1 0 1];
T = [0 0 0 1];
net.trainParam.epochs = 30;
net = train(net,P,T);
Y = sim(net,P);
plot(T,'r*');               % compare targets and outputs by plotting both
hold on;
plot(Y,'o');
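As a quick check after training, a minimal sketch (assuming the perceptron has converged): the hardlim outputs of a trained perceptron should reproduce the AND targets exactly.

Y = sim(net,P);
isequal(Y,T)   % returns 1 (true) once all four patterns are classified correctly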
b. Hand-written code:

% Hand-written perceptron training for AND, with a bias weight b so that the
% decision boundary does not have to pass through the origin; training stops
% after one full error-free pass over the four patterns.
T = [0 0 0 1];               % AND targets
P = [0 0 1 1; 0 1 0 1];      % input patterns
w0 = 0.7;                    % weight for the first input
w1 = 0.9;                    % weight for the second input
b  = -0.5;                   % bias weight
lr = 0.7;                    % learning rate
T1 = zeros(1,4);             % outputs of the most recent pass
itr = 0;                     % total number of pattern presentations
tru = 1;
while tru
    cnt = 0;                 % correctly classified patterns in this pass
    for i = 1:4
        itr = itr + 1;
        if (w0*P(1,i) + w1*P(2,i) + b > 0), y = 1; else y = 0; end
        if y ~= T(i)
            % perceptron learning rule
            w0 = w0 + lr*(T(i)-y)*P(1,i);
            w1 = w1 + lr*(T(i)-y)*P(2,i);
            b  = b  + lr*(T(i)-y);
        else
            cnt = cnt + 1;
        end
        T1(i) = y;
    end
    if cnt == 4              % a whole pass without errors: converged
        tru = 0;
        T
        T1
        itr
    end
end
% plotpv(P, T)  % plot the points we need to separate

 

This AND perceptron can also implement OR and other logic operations simply by changing the input and target vectors, as sketched below.
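A minimal sketch, assuming the same newp/train/sim calls as in section 2a: only the target vector changes to obtain OR.

net = newp([0 1; 0 1],1);   % single perceptron, two binary inputs
P = [0 0 1 1; 0 1 0 1];     % same input patterns as for AND
T = [0 1 1 1];              % OR targets instead of AND targets
net.trainParam.epochs = 30;
net = train(net,P,T);
Y = sim(net,P)              % expected to reproduce the OR targets after training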

 

 
