MATLAB Neural Networks: Linear Networks (2)

We use a linear network to recognize the digits 6 and 8; each digit is represented as a 7x7 dot matrix.

% 6 and 8 represented as 7x7 dot matrices (1 = pixel on)
% two training samples for 8
p1=[0 1 1 1 1 1 0
    0 1 0 0 0 1 0 
    0 1 0 0 0 1 0
    0 1 1 1 1 1 0
    0 1 0 0 0 1 0
    0 1 0 0 0 1 0
    0 1 1 1 1 1 0];
p2=[0 0 1 1 0 0 0
    0 1 0 0 0 1 0 
    0 1 0 0 1 0 0
    0 0 1 1 0 0 0
    0 0 1 0 1 0 0
    0 1 0 0 1 0 0
    0 0 1 1 0 0 0];
% two training samples for 6
p3=[0 1 1 1 1 0 0
    0 1 0 0 0 0 0 
    0 1 0 0 0 0 0
    0 1 1 1 1 0 0
    0 1 0 0 1 0 0
    0 1 0 0 1 0 0
    0 1 1 1 1 0 0];
p4=[0 0 0 0 1 0 0
    0 0 0 1 0 0 0 
    0 0 1 0 0 0 0
    0 1 1 1 1 0 0
    0 1 0 0 0 1 0
    0 1 0 0 1 0 0
    0 0 1 1 0 0 0];
% a 6 to be recognized
p61=[0 0 1 0 0 0 0
     0 1 0 0 0 0 0
     0 1 0 0 0 0 0
     0 1 0 1 1 0 0
     0 1 1 0 0 1 0
     0 1 0 0 0 1 0
     0 0 1 1 1 0 0];
% two 8s to be recognized
p81=[0 1 1 1 0 0 0
     0 1 0 0 0 1 0 
     0 1 0 0 1 0 0
     0 0 1 1 0 0 0
     0 0 1 1 0 0 0
     0 1 0 0 1 0 0
     0 0 1 1 1 0 0];
p82=[0 0 1 1 1 0 0
     0 1 0 0 0 1 0 
     0 1 0 0 0 1 0
     0 0 1 1 1 0 0
     0 1 0 0 0 1 0
     0 1 0 0 0 1 0
     0 0 1 1 1 0 0];
[m,n]=size(p1);
p1=reshape(p1',1,m*n);
p2=reshape(p2',1,m*n);
p3=reshape(p3',1,m*n);
p4=reshape(p4',1,m*n);
p6=reshape(p61',1,m*n);
p81=reshape(p81',1,m*n);
p82=reshape(p82',1,m*n);
% form the input matrix: two flattened samples each of 8 and 6 (one pattern per column)
P=[p1;
   p2;
   p3;
   p4]'
T=[8 8 6 6]
net = newlin(P,T,[0 1],0.01);
net.trainParam.epochs = 600;
% stop training once the error goal of 1e-5 is reached
net.trainParam.goal = 1e-5;
net = train(net,P,T);
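For readers without the Neural Network Toolbox: ignoring the `[0 1]` input delays (which add nothing for static patterns), the single purelin neuron that `train` converges toward is just a least-squares solution of `[W b]*[P; ones] = T`. A NumPy sketch of the same fit, with the p1-p4 patterns re-entered as bit strings (row by row, matching the `reshape(p',1,m*n)` flattening above):

```python
import numpy as np

# p1-p4 as 7x7 bit strings, one group per matrix row, copied from the matrices above
p1 = "0111110 0100010 0100010 0111110 0100010 0100010 0111110"  # 8
p2 = "0011000 0100010 0100100 0011000 0010100 0100100 0011000"  # 8
p3 = "0111100 0100000 0100000 0111100 0100100 0100100 0111100"  # 6
p4 = "0000100 0001000 0010000 0111100 0100010 0100100 0011000"  # 6

def vec(s):
    """Flatten a bit-string pattern row by row into a length-49 float vector."""
    return np.array([int(c) for row in s.split() for c in row], dtype=float)

P = np.stack([vec(p1), vec(p2), vec(p3), vec(p4)])  # 4 samples x 49 pixels
T = np.array([8.0, 8.0, 6.0, 6.0])

# append a bias column of ones and solve for [W b] in the least-squares sense
A = np.hstack([P, np.ones((4, 1))])
wb, *_ = np.linalg.lstsq(A, T, rcond=None)

y = A @ wb
print(np.round(y))  # -> [8. 8. 6. 6.]
```

With 4 samples and 50 free parameters the system is underdetermined, so the fit on the training patterns is exact; how well it generalizes to new patterns like p61, p81, and p82 depends on how close they are to the samples.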

The recognition results are good:


>> Y = round(sim(net,p81'))

Y =

     8

>> Y = round(sim(net,p82'))

Y =

     8

>> Y = round(sim(net,p6'))

Y =

     6

>> Y = round(sim(net,p4'))

Y =

     6

>> Y = round(sim(net,p2'))

Y =

     8

>> Y = round(sim(net,p3'))

Y =

     6

>> Y = round(sim(net,p1'))

Y =

     8

>>

p61 is the 6 to be recognized, p81 and p82 are the 8s to be recognized, and p1-p4 are the training samples.

 help newlin
 NEWLIN Create a linear layer.
 
   Syntax
 
     net = newlin(P,S,ID,LR)
     net = newlin(P,T,ID,LR)
 
   Description
 
     Linear layers are often used as adaptive filters
     for signal processing and prediction.
 
     NEWLIN(P,S,ID,LR) takes these arguments,
       P  - RxQ matrix of Q representative input vectors.
       S  - Number of elements in the output vector.
       ID - Input delay vector, default = [0].
       LR - Learning rate, default = 0.01;
     and returns a new linear layer.
 
     NEWLIN(P,T,ID,LR) takes the same arguments except for
       T - SxQ2 matrix of Q2 representative S-element output vectors.
 
     NET = NEWLIN(PR,S,0,P) takes an alternate argument,
       P  - Matrix of input vectors.
     and returns a linear layer with the maximum stable
     learning rate for learning with inputs P.
 
   Examples
 
     This code creates a single input, single neuron linear layer,
     with input delays of 0 and 1, and a learning rate of 0.01.  It is simulated
     for the input sequence P1.
 
       P1 = {0 -1 1 1 0 -1 1 0 0 1};
       T1 = {0 -1 0 2 1 -1 0 1 0 1};
 
       net = newlin(P1,T1,[0 1],0.01);
       Y = sim(net,P1)
 
     Here the network adapts for inputs P1 and targets T1.
 
       [net,Y,E,Pf] = adapt(net,P1,T1); Y
 
     Here the linear layer continues to adapt for a new sequence
     using the previous final conditions PF as initial conditions.
 
       P2 = {1 0 -1 -1 1 1 1 0 -1};
       T2 = {2 1 -1 -2 0 2 2 1 0};
       [net,Y,E,Pf] = adapt(net,P2,T2,Pf); Y
 
     Here we initialize the layer's weights and biases to new values.
 
       net = init(net);
 
     Here we train the newly initialized layer on the entire sequence
     for 200 epochs to an error goal of 0.1.
 
       P3 = [P1 P2];
       T3 = [T1 T2];
       net.trainParam.epochs = 200;
       net.trainParam.goal = 0.1;
       net = train(net,P3,T3);
       Y = sim(net,[P1 P2])
 
   Algorithm
 
     Linear layers consist of a single layer with the DOTPROD
     weight function, NETSUM net input function, and PURELIN
     transfer function.
 
     The layer has a weight from the input and a bias.
 
     Weights and biases are initialized with INITZERO.
 
     Adaption and training are done with TRAINS and TRAINB,
     which both update weight and bias values with LEARNWH.
     Performance is measured with MSE.
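The LEARNWH rule mentioned above is the Widrow-Hoff (LMS) update: dw = lr*e*p, db = lr*e, with e = t - (w*p + b). A minimal NumPy sketch on a made-up 1-D problem (the target line t = 2p - 1 is an assumption chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy noise-free data: the neuron should learn t = 2*p - 1
P = rng.uniform(-1.0, 1.0, 100)
T = 2.0 * P - 1.0

w, b, lr = 0.0, 0.0, 0.1          # zero initialization, as INITZERO does
for epoch in range(200):
    for p, t in zip(P, T):
        e = t - (w * p + b)       # error of the purelin output
        w += lr * e * p           # Widrow-Hoff: dw = lr * e * p'
        b += lr * e               # db = lr * e

print(w, b)  # converges to about 2 and -1
```

Because the data are noise-free and the learning rate is well below the stability limit, the incremental updates drive the error to essentially zero.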

help newlind
 NEWLIND Design a linear layer.
 
   Syntax
 
     net = newlind(P,T,Pi)
 
   Description
 
     NEWLIND(P,T,Pi) takes these input arguments,
       P  - RxQ matrix of Q input vectors.
       T  - SxQ matrix of Q target class vectors.
       Pi - 1xID cell array of initial input delay states,
            each element Pi{i,k} is an RixQ matrix, default = [].
     and returns a linear layer designed to output T
     (with minimum sum square error) given input P.
 
     NEWLIND(P,T,Pi) can also solve for linear networks with input delays and
     multiple inputs and layers by supplying input and target data in cell
     array form:
       P  - NixTS cell array, each element P{i,ts} is an RixQ input matrix.
       T  - NtxTS cell array, each element T{i,ts} is a VixQ matrix.
       Pi - NixID cell array, each element Pi{i,k} is an RixQ matrix, default = [].
     returns a linear network with ID input delays, Ni network inputs, Nl layers,
     and designed to output T (with minimum sum square error) given input P.
 
   Examples
 
     We would like a linear layer that outputs T given P
     for the following definitions.
 
       P = [1 2 3];
       T = [2.0 4.1 5.9];
 
     Here we use NEWLIND to design such a linear network that minimizes
     the sum squared error between its output Y and T.
 
       net = newlind(P,T);
       Y = sim(net,P)
 
     We would like another linear layer that outputs the sequence T
     given the sequence P and two initial input delay states Pi.
 
       P = {1 2 1 3 3 2};
       Pi = {1 3};
       T = {5.0 6.1 4.0 6.0 6.9 8.0};
       net = newlind(P,T,Pi);
       Y = sim(net,P,Pi)
 
     We would like a linear network with two outputs Y1 and Y2, that generate
     sequences T1 and T2, given the sequences P1 and P2 with 3 initial input
     delay states Pi1 for input 1, and 3 initial delays states Pi2 for input 2.
 
       P1 = {1 2 1 3 3 2}; Pi1 = {1 3 0};
       P2 = {1 2 1 1 2 1}; Pi2 = {2 1 2};
       T1 = {5.0 6.1 4.0 6.0 6.9 8.0};
       T2 = {11.0 12.1 10.1 10.9 13.0 13.0};
       net = newlind([P1; P2],[T1; T2],[Pi1; Pi2]);
       Y = sim(net,[P1; P2],[Pi1; Pi2]);
       Y1 = Y(1,:)
       Y2 = Y(2,:)
 
   Algorithm
 
     NEWLIND calculates weight W and bias B values for a
     linear layer from inputs P and targets T by solving
     this linear equation in the least squares sense:
    
       [W b] * [P; ones] = T
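Using the first example above (P = [1 2 3], T = [2.0 4.1 5.9]), that design equation is a single least-squares solve; a NumPy sketch of what NEWLIND computes:

```python
import numpy as np

P = np.array([1.0, 2.0, 3.0])
T = np.array([2.0, 4.1, 5.9])

# solve [W b] * [P; ones] = T in the least-squares sense
A = np.vstack([P, np.ones_like(P)]).T   # rows are [p_i, 1]
(W, b), *_ = np.linalg.lstsq(A, T, rcond=None)

Y = W * P + b
print(W, b)  # about 1.95 and 0.1
```

Unlike `newlin` + `train`, there is no iteration here: the weight and bias come straight from the normal equations.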
