MATLAB Neural Networks: The Perceptron (3)

In the field of artificial neural networks, "perceptron" also refers to a single-layer artificial neural network, as distinguished from the more complex multilayer perceptron (MLP). As a linear classifier, the (single-layer) perceptron is arguably the simplest form of feed-forward artificial neural network. Despite its simple structure, the perceptron can learn to solve fairly complex problems. Its main inherent limitation is that it cannot handle problems that are not linearly separable.

 

A perceptron is a feed-forward artificial neural network whose inputs are represented as feature vectors. It is a binary classifier that maps an input x (a real-valued vector) to an output value f(x) (a single binary value):

f(x) = \begin{cases} 1 & \text{if } w \cdot x + b > 0 \\ 0 & \text{otherwise} \end{cases}

Here w is the vector of real-valued weights and w \cdot x is their dot product. b is the bias: a constant that does not depend on any input value. The bias can be thought of as an offset of the activation function, or as giving the neuron a baseline level of activity.

The value f(x) (0 or 1) classifies x as either a positive or a negative instance; this is a binary classification problem. If b is negative, the weighted sum of the inputs must produce a positive value greater than -b in order to push the classifying neuron over the 0 threshold. Spatially, the bias shifts the position of the decision boundary (though not its orientation).

Because the inputs are transformed directly into the output through the weights, the perceptron can be regarded as the simplest form of feed-forward artificial neural network.
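The decision rule above can be sketched in a few lines of NumPy (the weights and bias here are arbitrary illustrative values, not ones from the MATLAB session below):

```python
import numpy as np

def perceptron(x, w, b):
    """Hard-threshold perceptron: returns 1 if w.x + b > 0, else 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

# illustrative hand-picked parameters
w = np.array([2.0, -1.0])
b = -0.5

print(perceptron(np.array([1.0, 1.0]), w, b))  # 2 - 1 - 0.5 = 0.5 > 0, so 1
print(perceptron(np.array([0.0, 1.0]), w, b))  # -1 - 0.5 < 0, so 0
```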

 

>> P=[0 1 0 1 1;1 1 1 0 0]

P =

     0     1     0     1     1
     1     1     1     0     0

>>
>> T=[0 1 0 0 0]

T =

     0     1     0     0     0

>> net = newp(minmax(P),1)


net =

    Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 1
       biasConnect: [1]
      inputConnect: [1]
      layerConnect: [0]
     outputConnect: [1]

        numOutputs: 1  (read-only)
    numInputDelays: 0  (read-only)
    numLayerDelays: 0  (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {1x1 cell} of layers
           outputs: {1x1 cell} containing 1 output
            biases: {1x1 cell} containing 1 bias
      inputWeights: {1x1 cell} containing 1 input weight
      layerWeights: {1x1 cell} containing no layer weights

    functions:

          adaptFcn: 'trains'
         divideFcn: (none)
       gradientFcn: 'calcgrad'
           initFcn: 'initlay'
        performFcn: 'mae'
          plotFcns: {'plotperform','plottrainstate'}
          trainFcn: 'trainc'

    parameters:

        adaptParam: .passes
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: .show, .showWindow, .showCommandLine, .epochs,
                    .goal, .time

    weight and bias values:

                IW: {1x1 cell} containing 1 input weight matrix
                LW: {1x1 cell} containing no layer weight matrices
                 b: {1x1 cell} containing 1 bias vector

    other:

              name: ''
          userdata: (user information)

>> net.iw{1,1}

ans =

     0     0

 
>> net.iw{1,1}=[1 1]

net =

    Neural Network object:

    (the full object summary is echoed again; it is identical to the listing above and is omitted here)

>> net.b{1}

ans =

     0

>> net.b{1}=-2

net =

    Neural Network object:

    (the full object summary is echoed again; it is identical to the listing above and is omitted here)

The task of this perceptron is to compute the AND operation: the output is 1 only when both input elements are 1.
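With the hand-set weights [1 1] and bias -2, the AND behavior can be checked in a NumPy sketch. One detail worth noting: MATLAB's hardlim transfer function outputs 1 when the net input is greater than or equal to 0, which is why sim(net,[1;1]) returns 1 even though 1 + 1 - 2 is exactly 0:

```python
import numpy as np

def hardlim(n):
    # MATLAB's hardlim transfer function: 1 for n >= 0, else 0
    return int(n >= 0)

w = np.array([1.0, 1.0])   # weights set by hand, as net.iw{1,1} = [1 1]
b = -2.0                   # bias set by hand, as net.b{1} = -2

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    y = hardlim(np.dot(w, np.array(x)) + b)
    print(x, '->', y)
# only [1, 1] yields 1: 1 + 1 - 2 = 0, and hardlim(0) = 1
```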

sim is the simulation function: it runs the network on given inputs, which you can think of as testing it.
>> sim(net,[0;1])

ans =

     0

>> sim(net,[1;1])

ans =

     1

>> sim(net,[1;0])

ans =

     0

>> y=sim(net,[1;0])

y =

     0

mae computes the mean absolute error, and e is the error matrix. Because we set the correct weights by hand, there is no error. t is the target (correct) output and y is the perceptron's actual output.

>> y=sim(net,[1 0 1;0 1 1])

y =

     0     0     1
>> t=[0 0 1]

t =

     0     0     1

>> e=t-y

e =

     0     0     0

>> perf=mae(3)

perf =

     3

>> perf=mae(e)

perf =

     0

>>
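In NumPy terms, the error and mean-absolute-error computation in the transcript above amounts to (a sketch of what MATLAB's mae(e) computes):

```python
import numpy as np

t = np.array([0, 0, 1])       # target outputs
y = np.array([0, 0, 1])       # perceptron outputs returned by sim
e = t - y                     # error matrix
perf = np.mean(np.abs(e))     # mean absolute error, as in mae(e)
print(perf)                   # 0.0 -- no error with the correct weights
```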

 

Note the input format above: 1;0 is one sample, 0;1 is another, and 1;1 is the last, for three samples in total, as the example below shows.

ans(1,1) and ans(2,1) together form one input sample.


>> [1 0 1;0 1 1]

ans =

     1     0     1
     0     1     1
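Because the samples are stored column-wise, a whole batch can be evaluated with a single matrix product, which is essentially what sim does internally (a NumPy sketch using the hand-set weights):

```python
import numpy as np

w = np.array([1.0, 1.0])
b = -2.0

# each COLUMN of P is one sample, matching the MATLAB input format
P = np.array([[1, 0, 1],
              [0, 1, 1]])

n = w @ P + b               # net input for all three samples at once
y = (n >= 0).astype(int)    # hardlim applied element-wise
print(y)                    # [0 0 1] -- only the third column is [1;1]
```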

 

Now let's change the weights to incorrect values and look at the resulting error matrix and mean absolute error (the assignment itself is not shown in this transcript):

>> y=sim(net,[1 0 1;0 1 1])

y =

     0     0     0

>> e=t-y

e =

     0     0     1

>> perf=mae(e)

perf =

    0.3333

>> t

t =

     0     0     1

>>
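Rather than leaving the wrong weights in place, training (what MATLAB's train/adapt does for a perceptron) would correct them automatically with the perceptron learning rule: w = w + e·pᵀ and b = b + e for each sample. A hedged NumPy sketch of that rule, using the P and T defined at the start of the session:

```python
import numpy as np

P = np.array([[0, 1, 0, 1, 1],
              [1, 1, 1, 0, 0]], dtype=float)   # samples as columns
T = np.array([0, 1, 0, 0, 0], dtype=float)     # targets from the post

w = np.zeros(2)
b = 0.0
for epoch in range(10):                        # a few passes suffice here
    for i in range(P.shape[1]):
        x, t = P[:, i], T[i]
        y = 1.0 if w @ x + b >= 0 else 0.0     # hardlim output
        e = t - y
        w += e * x                             # perceptron rule: w = w + e*p'
        b += e                                 # and b = b + e
print(w, b)
```

Because this problem is linearly separable, the rule is guaranteed to converge to weights that classify every sample correctly.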
