MATLAB Neural Networks: Perceptrons (1)

Note:

In MATLAB, both spaces and commas act as array element separators,

so [1,2] and [1 2] denote the same array.

Below is the single-neuron case:

>> clear all
>> P=[0 2]

P =

     0     2

>> T=[0 1]

T =

     0     1

 

P and T are two matrices:

P: an R×Q matrix of Q representative input vectors (R elements per vector); in the older newp(PR,S) syntax this argument instead holds the minimum and maximum of each of the R inputs

T: an S×Q matrix of Q target vectors, where S is the number of neurons
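These matrices feed newp, but the computation a single perceptron neuron performs is easy to state on its own: a = hardlim(w·p + b), where hardlim outputs 1 for a non-negative net input and 0 otherwise. A minimal sketch in plain Python (the function names are mine, not toolbox API):

```python
def hardlim(n):
    # MATLAB-style hard-limit transfer function: 1 if n >= 0, else 0
    return 1 if n >= 0 else 0

def perceptron_output(w, p, b):
    # Single neuron: a = hardlim(w . p + b)
    net_input = sum(wi * pi for wi, pi in zip(w, p)) + b
    return hardlim(net_input)

# newp starts with zero weight and bias, so every input gives hardlim(0) = 1
print(perceptron_output([0.0], [2.0], 0.0))  # -> 1
```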

 

>> nte=newp(P,T)

nte =

    Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 1
       biasConnect: [1]
      inputConnect: [1]
      layerConnect: [0]
     outputConnect: [1]

        numOutputs: 1  (read-only)
    numInputDelays: 0  (read-only)
    numLayerDelays: 0  (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {1x1 cell} of layers
           outputs: {1x1 cell} containing 1 output
            biases: {1x1 cell} containing 1 bias
      inputWeights: {1x1 cell} containing 1 input weight
      layerWeights: {1x1 cell} containing no layer weights

    functions:

          adaptFcn: 'trains'
         divideFcn: (none)
       gradientFcn: 'calcgrad'
           initFcn: 'initlay'
        performFcn: 'mae'
          plotFcns: {'plotperform','plottrainstate'}
          trainFcn: 'trainc'

    parameters:

        adaptParam: .passes
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: .show, .showWindow, .showCommandLine, .epochs,
                    .goal, .time

    weight and bias values:

                IW: {1x1 cell} containing 1 input weight matrix
                LW: {1x1 cell} containing no layer weight matrices
                 b: {1x1 cell} containing 1 bias vector

    other:

              name: ''
          userdata: (user information)

 

The weight:

>> w1=nte.iw{1}

w1 =

     0

The bias (threshold):

>> w1=nte.b{1}

w1 =

     0

 
>> nte.iw{1,1}=7

nte =

    (full Neural Network object listing printed again; identical to the one above)

>> w1=nte.b{1}

w1 =

     0

>> nte.b{1}=8

nte =

    (full Neural Network object listing printed again; identical to the one above)

>> b=nte.b{1}

b =

     8
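With the values just assigned (weight 7, bias 8), the neuron computes a = hardlim(7p + 8), which is 0 only when 7p + 8 < 0. A quick arithmetic check in Python (a sketch of the math, not toolbox code):

```python
def hardlim(n):
    # 1 for n >= 0, 0 otherwise, as in MATLAB's hardlim
    return 1 if n >= 0 else 0

w, b = 7.0, 8.0  # values set in the session above
for p in (-2.0, -1.0, 0.0, 2.0):
    print(p, hardlim(w * p + b))  # only p = -2.0 yields 0
```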

Initialize the network. For a perceptron, the default initialization sets the weights and bias back to zero, undoing the manual assignments above:

>> nte=init(nte)

nte =

    (full Neural Network object listing printed again; identical to the one above)

>> nte.iw{1}

ans =

     0

>> nte.b{1}

ans =

     0

>>

Using random numbers as the weights and bias:

>> clear all
>> net=newp([0 1;-2 2],1)

net =

    (full Neural Network object listing printed; identical to the one shown earlier)

>> net.inputweights{1,1}.initFcn='rands'

net =

    (full Neural Network object listing printed again; identical to the one shown earlier)

>> net.biases{1}.initFcn='rands'

net =

    (full Neural Network object listing printed again; identical to the one shown earlier)

>> net=init(net)

net =

    (full Neural Network object listing printed again; identical to the one shown earlier)

>> net.iw{1,1}

ans =

    0.8116   -0.7460

>> net.b{1}

ans =

    0.6294

>>
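'rands' draws every weight and bias uniformly from [-1, 1], which is consistent with the values printed above (0.8116, -0.7460, 0.6294). A small sketch of that initialization in Python (the function name is mine, not toolbox API):

```python
import random

def rands_init(n, rng=None):
    # Symmetric random initialization in [-1, 1], in the spirit of 'rands'
    rng = rng or random.Random()
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

w = rands_init(2)      # one weight per input element (R = 2 for [0 1; -2 2])
b = rands_init(1)[0]
print(w, b)            # all values fall in [-1, 1]
```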
