MATLAB multilayer perceptron classifier: a custom multilayer perceptron for XOR in MATLAB's Neural Network Toolbox (part 2)

Continuing from part 1, we define the individual neurons of the network.

net.inputs{i}.range

This property defines the range of each element of the ith network input.

It can be set to any Ri x 2 matrix, where Ri is the number of elements in the input (net.inputs{i}.size), and each element in column 1 is less than the element next to it in column 2.

Each jth row defines the minimum and maximum values of the jth input element, in that order:

net.inputs{i}.range(j,:)

Uses.   Some initialization functions use input ranges to find appropriate initial values for input weight matrices.

Side Effects.   Whenever the number of rows in this property is altered, the input size, processedSize, and processedRange change to remain consistent. The sizes of any weights coming from this input and the dimensions of the weight matrices also change.
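This side effect is easy to observe at the command line. A minimal sketch, assuming the two-layer network object net created in part 1 of this series:

```matlab
% Sketch (assumes the network object "net" built in part 1).
% Giving the range matrix three rows forces the input size to 3,
% and the input weight matrix is resized to match.
net.inputs{1}.range = [0 1; 0 1; 0 1];
disp(net.inputs{1}.size)            % now 3
disp(net.inputWeights{1,1}.size)    % second dimension now 3 as well
net.inputs{1}.range = [0 1; 0 1];   % restore the two-input XOR setup used below
```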

>> net.inputs{1}.range=[0 1;0 1]

net =

Neural Network object:

architecture:

numInputs: 1

numLayers: 2

biasConnect: [1; 1]

inputConnect: [1; 0]

layerConnect: [0 0; 1 0]

outputConnect: [0 1]

numOutputs: 1  (read-only)

numInputDelays: 0  (read-only)

numLayerDelays: 0  (read-only)

subobject structures:

inputs: {1×1 cell} of inputs

layers: {2×1 cell} of layers

outputs: {1×2 cell} containing 1 output

biases: {2×1 cell} containing 2 biases

inputWeights: {2×1 cell} containing 1 input weight

layerWeights: {2×2 cell} containing 1 layer weight

functions:

adaptFcn: (none)

divideFcn: (none)

gradientFcn: (none)

initFcn: (none)

performFcn: (none)

plotFcns: {}

trainFcn: (none)

parameters:

adaptParam: (none)

divideParam: (none)

gradientParam: (none)

initParam: (none)

performParam: (none)

trainParam: (none)

weight and bias values:

IW: {2×1 cell} containing 1 input weight matrix

LW: {2×2 cell} containing 1 layer weight matrix

b: {2×1 cell} containing 2 bias vectors

other:

name: ''

userdata: (user information)

>>

======

net.layers{i}.size

This property defines the number of neurons in the ith layer. It can be set to 0 or a positive integer.

Side Effects.   Whenever this property is altered, the sizes of any input weights going to the layer (net.inputWeights{i,:}.size), any layer weights going to the layer (net.layerWeights{i,:}.size) or coming from the layer (net.layerWeights{:,i}.size), and the layer's bias (net.biases{i}.size), change.

The dimensions of the corresponding weight matrices (net.IW{i,:}, net.LW{i,:}, net.LW{:,i}), and biases (net.b{i}) also change.

Changing this property also changes the size of the layer’s output (net.outputs{i}.size) and target (net.targets{i}.size) if they exist.

Finally, when this property is altered, the dimensions of the layer's neurons (net.layers{i}.dimensions) are set to the same value. (This results in a one-dimensional arrangement of neurons. If another arrangement is required, set the dimensions property directly instead of using size.)
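These side effects can be checked directly. A sketch, assuming the network object net from this walkthrough:

```matlab
% Sketch: after setting the first layer to 2 neurons, the sizes of the
% connected weights and the layer's bias follow automatically.
net.layers{1}.size = 2;
disp(net.inputWeights{1,1}.size)   % input weight to layer 1 resizes (2 rows)
disp(net.layerWeights{2,1}.size)   % layer weight coming from layer 1 resizes (2 columns)
disp(net.biases{1}.size)           % bias of layer 1: 2
```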

=======

>> net.layers{1}.size=2

net = (Neural Network object display unchanged from the listing above; omitted)

>>

=====

net.layers{i}.initFcn

This property defines which layer initialization function is used to initialize the ith layer when the network initialization function (net.initFcn) is initlay. In that case, the function indicated by this property is used to initialize the layer's weights and biases.

For a list of functions, type

help nninit
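Note that a layer's initFcn only takes effect once the network-level initialization delegates to the layers. A minimal sketch:

```matlab
% Sketch: 'initlay' delegates initialization to each layer's initFcn,
% here the Nguyen-Widrow rule 'initnw'.
net.initFcn = 'initlay';
net.layers{1}.initFcn = 'initnw';
net.layers{2}.initFcn = 'initnw';
net = init(net);   % fills net.IW, net.LW and net.b with initial values
```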

=====

>> net.layers{1}.initFcn='initnw'

net = (Neural Network object display unchanged from the listing above; omitted)

>>

>> net.layers{2}.size=1

>> net.layers{2}.initFcn='initnw'

>> net.layers{2}.transferFcn='hardlim'

net = (Neural Network object display unchanged from the listing above; omitted)

>>
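With hardlim as the transfer function, the output layer behaves as a perceptron-style threshold unit: hardlim returns 1 for a net input of zero or more and 0 otherwise, which is what lets the second layer emit the binary XOR label. A quick check:

```matlab
% hardlim thresholds at zero: outputs are 0 or 1
hardlim([-2 -0.1 0 0.5])   % -> 0 0 1 1
```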

This article is reposted from: 深未来
