MATLAB Neural Networks: The Perceptron (1)

Note:

A space and a comma "," are both array element separators,

so [1,2] and [1 2] denote the same array.
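A quick way to confirm this (a minimal sketch, assuming nothing beyond base MATLAB):

% Both literals build the same 1-by-2 row vector.
a = [1,2];
b = [1 2];
isequal(a, b)   % returns logical 1 (true)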

>> clear all

>> P=[0 2]

P =

0     2

>> T=[0 1]

T =

0     1

P and T are two matrices:

P: an R×Q matrix whose columns are the Q input vectors of R elements each; here R = 1 and Q = 2. (The older newp calling syntax instead expected an R×2 matrix of minimum and maximum input values, which is what the original "max and min" description refers to.)

T: an S×Q matrix of target vectors, where S is the number of neurons; here S = 1.
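To make the R×Q and S×Q shapes concrete before continuing with the session, here is a small illustrative sketch; the names P2/T2 and the AND-gate targets are my own example, not part of the original post, and newp belongs to the older Neural Network Toolbox API (newer Deep Learning Toolbox releases use perceptron instead):

% Four 2-element input vectors (R = 2, Q = 4) and one neuron (S = 1).
P2 = [0 1 0 1;
      0 0 1 1];
T2 = [0 0 0 1];        % targets: logical AND of the two input elements
net2 = newp(P2, T2);   % single hard-limit neuron with the perceptron learning rule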

>> nte=newp(P,T)

nte =

Neural Network object:

architecture:

numInputs: 1

numLayers: 1

biasConnect: [1]

inputConnect: [1]

layerConnect: [0]

outputConnect: [1]

numOutputs: 1  (read-only)

numInputDelays: 0  (read-only)

numLayerDelays: 0  (read-only)

subobject structures:

inputs: {1×1 cell} of inputs

layers: {1×1 cell} of layers

outputs: {1×1 cell} containing 1 output

biases: {1×1 cell} containing 1 bias

inputWeights: {1×1 cell} containing 1 input weight

layerWeights: {1×1 cell} containing no layer weights

functions:

adaptFcn: 'trains'

divideFcn: (none)

gradientFcn: 'calcgrad'

initFcn: 'initlay'

performFcn: 'mae'

plotFcns: {'plotperform','plottrainstate'}

trainFcn: 'trainc'

parameters:

adaptParam: .passes

divideParam: (none)

gradientParam: (none)

initParam: (none)

performParam: (none)

trainParam: .show, .showWindow, .showCommandLine, .epochs, .goal, .time

weight and bias values:

IW: {1×1 cell} containing 1 input weight matrix

LW: {1×1 cell} containing no layer weight matrices

b: {1×1 cell} containing 1 bias vector

other:

name: ''

userdata: (user information)

Weights:

>> w1=nte.iw{1}

w1 =

0

Bias (threshold):

>> w1=nte.b{1}

w1 =

0

>> nte.iw{1,1}=7

nte =

(MATLAB prints the full Neural Network object summary again; it is identical to the listing above, so it is omitted here.)

>> w1=nte.b{1}

w1 =

0

>> nte.b{1}=8

nte =

(full Neural Network object summary omitted — identical to the listing above)

>> b=nte.b{1}

b =

8

Initialize the network. init resets the weights and biases to their initial values; for a perceptron created with newp these are zeros, so the values 7 and 8 set above are discarded:

>> nte=init(nte)

nte =

(full Neural Network object summary omitted — identical to the listing above)

>> nte.iw{1}

ans =

0

>> nte.b{1}

ans =

0
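Why init returns both values to zero: newp sets up zero-initialization for the perceptron's weight and bias. A hedged way to confirm this on the older network-object API (property names as I recall them; check against your toolbox version):

nte.inputWeights{1,1}.initFcn   % expected: 'initzero'
nte.biases{1}.initFcn           % expected: 'initzero'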

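The session above stops after initialization. As a follow-up sketch (not part of the original post), the same perceptron can be trained on the P and T defined at the top and then simulated; train and sim are standard toolbox calls, but the exact console output depends on the toolbox version, so treat this as an outline rather than a verified transcript:

P = [0 2];                   % two one-element input samples
T = [0 1];                   % target class for each sample
net = newp(P, T);            % perceptron: 1 input element, 1 hardlim neuron
net.trainParam.epochs = 10;  % a handful of passes is enough for this data
net = train(net, P, T);      % adjusts net.IW{1,1} and net.b{1} via learnp
Y = sim(net, P);             % should reproduce T once training converges
w = net.IW{1,1};             % trained weight
b = net.b{1};                % trained bias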

This article was reposted from: 深未来

