Pattern Classification, Chapter 6 Notes

Recall the LMS algorithm mentioned earlier: it learns from input patterns and, through a series of steps, outputs a classifier.

Its rough pipeline is as follows:

 

input patterns → LMS (preselected function Φ) → discriminants → solutions

 

The discriminants produced in the third stage are exactly what is used for classification.

 

However, the LMS algorithm is not efficient on many theoretical problems. One issue that must be considered is that the function Φ used in the second stage is assumed to be known in advance in LMS.

So if you do not know a problem's characteristics in advance, i.e., you cannot pick a suitable function Φ beforehand, the final result will be poor.
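To make "Φ is fixed in advance" concrete, here is a minimal LMS sketch (my own illustration, not from the book: the quadratic basis phi, the learning rate, and the toy two-class data are all assumptions):

```python
import numpy as np

def phi(x):
    """A hand-picked, fixed basis: Phi(x) = (1, x, x^2).
    In LMS this mapping must be chosen before training starts."""
    return np.array([1.0, x, x * x])

def lms_train(xs, ys, eta=0.02, epochs=200):
    """Widrow-Hoff (LMS) rule: w <- w + eta * (y - w.Phi(x)) * Phi(x)."""
    w = np.zeros(3)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            w = w + eta * (y - w @ phi(x)) * phi(x)
    return w

# The resulting discriminant classifies by the sign of w.Phi(x).
xs = np.array([-2.0, -1.0, 1.0, 2.0])
ys = np.array([1.0, 1.0, -1.0, -1.0])   # two classes, labeled +1 / -1
w = lms_train(xs, ys)
print([1 if w @ phi(x) > 0 else -1 for x in xs])
```

Note that phi is hard-coded before training ever starts; LMS only learns the weight vector w.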

Suppose you say: fine, then I will brute-force over the set of all possible polynomials, trying each polynomial with undetermined coefficients as Φ one by one.

 

Two major drawbacks are obvious:

① Brute-force enumeration is itself inefficient, with exponential complexity.

② It is limited by the number of training patterns in the given input (i.e., the given training set; a training set contains multiple patterns):

     if the patterns are too few, there are too many free parameters in the selected Φ (an underdetermined system);

     if the patterns are more than enough, there are too many equations, which leads to inadequate learning (an overdetermined system). (A quick parameter count contrasting the two regimes is sketched below.)
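A quick back-of-the-envelope count shows how fast the number of undetermined coefficients grows when Φ ranges over polynomials (the 10-dimensional input and the 100-pattern training set below are made-up numbers for illustration):

```python
from math import comb

def n_free_params(d, r):
    """Number of monomials of degree <= r in d variables, i.e. the number
    of undetermined coefficients in a candidate polynomial Phi."""
    return comb(d + r, r)

n_patterns = 100               # hypothetical training-set size
for r in range(1, 6):
    p = n_free_params(10, r)   # hypothetical 10-dimensional input
    regime = "underdetermined" if p > n_patterns else "overdetermined"
    print(f"degree {r}: {p} coefficients vs {n_patterns} patterns -> {regime}")
```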

 

A multilayer neural network trained with the BP (backpropagation) algorithm is a natural extension of the two-layer LMS approach.

If you adopt a neural network to solve your problem, you do not need to puzzle over finding a suitable function Φ,

because a suitable function Φ and the discriminants used for classification are learned gradually as the patterns are learned.

 

A simple neural network has at least three layers: one input layer, one hidden layer, and one output layer.

 

There are weights linking nodes in adjacent layers.

 

The outputs of the neural network are the discriminant functions used for classification; of course, the network was already trained during the earlier pattern-learning procedure.

 

Then, if two inputs yield the same winning output, we conclude that the two inputs belong to the same category.
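A minimal forward-pass sketch of such a three-layer network (my own illustration: the layer sizes, the tanh hidden activation, the random weights, and the argmax decision rule are all assumptions, not the book's exact formulation):

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Forward pass of a minimal three-layer net (input -> hidden -> output).
    Hidden units use tanh; the c output values play the role of the
    discriminant functions g_1(x), ..., g_c(x)."""
    hidden = np.tanh(W1 @ x + b1)   # hidden-layer activations
    return W2 @ hidden + b2         # one discriminant value per category

# Assumed sizes for illustration: 4 inputs, 5 hidden units, 3 categories.
rng = np.random.default_rng(0)
d, nH, c = 4, 5, 3
W1, b1 = rng.normal(size=(nH, d)), np.zeros(nH)   # input -> hidden weights
W2, b2 = rng.normal(size=(c, nH)), np.zeros(c)    # hidden -> output weights

x = rng.normal(size=d)
g = forward(x, W1, b1, W2, b2)
print("discriminants:", g, "-> category:", int(np.argmax(g)))
```

In practice W1, b1, W2, b2 would of course have been learned beforehand by BP; the random values here just make the sketch runnable.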

 

Commonly, there are three training protocols for the BP (backpropagation) algorithm: stochastic, batch, and on-line (a sketch contrasting the three follows the list below).

 

① stochastic:

each time, a pattern is chosen at random from the training set, and then the network weights are updated.

 

② batch:

before the weights are updated, all patterns in the training set are presented once.

One complete presentation of all the training patterns is called an epoch.

 

③ on-line:

each pattern is presented once and only once.
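Here is the promised sketch contrasting the three protocols (my own illustration: a single-pattern squared-error gradient for a linear unit stands in for the full backprop gradient, and the toy data are made up):

```python
import numpy as np

def grad_single(w, x, y):
    """Squared-error gradient on one pattern for a linear unit; a stand-in
    for the full backprop gradient of a multilayer network."""
    return (w @ x - y) * x

def stochastic(w, X, Y, eta, n_steps, rng):
    """Stochastic: pick one pattern at random, update the weights at once."""
    for _ in range(n_steps):
        i = rng.integers(len(X))
        w = w - eta * grad_single(w, X[i], Y[i])
    return w

def batch(w, X, Y, eta, n_epochs):
    """Batch: present every pattern once (one epoch), then update with the
    gradient summed over the whole training set."""
    for _ in range(n_epochs):
        w = w - eta * sum(grad_single(w, x, y) for x, y in zip(X, Y))
    return w

def online(w, X, Y, eta):
    """On-line: each pattern is presented once and only once (single pass),
    so the weights are only partially trained afterwards."""
    for x, y in zip(X, Y):
        w = w - eta * grad_single(w, x, y)
    return w

# Toy regression data so all three protocols actually run.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
Y = X @ w_true
w0 = np.zeros(3)
print("stochastic:", stochastic(w0, X, Y, 0.05, 200, rng))
print("batch:     ", batch(w0, X, Y, 0.01, 200))
print("on-line:   ", online(w0, X, Y, 0.05))
```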

 

A problem remains for the stochastic protocol: a weight update may reduce the error on the single pattern being presented,

yet it may increase the error on the full training set.

 

Throughout the algorithm, gradient descent is used to reduce the error.
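Concretely, each weight update takes a small step against the gradient of the criterion function J(w):

w(m + 1) = w(m) - η ∇J(w(m))

where η is the learning rate. Under the stochastic and on-line protocols, J is the error on the single pattern just presented; under the batch protocol, it is the error summed over the whole training set.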

 
