[Computational Intelligence] Fuzzy Classification and Clustering


The choice of a pattern recognition method depends on the characteristics of the objects and the nature of the problem. Fuzzy classification uses the language of fuzzy mathematics to describe and classify objects according to rules or membership degrees.


A non-parametric classification method: the degree to which a feature belongs to a given class is expressed by a class membership degree.


It is based on empirical rules and has no self-learning capability.

Fuzzy classification is simple and practical, but the limitations of the method restrict its application, mainly in that:

• the information available for designing classification rules is rather limited, e.g. the number, width and position of the class membership functions;

• when the number of features is large or the data are high-dimensional, fuzzy classification is no longer applicable;

• fuzzy classification is not suited to situations where the learning rate varies;

• fuzzy classification has no adaptive capability.

Rule-based classification

A membership value is assigned to each feature:

Relationship to decision trees:
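As a minimal sketch of the idea (the features, fuzzy sets and rules below are invented purely for illustration), each feature value is first mapped to membership degrees in a few fuzzy sets, a rule's firing strength is taken as the minimum of its antecedent memberships, and the sample is assigned to the class of the most strongly fired rule:

```python
# Triangular membership function: 1 at c, falling to 0 at c - w and c + w.
def tri(x, c, w):
    return max(0.0, 1.0 - abs(x - c) / w)

# Hypothetical fuzzy sets for two features (values chosen only for illustration).
def low(x):  return tri(x, 0.0, 1.0)
def high(x): return tri(x, 1.0, 1.0)

# Rules: IF f1 is low AND f2 is low THEN class A, etc.
# A rule's firing strength is the min of its antecedent memberships.
rules = [
    (lambda f1, f2: min(low(f1),  low(f2)),  "A"),
    (lambda f1, f2: min(low(f1),  high(f2)), "B"),
    (lambda f1, f2: min(high(f1), low(f2)),  "B"),
    (lambda f1, f2: min(high(f1), high(f2)), "C"),
]

def classify(f1, f2):
    strengths = [(strength(f1, f2), label) for strength, label in rules]
    return max(strengths)[1]          # class of the most strongly fired rule

print(classify(0.9, 0.2))             # -> "B"
```

Each root-to-leaf path of a decision tree corresponds to one such conjunctive rule, which is where the relationship to decision trees comes from.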

Rule learning

The extraction and design of classification rules should satisfy the following two conditions (a small checking sketch follows this list):

• Mutually exclusive rules: each sample record triggers at most one rule; if no record is covered by two rules of the rule set, the rule set is said to be mutually exclusive;

• Exhaustive rules: each sample record triggers at least one rule; if for every combination of attribute values there exists a rule that covers it, the rule set is said to be exhaustive.
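To make the two properties concrete, the toy sketch below (the attribute domains and rules are made up for the example) enumerates every combination of attribute values and counts how many rules cover each one; at most one match everywhere means mutual exclusivity, at least one match everywhere means exhaustiveness:

```python
from itertools import product

# Toy attribute domains and rules (both invented for illustration).
domains = {"outlook": ["sunny", "rain"], "windy": [True, False]}

# Each rule: (condition dict, class label); a record triggers a rule if it
# matches every attribute-value pair listed in the condition.
rules = [
    ({"outlook": "sunny"},                "play"),
    ({"outlook": "rain", "windy": False}, "play"),
    ({"outlook": "rain", "windy": True},  "stay home"),
]

def covers(cond, record):
    return all(record[a] == v for a, v in cond.items())

records = [dict(zip(domains, combo)) for combo in product(*domains.values())]
counts = [sum(covers(cond, r) for cond, _ in rules) for r in records]

print("mutually exclusive:", all(c <= 1 for c in counts))  # at most one rule fires
print("exhaustive:        ", all(c >= 1 for c in counts))  # at least one rule fires
```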

[Example] Image edge extraction based on fuzzy logic
1 Introduction to image edges

Edge: an image edge is the set of pixels at which the gray level shows a step-like or roof-like (ridge) change.

2 Core idea of edge extraction


A more simplified pair of rules: regions where the pixel gradient is 0 are flat regions of the image, and regions where the pixel gradient is not 0 are edge regions of the image. These are knowledge inference rules of the kind used in fuzzy logic.

3 Fuzzy-logic-based image edge extraction

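A minimal sketch of that reasoning in code, assuming a grayscale image stored as a NumPy array (the membership width sigma and the simple difference-based gradient are illustrative choices, not values taken from the original):

```python
import numpy as np

def fuzzy_edges(img, sigma=10.0):
    """Edge membership map from the two rules above:
    'gradient ~ 0 -> flat region', 'gradient != 0 -> edge region'."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = np.diff(img, axis=1)     # horizontal gray-level differences
    gy[:-1, :] = np.diff(img, axis=0)     # vertical gray-level differences
    grad = np.hypot(gx, gy)               # gradient magnitude

    # Membership of "gradient is (approximately) zero" -> flat region.
    mu_flat = np.exp(-(grad / sigma) ** 2)
    # Complement -> membership of "edge region".
    mu_edge = 1.0 - mu_flat
    return mu_edge

# Example: a synthetic image with a bright square gives high edge
# membership along the square's border.
img = np.zeros((64, 64))
img[16:48, 16:48] = 255
edges = fuzzy_edges(img)
print(edges.max(), edges.min())           # close to 1 on the border, 0 elsewhere
```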

Fuzzy clustering


Fuzzy k-means clustering
k-means clustering

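A compact sketch of the standard k-means iteration (crisp assignments, each center updated from the samples currently assigned to it), written from the usual textbook description rather than taken from the original slides:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # initial centers
    for _ in range(n_iter):
        # Assignment step: each sample goes to its nearest center (crisp labels).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center is the mean of the samples assigned to it.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Two well-separated blobs are recovered as two clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, labels = kmeans(X, k=2)
print(centers)
```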

Fuzzy k-means clustering

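The figures omitted here presumably showed the standard fuzzy k-means (fuzzy c-means) formulation; for reference, the commonly used objective function and update rules are, with $u_{ij}$ the membership of sample $x_i$ in cluster $j$, $c_j$ the cluster centers, $N$ the number of samples, $k$ the number of clusters, and $m > 1$ the fuzzifier:

$$
J_m=\sum_{i=1}^{N}\sum_{j=1}^{k}u_{ij}^{\,m}\,\lVert x_i-c_j\rVert^2,\qquad \sum_{j=1}^{k}u_{ij}=1,
$$

$$
c_j=\frac{\sum_{i=1}^{N}u_{ij}^{\,m}x_i}{\sum_{i=1}^{N}u_{ij}^{\,m}},\qquad
u_{ij}=\frac{1}{\sum_{l=1}^{k}\left(\dfrac{\lVert x_i-c_j\rVert}{\lVert x_i-c_l\rVert}\right)^{2/(m-1)}}.
$$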

Differences and connections

• In the k-means algorithm, the class boundaries obtained at each step of the clustering process are crisp, and each cluster center is updated iteratively from the samples currently assigned to that cluster;

• In the fuzzy k-means algorithm, all samples are used every time the cluster centers are computed, the class boundaries obtained at each step are fuzzy, and the clustering criterion itself reflects this fuzziness.

Characteristics of fuzzy k-means clustering:

• A number of clusters is fixed first, together with the membership of each sample to each cluster; the memberships are then updated iteratively until the change in membership falls below a specified threshold, i.e. until convergence (see the sketch after this list);

• The fuzzy parameter m > 1, specified in advance, determines the degree of overlap between clusters;

• If m is much larger than 1, the current j-th cluster center has only a small influence on the new cluster center.
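A minimal sketch of that iteration, assuming Euclidean distances and the standard membership and center update formulas given above (the parameter values and the random initialization are illustrative):

```python
import numpy as np

def fuzzy_kmeans(X, k, m=2.0, tol=1e-4, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        # Every sample contributes to every center, weighted by u_ij^m.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update from the standard FCM formula.
        U_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        if np.abs(U_new - U).max() < tol:      # stop when memberships stabilize
            U = U_new
            break
        U = U_new
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, U = fuzzy_kmeans(X, k=2)
print(centers)                                  # soft memberships are in U
```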

Application of fuzzy classification to multi-class text classification


It is known that there are not many MATLAB programs for neuro-fuzzy classifiers. Generally, ANFIS is used as a classifier, but ANFIS is a function approximator, and its use for classification is unfavorable. For example, suppose there are three classes labeled 1, 2 and 3. The ANFIS outputs are not integers, so they are rounded to determine the class labels; sometimes, however, ANFIS can output class labels of 0 or 4, which is not acceptable. As a result, ANFIS is not well suited to classification problems.

In this study, I prepared several different adaptive neuro-fuzzy classifiers. In all of the programs given below, the k-means algorithm is used to initialize the fuzzy rules, so the user should give the number of clusters for each class. Only Gaussian membership functions are used to describe the fuzzy sets, because of their simple derivative expressions.

The first program is scg_nfclass.m. This classifier is based on Jang's neuro-fuzzy classifier [1]. The differences concern the rule weights and the parameter optimization: the rule weights are adapted according to the number of samples per rule, and the scaled conjugate gradient (SCG) algorithm is used to determine the optimum values of the nonlinear parameters. SCG is faster than steepest descent and some methods based on second-order derivatives, and it is suitable for large-scale problems [2].

The second program is scg_nfclass_speedup.m. This classifier is similar to scg_nfclass; the difference concerns the parameter optimization. Although it is based on the SCG algorithm, it is faster than traditional SCG because it uses a least-squares estimation method for gradient estimation without using all training samples. The speed-up is noticeable for medium- and large-scale problems [2].

The third program is scg_power_nfclass.m. Linguistic hedges are applied to the fuzzy sets of the rules and are adapted by the SCG algorithm. In this way, some distinctive features are emphasized by their power values, while irrelevant features are damped by theirs. The power effects for a given feature are generally different for different classes. The use of linguistic hedges increases the recognition rates [3].

The last program is scg_power_nfclass_feature.m. In this program, the powers of the fuzzy sets are used for feature selection [4]. If the linguistic-hedge values of the classes for a feature are bigger than 0.5 and close to 1, the feature is relevant; otherwise it is irrelevant. The program creates a feature selection and rejection criterion from the power values of the features.

References:
[1] Sun CT, Jang JSR (1993). A neuro-fuzzy classifier and its applications. Proc. of IEEE Int. Conf. on Fuzzy Systems, San Francisco, 1:94–98.
[2] Cetişli B, Barkana A (2010). Speeding up the scaled conjugate gradient algorithm and its application in neuro-fuzzy classifier training. Soft Computing 14(4):365–378.
[3] Cetişli B (2010). Development of an adaptive neuro-fuzzy classifier using linguistic hedges: Part 1. Expert Systems with Applications 37(8):6093–6101.
[4] Cetişli B (2010). The effect of linguistic hedges on feature selection: Part 2. Expert Systems with Applications 37(8):6102–6108.

e-mail: bcetisli@mmf.sdu.edu.tr, bcetisli@gmail.com
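As a rough illustration of the linguistic-hedge idea described above (this is not the actual MATLAB code; the function names, prototype values and powers below are made up), the power p applied to a Gaussian membership acts like a per-feature relevance weight: p close to 1 keeps the feature's influence, while p close to 0 flattens the membership toward 1 and effectively ignores the feature:

```python
import numpy as np

def gauss_mf(x, c, s):
    """Gaussian membership function with center c and spread s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def rule_firing(x, centers, spreads, powers):
    """Firing strength of one fuzzy rule: product over features of the
    Gaussian memberships raised to their linguistic-hedge powers."""
    mu = gauss_mf(x, centers, spreads)
    return np.prod(mu ** powers)

x = np.array([0.9, 0.1])                       # a sample with two features
centers = np.array([1.0, 5.0])                 # rule prototype (e.g. from k-means)
spreads = np.array([0.5, 0.5])

# Feature 2 is far from the prototype; a hedge power near 0 damps it,
# so the rule can still fire strongly on the relevant feature 1 alone.
print(rule_firing(x, centers, spreads, powers=np.array([1.0, 1.0])))   # ~0
print(rule_firing(x, centers, spreads, powers=np.array([1.0, 0.05])))  # larger
```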