ArcGIS Tutorial: How Fuzzy Membership Works


The Fuzzy Membership tool reclassifies, or transforms, input data to a 0 to 1 scale based on the possibility of being a member of a specified set. A value of 0 is assigned to locations that are definitely not a member of the specified set, 1 is assigned to values that are definitely a member, and the entire range of possibilities between 0 and 1 is assigned to some degree of possible membership (the larger the value, the greater the possibility).

The input values can be transformed to the 0 to 1 possibility scale using any number of the functions and operators available in the ArcGIS Spatial Analyst extension. However, the Fuzzy Membership tool transforms continuous input data according to a series of specific functions suited to the fuzzification process. For example, the Linear fuzzy membership function transforms the input values linearly onto the 0 to 1 scale, assigning 0 to the minimum input value and 1 to the maximum. All intermediate values receive a membership value on a linear scale, with larger input values assigned a greater possibility, closer to 1.
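The linear transformation described above can be sketched in plain Python. This is a minimal illustration of the idea, not the arcpy implementation; the function name `fuzzy_linear` and its parameter names are assumptions for this sketch.

```python
def fuzzy_linear(value, minimum, maximum):
    """Linearly rescale a value to a 0-1 membership.

    Values at or below `minimum` get 0, values at or above
    `maximum` get 1, and everything in between is scaled linearly,
    so larger inputs receive memberships closer to 1.
    """
    if value <= minimum:
        return 0.0
    if value >= maximum:
        return 1.0
    return (value - minimum) / (maximum - minimum)

# A value halfway between the minimum and maximum maps to 0.5.
print(fuzzy_linear(15, 0, 30))  # 0.5
```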

In the scripting language, each of these functions is implemented as a corresponding Python class.

Because these membership functions are specific to continuous input data, if categorical data needs to be used as input to a Fuzzy Overlay analysis, any number of Spatial Analyst tools can be used to convert that data to the 0 to 1 possibility scale. The two tools best suited to this process are Reclassify and Slice. The Reclassify tool can transform the categorical data to a 0 to 10 range (the tool cannot reclassify the data directly to a 0 to 1 range); the reclassified data can then be divided by 10 to obtain values between 0 and 1.
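The reclassify-then-divide workflow for categorical data can be sketched as follows, with plain Python standing in for the raster tools. The land-use classes and their 0 to 10 ranks are made-up examples, not values from the source.

```python
# Hypothetical categorical classes ranked 0-10 by suitability,
# mimicking the table a Reclassify step would apply.
reclass_table = {"water": 0, "forest": 3, "grassland": 7, "urban": 10}

def to_membership(land_use):
    """Reclassify a categorical value to the 0-10 range, then
    divide by 10 to obtain a 0-1 value usable as Fuzzy Overlay input."""
    return reclass_table[land_use] / 10

print(to_membership("grassland"))  # 0.7
```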

The membership functions differ in their equations and their applications. Which function to use depends on which one best captures the transformation of the data for the phenomenon being modeled. Each membership function can be further refined through a series of input parameters.

Below is a list of the fuzzy membership functions and what each is best suited for.

Fuzzy membership types

Each of the seven fuzzy membership functions is described below.

Fuzzy Gaussian

The Fuzzy Gaussian function transforms the original values into a normal distribution. The midpoint of the normal distribution defines the ideal definition for the set and is assigned a membership of 1; membership then decreases toward 0 as the input values move away from the midpoint in either the positive or negative direction.
