Implementing a Fuzzy Neural Network in Python (PyTorch version)

1 Theory

A fuzzy neural network is a neural network model based on fuzzy logic, used mainly for problems involving fuzzy information and uncertainty. It maps input data into fuzzy sets, processes them through a series of fuzzy rules, and finally produces a fuzzy output.

The basic idea is to map inputs from the real domain into fuzzy sets, process them with a set of fuzzy rules, and output a fuzzy set. A fuzzy set is characterized by a membership function that assigns each element a degree of membership between 0 and 1, representing how strongly the element belongs to the set. During training, a fuzzy neural network typically uses the backpropagation algorithm to update its weights and biases.
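
For intuition, here is a minimal sketch (my addition, not from the original post) of computing degrees of membership with a Gaussian membership function in PyTorch; the fuzzy set "warm" and its parameters are invented for illustration:

import torch

def gaussian_membership(x, center, sigma):
    # Degree to which x belongs to a fuzzy set centered at `center`
    # with width `sigma`; the result always lies in (0, 1].
    return torch.exp(-((x - center) ** 2) / (2 * sigma ** 2))

# Membership of three temperatures in a hypothetical fuzzy set "warm" (peak at 25).
temps = torch.tensor([18.0, 25.0, 32.0])
print(gaussian_membership(temps, center=25.0, sigma=5.0))  # approx. [0.38, 1.00, 0.38]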

A fuzzy neural network generally works through the following steps:

1. Determine the input and output variables. The input variables are the network's input features; the output variables are the network's outputs.

2. Map the input variables into fuzzy sets. This process is called fuzzification. It can be implemented with different membership functions, such as triangular or trapezoidal functions (steps 2 through 5 are illustrated in the sketch after this list).

3. Determine the fuzzy rules. Fuzzy rules describe the relationship between input and output variables in linguistic form, typically: "If input variable A is fuzzy set X1 and input variable B is fuzzy set X2, then output variable C is fuzzy set Y1."

4. Perform inference based on the fuzzy rules. Inference applies the fuzzy rules to the fuzzified inputs to produce a fuzzy output.

5. Defuzzify the fuzzy output. Defuzzification converts the fuzzy output into a crisp numerical result, using methods such as the mean-of-maxima or the centroid (center-of-gravity) method.

6. Train with the backpropagation algorithm. Backpropagation is a standard method for training neural networks: it computes error gradients to update the weights and biases (here, the parameters of the membership functions) and improve accuracy. A training example appears after the FuzzyLayer code below.
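
To make steps 2 through 5 concrete, here is a minimal toy sketch (my addition; the fuzzy sets "low"/"high", the rule consequents, and all numbers are invented for illustration). A single crisp input is fuzzified with triangular membership functions, two rules fire, and a crisp output is recovered by a weighted-average (centroid-of-singletons) defuzzification:

import torch

def tri(x, a, b, c):
    # Triangular membership function: 0 at a and c, peak 1 at b.
    return torch.clamp(torch.minimum((x - a) / (b - a), (c - x) / (c - b)), min=0.0)

x = torch.tensor(6.0)                         # crisp input

# Step 2: fuzzification -- membership of x in "low" and "high".
mu_low  = tri(x, -10.0,  0.0, 10.0)           # -> 0.4
mu_high = tri(x,   0.0, 10.0, 20.0)           # -> 0.6

# Steps 3-4: two rules with singleton consequents; each rule's firing
# strength is simply its antecedent membership.
#   R1: if x is low  then y is "small" (representative value 2)
#   R2: if x is high then y is "large" (representative value 8)
y_small, y_large = 2.0, 8.0

# Step 5: defuzzification by the weighted-average method.
y = (mu_low * y_small + mu_high * y_large) / (mu_low + mu_high)
print(y)                                      # -> 5.6

Step 6 then treats the parameters of the membership functions as trainable weights and updates them with backpropagation, which is exactly what the PyTorch layer in the next section does.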

Fuzzy neural networks are widely used, including in fuzzy control, fuzzy classification, and fuzzy clustering. For example, fuzzy control can regulate physical quantities such as temperature and humidity; fuzzy classification is applied in fields such as image and speech recognition; and fuzzy clustering is used in data mining and pattern recognition.

2 PyTorch Implementation


# Source: https://github.com/kenoma/pytorch-fuzzy
import torch
import torch.nn as nn
import numpy as np
from torch import Tensor

class FuzzyLayer(torch.nn.Module):

    def __init__(self, initial_centers, initial_scales, trainable=True):
        """
        Gaussian-like fuzzy membership layer.

        Each output j computes mu_j(x) = exp(-||A_j . [x, 1]||), where
        A_j = [diag(scales_j) | translation_j] is an affine transform.
        When built via `fromcenters`, the translation column is -center,
        so the membership peaks (value 1) at the fuzzy set's center.
        """
        super().__init__()

        if np.shape(initial_centers) != np.shape(initial_scales):
            raise ValueError("initial_centers shape does not match initial_scales")

        sizes = np.shape(initial_centers)
        self.size_out, self.size_in, *_ = sizes

        # One affine matrix per fuzzy set: [diag(scale) | translation],
        # stacked into shape (size_out, size_in, size_in + 1).
        diags = []
        for s, c in zip(initial_scales, initial_centers):
            diags.append(np.insert(np.diag(s), self.size_in, c, axis=1))
        a = torch.FloatTensor(np.array(diags))

        # Fixed homogeneous row [0, ..., 0, 1] appended to each affine matrix
        # in forward(), making it a square transform; never trained.
        const_row = np.zeros(self.size_in + 1)
        const_row[self.size_in] = 1
        const_row = np.array([const_row] * self.size_out)
        const_row = np.reshape(const_row, (self.size_out, 1, self.size_in + 1))
        self.c_r = nn.Parameter(torch.FloatTensor(const_row), requires_grad=False)
        self.c_one = nn.Parameter(torch.FloatTensor([1]), requires_grad=False)
        self.A = nn.Parameter(a, requires_grad=trainable)

    @classmethod
    def fromdimentions(cls, size_in, size_out, trainable=True):
        # Random centers, unit scales. (The method name's spelling follows
        # the upstream repository.)
        initial_centers = torch.randn((size_out, size_in))
        initial_scales = torch.ones((size_out, size_in))
        return cls(initial_centers, initial_scales, trainable)

    @classmethod
    def fromcenters(cls, initial_centers, trainable=True):
        # Negate the centers so that, with unit scales, the affine transform
        # computes A . [x, 1] = x - center.
        initial_centers = np.multiply(-1, initial_centers)
        sizes = np.shape(initial_centers)
        initial_scales = torch.ones(sizes)
        return cls(initial_centers, initial_scales, trainable)

    def forward(self, input: Tensor) -> Tensor:
        batch_size = input.shape[0]
        # Full homogeneous transforms: (size_out, size_in + 1, size_in + 1).
        ta = torch.cat([self.A, self.c_r], 1)
        # Extend each input with a trailing 1: x -> [x, 1].
        repeated_one = self.c_one.repeat(batch_size, 1)
        ext_x = torch.cat([input, repeated_one], 1)
        tx = torch.transpose(ext_x, 0, 1)
        # Apply every transform to every sample: (size_out, size_in + 1, batch).
        mul = torch.matmul(ta, tx)
        # Drop the homogeneous row, then take the Euclidean norm of the
        # scaled distance to the center; memberships lie in (0, 1].
        exponents = torch.norm(mul[:, :self.size_in], p=2, dim=1)
        memberships = torch.exp(-exponents)
        # Shape (batch, size_out): one membership degree per fuzzy set.
        return memberships.transpose(0, 1)
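
The original post does not include the demo's training code, so here is a minimal usage sketch under assumed settings (the layer sizes, toy data, and hyperparameters are illustrative, not taken from the demo). The FuzzyLayer outputs one membership value per fuzzy set; a linear readout turns those memberships into class scores, and backpropagation (step 6 above) adapts the fuzzy sets' centers and scales together with the readout weights:

import torch
import torch.nn as nn

# Toy two-class classifier: 8 fuzzy sets over 2 input features.
model = nn.Sequential(
    FuzzyLayer.fromdimentions(size_in=2, size_out=8),
    nn.Linear(8, 2),            # crisp class scores from fuzzy memberships
)

x = torch.randn(64, 2)                  # toy batch: 64 samples, 2 features
y = (x[:, 0] > x[:, 1]).long()          # toy labels for illustration only

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                     # gradients flow into centers/scales too
    optimizer.step()

print(loss.item())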

Classification results of the demo:
[Figures: classification result plots omitted]
There is also an adaptive neuro-fuzzy inference system (ANFIS) implementation that may be worth a look:

https://github.com/twmeggs/anfis

Reference: https://fuxi.163.com/database/980

A related note on neuro-fuzzy classifiers in MATLAB:

It is known that there is no sufficient MATLAB program for neuro-fuzzy classifiers. Generally, ANFIS is used as a classifier, but ANFIS is a function approximator, and using it for classification is unfavorable. For example, suppose there are three classes labeled 1, 2, and 3. The ANFIS outputs are not integers, so they are rounded to determine the class labels; sometimes, however, ANFIS can produce labels of 0 or 4, which are not acceptable. As a result, ANFIS is not well suited to classification problems. In this study, I prepared different adaptive neuro-fuzzy classifiers. In all of the programs given below, the k-means algorithm is used to initialize the fuzzy rules, so the user should give the number of clusters for each class. Also, only the Gaussian membership function is used for fuzzy-set descriptions, because of its simple derivative expressions.

The first program is scg_nfclass.m. This classifier is based on Jang's neuro-fuzzy classifier [1]. The differences concern the rule weights and the parameter optimization: the rule weights are adapted by the number of rule samples, and the scaled conjugate gradient (SCG) algorithm is used to determine the optimal values of the nonlinear parameters. SCG is faster than steepest descent and some second-order derivative-based methods, and it is suitable for large-scale problems [2].

The second program is scg_nfclass_speedup.m. This classifier is similar to scg_nfclass; the difference is the parameter optimization. Although it is based on the SCG algorithm, it is faster than traditional SCG because it uses least-squares estimation for the gradient without using all training samples. The speed-up is observed on medium- and large-scale problems [2].

The third program is scg_power_nfclass.m. Linguistic hedges are applied to the fuzzy sets of the rules and are adapted by the SCG algorithm. In this way, distinctive features are emphasized by power values, while irrelevant features are damped. The power effects of a feature generally differ between classes. Using linguistic hedges increases the recognition rates [3].

The last program is scg_power_nfclass_feature.m. In this program, the powers of the fuzzy sets are used for feature selection [4]. If the linguistic-hedge values of the classes for a feature are bigger than 0.5 and close to 1, the feature is relevant; otherwise it is irrelevant. The program derives a feature-selection and a rejection criterion from the power values of the features.

References:
[1] Sun CT, Jang JSR (1993). A neuro-fuzzy classifier and its applications. Proc. of IEEE Int. Conf. on Fuzzy Systems, San Francisco 1:94–98.
[2] B. Cetişli, A. Barkana (2010). Speeding up the scaled conjugate gradient algorithm and its application in neuro-fuzzy classifier training. Soft Computing 14(4):365–378.
[3] B. Cetişli (2010). Development of an adaptive neuro-fuzzy classifier using linguistic hedges: Part 1. Expert Systems with Applications 37(8):6093–6101.
[4] B. Cetişli (2010). The effect of linguistic hedges on feature selection: Part 2. Expert Systems with Applications 37(8):6102–6108.

e-mail: bcetisli@mmf.sdu.edu.tr, bcetisli@gmail.com