Bearing Fault Diagnosis Based on Residual Neural Networks (Python Implementation)

1. Background

Residual Neural Networks (ResNet)

The residual neural network (ResNet) was proposed by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun of Microsoft Research. ResNet won first place in the 2015 ILSVRC (ImageNet Large Scale Visual Recognition Challenge).

ResNet's key contribution was identifying the "degradation" problem and introducing the "shortcut connection" to address it, which largely removed the difficulty of training very deep networks. For the first time, network depth broke the 100-layer barrier, and the largest networks exceeded 1,000 layers.
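The idea behind the shortcut connection is that each block learns a residual F(x) and outputs F(x) + x, so identity mappings are easy to represent. A minimal Keras sketch (the layer sizes here are illustrative, not from this article's model):

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Minimal identity residual block: output = relu(F(x) + x)."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding='same')(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation('relu')(y)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    y = layers.BatchNormalization()(y)
    y = layers.add([y, shortcut])      # the shortcut connection
    return layers.Activation('relu')(y)

inputs = tf.keras.Input(shape=(32, 32, 16))
outputs = residual_block(inputs, 16)
model = tf.keras.Model(inputs, outputs)
```

Because the shortcut passes x through unchanged, gradients flow directly to earlier layers, which is what makes 100+ layer networks trainable.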

ResNet paper: https://arxiv.org/abs/1512.03385

2. Code Design

2.1 Importing the Required Libraries

# This code implements ResNet-18
# Import libraries
import os
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import datasets, losses, Sequential, optimizers
import scipy.io as scio
import pandas as pd

plt.rcParams['font.sans-serif'] = ['kaiti']  # CJK-capable font for plot labels
plt.rcParams['axes.unicode_minus'] = False   # render minus signs correctly

2.2 Loading the Dataset

The data come from the Case Western Reserve University (CWRU) bearing dataset; four fault types were selected for diagnosis.

Training set: 1,200 samples (400 per fault type)

Test set: 720 samples (180 per fault type)


# Load the .mat dataset
dataFile = r'E:\数据集\data.mat'
data = scio.loadmat(dataFile)

# Training and test sets
x_train = data['train_data']
y_train = data['train_label']
x_test = data['test_data']
y_test = data['test_label']

Batch_Size = 32
# Reshape each sample into a 6000x1 "image" with one channel
x_train = tf.reshape(x_train, [-1, 6000, 1, 1])
x_test = tf.reshape(x_test, [-1, 6000, 1, 1])

# Check the shapes of the datasets
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
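`Batch_Size` is defined above but not yet used; before training, the arrays are typically shuffled and batched. A hedged sketch using `tf.data` (the synthetic arrays stand in for the `.mat` file, which is not available here):

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for the arrays loaded from data.mat
x_train = np.random.randn(1200, 6000).astype('float32')
y_train = np.random.randint(0, 4, size=(1200,))

Batch_Size = 32
x_train = tf.reshape(x_train, [-1, 6000, 1, 1])

# Shuffle, then group into batches of Batch_Size for model.fit
train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .shuffle(1000)
            .batch(Batch_Size))

for xb, yb in train_ds.take(1):
    print(xb.shape, yb.shape)  # (32, 6000, 1, 1) (32,)
```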

2.3 Building the ResNet Model

class ResNet(tf.keras.Model):
    def __init__(self, layers_num, num_classes=4):
        super(ResNet, self).__init__()
        # Stem: a 3x1 convolution with stride 1, then max pooling
        self.stem = Sequential([
            tf.keras.layers.Conv2D(64, kernel_size=[3, 1], strides=[1, 1]),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.Activation('relu'),

            tf.keras.layers.MaxPool2D(pool_size=[2, 1], strides=[1, 1], padding='same')
        ])
        # Four groups of residual blocks; groups 2-4 downsample with stride 2
        self.layer1 = self.build_resblock(64, layers_num[0])
        self.layer2 = self.build_resblock(128, layers_num[1], stride=2)
        self.layer3 = self.build_resblock(256, layers_num[2], stride=2)
        self.layer4 = self.build_resblock(512, layers_num[3], stride=2)

        # Global average pooling
        self.avgPool = tf.keras.layers.GlobalAveragePooling2D()

        # Final fully connected layer (outputs logits)
        self.fc = tf.keras.layers.Dense(num_classes)

    # Build one group of ResNetBlocks; only the first block may downsample
    def build_resblock(self, filter_num, blocks, stride=1):
        res_block = Sequential([
            ResNetBlock(filter_num, stride)
        ])
        for i in range(1, blocks):
            res_block.add(ResNetBlock(filter_num, stride=1))
        return res_block

    def call(self, inputs, training=None):
        x = self.stem(inputs)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        out = self.avgPool(x)

        output = self.fc(out)
        return output
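The `ResNet` class above calls a `ResNetBlock` class that the article does not show. Below is one plausible definition matching those calls: a basic block with two 3x1 convolutions and a 1x1 shortcut convolution when the stride changes the shape. The padding and downsampling choices are assumptions, not taken from the original code:

```python
import tensorflow as tf
from tensorflow.keras import Sequential

class ResNetBlock(tf.keras.layers.Layer):
    """Basic residual block: two 3x1 convolutions plus a shortcut."""
    def __init__(self, filter_num, stride=1):
        super(ResNetBlock, self).__init__()
        self.conv1 = tf.keras.layers.Conv2D(filter_num, [3, 1],
                                            strides=[stride, 1], padding='same')
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.relu = tf.keras.layers.Activation('relu')
        self.conv2 = tf.keras.layers.Conv2D(filter_num, [3, 1],
                                            strides=[1, 1], padding='same')
        self.bn2 = tf.keras.layers.BatchNormalization()
        if stride != 1:
            # 1x1 convolution so the shortcut matches the main path's shape
            self.downsample = Sequential([
                tf.keras.layers.Conv2D(filter_num, [1, 1], strides=[stride, 1])])
        else:
            self.downsample = lambda x: x

    def call(self, inputs, training=None):
        out = self.conv1(inputs)
        out = self.bn1(out, training=training)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out, training=training)
        identity = self.downsample(inputs)
        return tf.nn.relu(out + identity)

# A downsampling block halves the length and changes the channel count
x = tf.random.normal([2, 100, 1, 64])
block = ResNetBlock(128, stride=2)
y = block(x)
print(y.shape)  # (2, 50, 1, 128)
```

With this block in place, ResNet-18 is instantiated as `model = ResNet([2, 2, 2, 2], num_classes=4)`, i.e. two basic blocks per group.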

3. Fault Diagnosis Results

Fault diagnosis accuracy on the test set: 97%.

As a comparison baseline, ResNet already reaches adequate accuracy here, so no further tuning was attempted.
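The article does not show the training loop that produces this figure. One hedged sketch of compiling and evaluating (the optimizer and loss are assumptions; note that because the final `Dense` layer outputs raw logits with no softmax, the loss needs `from_logits=True`). A tiny stand-in model and synthetic data keep the sketch runnable end to end:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in for the ResNet model, with the same logits-only output head
model = tf.keras.Sequential([
    tf.keras.layers.GlobalAveragePooling2D(input_shape=(6000, 1, 1)),
    tf.keras.layers.Dense(4)  # logits, as in the ResNet's final Dense layer
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Synthetic stand-ins for the CWRU training/test arrays
x = np.random.randn(64, 6000, 1, 1).astype('float32')
y = np.random.randint(0, 4, size=(64,))
model.fit(x, y, batch_size=32, epochs=1, verbose=0)

loss, acc = model.evaluate(x, y, verbose=0)
print(acc)
```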
