1、Background
Residual Networks (ResNet)
The residual network (ResNet) was proposed by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun of Microsoft Research. ResNet won first place in the 2015 ILSVRC (ImageNet Large Scale Visual Recognition Challenge).
ResNet's main contribution was identifying the "degradation" phenomenon and introducing the "shortcut connection" to counter it, which largely removed the difficulty of training very deep networks. Network depth broke 100 layers for the first time, and the largest networks exceeded 1000 layers.
ResNet paper: https://arxiv.org/abs/1512.03385
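The shortcut connection described above lets a block learn only a residual F(x) and adds the input back, i.e. y = F(x) + x. A minimal NumPy sketch of the idea (the function f here is just a stand-in for a block's learned mapping, not the actual ResNet block):

```python
import numpy as np

def residual_connection(x, f):
    # Shortcut connection: the block learns the residual F(x),
    # and the input x is added back, so y = F(x) + x.
    # If F collapses to zero, the block degrades to an identity map,
    # which is why very deep stacks remain trainable.
    return f(x) + x

x = np.array([1.0, 2.0, 3.0])
y = residual_connection(x, lambda v: 0.1 * v)  # toy residual F(x) = 0.1 * x
print(y)  # [1.1 2.2 3.3]
```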
2、Code Design
2.1、Import the Required Libraries
# This code implements ResNet-18
# Import the required libraries
import os
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import datasets, losses, Sequential, optimizers
import scipy.io as scio
import pandas as pd
# Use a Chinese-capable font in plots and render minus signs correctly
plt.rcParams['font.sans-serif'] = ['kaiti']
plt.rcParams['axes.unicode_minus'] = False
2.2、Load the Dataset
The dataset is the Case Western Reserve University (CWRU) bearing dataset; I selected four fault types for bearing fault diagnosis.
Training set: 1200 samples (400 per fault type)
Test set: 720 samples (180 per fault type)
# Load the .mat dataset
dataFile = r'E:\数据集\data.mat'
data = scio.loadmat(dataFile)
# Training set
x_train = data['train_data']
y_train = data['train_label']
# Test set
x_test = data['test_data']
y_test = data['test_label']
Batch_Size = 32
# Reshape each 6000-point vibration signal to [length, 1, 1] so Conv2D layers
# with [k, 1] kernels can treat it as a one-dimensional sequence
x_train = tf.reshape(x_train, [-1, 6000, 1, 1])
x_test = tf.reshape(x_test, [-1, 6000, 1, 1])
# Check the shapes of the datasets
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
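The loading code above depends on a local .mat file. To sanity-check the pipeline without the real dataset, a dummy file with the same four keys can be generated and read back; the sample counts and integer-label format below are assumptions for illustration only:

```python
import os
import tempfile

import numpy as np
import scipy.io as scio

# Build a small dummy dataset with the keys the loading code expects:
# 8 training and 4 test samples of 6000-point signals, 4 classes (illustrative).
dummy = {
    'train_data': np.random.randn(8, 6000).astype(np.float32),
    'train_label': np.random.randint(0, 4, (8, 1)),
    'test_data': np.random.randn(4, 6000).astype(np.float32),
    'test_label': np.random.randint(0, 4, (4, 1)),
}
path = os.path.join(tempfile.gettempdir(), 'dummy_data.mat')
scio.savemat(path, dummy)

# Reload and verify that the keys round-trip with the expected shapes
data = scio.loadmat(path)
print(data['train_data'].shape)  # (8, 6000)
print(data['train_label'].shape)  # (8, 1)
```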
2.3、Build the ResNet Model
class ResNet(tf.keras.Model):
    def __init__(self, layers_num, num_classes=4):
        super(ResNet, self).__init__()
        # Stem: a [3, 1] convolution with stride 1, batch norm, ReLU, and max pooling
        self.stem = Sequential([
            tf.keras.layers.Conv2D(64, kernel_size=[3, 1], strides=[1, 1]),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.Activation('relu'),
            tf.keras.layers.MaxPool2D(pool_size=[2, 1], strides=[1, 1], padding='same')
        ])
        # First group of residual blocks
        self.layer1 = self.build_resblock(64, layers_num[0])
        # Second group
        self.layer2 = self.build_resblock(128, layers_num[1], stride=2)
        # Third group
        self.layer3 = self.build_resblock(256, layers_num[2], stride=2)
        # Fourth group
        self.layer4 = self.build_resblock(512, layers_num[3], stride=2)
        # Global average pooling
        self.avgPool = tf.keras.layers.GlobalAveragePooling2D()
        # Final fully connected layer (outputs logits for num_classes classes)
        self.fc = tf.keras.layers.Dense(num_classes)

    # Build one group of ResNet blocks; only the first block in a group may downsample
    def build_resblock(self, filter_num, blocks, stride=1):
        res_block = Sequential([
            ResNetBlock(filter_num, stride)
        ])
        for i in range(1, blocks):
            res_block.add(ResNetBlock(filter_num, stride=1))
        return res_block

    def call(self, inputs, training=None):
        x = self.stem(inputs)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        out = self.avgPool(x)
        output = self.fc(out)
        return output
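The ResNetBlock class used by build_resblock is not defined in this section. A minimal sketch consistent with the [k, 1] kernels above — two [3, 1] convolutions with batch norm, plus a 1×1 convolution on the shortcut when the block downsamples — might look like this (the original definition may differ in detail):

```python
import tensorflow as tf
from tensorflow.keras import Sequential

class ResNetBlock(tf.keras.layers.Layer):
    def __init__(self, filter_num, stride=1):
        super(ResNetBlock, self).__init__()
        # Main path: two [3, 1] convolutions; the first may downsample via stride
        self.conv1 = tf.keras.layers.Conv2D(filter_num, kernel_size=[3, 1],
                                            strides=[stride, 1], padding='same')
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.relu = tf.keras.layers.Activation('relu')
        self.conv2 = tf.keras.layers.Conv2D(filter_num, kernel_size=[3, 1],
                                            strides=[1, 1], padding='same')
        self.bn2 = tf.keras.layers.BatchNormalization()
        # Shortcut path: a 1x1 convolution matches shape when the block downsamples,
        # otherwise the input passes through unchanged (identity shortcut)
        if stride != 1:
            self.downsample = Sequential([
                tf.keras.layers.Conv2D(filter_num, kernel_size=[1, 1],
                                       strides=[stride, 1])
            ])
        else:
            self.downsample = lambda x: x

    def call(self, inputs, training=None):
        out = self.conv1(inputs)
        out = self.bn1(out, training=training)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out, training=training)
        identity = self.downsample(inputs)
        # Shortcut connection: add the (possibly projected) input back
        return tf.nn.relu(out + identity)
```

With a block like this, ResNet([2, 2, 2, 2]) gives the 18-layer variant referred to in the code comments (2 blocks × 2 convolutions × 4 groups, plus the stem and final dense layer).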
3. Fault Diagnosis Results
The fault diagnosis accuracy on the test set is 97%.
Since ResNet serves only as a comparison model here, this accuracy is already adequate, and no further tuning was pursued.
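The training setup that produced this result is not shown above. A minimal Keras sketch of the usual compile/fit pattern, using a small stand-in model on random data (in practice, substitute ResNet([2, 2, 2, 2]) and the real x_train/y_train; the optimizer, learning rate, and integer-label format here are assumptions):

```python
import numpy as np
import tensorflow as tf

# Stand-in model with the same 4-class logit output as ResNet's final Dense layer;
# replace with ResNet([2, 2, 2, 2]) in practice.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6000, 1, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4)
])
# from_logits=True matches a final Dense layer with no softmax activation
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Random stand-in data shaped like the reshaped dataset above
x = np.random.randn(8, 6000, 1, 1).astype(np.float32)
y = np.random.randint(0, 4, 8)
model.fit(x, y, batch_size=4, epochs=1, verbose=0)
loss, acc = model.evaluate(x, y, verbose=0)
```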