Fault diagnosis with a convolutional neural network on 2-D grayscale images (TensorFlow)

This post shows how to build a convolutional neural network (CNN) and use it to classify a custom dataset.

1. Dataset: description, loading, and conversion

The dataset used here is the Case Western Reserve University (CWRU) bearing dataset. A preprocessing step converts the one-dimensional time-series signals into grayscale images, and those images are then classified. For the preprocessing details, see the earlier post: "Converting 1-D time-series signals into grayscale images, and augmenting a time-series dataset with a DCGAN" (deeplearning小学生's CSDN blog).

After preprocessing, the dataset consists of four files in npz format: train_pics, test_pics, train_labels, and test_labels. train_pics and test_pics are the training and test sets, each containing grayscale images of 10 fault classes; the labels files hold labels that correspond one-to-one with the images in the pics files.
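For readers unfamiliar with the npz format, the sketch below shows how such an archive is written and read back. The file name `demo_pics.npz` and the sample keys are illustrative, not the actual names used in this dataset:

```python
import numpy as np

# Build a tiny illustrative archive: two 64x64 grayscale "images"
# keyed by sample name, mirroring the layout of train_pics.npz.
rng = np.random.default_rng(0)
pics = {"sample_0": rng.integers(0, 256, size=(64, 64)).astype(np.uint8),
        "sample_1": rng.integers(0, 256, size=(64, 64)).astype(np.uint8)}
np.savez("demo_pics.npz", **pics)

archive = np.load("demo_pics.npz")
print(archive.files)               # keys stored in the archive
print(archive["sample_0"].shape)   # (64, 64)
```

The `.files` attribute lists the stored keys, which is exactly how the loading loops below iterate over the samples.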

The loading and conversion code is as follows:

import numpy as np

# Each npz archive maps sample keys to arrays (iterated via .files below)
train_pics_dict = np.load("train_pics.npz")
train_labels_dict = np.load("train_labels.npz")
test_pics_dict = np.load("test_pics.npz")
test_labels_dict = np.load("test_labels.npz")

train_images = []
train_labels = []
test_images = []
test_labels = []

# Reshape each grayscale image to (64, 64, 1) and collect its integer label
for i in train_pics_dict.files:
    tmp = np.array(train_pics_dict[i])
    tmp = tmp.reshape((64, 64, 1))
    train_images.append(tmp)
    train_labels.append(int(train_labels_dict[i]))

for i in test_pics_dict.files:
    tmp = np.array(test_pics_dict[i])
    tmp = tmp.reshape((64, 64, 1))
    test_images.append(tmp)
    test_labels.append(int(test_labels_dict[i]))

# Stack into arrays; labels become column vectors of shape (n, 1)
train_images = np.array(train_images)
test_images = np.array(test_images)
train_labels = np.array(train_labels).reshape((-1, 1))
test_labels = np.array(test_labels).reshape((-1, 1))


# Normalize pixel values to be between 0 and 1
#train_images, test_images = train_images / 255.0, test_images / 255.0
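If you do enable the commented-out normalization, it is a simple elementwise scale of the uint8 pixel range into [0, 1]; a minimal sketch with illustrative values:

```python
import numpy as np

# Scale uint8 pixel values in [0, 255] into [0.0, 1.0] floats.
images = np.array([[[0], [128]], [[255], [64]]], dtype=np.uint8)
normalized = images.astype(np.float32) / 255.0
print(normalized.min(), normalized.max())  # 0.0 1.0
```

Casting to float32 first avoids integer division and matches the dtype the network will compute in anyway.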



2. Build, compile, and train the model


model = models.Sequential()
# Four conv ('same' padding) + 2x2 max-pool blocks: 64 -> 32 -> 16 -> 8 -> 4
model.add(layers.Conv2D(32, (5, 5), padding='same', activation='relu', input_shape=(64, 64, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(2560, activation='relu'))
# Final layer outputs raw logits for the 10 fault classes
model.add(layers.Dense(10))
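As a sanity check on the architecture: each 'same'-padded convolution preserves the spatial size and each 2x2 max-pool halves it, so the 64x64 input shrinks to 4x4 after four blocks, and the Flatten layer feeds 4 * 4 * 256 = 4096 values into the 2560-unit Dense layer. The arithmetic, without needing TensorFlow:

```python
# Track the spatial size through four conv ('same' padding) + 2x2 pool blocks.
size = 64
for _ in range(4):  # four conv + pool blocks
    size //= 2      # 'same' conv preserves size; 2x2 pool halves it
flatten_units = size * size * 256  # 256 filters in the last conv layer
print(size, flatten_units)  # 4 4096
```

This kind of check is useful when changing the input resolution: a 32x32 input would reach 2x2 here, changing the Flatten width and hence the Dense layer's parameter count.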


model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))
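Note that because the final Dense layer outputs raw logits (the loss is constructed with from_logits=True), class predictions are obtained by taking an argmax over the logits. A sketch with illustrative logit values; in the model above these would come from model.predict(test_images):

```python
import numpy as np

# Raw logits for 3 samples over 10 classes (illustrative values).
logits = np.array([[2.0, 0.1] + [0.0] * 8,
                   [0.0] * 9 + [3.5],
                   [0.5] * 5 + [4.0] + [0.5] * 4])
pred_classes = logits.argmax(axis=1)
print(pred_classes)  # [0 9 5]
```

Applying softmax first would not change the argmax, since softmax is monotonic; it is only needed if calibrated probabilities are wanted.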

3. Results

The final prediction accuracy can approach 100%.
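Beyond overall accuracy, a per-class confusion matrix shows which of the 10 fault types get mixed up. A minimal numpy sketch (the labels and predictions here are illustrative, using 3 classes for brevity):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Count (true, predicted) label pairs into a num_classes x num_classes grid."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 2, 1, 1]
cm = confusion_matrix(y_true, y_pred, num_classes=3)
print(cm)
# Diagonal entries are correct predictions; row 2 shows one class-2
# sample misclassified as class 1.
```

For the bearing data, the true labels are test_labels flattened to 1-D and the predictions come from the argmax of the model's logits.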

The complete code is as follows:

import tensorflow as tf
import numpy as np
from tensorflow.keras import layers, models
import matplotlib.pyplot as plt

train_pics_dict = np.load("train_pics.npz")
train_labels_dict = np.load("train_labels.npz")
test_pics_dict = np.load("test_pics.npz")
test_labels_dict = np.load("test_labels.npz")

train_images = []
train_labels = []
test_images = []
test_labels = []

# Reshape each grayscale image to (64, 64, 1) and collect its integer label
for i in train_pics_dict.files:
    tmp = np.array(train_pics_dict[i])
    tmp = tmp.reshape((64, 64, 1))
    train_images.append(tmp)
    train_labels.append(int(train_labels_dict[i]))

for i in test_pics_dict.files:
    tmp = np.array(test_pics_dict[i])
    tmp = tmp.reshape((64, 64, 1))
    test_images.append(tmp)
    test_labels.append(int(test_labels_dict[i]))

# Stack into arrays; labels become column vectors of shape (n, 1)
train_images = np.array(train_images)
test_images = np.array(test_images)
train_labels = np.array(train_labels).reshape((-1, 1))
test_labels = np.array(test_labels).reshape((-1, 1))


# Normalize pixel values to be between 0 and 1
#train_images, test_images = train_images / 255.0, test_images / 255.0

class_names = [ "normal","ball_18","ball_36","ball_54","inner_18","inner_36",
                "inner_54","outer_18","outer_36","outer_54"]
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    # imshow expects a 2-D array for grayscale images, so drop the channel axis
    plt.imshow(train_images[i].reshape(64, 64), cmap='gray')
    # Labels are (n, 1) column vectors, hence the extra [0] index
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()

model = models.Sequential()
# Four conv ('same' padding) + 2x2 max-pool blocks: 64 -> 32 -> 16 -> 8 -> 4
model.add(layers.Conv2D(32, (5, 5), padding='same', activation='relu', input_shape=(64, 64, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(2560, activation='relu'))
# Final layer outputs raw logits for the 10 fault classes
model.add(layers.Dense(10))


model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))

plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
plt.savefig("result.png")
plt.show()

test_loss, test_acc = model.evaluate(test_images,  test_labels, verbose=2)

print("Test accuracy:", test_acc)

