Converting a 1D time-series signal into grayscale images, with DCGAN-based augmentation of the time-series dataset

To readers who saw the earlier version: this one was rewritten with reference to the official TensorFlow tutorial.

This article uses the Case Western Reserve University (CWRU) bearing fault vibration signal database as the model's training set.

Grayscale image conversion approach:

Reference blogs:

 一种基于卷积神经网络的数据驱动故障预测方法(含代码)_XD_onmyway的博客-CSDN博客

 GAN网络之入门教程(四)之基于DCGAN动漫头像生成 - 段小辉 - 博客园 (cnblogs.com)

As shown in the figure, a segment of a 1D time-series signal is converted into a number of 64×64 images. The code below converts the 1D time-series signals of the normal data, ball (rolling element) fault, outer-race fault, and inner-race fault datasets into grayscale images.
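The core transform is simple: take 4096 consecutive samples, reshape them into a 64×64 matrix, and min-max scale the values to [0, 255]. A minimal sketch on synthetic data (the random `signal` array below is just a stand-in for a CWRU recording):

```python
import numpy as np

# Stand-in for one channel of a CWRU vibration recording
rng = np.random.default_rng(0)
signal = rng.standard_normal(10000)

def segment_to_gray(signal, start, size=64):
    """Cut size*size consecutive samples starting at `start` and min-max scale to [0, 255]."""
    seg = np.asarray(signal[start:start + size * size], dtype=np.float64)
    img = seg.reshape(size, size)
    # Min-max normalization maps the smallest sample to 0 and the largest to 255
    img = 255 * (img - img.min()) / (img.max() - img.min())
    return img

img = segment_to_gray(signal, start=0)
print(img.shape)  # (64, 64)
```

Each image therefore covers 4096 samples; sampling 500 random start offsets per recording, as the full script below does, turns one long signal into 500 training images.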

import numpy as np
import sys
import scipy.io as io
import random

# ball_18
ball_18_0 = io.loadmat("./CWRU/ball/ball18/118")["X118_DE_time"].tolist()
ball_18_1 = io.loadmat("./CWRU/ball/ball18/119")["X119_DE_time"].tolist()
ball_18_2 = io.loadmat("./CWRU/ball/ball18/120")["X120_DE_time"].tolist()
ball_18_3 = io.loadmat("./CWRU/ball/ball18/121")["X121_DE_time"].tolist()
ball_18 = [ball_18_0, ball_18_1, ball_18_2, ball_18_3]

# ball_36
ball_36_0 = io.loadmat("./CWRU/ball/ball36/185")["X185_DE_time"].tolist()
ball_36_1 = io.loadmat("./CWRU/ball/ball36/186")["X186_DE_time"].tolist()
ball_36_2 = io.loadmat("./CWRU/ball/ball36/187")["X187_DE_time"].tolist()
ball_36_3 = io.loadmat("./CWRU/ball/ball36/188")["X188_DE_time"].tolist()
ball_36 = [ball_36_0, ball_36_1, ball_36_2, ball_36_3]

# ball_54
ball_54_0 = io.loadmat("./CWRU/ball/ball54/222")["X222_DE_time"].tolist()
ball_54_1 = io.loadmat("./CWRU/ball/ball54/223")["X223_DE_time"].tolist()
ball_54_2 = io.loadmat("./CWRU/ball/ball54/224")["X224_DE_time"].tolist()
ball_54_3 = io.loadmat("./CWRU/ball/ball54/225")["X225_DE_time"].tolist()
ball_54 = [ball_54_0, ball_54_1, ball_54_2, ball_54_3]

# inner_18
inner_18_0 = io.loadmat("./CWRU/inner/inner18/105")["X105_DE_time"].tolist()
inner_18_1 = io.loadmat("./CWRU/inner/inner18/106")["X106_DE_time"].tolist()
inner_18_2 = io.loadmat("./CWRU/inner/inner18/107")["X107_DE_time"].tolist()
inner_18_3 = io.loadmat("./CWRU/inner/inner18/108")["X108_DE_time"].tolist()
inner_18 = [inner_18_0, inner_18_1, inner_18_2, inner_18_3]

# inner_36
inner_36_0 = io.loadmat("./CWRU/inner/inner36/169")["X169_DE_time"].tolist()
inner_36_1 = io.loadmat("./CWRU/inner/inner36/170")["X170_DE_time"].tolist()
inner_36_2 = io.loadmat("./CWRU/inner/inner36/171")["X171_DE_time"].tolist()
inner_36_3 = io.loadmat("./CWRU/inner/inner36/172")["X172_DE_time"].tolist()
inner_36 = [inner_36_0, inner_36_1, inner_36_2, inner_36_3]

# inner_54
inner_54_0 = io.loadmat("./CWRU/inner/inner54/209")["X209_DE_time"].tolist()
inner_54_1 = io.loadmat("./CWRU/inner/inner54/210")["X210_DE_time"].tolist()
inner_54_2 = io.loadmat("./CWRU/inner/inner54/211")["X211_DE_time"].tolist()
inner_54_3 = io.loadmat("./CWRU/inner/inner54/212")["X212_DE_time"].tolist()
inner_54 = [inner_54_0, inner_54_1, inner_54_2, inner_54_3]

# outer_18
outer_18_0 = io.loadmat("./CWRU/outer/outer18/130")["X130_DE_time"].tolist()
outer_18_1 = io.loadmat("./CWRU/outer/outer18/131")["X131_DE_time"].tolist()
outer_18_2 = io.loadmat("./CWRU/outer/outer18/132")["X132_DE_time"].tolist()
outer_18_3 = io.loadmat("./CWRU/outer/outer18/133")["X133_DE_time"].tolist()
outer_18 = [outer_18_0, outer_18_1, outer_18_2, outer_18_3]

# outer_36
outer_36_0 = io.loadmat("./CWRU/outer/outer36/197")["X197_DE_time"].tolist()
outer_36_1 = io.loadmat("./CWRU/outer/outer36/198")["X198_DE_time"].tolist()
outer_36_2 = io.loadmat("./CWRU/outer/outer36/199")["X199_DE_time"].tolist()
outer_36_3 = io.loadmat("./CWRU/outer/outer36/200")["X200_DE_time"].tolist()
outer_36 = [outer_36_0, outer_36_1, outer_36_2, outer_36_3]

# outer_54
outer_54_0 = io.loadmat("./CWRU/outer/outer54/234")["X234_DE_time"].tolist()
outer_54_1 = io.loadmat("./CWRU/outer/outer54/235")["X235_DE_time"].tolist()
outer_54_2 = io.loadmat("./CWRU/outer/outer54/236")["X236_DE_time"].tolist()
outer_54_3 = io.loadmat("./CWRU/outer/outer54/237")["X237_DE_time"].tolist()
outer_54 = [outer_54_0, outer_54_1, outer_54_2, outer_54_3]

# normal
normal_0 = io.loadmat("./CWRU/normal/97")["X097_DE_time"].tolist()
normal_1 = io.loadmat("./CWRU/normal/98")["X098_DE_time"].tolist()
normal_2 = io.loadmat("./CWRU/normal/99")["X099_DE_time"].tolist()
normal_3 = io.loadmat("./CWRU/normal/100")["X100_DE_time"].tolist()
normal = [normal_0, normal_1, normal_2, normal_3]

# all_data
all_data = [
    normal,
    ball_18,
    ball_36,
    ball_54,
    inner_18,
    inner_36,
    inner_54,
    outer_18,
    outer_36,
    outer_54,
]

normal_imgs = []
inner_imgs = []
outer_imgs = []
ball_imgs = []


for index in range(10):
    data = all_data[index]
    for load_type in range(4):
        load_data = data[load_type]
        max_start = len(load_data) - 4096
        starts = []
        for i in range(500):
            # Draw a random start index that has not been used before
            while True:
                start = random.randint(0, max_start)
                if start not in starts:
                    starts.append(start)
                    break
            # Reshape 4096 consecutive samples into a 64x64 image
            temp = np.array(load_data[start: start + 4096]).reshape(64, 64)
            # Min-max scale each image into the [0, 255] gray range
            temp = 255 * (temp - temp.min()) / (temp.max() - temp.min())

            if index == 0:
                normal_imgs.append(temp)
            elif index in (1, 2, 3):
                ball_imgs.append(temp)
            elif index in (4, 5, 6):
                inner_imgs.append(temp)
            else:
                outer_imgs.append(temp)

np.savez("normal_imgs", *normal_imgs)
np.savez("ball_imgs", *ball_imgs)
np.savez("inner_imgs", *inner_imgs)
np.savez("outer_imgs", *outer_imgs)

After the code above runs, it produces four npz files: normal_imgs, ball_imgs, inner_imgs, and outer_imgs. Unzip them manually and place the contents into new folders.
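Note that `np.savez` called with positional arrays stores them under the keys arr_0, arr_1, …, which is why the extracted .npy files carry those names. A small sketch (the file name is illustrative) of inspecting an archive programmatically instead of unzipping by hand:

```python
import os
import tempfile

import numpy as np

a = np.zeros((64, 64))
b = np.ones((64, 64))

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "demo_imgs.npz")
    np.savez(path, a, b)  # positional arguments are stored as arr_0, arr_1, ...
    with np.load(path) as archive:
        keys = archive.files
        first = archive["arr_0"]

print(keys)  # ['arr_0', 'arr_1']
```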

Next, these npy files are read back in and converted into image format to feed into the DCGAN.

Converting npy files to jpg format

import numpy as np
from PIL import Image

def load_imgs(npzfile, path, savepath):
    images = np.load(npzfile)
    i = 0
    for file in images:
        i += 1
        image = np.load(path + '/' + file + '.npy')
        image = Image.fromarray(image)
        image = image.convert('L')
        # image.show()
        image.save(savepath + '/' + 'array_%d.jpg' % i)

load_imgs('normal_imgs.npz', 'normal_imgs', 'GAN_imgs/normal_imgs')
load_imgs('ball_imgs.npz', 'ball_imgs', 'GAN_imgs/ball_imgs')
load_imgs('inner_imgs.npz', 'inner_imgs', 'GAN_imgs/inner_imgs')
load_imgs('outer_imgs.npz', 'outer_imgs', 'GAN_imgs/outer_imgs')


DCGAN

Feed the images into the DCGAN model for training (taking outer_54_imgs as an example).

import tensorflow as tf

tf.__version__

'2.4.0'

import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
from tensorflow.keras import layers
import time
from IPython import display


# Location of the dataset
avatar_img_path = "./data/outer_imgs/outer_54_imgs"
train_images = []
for image_name in os.listdir(avatar_img_path):
    # Load each image as a (64, 64, 1) array
    image = imageio.imread(os.path.join(avatar_img_path, image_name))
    image = np.array(image).reshape((64, 64, 1))
    train_images.append(image)
train_images = np.array(train_images).astype('float32')
# Scale pixels from [0, 255] to [-1, 1] to match the generator's tanh output
train_images = (train_images - 127.5) / 127.5

BUFFER_SIZE = 1000
BATCH_SIZE = 32

# Shuffle and batch the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
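The `(x - 127.5) / 127.5` scaling above maps pixel values from [0, 255] to [-1, 1], matching the generator's tanh output range; `generate_and_save_images` later undoes it with `* 127.5 + 127.5`. A quick numpy check of the round trip:

```python
import numpy as np

pixels = np.array([0, 127.5, 255], dtype='float32')

normalized = (pixels - 127.5) / 127.5  # [0, 255] -> [-1, 1]
restored = normalized * 127.5 + 127.5  # [-1, 1] -> [0, 255]

print(normalized)  # [-1.  0.  1.]
```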

# Build the generator model
def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(4 * 4 * 1024, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.ReLU())

    model.add(layers.Reshape((4, 4, 1024)))

    model.add(layers.Conv2DTranspose(512, (2, 2), strides=(2, 2), padding='same', use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.ReLU())

    model.add(layers.Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same', use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.ReLU())

    model.add(layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same', use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.ReLU())

    model.add(layers.Conv2DTranspose(1, (2, 2), strides=(2, 2), padding='same', use_bias=False,activation='tanh'))

    return model
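With padding='same' and stride 2, each Conv2DTranspose doubles the spatial size (output = input × stride), so the generator upsamples 4 → 8 → 16 → 32 → 64; the discriminator below reverses this with stride-2 convolutions (output = ceil(input / stride)). A quick sketch of that arithmetic (the helper names are mine, not Keras APIs):

```python
import math

def conv_transpose_same(size, stride):
    """Conv2DTranspose with padding='same' gives size * stride."""
    return size * stride

def conv_same(size, stride):
    """Conv2D with padding='same' gives ceil(size / stride)."""
    return math.ceil(size / stride)

gen_sizes = [4]
for _ in range(4):  # four stride-2 transposed convolutions
    gen_sizes.append(conv_transpose_same(gen_sizes[-1], 2))

disc_sizes = [64]
for _ in range(4):  # four stride-2 convolutions
    disc_sizes.append(conv_same(disc_sizes[-1], 2))

print(gen_sizes)   # [4, 8, 16, 32, 64]
print(disc_sizes)  # [64, 32, 16, 8, 4]
```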

# Use the (as yet untrained) generator to create an image.
generator =make_generator_model()

noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')

time.sleep(10)



def make_discriminator_model():
    model = tf.keras.Sequential()
    # input: 64 x 64 x 1
    model.add(layers.Conv2D(128, (2, 2), strides=(2, 2), padding='same',
                            input_shape=[64, 64, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    # 32 x 32 x 128
    model.add(layers.Conv2D(256, (2, 2), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    # 16 x 16 x 256
    model.add(layers.Conv2D(512, (2, 2), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    # 8 x 8 x 512
    model.add(layers.Conv2D(1024, (2, 2), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    # 4 x 4 x 1024
    model.add(layers.Flatten())
    model.add(layers.Dense(1))

    return model


# Use the (as yet untrained) discriminator to classify images as real or fake.
# The model will be trained to output positive values for real images and negative values for fake images.
discriminator = make_discriminator_model()
#decision = discriminator(generated_image)
#print (decision)

## Define the loss functions and optimizers
# This helper computes the cross-entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

# Discriminator loss
def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss
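`BinaryCrossentropy(from_logits=True)` applies a sigmoid internally. As an illustration only (not the TensorFlow implementation), the same discriminator loss can be reproduced in plain numpy with the numerically stable form `max(z, 0) - z*y + log(1 + exp(-|z|))`; the logits below are made up:

```python
import numpy as np

def bce_from_logits(labels, logits):
    """Numerically stable sigmoid cross-entropy, averaged over the batch."""
    y = np.asarray(labels, dtype=float)
    z = np.asarray(logits, dtype=float)
    return np.mean(np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z))))

real_output = np.array([2.0, 1.0])    # hypothetical logits for real images
fake_output = np.array([-1.5, 0.5])   # hypothetical logits for generated images

real_loss = bce_from_logits(np.ones_like(real_output), real_output)
fake_loss = bce_from_logits(np.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
```

A perfectly undecided discriminator (logit 0) pays log(2) ≈ 0.693 per sample on both real and fake images.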

# Generator loss
def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)
# The two networks are trained separately, so the generator and discriminator use different optimizers.
generator_optimizer = tf.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5, beta_2=0.999)
discriminator_optimizer = tf.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5, beta_2=0.999)

# Define the training loop
EPOCHS = 5000  # 50
noise_dim = 100
num_examples_to_generate = 16

seed = tf.random.normal([num_examples_to_generate, noise_dim])

# The training loop begins with the generator receiving a random seed as input.
# That seed is used to produce an image.
# The discriminator is then used to classify real images (drawn from the training set)
# and fake images (produced by the generator).
def train_step(images):
    noise = tf.random.normal([BATCH_SIZE, noise_dim])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
      generated_images = generator(noise, training=True)

      real_output = discriminator(images, training=True)
      fake_output = discriminator(generated_images, training=True)

      gen_loss = generator_loss(fake_output)
      disc_loss = discriminator_loss(real_output, fake_output)

    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))

def train(dataset, epochs):
  his_gen_loss = []
  his_dis_loss = []
  for epoch in range(epochs):
    start = time.time()

    for image_batch in dataset:
        train_step(image_batch)
        final_batch=image_batch
    # Compute the loss values for this epoch
    noise = tf.random.normal([BATCH_SIZE, noise_dim])
    generated_images = generator(noise, training=True)

    real_output = discriminator(final_batch, training=True)
    fake_output = discriminator(generated_images, training=True)

    gen_loss = generator_loss(fake_output)
    disc_loss = discriminator_loss(real_output, fake_output)
    his_gen_loss.append(gen_loss)
    his_dis_loss.append(disc_loss)

    # Generate images for a GIF as training progresses
    #display.clear_output(wait=True)
    #generate_and_save_images(generator,
    #                         epoch + 1,
    #                         seed)
    print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))
    # Update the loss-curve plots every 100 epochs
    if epoch % 100 == 0:
        plt.plot(his_gen_loss, label='generator_loss')
        plt.xlabel('Epoch')
        plt.ylabel('gen_loss')
        plt.title("generator_loss")
        plt.ylim([0, 10])
        plt.savefig("gen_loss.png")
        plt.show()

        plt.plot(his_dis_loss, label='discriminator_loss')
        plt.xlabel('Epoch')
        plt.ylabel('dis_loss')
        plt.ylim([0, 3])
        plt.title("discriminator_loss")
        plt.savefig("dis_loss.png")
        plt.show()


  # Generate images after the final epoch
  display.clear_output(wait=True)
  generate_and_save_images(generator,
                           epochs,
                           seed)





# Generate and save images
def generate_and_save_images(model, epoch, test_input):
  # Note that `training` is set to False, so all layers
  # run in inference mode (e.g. batchnorm).
  predictions = model(test_input, training=False)

  fig = plt.figure(figsize=(4,4))

  for i in range(predictions.shape[0]):
      plt.subplot(4, 4, i+1)
      plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
      plt.axis('off')

  plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
  #plt.show()


# Train the model
train(train_dataset, EPOCHS)

generator.save('saved_model/gen_out_54')
discriminator.save('saved_model/dis_out_55')

The results are shown below:

The leftmost image is the real one.
