Data Augmentation for the DEAP EEG Dataset


First, load the data from the per-subject .dat files into a NumPy array of shape 1280 x 32 x 8064 (1280 = 32 subjects * 40 trials).
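Each DEAP `.dat` file (one per subject, `s01.dat` through `s32.dat` in the preprocessed Python release) is a pickled dict with a `'data'` array of shape (40, 40, 8064) and a `'labels'` array of shape (40, 4). A minimal loading sketch, assuming that layout and keeping only the first 32 EEG channels; the demo below uses small synthetic files instead of the real dataset:

```python
import os
import pickle
import tempfile

import numpy as np

def load_deap(paths, n_channels=32):
    """Stack per-subject DEAP recordings into (trials, channels, samples) arrays."""
    all_data, all_labels = [], []
    for path in paths:
        with open(path, 'rb') as f:
            # encoding='latin1' handles the Python-2 pickles of the original release
            subject = pickle.load(f, encoding='latin1')
        all_data.append(subject['data'][:, :n_channels, :])  # keep the 32 EEG channels
        all_labels.append(subject['labels'])
    return np.concatenate(all_data), np.concatenate(all_labels)

# Demo: write two synthetic subject files (real DEAP: 32 subjects, 8064 samples per trial)
tmp = tempfile.mkdtemp()
paths = []
for i in range(2):
    p = os.path.join(tmp, 's%02d.dat' % (i + 1))
    with open(p, 'wb') as f:
        pickle.dump({'data': np.random.randn(40, 40, 128),
                     'labels': np.random.rand(40, 4)}, f)
    paths.append(p)

data, labels = load_deap(paths)
print(data.shape, labels.shape)
```

With all 32 subject files, `data` would come out as (1280, 32, 8064) and `labels` as (1280, 4), matching the shapes above.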

Apply Z-score normalization to each EEG signal; the resulting NumPy array, sub_data, holds all of the data.
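Z-scoring each signal along its time axis is a one-liner with NumPy broadcasting; a small sketch (the epsilon guard against flat channels is an addition, not from the original):

```python
import numpy as np

def zscore(data, axis=-1):
    """Z-score each signal along the time axis: zero mean, unit variance."""
    mean = data.mean(axis=axis, keepdims=True)
    std = data.std(axis=axis, keepdims=True)
    return (data - mean) / (std + 1e-8)  # epsilon avoids division by zero on flat signals

# Stand-in for the (1280, 32, 8064) array; any trailing time axis works
x = np.random.randn(4, 32, 128) * 5 + 3
z = zscore(x)
```

After the call, every (trial, channel) signal in `z` has mean ~0 and standard deviation ~1.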

Load the labels into a 1280 x 4 NumPy array named sub_labels.

Apply one-hot encoding to the labels.
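DEAP labels are continuous 1-9 ratings (valence, arousal, dominance, liking). The article does not say how they are discretized, but one common reading of a 4-class one-hot scheme is the valence/arousal quadrants with a threshold at 5; both the quadrant mapping and the threshold are assumptions in this sketch:

```python
import numpy as np

def quadrant_onehot(ratings):
    """Map continuous valence/arousal ratings (1-9) to 4 one-hot quadrant classes.

    Assumption: class = valence-high/low crossed with arousal-high/low,
    thresholded at 5 (HVHA=3, HVLA=2, LVHA=1, LVLA=0).
    """
    valence, arousal = ratings[:, 0], ratings[:, 1]
    cls = 2 * (valence > 5).astype(int) + (arousal > 5).astype(int)
    return np.eye(4)[cls]

ratings = np.array([[7.2, 6.1, 5.0, 4.0],   # high valence, high arousal
                    [2.3, 8.0, 5.0, 4.0]])  # low valence, high arousal
onehot = quadrant_onehot(ratings)
print(onehot)
```

Applied to all 1280 trials this yields the (1280, 4) one-hot label array.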

Train/test split:

Training set: (1152, 32, 8064), (1152, 4)
Test set: (128, 32, 8064), (128, 4)
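These sizes correspond to a 90/10 split of the 1280 trials. A minimal shuffled-split sketch (the fixed seed is an assumption for reproducibility; the demo shortens the time axis to keep the arrays small):

```python
import numpy as np

def train_test_split(data, labels, test_ratio=0.1, seed=0):
    """Shuffle and split; 1280 trials at a 0.1 test ratio gives 1152 / 128."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    n_test = int(len(data) * test_ratio)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return data[train_idx], labels[train_idx], data[test_idx], labels[test_idx]

# Small stand-in (real shapes: (1280, 32, 8064) and (1280, 4))
data = np.random.randn(1280, 32, 8)
labels = np.eye(4)[np.random.randint(0, 4, 1280)]
x_train, y_train, x_test, y_test = train_test_split(data, labels)
print(x_train.shape, x_test.shape)
```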

Repeat the labels to match the expanded training data:

(13824, 4), (6912, 4), (4608, 4)

13824 = 1152 * 12

6912 = 1152 * 6

4608 = 1152 * 4
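Since each trial is cut into k segments, its label must be repeated k times; the label counts above follow directly from `np.repeat` on the 1152 training labels:

```python
import numpy as np

# Stand-in for the (1152, 4) one-hot training labels
y_train = np.eye(4)[np.random.randint(0, 4, 1152)]

# Repeat each trial's label once per sub-signal, for each segmentation factor
repeated = {k: np.repeat(y_train, k, axis=0) for k in (12, 6, 4)}
for k, y in repeated.items():
    print(k, y.shape)
```

`np.repeat` keeps each trial's k copies adjacent, so the label order matches segments cut trial by trial.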

12 sub-signals of length 672 each, 13824 instances in total: (32, 13824, 672, 1)

6 sub-signals of length 1344 each, 6912 instances in total: (32, 6912, 1344, 1)

4 sub-signals of length 2016 each, 4608 instances in total: (32, 4608, 2016, 1)
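The augmentation itself is just a reshape: cut each 8064-sample trial into k equal non-overlapping sub-signals of length 8064 / k, then rearrange to the channel-first shapes listed above. A sketch (the demo uses 8 trials instead of 1152 to keep memory small):

```python
import numpy as np

def segment(data, k):
    """Cut each trial into k equal non-overlapping sub-signals.

    data: (trials, channels, samples) -> (channels, trials * k, samples // k, 1)
    """
    trials, channels, samples = data.shape
    seg_len = samples // k
    x = data.reshape(trials, channels, k, seg_len)  # split the time axis into k blocks
    # channel-first, merge trial and segment axes, add a trailing feature axis
    return x.transpose(1, 0, 2, 3).reshape(channels, trials * k, seg_len, 1)

# Stand-in for the (1152, 32, 8064) training array
data = np.random.randn(8, 32, 8064)
for k in (12, 6, 4):
    print(k, segment(data, k).shape)
```

With 1152 trials this produces exactly (32, 13824, 672, 1), (32, 6912, 1344, 1), and (32, 4608, 2016, 1); each trial's segments stay adjacent, matching the `np.repeat` label ordering.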

Below is Keras code for augmenting the DEAP EEG dataset with a CGAN (conditional GAN):

```python
import numpy as np
from keras.layers import (Input, Dense, Reshape, Flatten, Dropout, concatenate,
                          multiply, Conv1D, BatchNormalization, Embedding,
                          LeakyReLU)
from keras.models import Sequential, Model
from keras.optimizers import Adam

NUM_CLASSES = 4       # DEAP quadrant classes (labels array is 1280 x 4 one-hot)
LATENT_DIM = 100
CHANNELS, SAMPLES = 32, 8064

# Load DEAP dataset
def load_data():
    # data.npy: (1280, 32, 8064); labels.npy: integer class indices
    # (argmax of the one-hot labels -- Embedding layers expect integers)
    data = np.load('data.npy')
    labels = np.load('labels.npy').reshape(-1, 1)

    # Min-max normalize to [0, 1]
    data = (data - np.min(data)) / (np.max(data) - np.min(data))

    # Reshape to (trials, channels, samples_per_channel)
    data = np.reshape(data, (data.shape[0], CHANNELS, -1))
    return data, labels

# Generator: (noise, class label) -> synthetic EEG trial
def build_generator():
    model = Sequential()
    model.add(Dense(256, input_dim=LATENT_DIM))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(1024))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    # NOTE: this final layer is very large; generating the shorter 672-sample
    # segments instead of full trials is far more practical
    model.add(Dense(CHANNELS * SAMPLES, activation='tanh'))
    model.add(Reshape((CHANNELS, SAMPLES)))

    noise = Input(shape=(LATENT_DIM,))
    label = Input(shape=(1,), dtype='int32')
    # Embed the integer class label into the latent space and gate the noise
    label_embedding = Flatten()(Embedding(NUM_CLASSES, LATENT_DIM)(label))
    model_input = multiply([noise, label_embedding])
    signal = model(model_input)
    return Model([noise, label], signal)

# Discriminator: (EEG trial, class label) -> real/fake probability
def build_discriminator():
    model = Sequential()
    # Input features are doubled because the label map is concatenated below
    model.add(Conv1D(32, kernel_size=3, strides=2,
                     input_shape=(CHANNELS, SAMPLES * 2)))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(Conv1D(64, kernel_size=3, strides=2, padding='same'))
    model.add(BatchNormalization(momentum=0.8))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Conv1D(128, kernel_size=3, strides=2, padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv1D(256, kernel_size=3, strides=1, padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv1D(512, kernel_size=3, strides=1, padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))

    signal = Input(shape=(CHANNELS, SAMPLES))
    label = Input(shape=(1,), dtype='int32')
    # Embed the label as a signal-shaped map and concatenate on the feature axis
    label_embedding = Flatten()(Embedding(NUM_CLASSES, CHANNELS * SAMPLES)(label))
    label_embedding = Reshape((CHANNELS, SAMPLES))(label_embedding)
    concat = concatenate([signal, label_embedding], axis=-1)
    validity = model(concat)
    return Model([signal, label], validity)

# Combined model: the generator tries to fool a frozen discriminator
def build_cgan(generator, discriminator):
    discriminator.trainable = False
    noise = Input(shape=(LATENT_DIM,))
    label = Input(shape=(1,), dtype='int32')
    validity = discriminator([generator([noise, label]), label])
    cgan = Model([noise, label], validity)
    cgan.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5))
    return cgan

# Train CGAN model
def train_cgan(generator, discriminator, cgan, data, labels, epochs, batch_size):
    # Adversarial ground truth
    valid = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))

    for epoch in range(epochs):
        # ---------------------
        #  Train Discriminator
        # ---------------------
        # Select a random batch of real data and labels
        idx = np.random.randint(0, data.shape[0], batch_size)
        real_data, real_labels = data[idx], labels[idx]

        # Sample noise and random labels, generate a batch of fake data
        noise = np.random.normal(0, 1, (batch_size, LATENT_DIM))
        fake_labels = np.random.randint(0, NUM_CLASSES, (batch_size, 1))
        gen_data = generator.predict([noise, fake_labels])

        # Train the discriminator on real and fake data
        d_loss_real = discriminator.train_on_batch([real_data, real_labels], valid)
        d_loss_fake = discriminator.train_on_batch([gen_data, fake_labels], fake)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

        # ---------------------
        #  Train Generator
        # ---------------------
        noise = np.random.normal(0, 1, (batch_size, LATENT_DIM))
        fake_labels = np.random.randint(0, NUM_CLASSES, (batch_size, 1))

        # Train the generator to fool the discriminator
        g_loss = cgan.train_on_batch([noise, fake_labels], valid)

        # Print progress
        print("%d [D loss: %f] [G loss: %f]" % (epoch, d_loss, g_loss))

# Load data
data, labels = load_data()

# Build and compile the discriminator, build the generator
generator = build_generator()
discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5))

# Build and train the combined CGAN model
cgan = build_cgan(generator, discriminator)
train_cgan(generator, discriminator, cgan, data, labels, epochs=2000, batch_size=32)
```

Note: the code above is for reference only and may need to be adjusted and modified for your specific setup.