1D Convolutional Variational Autoencoder in Python/Keras

I am trying to adapt this example from the git repo, using another example from the same repo here (which uses deconvolution).

I can't figure out where it goes wrong, but it seems pretty basic. Here it goes:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Keras uses the TensorFlow backend by default
from keras.layers import Input, Dense, Lambda, Flatten, Reshape
from keras.layers import Conv1D, UpSampling1D
from keras.models import Model
from keras import backend as K
from keras import metrics
from keras.datasets import mnist

# Input image dimensions
steps, original_dim = 1, 28*28  # Take care here since we are changing this according to the data
# Number of convolutional filters to use
filters = 64
# Convolution kernel size
num_conv = 6
# Set batch size
batch_size = 100
# Decoder output dimensionality
decOutput = 10

latent_dim = 20
intermediate_dim = 256
epsilon_std = 1.0
epochs = 5

x = Input(batch_shape=(batch_size, steps, original_dim))

# Play around with padding here, not sure what to go with.
conv_1 = Conv1D(1,
                kernel_size=num_conv,
                padding='same',
                activation='relu')(x)
conv_2 = Conv1D(filters,
                kernel_size=num_conv,
                padding='same',
                activation='relu',
                strides=1)(conv_1)
flat = Flatten()(conv_2)  # Since we are passing flat data anyway, we probably don't need this.
hidden = Dense(intermediate_dim, activation='relu')(flat)

z_mean = Dense(latent_dim)(hidden)
z_log_var = Dense(latent_dim)(hidden)

def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(batch_size, latent_dim),
                              mean=0., stddev=epsilon_std)
    return z_mean + K.exp(z_log_var) * epsilon  # the original VAE divides z_log_var by two -- why?

# note that "output_shape" isn't necessary with the TensorFlow backend
# so you could write `Lambda(sampling)([z_mean, z_log_var])`
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# we instantiate these layers separately so as to reuse them later
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)

def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean)
    # Double-check what this term is supposed to be
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss

vae = Model(x, x_decoded_mean)
vae.compile(optimizer='adam', loss=vae_loss)  # 'rmsprop'
vae.summary()
```

The output looks like this:

```
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_31 (InputLayer)            (100, 1, 784)         0
____________________________________________________________________________________________________
conv1d_87 (Conv1D)               (100, 1, 1)           4705
____________________________________________________________________________________________________
conv1d_88 (Conv1D)               (100, 1, 64)          448
____________________________________________________________________________________________________
flatten_29 (Flatten)             (100, 64)             0
____________________________________________________________________________________________________
dense_134 (Dense)                (100, 256)            16640
____________________________________________________________________________________________________
dense_135 (Dense)                (100, 20)             5140
____________________________________________________________________________________________________
dense_136 (Dense)                (100, 20)             5140
____________________________________________________________________________________________________
lambda_24 (Lambda)               (100, 20)             0
____________________________________________________________________________________________________
dense_137 (Dense)                (100, 256)            5376
____________________________________________________________________________________________________
dense_138 (Dense)                (100, 784)            201488
====================================================================================================
Total params: 238,937.0
Trainable params: 238,937.0
Non-trainable params: 0.0
____________________________________________________________________________________________________
```

Then, if I try to run it like this:

```python
from keras.datasets import mnist

img_rows, img_cols = 1, 28*28
original_img_size = (img_rows, img_cols)

# train the VAE on MNIST digits
(x_train, _), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_train = x_train.reshape((x_train.shape[0],) + original_img_size)
print('x_train.shape:', x_train.shape)

N = 1000
epochs = 2
batch_size = int(N/10)
vae.fit(x_train[0:N, :], x_train[0:N, :],
        shuffle=True,
        epochs=epochs,
        batch_size=batch_size)
```

I get this error, and I can't quite figure out how to get past it. It has something to do with going from Conv1D to Dense…

```
ValueError: Cannot feed value of shape (100, 1, 784) for Tensor u'dense_138_target:0', which has shape '(?, ?)'
```
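The root of the error is a shape mismatch between the two ends of the model: the input (and therefore the target Keras builds for `vae_loss`) has shape `(batch_size, 1, 784)`, while the final decoder layer is a plain `Dense(original_dim)`, so its output `dense_138` comes out as `(batch_size, 784)` (see the summary above). One way to reconcile them, mirroring how the linked deconvolution example handles its loss, is to reshape the decoder output back to `(steps, original_dim)` and flatten both tensors inside the loss. Below is a minimal sketch of the changed pieces, meant to replace the corresponding definitions above; it also restores the conventional `z_log_var / 2` in the sampler, since `z_log_var` models log(sigma^2) and the standard deviation is therefore exp(z_log_var / 2):

```python
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(batch_size, latent_dim),
                              mean=0., stddev=epsilon_std)
    # z_log_var is log(sigma^2), so the standard deviation is exp(z_log_var / 2)
    return z_mean + K.exp(z_log_var / 2) * epsilon

z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# Reshape the decoder output so it matches the (batch_size, steps, original_dim) target
decoder_reshape = Reshape((steps, original_dim))
h_decoded = decoder_h(z)
x_decoded_mean = decoder_reshape(decoder_mean(h_decoded))

def vae_loss(x, x_decoded_mean):
    # Flatten both tensors so the reconstruction term is computed element-wise
    x = K.flatten(x)
    x_decoded_mean = K.flatten(x_decoded_mean)
    xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean)
    # KL divergence between the approximate posterior and the unit Gaussian prior
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss

vae = Model(x, x_decoded_mean)
vae.compile(optimizer='adam', loss=vae_loss)
```

With the `Reshape` in place, the model's output shape matches the target's, so `vae.fit(x_train[0:N,:], x_train[0:N,:], ...)` should no longer trip over the `(100, 1, 784)` vs `(?, ?)` mismatch.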

OK, here is a code example of a one-dimensional convolutional denoising autoencoder in Python/Keras:

```python
from keras.layers import Input, Conv1D, MaxPooling1D, UpSampling1D
from keras.models import Model

# Define the shape of the input data
input_shape = (None, 1)

# Define the encoder
inputs = Input(shape=input_shape)
x = Conv1D(32, 3, activation='relu', padding='same')(inputs)
x = MaxPooling1D(2, padding='same')(x)
x = Conv1D(16, 3, activation='relu', padding='same')(x)
x = MaxPooling1D(2, padding='same')(x)
x = Conv1D(8, 3, activation='relu', padding='same')(x)
encoded = MaxPooling1D(2, padding='same')(x)

# Define the decoder
x = Conv1D(8, 3, activation='relu', padding='same')(encoded)
x = UpSampling1D(2)(x)
x = Conv1D(16, 3, activation='relu', padding='same')(x)
x = UpSampling1D(2)(x)
x = Conv1D(32, 3, activation='relu', padding='same')(x)
x = UpSampling1D(2)(x)
decoded = Conv1D(1, 3, activation='sigmoid', padding='same')(x)

# Build the autoencoder model
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

# Print a model summary
autoencoder.summary()
```

In this example we use Keras to implement a one-dimensional convolutional denoising autoencoder. We first define the encoder and decoder layers, then connect them with Keras's Model class into a complete autoencoder. The model is compiled with the Adadelta optimizer and a binary cross-entropy loss.

The encoder and decoder are built from convolution and pooling layers. ReLU is used as the activation function throughout, with a sigmoid activation on the last decoder layer so that the output values stay between 0 and 1. Because the input data is one-dimensional, Conv1D and MaxPooling1D layers are used to process it.

Binary cross-entropy is used here because the input data is assumed to be binary (or scaled to the [0, 1] range); if your data is not, you can use another type of loss function.

That is a complete code example of a one-dimensional convolutional denoising autoencoder in Python/Keras; hopefully it helps!
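To actually train it as a denoising autoencoder, you fit on corrupted inputs with the clean signals as targets. Here is a minimal, hypothetical usage sketch: it assumes signals of shape `(num_samples, length, 1)` scaled to [0, 1], with `length` divisible by 8 (the three pooling stages each halve it), and a made-up `noise_factor`; random data stands in for a real dataset:

```python
import numpy as np

# Stand-in data: 1000 signals of length 784, one channel, scaled to [0, 1]
x_train = np.random.rand(1000, 784, 1).astype('float32')

# Corrupt the inputs with Gaussian noise; the clean signals remain the targets
noise_factor = 0.3  # assumed noise level -- tune for your data
x_train_noisy = np.clip(
    x_train + noise_factor * np.random.normal(size=x_train.shape),
    0., 1.).astype('float32')

# Train the model to map noisy inputs back to the clean originals
autoencoder.fit(x_train_noisy, x_train,
                epochs=10,
                batch_size=32,
                shuffle=True)
```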
