The autoencoder is a basic neural-network architecture: by forcing the network's output to equal its input, it drives the hidden layer to learn the features that best represent the original data. An autoencoder is an unsupervised learning method, also called self-supervised learning (the network's target is its own input), and it can perform nonlinear dimensionality reduction on data. It can also serve as a feature-extraction tool: the features learned while training the autoencoder can be used for downstream classification tasks. In addition, the weights learned by an autoencoder can be used to initialize a deep neural network, making it easier for the deep network to converge to a minimum.
import tensorflow as tf  # TensorFlow 1.x API

class AdditiveGaussianNoiseAutoencoder(object):  # a denoising autoencoder with additive Gaussian noise
    def __init__(self, n_input, n_hidden, transfer_function=tf.nn.sigmoid,
                 optimizer=tf.train.AdamOptimizer(), scale=0.1):
        self.n_input = n_input                   # input dimension
        self.n_hidden = n_hidden                 # hidden-layer dimension
        self.transfer = transfer_function        # activation function
        self.scale = tf.placeholder(tf.float32)  # noise strength, fed in at run time
        self.training_scale = scale
        network_weights = self._initialize_weights()
        self.weights = network_weights

        # model: corrupt the input with Gaussian noise, then encode and decode
        self.x = tf.placeholder(tf.float32, [None, self.n_input])
        self.hidden = self.transfer(tf.add(
            tf.matmul(self.x + self.scale * tf.random_normal((n_input,)),
                      self.weights['w1']),
            self.weights['b1']))
        self.reconstruction = self.transfer(
            tf.add(tf.matmul(self.hidden, self.weights['w2']), self.weights['b2']))

        # cost: squared reconstruction error plus L2 regularization on w1
        regularizer = tf.contrib.layers.l2_regularizer(0.05)
        self.cost = (0.5 * tf.reduce_sum(tf.pow(tf.subtract(self.reconstruction, self.x), 2.0))
                     + regularizer(self.weights['w1']))
        self.optimizer = optimizer.minimize(self.cost)

        init = tf.global_variables_initializer()
        self.sess = tf.Session()
        self.sess.run(init)

    def _initialize_weights(self):
        all_weights = dict()
        all_weights['w1'] = tf.Variable(
            tf.truncated_normal([self.n_input, self.n_hidden], stddev=0.1))
        all_weights['b1'] = tf.Variable(tf.zeros([self.n_hidden], dtype=tf.float32))
        all_weights['w2'] = tf.Variable(
            tf.truncated_normal([self.n_hidden, self.n_input], stddev=0.1))
        all_weights['b2'] = tf.Variable(tf.zeros([self.n_input], dtype=tf.float32))
        return all_weights
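To make the training objective concrete, below is a minimal NumPy-only sketch of the same idea (the names `noise_scale`, `reconstruction_loss`, and the toy dimensions are illustrative, and the L2 regularizer on w1 is omitted for brevity): corrupt the input with Gaussian noise, encode and decode through sigmoid layers, and minimize the squared error against the clean input.

```python
import numpy as np

rng = np.random.default_rng(0)
n_input, n_hidden = 8, 4
noise_scale = 0.1  # plays the role of training_scale above

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# small random weights, zero biases
w1 = rng.normal(0.0, 0.1, (n_input, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(0.0, 0.1, (n_hidden, n_input))
b2 = np.zeros(n_input)

x = rng.random((32, n_input))  # toy batch in [0, 1), so a sigmoid decoder can fit it

def reconstruction_loss():
    r = sigmoid(sigmoid(x @ w1 + b1) @ w2 + b2)
    return 0.5 * np.sum((r - x) ** 2)

initial_loss = reconstruction_loss()
lr = 0.5
for _ in range(2000):
    noisy = x + noise_scale * rng.standard_normal(x.shape)  # corrupt the input
    h = sigmoid(noisy @ w1 + b1)                            # encoder
    r = sigmoid(h @ w2 + b2)                                # decoder
    d_r = (r - x) * r * (1 - r)      # backprop: reconstruct the CLEAN x
    d_h = (d_r @ w2.T) * h * (1 - h)
    w2 -= lr * (h.T @ d_r) / len(x)
    b2 -= lr * d_r.mean(axis=0)
    w1 -= lr * (noisy.T @ d_h) / len(x)
    b1 -= lr * d_h.mean(axis=0)

final_loss = reconstruction_loss()
```

This hand-written loop corresponds to repeatedly running `self.optimizer` in the TensorFlow class with a `feed_dict` supplying a batch for `self.x` and `training_scale` for `self.scale`; the hidden activations `h` are the learned features that can be reused for downstream classification.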