Autoencoder -- Wikipedia

Reposted from Wikipedia. Original article: http://en.wikipedia.org/wiki/Autoencoder



From Wikipedia, the free encyclopedia

An auto-encoder is an artificial neural network used for learning efficient codings.[1] The aim of an auto-encoder is to learn a compressed representation (encoding) for a set of data, which means it is used for dimensionality reduction. Auto-encoders use three or more layers (a minimal code sketch of this layout follows the list):

  • An input layer. For example, in a face recognition task, the neurons in the input layer could map to pixels in the photograph.
  • A number of considerably smaller hidden layers, which will form the encoding.
  • An output layer, where each neuron has the same meaning as in the input layer.
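
A minimal sketch of this three-layer layout, written here in plain NumPy; the layer sizes and the sigmoid nonlinearity are illustrative assumptions, not something prescribed by the article:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    n_input, n_code = 784, 64                  # e.g. 28x28 pixels compressed to 64 code units
    rng = np.random.default_rng(0)

    # Encoder and decoder parameters: the hidden (code) layer is much smaller
    # than the input, and the output layer has the same size as the input.
    W_enc = rng.normal(0, 0.01, (n_input, n_code)); b_enc = np.zeros(n_code)
    W_dec = rng.normal(0, 0.01, (n_code, n_input)); b_dec = np.zeros(n_input)

    def encode(x):
        return sigmoid(x @ W_enc + b_enc)      # compressed representation (the encoding)

    def decode(h):
        return h @ W_dec + b_dec               # reconstruction in the input space

    x = rng.random((1, n_input))               # one dummy "photograph"
    x_hat = decode(encode(x))                  # same shape and meaning as the input layer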

If linear neurons are used, or only a single sigmoid hidden layer, then the optimal solution to an auto-encoder is strongly related to PCA.[2]
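
For the linear case this relationship can be stated precisely; the following is a standard result in the spirit of [2], paraphrased rather than quoted. With a centred data matrix X (n examples by d features), k linear hidden units, and squared reconstruction error, the training problem and its optimum are

    \min_{W_e \in \mathbb{R}^{d \times k},\; W_d \in \mathbb{R}^{k \times d}} \lVert X - X W_e W_d \rVert_F^2,
    \qquad W_e^{*} W_d^{*} = U_k U_k^{\top},

where the columns of U_k are the first k principal directions of X (the top eigenvectors of X^\top X). The code layer therefore spans the same subspace as the first k principal components, although the individual hidden units need not coincide with them.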

Auto-encoders can also be used to learn overcomplete feature representations of data.
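
One common way to make such an overcomplete code useful (more hidden units than input dimensions) is to add a sparsity penalty to the reconstruction error, so that only a few code units are active for any given example. The L1 penalty and all constants below are illustrative assumptions, not part of the article:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sparse_ae_loss(x, W_enc, b_enc, W_dec, b_dec, sparsity_weight=1e-3):
        # The code layer is wider than the input (overcomplete), so the L1 term
        # is what keeps the representation from becoming trivial.
        h = sigmoid(x @ W_enc + b_enc)
        x_hat = h @ W_dec + b_dec
        reconstruction = np.mean((x_hat - x) ** 2)
        sparsity = sparsity_weight * np.mean(np.abs(h))
        return reconstruction + sparsity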

Training

An auto-encoder is often trained using one of the many variants of backpropagation (conjugate gradient method, steepest descent, etc.). Though this is often reasonably effective, there are fundamental problems with using backpropagation to train networks with many hidden layers: once the errors are backpropagated to the first few layers, they become minuscule and quite ineffectual. This causes the network to almost always learn to reconstruct the average of all the training data. Though more advanced backpropagation methods (such as the conjugate gradient method) help with this to some degree, they still result in very slow learning and poor solutions. This problem is remedied by using initial weights that approximate the final solution. The process of finding these initial weights is often called pretraining.
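
As a concrete illustration of such training, here is a small sketch of plain steepest-descent backpropagation for a single-hidden-layer auto-encoder with a sigmoid code layer, a linear output layer, and squared reconstruction error; the data, sizes, learning rate, and epoch count are all arbitrary choices for the example:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = rng.random((500, 64))                  # toy data: 500 examples, 64 features
    n_in, n_code, lr = 64, 16, 0.1

    W1 = rng.normal(0, 0.1, (n_in, n_code)); b1 = np.zeros(n_code)
    W2 = rng.normal(0, 0.1, (n_code, n_in)); b2 = np.zeros(n_in)

    for epoch in range(200):
        # Forward pass: encode, then decode back into the input space.
        H = sigmoid(X @ W1 + b1)
        X_hat = H @ W2 + b2
        loss = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1))

        # Backward pass: ordinary backpropagation of the squared error.
        d_out = (X_hat - X) / X.shape[0]       # dLoss/dX_hat
        dW2 = H.T @ d_out
        db2 = d_out.sum(axis=0)
        dH = d_out @ W2.T
        dZ1 = dH * H * (1.0 - H)               # derivative through the sigmoid
        dW1 = X.T @ dZ1
        db1 = dZ1.sum(axis=0)

        # Steepest-descent parameter update.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2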

A pretraining technique developed by Geoffrey Hinton for training many-layered "deep" auto-encoders involves treating each neighboring pair of layers as a restricted Boltzmann machine during pretraining, so as to approximate a good solution, and then using a backpropagation technique to fine-tune the result.[3]
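
A heavily simplified sketch of that idea follows: each restricted Boltzmann machine is trained with one step of contrastive divergence (CD-1) on the activities of the layer below, and the resulting weights initialise the encoder (and, transposed, the decoder) before backpropagation fine-tuning. The Bernoulli units, CD-1, and every hyper-parameter here are assumptions for illustration; the actual procedure in [3] is more involved:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_rbm(data, n_hidden, lr=0.05, epochs=50, seed=0):
        # One restricted Boltzmann machine trained with CD-1.
        rng = np.random.default_rng(seed)
        n_visible = data.shape[1]
        W = rng.normal(0, 0.01, (n_visible, n_hidden))
        b_vis, b_hid = np.zeros(n_visible), np.zeros(n_hidden)
        for _ in range(epochs):
            # Positive phase: hidden probabilities given the data.
            h_prob = sigmoid(data @ W + b_hid)
            h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
            # Negative phase: one reconstruction step.
            v_recon = sigmoid(h_sample @ W.T + b_vis)
            h_recon = sigmoid(v_recon @ W + b_hid)
            # Approximate log-likelihood gradient (positive minus negative statistics).
            W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / data.shape[0]
            b_vis += lr * (data - v_recon).mean(axis=0)
            b_hid += lr * (h_prob - h_recon).mean(axis=0)
        return W, b_hid

    # Greedy layer-wise pretraining: each RBM sees the previous layer's output.
    rng = np.random.default_rng(0)
    X = (rng.random((500, 64)) > 0.5).astype(float)    # toy binary data
    activations, pretrained = X, []
    for n_hidden in [32, 16]:
        W, b_hid = train_rbm(activations, n_hidden)
        pretrained.append((W, b_hid))                  # initial encoder weights for this layer
        activations = sigmoid(activations @ W + b_hid)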

References

  1. Liou, C.-Y., Huang, J.-C. and Yang, W.-C., "Modeling word perception using the Elman network", Neurocomputing, Volume 71, 3150–3157 (2008), doi:10.1016/j.neucom.2008.04.030
  2. Bourlard, H. and Kamp, Y., "Auto-association by multilayer perceptrons and singular value decomposition", Biological Cybernetics, Volume 59, Numbers 4-5, 291–294 (1988), doi:10.1007/BF00332918
  3. Hinton, G. E. and Salakhutdinov, R. R., "Reducing the Dimensionality of Data with Neural Networks", Science, 28 July 2006.
