Theano Tutorial: Deep Belief Networks (DBN) (Source Code)


Feel free to fork my GitHub repo: https://github.com/zhaoyu611/DeepLearningTutorialForChinese

I have been learning Git recently, so this is a good chance to put that knowledge into practice. After reading through the Deep Learning tutorial I have a general grasp of the theory, but the Theano code sinks in much deeper when I write it out myself. So I rewrote the code from deeplearning.net; the comments are a translation of the original text plus my own understanding. Anyone interested is welcome to join this effort. If you have questions, feel free to contact me. Email: zhaoyuafeu@gmail.com QQ: 3062984605


# -*- coding: utf-8 -*-
__author__ = 'Administrator'

import cPickle
import gzip
import sys
import time
import numpy
import os
import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

from logistic_sgd import load_data,LogisticRegression
from mlp import HiddenLayer
from rbm import RBM

class DBN(object):
    """
    Deep Belief Network

    A DBN is obtained by stacking several RBMs on top of each other. The
    hidden layer of the i-th RBM serves as the input of the (i+1)-th RBM.
    The input of the first RBM is the input of the network, and the hidden
    layer of the last RBM is the network's output. When used for
    classification, a logistic regression layer is added on top of the DBN,
    turning it into an MLP.
    """
    def __init__(self,numpy_rng,theano_rng=None,n_ins=784,
                 hidden_layers_sizes=[500,500],n_outs=10):
        """
        This class supports a variable number of layers.

        :param numpy_rng: numpy.random.RandomState, numpy random number
                          generator used to initialize the weights
        :param theano_rng: theano.tensor.shared_randomstreams.RandomStreams;
                           if None, one is generated from a seed drawn from
                           numpy_rng
        :param n_ins: int, dimension of the DBN's input
        :param hidden_layers_sizes: list of ints, sizes of the hidden layers
        :param n_outs: int, dimension of the network's output
        :return:
        """
        self.sigmoid_layers=[]
        self.rbm_layers=[]
        self.params=[]
        self.n_layers=len(hidden_layers_sizes)
        assert self.n_layers>0

        if not theano_rng:
            theano_rng=RandomStreams(numpy_rng.randint(2**30))
        # allocate symbolic variables for the data
        self.x=T.matrix('x')
        self.y=T.ivector('y')

        # The DBN is an MLP whose intermediate-layer weights are shared with
        # different RBMs. We first construct the DBN as a deep MLP; while
        # building each sigmoid layer we also construct an RBM that shares
        # weights with that layer. During pre-training we train these RBMs
        # (which also updates the MLP's weights); during fine-tuning we
        # finish training the DBN by stochastic gradient descent on the MLP.

        for i in xrange(self.n_layers):
            # construct the sigmoid layer
            # for the first layer, the input size is the input size of the
            # network; for the other layers, it is the number of hidden
            # units of the layer below
            if i == 0:
                input_size = n_ins
            else:
                input_size = hidden_layers_sizes[i - 1]

            # the input of this layer is either the activation of the hidden
            # layer below, or the input of the network for the first layer
            if i == 0:
                layer_input = self.x
            else:
                layer_input = self.sigmoid_layers[-1].output

            sigmoid_layer = HiddenLayer(rng=numpy_rng,
                                        input=layer_input,
                                        n_in=input_size,
                                        n_out=hidden_layers_sizes[i],
                                        activation=T.nnet.sigmoid)
            self.sigmoid_layers.append(sigmoid_layer)
            self.params.extend(sigmoid_layer.params)

            # construct an RBM that shares weights with this sigmoid layer
            rbm_layer = RBM(numpy_rng=numpy_rng,
                            theano_rng=theano_rng,
                            input=layer_input,
                            n_visible=input_size,
                            n_hidden=hidden_layers_sizes[i],
                            W=sigmoid_layer.W,
                            hbias=sigmoid_layer.b)
            self.rbm_layers.append(rbm_layer)

        # add a logistic regression layer on top of the MLP
        self.logLayer = LogisticRegression(
            input=self.sigmoid_layers[-1].output,
            n_in=hidden_layers_sizes[-1],
            n_out=n_outs)
        self.params.extend(self.logLayer.params)

        # cost for the fine-tuning stage, and the errors on a minibatch
        self.finetune_cost = self.logLayer.negative_log_likelihood(self.y)
        self.errors = self.logLayer.errors(self.y)
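The `input_size` bookkeeping in the constructor's loop simply chains the layer sizes together: each sigmoid layer (and the RBM sharing its weights) takes its input width from the layer below, and the logistic layer on top maps the last hidden size to `n_outs`. A minimal standalone sketch (plain Python, with a hypothetical helper name `layer_dims`) of the (n_in, n_out) pairs the loop produces:

```python
def layer_dims(n_ins, hidden_layers_sizes, n_outs):
    """Return the (n_in, n_out) shape of every layer in the stack."""
    dims = []
    input_size = n_ins
    for n_hidden in hidden_layers_sizes:
        # each hidden layer reads from the layer below
        dims.append((input_size, n_hidden))
        input_size = n_hidden
    # the logistic regression layer sits on top of the last hidden layer
    dims.append((input_size, n_outs))
    return dims

print(layer_dims(784, [500, 500], 10))
# [(784, 500), (500, 500), (500, 10)]
```

With the default arguments (`n_ins=784`, `hidden_layers_sizes=[500, 500]`, `n_outs=10`) this matches the three weight matrices the DBN constructor above creates.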