Deep Learning -- Boltzmann Machine

A Boltzmann machine (BM) can be seen as a stochastic generalization of the Hopfield network, and an RBM is in turn a special case of the BM. It took me a long time staring at the code below before it clicked; luckily I had read some theory papers on BMs beforehand.

def sample_h_given_v(self, v0_sample):
    ''' This function infers state of hidden units given visible units '''
    # compute the activation of the hidden units given a sample of the visibles
    pre_sigmoid_h1, h1_mean = self.propup(v0_sample)
    # get a sample of the hiddens given their activation
    # Note that theano_rng.binomial returns a symbolic sample of dtype
    # int64 by default. If we want to keep our computations in floatX
    # for the GPU we need to specify to return the dtype floatX
    h1_sample = self.theano_rng.binomial(size=h1_mean.shape, n=1, p=h1_mean,
                                         dtype=theano.config.floatX)
    return [pre_sigmoid_h1, h1_mean, h1_sample]

 

Here h1_sample is drawn from a binomial distribution, sampling according to the probability p, which incorporates the same idea as simulated annealing.

You can use a sigmoid to produce an approximate probability in [0, 1] and then run a simple experiment with binomial sampling, as in the sketch below.
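A minimal sketch of that experiment, in plain numpy rather than Theano so it runs standalone (the value and names here are my own, not from the tutorial): squash an activation into a probability with a sigmoid, then draw 0/1 samples at that probability and check that the empirical mean matches it.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.RandomState(1234)
    pre_activation = 0.8             # a made-up pre-sigmoid activation
    p = sigmoid(pre_activation)      # probability of the unit turning on
    samples = rng.binomial(n=1, p=p, size=10000)
    print(p, samples.mean())         # the empirical mean approaches p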

def propup(self, vis):
    ''' This function propagates the visible units activation upwards to
    the hidden units

    Note that we return also the pre-sigmoid activation of the layer. As
    it will turn out later, due to how Theano deals with optimizations,
    this symbolic variable will be needed to write down a more
    stable computational graph (see details in the reconstruction cost function)
    '''
    pre_sigmoid_activation = T.dot(vis, self.W) + self.hbias
    return [pre_sigmoid_activation, T.nnet.sigmoid(pre_sigmoid_activation)]
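For reference, propup computes the standard RBM conditional probability of each hidden unit being on given the visible layer:

    P(h_j = 1 | v) = sigmoid(sum_i v_i * W_ij + hbias_j)

which is exactly T.nnet.sigmoid applied to T.dot(vis, self.W) + self.hbias.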

 

Together with the function above, this implements Gibbs sampling. In the papers, though, the individual components of a sample are not updated all at once, whereas here a single Wx + b computes them together. This works because of the RBM's bipartite structure: given the visible units, the hidden units are conditionally independent, so block Gibbs sampling can update the whole layer in parallel (a rough sketch follows). I still need to go back over the papers.
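A rough numpy sketch of one full block-Gibbs step v -> h -> v', with made-up parameter names (W, hbias, vbias); this is an illustrative assumption, not the tutorial's actual code:

    import numpy as np

    rng = np.random.RandomState(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gibbs_step(v, W, hbias, vbias):
        # all hidden units are conditionally independent given v,
        # so the whole layer is sampled in one vectorized call
        h_mean = sigmoid(v @ W + hbias)
        h_sample = rng.binomial(n=1, p=h_mean)
        # likewise, the visibles are independent given the hiddens
        v_mean = sigmoid(h_sample @ W.T + vbias)
        v_sample = rng.binomial(n=1, p=v_mean)
        return v_sample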

 
