Learning notes on the open-source project domain-transfer-network-master

date:2017.5.6

This is my first time writing this stuff in markdown. The screen is too small and editing feels a bit odd; if it turns out to be inconvenient, I'll switch back to the original format later.

Source domain

A question about conv2d

                    net = slim.conv2d(net, 128, [3, 3], scope='conv2')     # (batch_size, 8, 8, 128)
                    net = slim.batch_norm(net, scope='bn2')
                    net = slim.conv2d(net, 256, [3, 3], scope='conv3')     # (batch_size, 4, 4, 256)

Shouldn't an 8x8 feature map come out as 6x6 after convolving with a 3x3 kernel?
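
The likely answer: slim.conv2d defaults to padding='SAME', so a 3x3 convolution with stride 1 keeps an 8x8 input at 8x8; the (8, 8, 128) → (4, 4, 256) shape comments suggest a stride of 2 is applied, presumably set elsewhere in the repo via slim.arg_scope. A minimal sketch of TensorFlow's output-size rules (plain Python, no TF needed; the helper name is mine):

    import math

    def conv_output_size(in_size, kernel, stride, padding):
        """Spatial output size of a 2-D convolution under TensorFlow's padding rules."""
        if padding == 'SAME':
            return int(math.ceil(float(in_size) / stride))
        if padding == 'VALID':
            return int(math.ceil(float(in_size - kernel + 1) / stride))
        raise ValueError('unknown padding: %s' % padding)

    print(conv_output_size(8, 3, 1, 'VALID'))  # 6 -- the 6x6 expected in the question
    print(conv_output_size(8, 3, 1, 'SAME'))   # 8 -- slim.conv2d's default keeps the size
    print(conv_output_size(8, 3, 2, 'SAME'))   # 4 -- stride 2 matches the (4, 4, 256) comment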

Notes on TensorBoard

            loss_summary = tf.summary.scalar('classification_loss', self.loss)
            accuracy_summary = tf.summary.scalar('accuracy', self.accuracy)
            self.summary_op = tf.summary.merge([loss_summary, accuracy_summary])

The code above is for visualization in TensorBoard: each tf.summary.scalar records one scalar value, and tf.summary.merge combines them into a single summary op.
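
As a reading aid, here is a minimal, self-contained sketch (TF 1.x, as used in this repo) of how such a merged summary op is typically evaluated and written to disk for TensorBoard. The toy tensors and the 'logs' directory are placeholders of mine, not names from the project:

    import tensorflow as tf  # TF 1.x

    # Toy scalars standing in for self.loss and self.accuracy.
    loss = tf.constant(0.5, name='toy_loss')
    accuracy = tf.constant(0.9, name='toy_accuracy')

    loss_summary = tf.summary.scalar('classification_loss', loss)
    accuracy_summary = tf.summary.scalar('accuracy', accuracy)
    summary_op = tf.summary.merge([loss_summary, accuracy_summary])

    writer = tf.summary.FileWriter('logs', graph=tf.get_default_graph())
    with tf.Session() as sess:
        summary = sess.run(summary_op)              # serialized Summary protobuf
        writer.add_summary(summary, global_step=0)
    writer.close()
    # View with: tensorboard --logdir logs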

Training

    elif self.mode == 'train':
        self.src_images = tf.placeholder(tf.float32, [None, 32, 32, 3], 'svhn_images')
        self.trg_images = tf.placeholder(tf.float32, [None, 32, 32, 1], 'mnist_images')

        # source domain (svhn to mnist)
        self.fx = self.content_extractor(self.src_images)
        self.fake_images = self.generator(self.fx)
        self.logits = self.discriminator(self.fake_images)
        self.fgfx = self.content_extractor(self.fake_images, reuse=True)

        # loss
        self.d_loss_src = slim.losses.sigmoid_cross_entropy(self.logits, tf.zeros_like(self.logits))
        self.g_loss_src = slim.losses.sigmoid_cross_entropy(self.logits, tf.ones_like(self.logits))
        self.f_loss_src = tf.reduce_mean(tf.square(self.fx - self.fgfx)) * 15.0

        # optimizer
        self.d_optimizer_src = tf.train.AdamOptimizer(self.learning_rate)
        self.g_optimizer_src = tf.train.AdamOptimizer(self.learning_rate)
        self.f_optimizer_src = tf.train.AdamOptimizer(self.learning_rate)

        t_vars = tf.trainable_variables()
        d_vars = [var for var in t_vars if 'discriminator' in var.name]
        g_vars = [var for var in t_vars if 'generator' in var.name]
        f_vars = [var for var in t_vars if 'content_extractor' in var.name]

        # train op
        with tf.name_scope('source_train_op'):
            self.d_train_op_src = slim.learning.create_train_op(self.d_loss_src, self.d_optimizer_src, variables_to_train=d_vars)
            self.g_train_op_src = slim.learning.create_train_op(self.g_loss_src, self.g_optimizer_src, variables_to_train=g_vars)
            self.f_train_op_src = slim.learning.create_train_op(self.f_loss_src, self.f_optimizer_src, variables_to_train=f_vars)

        # summary op
        d_loss_src_summary = tf.summary.scalar('src_d_loss', self.d_loss_src)
        g_loss_src_summary = tf.summary.scalar('src_g_loss', self.g_loss_src)
        f_loss_src_summary = tf.summary.scalar('src_f_loss', self.f_loss_src)
        origin_images_summary = tf.summary.image('src_origin_images', self.src_images)
        sampled_images_summary = tf.summary.image('src_sampled_images', self.fake_images)
        self.summary_op_src = tf.summary.merge([d_loss_src_summary,
                                                g_loss_src_summary,
                                                f_loss_src_summary,
                                                origin_images_summary,
                                                sampled_images_summary])
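
To summarize what the block above builds (the notation here is mine, not from the repo): write f for the content extractor, G for the generator, D for the discriminator, x_s for an SVHN batch, and CE for sigmoid cross-entropy. Then, roughly:

$$
\begin{aligned}
L_D^{src} &= \mathrm{CE}\big(D(G(f(x_s))),\ 0\big),\\
L_G^{src} &= \mathrm{CE}\big(D(G(f(x_s))),\ 1\big),\\
L_f^{src} &= 15 \cdot \operatorname{mean}\big[\big(f(x_s) - f(G(f(x_s)))\big)^{2}\big].
\end{aligned}
$$

Each of the three optimizers only updates its own sub-network, which is why the trainable variables are filtered by the 'discriminator', 'generator', and 'content_extractor' scope names before being passed to slim.learning.create_train_op.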